\section{Introduction} Supersymmetric (SUSY) extensions of the Standard Model (SM) have long been considered attractive candidates for physics beyond the Standard Model (BSM). In their simplest realization they solve the hierarchy problem, provide a dark matter candidate and predict gauge coupling unification. As such, superpartners have been the focus of a very large number of searches by collider experiments. Despite these intensive efforts, they have not been seen, pushing the limits on their masses above the TeV scale in many cases. For many versions of the Minimal Supersymmetric Standard Model (MSSM), this means introducing large fine-tuning, thereby weakening one of the main motivations for these models. This has led to a renewed interest in supersymmetric models which depart from the MSSM in some way and lead to different collider phenomenology. This includes models with Dirac gauginos \cite{Fayet:1978qc, Hall:1990hq, Fox:2002bu, Nelson:2002ca, Kribs:2007ac, Amigo:2008rc, Benakli:2008pg, Benakli:2010gi, Kribs:2010md, Abel:2011dc, Csaki:2013fla, Frugiuele:2011mh, Davies:2011mp, Fok:2012fb, Frugiuele:2012kp, Frugiuele:2012pe, Beauchesne:2014pra, Bertuzzo:2014bwa, Carpenter:2015mna, Itoyama:2011zi, Itoyama:2013sn, Itoyama:2013vxa} which can exhibit supersoft supersymmetry breaking \cite{Fox:2002bu} and lead to reduced cross sections for the production of squarks \cite{Heikinheimo:2011fk,Kribs:2012gx,Kribs:2013eua}. The Dirac nature of the gauginos also enables the building of models that possess a $U(1)_R$ symmetry, which were shown to have weaker flavour constraints \cite{Kribs:2007ac,Fok:2010vk}. Furthermore, the $U(1)_R$ symmetry can be identified with a lepton or baryon number, leading to models where the superpartners have non-standard charges under these symmetries. Such an identification can lead to models with unusual structure and phenomenology.
For example, if the $U(1)_R$ symmetry is identified with a lepton number the sneutrino can acquire a significant vacuum expectation value (vev) and play the role of the down-type Higgs \cite{Gherghetta:2003he, Frugiuele:2011mh, Fok:2012fb, Frugiuele:2012kp, Frugiuele:2012pe, Riva:2012hz, Beauchesne:2014pra, Biggio:2016sdu}. In this work we examine the phenomenology of models where the $U(1)_R$ is instead identified with baryon number \cite{Brust:2011tb, Frugiuele:2012pe}. Because this symmetry does not commute with supersymmetry, superpartners have different baryon numbers from their corresponding Standard Model particles, which themselves retain their standard baryon number. Under this charge assignment, the standard $R$-parity violating superpotential term of the form $ \lambda'' U^c D^c D^c$ is now baryon number conserving. The bound on such a term is therefore weakened significantly, which can modify the LHC phenomenology. For example, superpartners can decay promptly, making displaced-vertex signatures, which are very constraining, less prevalent. Furthermore, an exact $U(1)_R$ would forbid stop decays containing two same-sign leptons, which is also a very constraining signature. Models with a $U(1)_R$ baryon number also have all the necessary components to generate successful baryogenesis: baryon number violation through unavoidable $U(1)_R$ breaking, the possibility of CP violation and out of equilibrium processes through the late decay of a gaugino \cite{Sakharov:1967dj}. In this paper we first look at how the bounds on the $\lambda''$ couplings are modified by the presence of the approximate $U(1)_R$ symmetry. This is presented in section \ref{Sec:Model}. In section \ref{Sec:collider_constraints} we examine the collider constraints on the model when a single coupling of the form $\lambda''_{3ij}$ is important.
This phenomenology is in many cases very similar to the one studied in \cite{Monteux:2016gag} (see also for example \cite{Dreiner:1991pe,Allanach:2012vj,Evans:2012bf,Bhattacherjee:2013gr,Graham:2014vya}). In section \ref{Sec:Baryogenesis}, we study how our model can lead to successful baryogenesis. The mechanism is similar to the one studied in \cite{Cui:2013bta, Cui:2012jh, Arcadi:2015ffa, Arcadi:2013jza} and relies on the out of equilibrium decays of gauginos through a baryon number violating interaction. This requires a split spectrum with gauginos much lighter than the scalars. As we will see, the (pseudo-)Dirac nature of the gauginos leads to new diagrams contributing to the decay process and, as a result, new portions of the parameter space can have successful baryogenesis. \section{The model}\label{Sec:Model} The model we consider is an extension of the minimal $R$-symmetric Supersymmetric Standard Model (MRSSM) \cite{Kribs:2007ac}. It has an approximate $U(1)_R$ symmetry and Dirac gauginos whose mass terms can be written as: \begin{equation} \label{gauginomass} \sqrt{2} \int d^2 \theta \frac{{W'}^\alpha}{M_*}\left[c_1 W^{(1)}_\alpha S +c_2 W^{(2)i}_\alpha T^i+ c_3{W^{(3)a}_\alpha O^a} \right]+\text{h.c.}, \end{equation} where ${W'}^\alpha = \theta^\alpha D'$ is a spurion vector superfield with a non-zero $D$-term. $S$, $T^i$ and $O^a$ are chiral superfields in the adjoint representation of $U(1)_Y$, $SU(2)_L$ and $SU(3)_c$ respectively, $W^{(k)}_\alpha$ are the Standard Model field strength superfields and $M_*$ is the supersymmetry breaking mediation scale. The gaugino masses then take the form: \begin{equation} M_i^D = c_i \frac{D'}{M_*}. \end{equation} The standard $\mu$ term being forbidden by $U(1)_R$, the chiral superfields $R_u$ and $R_d$ are added to provide mass to the higgsinos. These new fields have the same gauge quantum numbers as the higgsinos but different $U(1)_R$ charges.
They have bilinear $\mu$-like terms with the Higgs superfields but their scalar components do not acquire vevs. The $U(1)_R$ symmetry can then be identified with baryon number by assigning the right-handed quark superfields $R$-charge $2/3$ and the left-handed quark superfields $R$-charge $4/3$. The charge assignments of the remaining superfields are shown in table \ref{table:Rcharge}. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|} \hline Fields & $R$-charge \\ \hline $H_{u,d}$ & $0$ \\ \hline $R_{u,d}$ & $2$ \\ \hline $U^c$, $D^c$ & 2/3 \\ \hline $S$, $T$, $O$ & 0 \\ \hline $Q$ & $4/3$ \\ \hline $L$, $E^c$ & 1\\ \hline \end{tabular} \end{center} \caption{$R$-charge assignment of chiral superfields of the model.} \label{table:Rcharge} \end{table} Under this symmetry all the Standard Model particles have their usual baryon number. However superpartners have non-standard baryon numbers. For example, the right-handed squarks have baryon number $2/3$ and thus are diquarks, while the left-handed squarks have baryon number $4/3$ and the gauginos baryon number 1. Gauge symmetries and the $U(1)_R$ symmetry lead to the following superpotential: \begin{equation} \begin{aligned} W = \; &y_u Q H_u U^c - y_d Q H_d D^c - y_e L H_d E^c + \mu_u H_u R_d + \mu_d R_u H_d \\ &+ \lambda_u^t H_u T R_d + \lambda_d^t R_u T H_d +\lambda_u^s S H_u R_d + \lambda_d^s S R_u H_d + \frac{1}{2} \lambda''_{i j k} U^c_i D^c_j D^c_k, \end{aligned} \end{equation} where $T=T^i \sigma^i/2$. 
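As a quick consistency check, every term in $W$ must carry total $R$-charge $2$, since the superpotential enters the action as $\int d^2\theta \, W$ and the measure $d^2 \theta$ carries $R$-charge $-2$. With the assignments of table \ref{table:Rcharge} one finds, for instance, \begin{equation*} R\left[Q H_u U^c\right] = \tfrac{4}{3} + 0 + \tfrac{2}{3} = 2, \qquad R\left[H_u R_d\right] = 0 + 2 = 2, \qquad R\left[U^c D^c D^c\right] = 3 \times \tfrac{2}{3} = 2, \end{equation*} and similarly for the remaining terms.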
This superpotential is equal to the superpotential of the MRSSM to which the standard $R$-parity violating term of the form $U^c D^c D^c$ has been added.\footnote{This term violates the standard $R$-parity but not the $U(1)_R$ symmetry defined in table \ref{table:Rcharge}.} Besides gaugino masses, the soft SUSY breaking terms include non-holomorphic scalar masses, $B_\mu$-like terms and a linear term for $S$: \begin{equation} V_\text{soft} = \sum_\Phi M_\Phi \left| \Phi \right|^2 + \left[B_\mu H_u H_d + \frac{1}{2} b_S S^2 + \frac{1}{2} b_T T^2 + \frac{1}{2} b_O O^2 + f_S S+\text{h.c.}\right]. \end{equation} Various trilinear terms are also allowed by the symmetries of the model but can be suppressed \cite{Frugiuele:2012pe}. In addition, the $f_S$ term needs to be small to avoid destabilizing the hierarchy. On general grounds, the $U(1)_R$ symmetry cannot remain an exact symmetry of the theory. The breaking will manifest itself at least through the gravitino mass. This breaking will then unavoidably be communicated to the Standard Model sector through anomaly mediation \cite{Randall:1998uk, Giudice:1998xp}. Majorana gaugino mass terms and trilinear $A$-terms will in this case be generated with size of order: \begin{equation} M \sim A \sim \frac{1}{16 \pi^2} m_{3/2} . \end{equation} \subsection{Bounds on \texorpdfstring{$\lambda''$}{lambda''}}\label{sSec:BoundsOnLambdapp} In our model, the bounds on the $\lambda''$ couplings come from the same sources as in the $R$-parity violating Minimal Supersymmetric Standard Model (RPVMSSM), namely flavour violating processes and baryon number violating processes. The situation in the RPVMSSM goes as follows. The flavour violating processes put severe constraints on products of $\lambda''$s with different flavour structures while baryon number violating processes can impose strong constraints on the $\lambda''$s individually \cite{Barbier:2004ez}.
The baryon number violating processes that put the most stringent bounds are proton decay, neutron antineutron oscillation and double nucleon decay. The proton decay constraint can be avoided if we assume that lepton number is conserved and that the gravitino is heavier than the proton, leaving neutron antineutron oscillation and double nucleon decay which are still very constraining for many of the $\lambda''$s, with the constraint on $\lambda''_{112}$ being the strongest. One approach to satisfy both the flavour violating and baryon number violating constraints is to assume a minimal flavour violating (MFV) structure for the $\lambda''_{ijk}$ \cite{Csaki:2011ge}. This leads to very small couplings and the LHC phenomenology is then characterized by displaced vertices. Another approach to avoid the bounds is to assume that in the mass eigenstate basis only one coupling of the form $\lambda_{3 i j}''$ is large while the $\lambda''$s with different flavour structures are very suppressed. The bounds are then easily satisfied. Single stop production becomes relevant at the LHC, and neutralinos can decay promptly via an off-shell stop to a top and two jets. This phenomenology was explored in \cite{Monteux:2016gag}. The difficulty in such a scenario is to build a flavour model that leads in the mass basis to a large $\lambda''_{3 i j}$ coupling but a very small $\lambda''_{1 1 2}$ coupling. \subsubsection{Bounds from baryon number violating processes} In the model we consider, baryon number is violated only by the small $U(1)_R$ breaking terms coming from anomaly mediation which are proportional to the gravitino mass. Constraints from baryon number violating processes are then potentially weaker than in the RPVMSSM. However, if the gravitino is lighter than the proton, the proton can decay to a gravitino and a kaon. 
This process proceeds through a $\lambda''_{112}$ coupling and is the same as in the RPVMSSM, leading to a bound of \cite{Barbier:2004ez}: \[ \lambda''_{112} \lesssim 6 \times 10^{-15} \left(\frac{m_{\tilde{q}}}{1 \text{TeV}} \right)^2 \left(\frac{m_{3/2}}{1 \text{eV}}\right). \] When the gravitino is heavier than the proton, the bounds from neutron antineutron oscillation and double nucleon decay still apply. The best experimental limit on neutron antineutron oscillation comes from the non-observation of $^{16}\text{O}$ decay to various final states with multiple pions and omega particles at SuperKamiokande \cite{Abe:2011ky}. This process receives tree-level contributions from diagrams of the form shown in figure \ref{fig:nnbarRR}. \begin{figure}[t!] \centering \begin{subfigure}{0.41\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 469 339]{nnbar.pdf} \caption{} \label{fig:nnbarRR} \end{subfigure} ~ ~ ~ ~ ~ \begin{subfigure}{0.41\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 469 339]{nnbar_LR.pdf} \caption{} \label{fig:nnbarLR} \end{subfigure} \caption{Diagrams leading to neutron antineutron oscillation. Flavour changing insertions are needed on the squark lines and a Majorana mass insertion is needed on the gluino line. (a) shows a diagram with flavour changing insertions of the right-handed squarks. (b) shows a diagram requiring a left-right squark mixing which is further suppressed by the gravitino mass.}\label{fig:nnbar} \end{figure} This leads to a bound on $\lambda_{11i}''$ which is somewhat model dependent as the diagram requires flavour mixing mass insertions on the squark lines. It also requires the insertion of a Majorana mass term for the gluino which we take to be given by anomaly mediation: $M_3 =3 \alpha_s m_{3/2}/4 \pi $. 
The amplitude for this process can be estimated to be \cite{Csaki:2011ge}: \[ M_{n-\bar{n}} \sim 4\pi \alpha_s \left(\lambda''_{11i}\right)^2 \frac{\left(\delta_{i1}^{RR}\right)^2 }{m_{\tilde{q}}^4} \frac{M_3}{\left(M^D_3\right)^2} \Lambda^6, \] where $\delta_{ij}^{RR}$ is the ratio of the flavour non-diagonal elements of the right-handed down-type squark mass matrix to the flavour diagonal ones and $\Lambda$ is the characteristic scale for the neutron matrix elements which is expected to be close to the QCD scale. Taking $\alpha_s=0.12$, we find a bound of the form \cite{Csaki:2011ge}: \begin{equation} \lambda''_{11i} \lesssim 2 \times 10^{-5} \left(\frac{1}{\delta_{i1}^{RR}}\right)\left(\frac{M_3^D}{1 \text{TeV}}\right) \left(\frac{1 \text{GeV}}{m_{3/2}}\right)^{1/2}\left(\frac{m_{\tilde{q}}}{1 \text{TeV}}\right)^2 \left(\frac{250 \text{MeV}}{\Lambda}\right)^3. \end{equation} If for some reason the effect of the flavour mixing in the right-handed squark mass matrix is small, which could happen if, for example, this matrix follows an MFV pattern,\footnote{With an MFV structure, there exists a basis where both the right-handed squark matrix and the gauge Yukawa interactions involving down-type quarks are flavour diagonal.} then the process needs to involve left-right squark mixing (see figure \ref{fig:nnbarLR}). In the limit of an exact $U(1)_R$, these mixings, which come from $A$-terms, are forbidden. They are however expected to be generated with a size proportional to the gravitino mass once $U(1)_R$ breaking effects are taken into account. 
Taking the anomaly mediation value for the $A$-terms, the bound becomes: \begin{equation} \lambda''_{1 1 i} \lesssim 2 \left(\frac{1}{y^d_{1i}} \right) \left(\frac{M_3^D}{1 \text{TeV}}\right) \left(\frac{1 \text{GeV}}{m_{3/2}}\right)^{3/2} \left(\frac{m_{\tilde{q}}}{1 \text{TeV}}\right)^4 \left(\frac{250 \text{MeV}}{\Lambda}\right)^3, \end{equation} where $y^d_{ij}$ is the down-type Yukawa matrix, and we see that order one $\lambda''$ couplings easily become allowed. The bound coming from double nucleon decay is less dependent on flavour physics as it can proceed through a diagram such as the one shown in figure \ref{fig:ppKK} which does not require flavour mixing on the squark lines. The diagram on the other hand still requires the insertion of a gluino Majorana mass term. \begin{figure} \begin{center} \includegraphics[width=0.41\textwidth, bb = 0 0 406 406]{ppKK.pdf} \caption{Diagram mediating the $ p p \rightarrow K^+ K^+$ process.} \label{fig:ppKK} \end{center} \end{figure} The best limit on this process also comes from the non-observation of $^{16}\text{O}$ decay to $^{14}\text{C} K^+ K^+$ at SuperKamiokande \cite{Litos:2014fxa}. The bound on the partial lifetime is found to be $1.7 \times 10^{32}$ years. A rough estimate for the amplitude can be obtained in a similar way to the $n-\bar{n}$ process \cite{Goity:1994dq,Csaki:2011ge}. It leads to a bound on $\lambda''_{1 1 2}$ of the form: \begin{equation} \label{eq:boundppKK} \lambda''_{112} \lesssim 2 \times 10^{-4} \left(\frac{M_3^D}{1 \text{TeV}}\right) \left( \frac{1 \text{GeV}}{m_{3/2}}\right)^{1/2} \left(\frac{m_{\tilde{q}}}{1 \text{TeV}}\right)^2 \left(\frac{150 \text{MeV}}{\tilde{\Lambda}}\right)^{5/2}, \end{equation} where $\tilde{\Lambda}$ is the hadronic scale, which is hard to estimate and introduces significant uncertainty on the bound. It is expected to be suppressed compared to $\Lambda_{\text{QCD}}$ due to nucleon repulsion \cite{Goity:1994dq,Csaki:2011ge}.
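As an illustration of how these limits scale with the spectrum, the short sketch below (ours, not part of the original analysis; the function names are hypothetical) evaluates the neutron antineutron oscillation and double nucleon decay bounds quoted in this subsection:

```python
# Illustrative evaluation of the bounds on lambda''_{11i} quoted in this
# subsection (our sketch; prefactors and exponents are read off the text).

def bound_nnbar(delta_RR, M3D_TeV, m32_GeV, msq_TeV, Lambda_MeV):
    """n-nbar oscillation bound (right-handed squark mixing case)."""
    return (2e-5 / delta_RR * M3D_TeV / m32_GeV**0.5
            * msq_TeV**2 * (250.0 / Lambda_MeV)**3)

def bound_ppKK(M3D_TeV, m32_GeV, msq_TeV, Lambda_MeV):
    """Double nucleon decay (pp -> K+ K+) bound on lambda''_{112}."""
    return (2e-4 * M3D_TeV / m32_GeV**0.5
            * msq_TeV**2 * (150.0 / Lambda_MeV)**2.5)

# Reference point: TeV-scale Dirac gluino and squarks, 1 GeV gravitino.
print(bound_nnbar(1.0, 1.0, 1.0, 1.0, 250.0))  # 2e-05
print(bound_ppKK(1.0, 1.0, 1.0, 150.0))        # 0.0002
```

Both bounds relax for lighter gravitinos, reflecting the required Majorana gluino mass insertion $M_3 \propto m_{3/2}$.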
\subsubsection{Bounds from flavour physics} Flavour physics also puts strong bounds on the $\lambda''$ parameters. The bounds are on products of two $\lambda''$s with different flavour structures \cite{Barbier:2004ez,Giudice:2011ak}. For example, there are loop diagrams that contribute to $\epsilon_K$ and $\Delta m_K$, leading to a bound of the form \cite{Giudice:2011ak}: \begin{equation} \sqrt{|\text{Im}(\lambda''_{i 23} {\lambda''^*_{i13}})^2|} \lesssim 2.8 \times 10^{-3} \left(\frac{m_{\tilde{u}_i}}{1 \text{TeV}}\right), \end{equation} from $\epsilon_K$ while $\Delta m_K$ gives: \begin{equation} \sqrt{|\text{Re}(\lambda''_{i 23} \lambda''^*_{i13})^2|} \lesssim 4.6 \times 10^{-2} \left(\frac{m_{\tilde{u}_i}}{1 \text{TeV}}\right). \end{equation} There are also strong bounds on $\lambda''_{i 23} \lambda''^*_{i 12}$ from $B$-mixing and from bounds on $BR(B^{\pm} \rightarrow \phi \pi^{\pm})$. These bounds can be satisfied by having only one of the $\lambda''$s sizable in the mass eigenstate basis. Whether or not this can be easily achieved depends on the structure of the flavour physics. For example it might be possible to arrange for one of the $\lambda''$ to be dominant in the gauge basis, but when rotating to the mass basis other flavour structures will be generated. If the rotation has the same structure as the CKM matrix a $\lambda''_{312}$ coupling of order one in the gauge basis is allowed by flavour physics constraints provided the squarks have masses in the TeV range \cite{Giudice:2011ak}. However, in order to satisfy the bound on $\lambda''_{112}$ from eq$.$ (\ref{eq:boundppKK}), the rotation of the right-handed up squarks from the gauge to the mass basis must induce a suppression of $\sim 10^{-4}$, and a CKM like structure will be insufficient. 
If $\lambda''_{313}$ is dominant in the gauge basis, $K-\bar{K}$ mixing constrains this coupling to be $\lesssim 0.1$, and in this case a CKM-like rotation structure for the up squarks will put $\lambda''_{112}$ close to the bound of eq$.$ (\ref{eq:boundppKK}). \subsection{Spectrum and parameter space}\label{subsec:spectrum} In view of the strong constraints on the $\lambda''$ couplings, we henceforth focus on models where only a single coupling of the form $\lambda''_{3ij}$ is important. The relevant features of the phenomenology will crucially depend on the size of this coupling and on the spectrum. For example, for large $\lambda''_{3ij}$ single stop production can be important while a smaller coupling leads to pair production being dominant. Large $\lambda''_{3ij}$ couplings will also lead to the prompt decay of neutralinos to a top quark and two jets, leading to a phenomenology distinct from the displaced vertices characteristic of the small RPV coupling case. In the limit where the $U(1)_R$ symmetry is exact there is a distinction between neutralinos and antineutralinos. One of them has baryon number $1$ and decays to $t j j$, while the other has baryon number $-1$ and decays to $\bar{t} j j $. In this case, the decay of a stop will always involve opposite-sign tops: $\tilde{t} \rightarrow t \bar{\chi}_0 \rightarrow t \bar{t} j j$. However, in the presence of Majorana mass terms for the gauginos, the Dirac neutralinos split into two Majorana states which can both decay to either $t j j$ or $\bar{t} j j $. This is important for the phenomenology as in this case there will be a signature with two same-sign leptons.
We can see how this works by looking at a bino LSP interacting with the stop through the following potential: \begin{equation} M^D_1 \tilde{S} \tilde{B} + \frac{1}{2} M_1 \tilde{B} \tilde{B}-\frac{2 \sqrt{2}}{3} g' \tilde{t}_R^\dagger (t_R \tilde{B}) + \lambda''_{323} \tilde{t}_R(b_R s_R) + \text{h.c.}, \end{equation} where $M_1$ is a small Majorana mass term for the bino. The mass eigenstates are two pseudo-Dirac states \cite{DeSimone:2010tf} given by: \begin{eqnarray*} \chi^B_1 &=& i \frac{1}{\sqrt{2}} (\tilde{B} - \tilde{S})\\ \chi^B_2 &=& \frac{1}{\sqrt{2}}(\tilde{B} + \tilde{S}) \end{eqnarray*} with corrections of order $M_1/M^D_1$. The masses of the two eigenstates are given by $m^B_1= M^D_1 - M_1/2$ and $m^B_2=M^D_1+M_1/2$ to leading order. In terms of the mass eigenstates, the potential can then be written as: \begin{equation} \frac{m^B_1}{2} \chi^B_1 \chi^B_1 + \frac{m_2^B}{2} \chi^B_2 \chi^B_2 + i \frac{2 g'}{3} \tilde{t}_R^\dagger t_R \chi^B_1 -\frac{2 g'}{3} \tilde{t}_R^\dagger t_R \chi^B_2 + \lambda''_{323} \tilde{t}_R(b_R s_R)+ \text{h.c.} \end{equation} The decay of the stop can proceed via an on-shell $\chi_1^B$ or $\chi^B_2$ as shown in figure \ref{fig:stopdecay}. In the case of a decay to $t t j j$, the two amplitudes have opposite signs and interfere destructively, while for the decay to $t \bar{t} j j$, the amplitudes have the same sign and add. Therefore, for a mass splitting smaller than the width of $\chi^B_1$ and $\chi^B_2$, decay chains with same-sign tops are suppressed, whereas for a larger mass splitting they occur as often as opposite-sign tops. \begin{figure} \begin{center} \includegraphics[width=0.41\textwidth, bb = 0 0 471 217]{stop_decay.pdf} \caption{Decay of the stop through the two on-shell pseudo-Dirac states $\chi^B_1$ and $\chi^B_2$.
When the mass difference between the two states is smaller than the width, the diagram with $\chi^B_1$ cancels the one with $\chi^B_2$.} \label{fig:stopdecay} \end{center} \end{figure} In our study of the LHC phenomenology we will compute the bound on squark masses as a function of the $\lambda''_{3 ij }$ for bino and higgsino-up LSPs. For gravitino masses slightly above $\sim 1$ GeV, for which the bound from proton decay to a gravitino does not apply, the mass splitting between the pseudo-Dirac neutralino states is small enough to be ignored for most processes, except for the stop decay. A mass splitting of order 1 GeV is much larger than the typical neutralino width and, as a consequence, decay chains with same-sign tops will occur. We will also show bounds for a case where the $U(1)_R$ symmetry is nearly exact with no same-sign top signatures. This last case requires a very low $m_{3/2}$, which might be difficult to achieve, but could have interesting consequences for cosmology \cite{Ipek:2016bpf}. For our study of baryogenesis, we need to consider a different region of parameter space. In order to have a gaugino that decays out of equilibrium and generates a baryon-antibaryon asymmetry through its decay, we will be led to consider a split spectrum with very heavy scalar masses. The bounds on $\lambda''$ are then considerably relaxed. Also, as explained in more detail in section \ref{Sec:Baryogenesis}, we will need to consider a significantly larger mass splitting between the gauginos, which means larger $U(1)_R$ breaking. \section{Collider constraints}\label{Sec:collider_constraints} In this section we constrain the parameter space of the model by using a variety of LHC searches. We focus on two different scenarios. The first scenario is resonant stop production together with stop pair production. The second scenario is pair production of the first and second generations of squarks. From here on, we simply refer to this scenario as squark production.
For both scenarios, we consider the cases in which the $U(1)_R$ symmetry is either strictly preserved or, alternatively, broken. As mentioned above, whether or not the $U(1)_R$ symmetry is broken changes the phenomenology. \subsection{Placing limits on stops}\label{sSec:limits_stops} \subsubsection{Stop production}\label{ssSec:stop_production} The main phenomenological novelty of the model is the presence in the superpotential of the term: \begin{align} \frac{1}{2}\lambda''_{3ij}U_3^c D_i^c D_j^c , \end{align} which only involves the right-handed stop. Consequently, we concentrate on the production of right-handed stops, which we simply refer to as stops from now on. The left-handed stop, which does not mix with the right-handed one as it possesses a different $R$-charge, is assumed to be decoupled. If any of $\lambda''_{312}$, $\lambda''_{313}$ or $\lambda''_{323}$ is non-zero, resonant stop production can potentially take place at the LHC. For example, turning on $\lambda''_{312}$ will result in the partonic level processes $d s \rightarrow \tilde{t}^*$ and $\bar{d} \bar{s} \rightarrow \tilde{t}$, provided that the stop is not too heavy. More precisely, the partonic level cross section for $d_i d_j \rightarrow \tilde{t}^*$ is \cite{Berger:1999zt}: \begin{align} \hat{\sigma}(d_i d_j \rightarrow \tilde{t}^*) = \frac{\pi}{6}\frac{|\lambda''_{3ij}|^2}{m_{\tilde{t}}^2}\delta(1 - m^2_{\tilde{t}}/\hat{s}), \end{align} where $\hat{s}$ is the partonic centre of mass energy. Due to the valence down quark, the cross section to produce $\tilde{t}^*$ is generally much larger than that to produce $\tilde{t}$ (although if only $\lambda''_{323}$ is non-zero then $\tilde{t}^*$ and $\tilde{t}$ are produced in roughly equal amounts).
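Folding this partonic cross section with parton distribution functions, the $\delta$-function reduces the hadronic rate to a single parton-luminosity integral, $\sigma = \frac{\pi}{6} \frac{|\lambda''_{3ij}|^2}{s} \int_\tau^1 \frac{dx}{x} f_{d_i}(x) f_{d_j}(\tau/x)$ with $\tau = m_{\tilde{t}}^2/s$ (plus the term with $i \leftrightarrow j$). The sketch below illustrates this reduction with crude, purely illustrative PDF shapes; the cross sections used in this paper are instead obtained with MadGraph and realistic PDFs:

```python
import math

def toy_pdf(x, power):
    # Schematic shape f(x) ~ (1 - x)^power / x.
    # A placeholder for illustration only, NOT a fitted parton distribution.
    return (1.0 - x) ** power / x

def sigma_resonant(lam, m_stop, sqrt_s, pow1=3.0, pow2=7.0, n=20000):
    """sigma = (pi/6) (lam^2/s) * integral_tau^1 dx/x f1(x) f2(tau/x)."""
    s = sqrt_s ** 2
    tau = m_stop ** 2 / s
    h = (1.0 - tau) / n  # midpoint rule for the luminosity integral
    lum = sum(toy_pdf(tau + (i + 0.5) * h, pow1)
              * toy_pdf(tau / (tau + (i + 0.5) * h), pow2)
              / (tau + (i + 0.5) * h) * h for i in range(n))
    return math.pi / 6.0 * lam ** 2 / s * lum

s1 = sigma_resonant(1.0, 600.0, 13000.0)
s2 = sigma_resonant(2.0, 600.0, 13000.0)
s3 = sigma_resonant(1.0, 1200.0, 13000.0)
print(s2 / s1)   # ratio of 4: the rate scales as |lambda''|^2
print(s3 < s1)   # True: heavier stops probe larger x and are suppressed
```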
Additionally, due to the small strange and bottom quark content of the proton, stop production through $\lambda''_{312}$ is larger than that through $\lambda''_{313}$, which is itself larger than that through $\lambda''_{323}$, assuming equal values for $\lambda''_{312}$, $\lambda''_{313}$ and $\lambda''_{323}$. We use MadGraph5\_aMC@NLO \cite{Alwall:2014hca} to calculate the leading order (LO) cross section at centre of mass energies of 8 and 13 TeV for resonant stop production (summing both $\tilde{t}^*$ and $\tilde{t}$) turning on $\lambda''_{312}$, $\lambda''_{313}$ and $\lambda''_{323}$ one at a time.\footnote{To simulate collisions, we used the Mathematica package FeynRules 2.0 \cite{Alloul:2013bka} to produce our own MRSSM MadGraph models, one with the $U(1)_R$ symmetry preserved and another with the symmetry broken.} In this fashion, all constraints placed throughout this section assume only a single $\lambda''_{3ij}$ is non-zero. Our limits are then conservative compared to the case where multiple $\lambda''_{3ij}$ are non-zero. Naturally, the LO cross section will be corrected by next-to-leading order (NLO) QCD effects. The NLO cross section for single stop production has been calculated in Ref$.$ \cite{Plehn:2000be}. There, K-factors for each of the $\lambda''_{3ij}$ are presented for stop masses between 200 and 800 GeV at a centre of mass energy of 14 TeV. It was found that the K-factors varied between approximately 1.2 and 1.4. To account for this, we simply multiply the LO cross sections computed with MadGraph by a constant K-factor of 1.3 for all stop masses. Figure \ref{fig:stop_production} shows the resulting cross sections where each $\lambda''_{3ij}$ has been set individually to one. As in the MSSM, stops will also be produced in pairs. However, due to the $\lambda''_{3ij}$ coupling, there are new diagrams that contribute. These diagrams consist of two $\lambda''_{3ij}$ vertices, two initial state quarks and a $t$-channel quark.
Couplings of order one can give significant contributions to the cross sections. For example, using MadGraph to compute the LO pair production cross section for 200 GeV stops at 13 TeV, we find a 20\% increase when $\lambda''_{312}$ is set to one compared to when it is zero. However, as far as we know, NLO corrections have not been computed for these new diagrams. Moreover, for $\lambda''_{3ij}$ of order one, single stop production dominates the exclusion in most of the parameter space. For these reasons, we choose not to include this new contribution to our stop pair production cross section. Instead, we compute the cross section for this process using NLL-fast \cite{Beenakker:1997ut, Beenakker:2010nq, Beenakker:2011fu} and NNLL-fast \cite{Beenakker:2016lwe, Beenakker:1997ut, Beenakker:2010nq, Beenakker:2016gmf} for centre of mass energies of 8 and 13 TeV, respectively. We verify the results using Prospino \cite{Beenakker:1996ed}. Our limits from stop pair production are then conservative, particularly when the $\lambda''_{3ij}$ are of order one. The cross section is also shown in figure \ref{fig:stop_production}. As can be seen, resonant stop production is significantly larger than stop pair production for any of the $\lambda''_{3ij}$ set to unity. \begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 453]{Stop_production_8TeV.pdf} \caption{8 TeV} \label{fig:stop_production_8TeV} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 453]{Stop_production_13TeV.pdf} \caption{13 TeV} \label{fig:stop_production_13TeV} \end{subfigure} \caption{Stop production at centre of mass energy of 8 and 13 TeV. Here $\sigma(\text{pp} \rightarrow \text{stop})$ stands for $\sigma(\text{pp} \rightarrow \tilde{t}^*) + \sigma(\text{pp} \rightarrow \tilde{t})$.
For resonant stop production, only one $\lambda''_{3ij}$ is non-zero at a time.}\label{fig:stop_production} \end{figure} \subsubsection{Stop LSP}\label{ssSec:Stop_LSP} If the stop is the LSP, then it will decay directly into two quarks through the $\lambda''_{3ij}$ coupling with a branching ratio of one. In this situation, there are two processes of interest: \begin{align*} (1) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \rightarrow d_i d_j \end{array} \quad \quad (2) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow d_i d_j \bar{d}_i \bar{d}_j \end{array} \end{align*} where the final state quarks depend on which $\lambda''_{3ij}$ is non-zero. We now constrain the parameter space using these two processes. Let us focus on the first process. This case is sensitive to dijet searches performed at the LHC. We examined many of these searches and selected the following ones to recast: \cite{Aad:2014aqa, ATLAS-CONF-2016-030, ATLAS-CONF-2016-069} from ATLAS and \cite{Khachatryan:2016ecr, CMS-PAS-EXO-16-032} from CMS. The procedures used to recast these searches are described below. Notably, each one of these searches is independent of the flavour of the final state quarks as they do not utilize b-tagging. We also considered the ATLAS dijet search \cite{ATLAS-CONF-2016-060} which does utilize b-tagging but found that the exclusion limits did not improve. In particular, search \cite{ATLAS-CONF-2016-069} provides stronger limits than \cite{ATLAS-CONF-2016-060}. The reason for this is that even though \cite{ATLAS-CONF-2016-060} requires a b-tagged jet, the limits on the cross section times branching ratio times acceptance between \cite{ATLAS-CONF-2016-069} and \cite{ATLAS-CONF-2016-060} are comparable. However, requiring a b-tagged jet results in the acceptance for \cite{ATLAS-CONF-2016-060} being about half that of \cite{ATLAS-CONF-2016-069}, thus making it less constraining.
Both the ATLAS and CMS experiments have developed special techniques to place limits on low mass resonances decaying to dijets. The ATLAS technique is known as Trigger-object Level Analysis (TLA) and was implemented in \cite{ATLAS-CONF-2016-030} to constrain masses below 1.1 TeV. The CMS technique is known as data scouting and was implemented in searches \cite{Khachatryan:2016ecr, CMS-PAS-EXO-16-032} to constrain masses below 1.6 TeV. The low mass region is experimentally difficult due to a combination of the limited bandwidth available to record events to disk and the large Standard Model multijet rate. Either a large fraction of events must be discarded or stringent triggers must be used in order to keep the amount of recorded data to an acceptable level. However, both options limit the statistical power of the search. The TLA and data scouting approach is to record only the portion of the event data, such as jet four-momenta, needed to perform the dijet search. By doing so, event sizes can be reduced to 5\% (2\%) of what they would normally be for ATLAS \cite{ATLAS-CONF-2016-030} (CMS \cite{Khachatryan:2016ecr}). This allows for more statistics and hence stronger limits. To recast ATLAS dijet searches, we followed the procedure within Appendix A of \cite{Aad:2014aqa} to set limits on models of new physics with Gaussian resonances. First, for each search we chose a selection of stop masses $M$ to sample. Then, for each $M$, we used MadGraph to generate 10000 events of resonant stop production with the stop subsequently decaying into quarks. The events were given to PYTHIA 8.2 \cite{Sjostrand:2014zea} to simulate non-perturbative effects and then fed into Delphes 3 \cite{deFavereau:2013fsa} for detector simulation. The package HepMC2 \cite{Dobbs:2001ck} was used to interface between PYTHIA and Delphes. Next, code was written to implement the kinematic cuts. 
The cuts for each search were: \begin{alignat*}{2} &\text{\cite{Aad:2014aqa}}: \ &&|y_{j_1}| < 2.8, \ |y_{j_2}| < 2.8, \ {p_T}_{j_1} > 50 \ \text{GeV}, \ {p_T}_{j_2} > 50 \ \text{GeV}, \\ & &&|\Delta y_{j_1 j_2}| < 1.2, \ m_{j_1 j_2} > 250 \ \text{GeV}, \ 0.8M < m_{j_1 j_2} < 1.2M, \\ &\text{\cite{ATLAS-CONF-2016-030}}: \ &&|\eta_{j_1}| < 2.8, \ |\eta_{j_2}| < 2.8, \ {p_T}_{j_1} > 185 \ \text{GeV}, \ {p_T}_{j_2} > 85 \ \text{GeV},\\ & &&|\Delta y_{j_1 j_2}| < \begin{cases} 0.6 \ \text{if} \ 425 \ \text{GeV} < m_G < 550 \ \text{GeV}, \\ 1.2 \ \text{if} \ 550 \ \text{GeV} < m_G < 1100 \ \text{GeV}, \end{cases} \\ & &&0.8M < m_{j_1 j_2} < 1.2M, \\ &\text{\cite{ATLAS-CONF-2016-069}}: \ &&{p_T}_{j_1} > 440 \ \text{GeV}, \ {p_T}_{j_2} > 60 \ \text{GeV}, \\ & &&|\Delta y_{j_1 j_2}| < 1.2, \ m_{j_1 j_2} > 1100 \ \text{GeV}, \ 0.8M < m_{j_1 j_2} < 1.2M, \end{alignat*} where, for the two leading jets $j_1$ and $j_2$: $y_{j_1}$ and $y_{j_2}$ are their rapidities, $\eta_{j_1}$ and $\eta_{j_2}$ are their pseudorapidities, ${p_T}_{j_1}$ and ${p_T}_{j_2}$ are their transverse momenta, $\Delta y_{j_1 j_2}$ is the difference between their rapidities and $m_{j_1 j_2}$ is their invariant mass. The cut $0.8M < m_{j_1 j_2} < 1.2M$ is designed to remove any long tails in the reconstructed $m_{j_1 j_2}$ distribution which has been assumed to be Gaussian. The acceptance for a search is then the fraction of the events to pass its cuts. The acceptances are shown in the top row of figure \ref{fig:acceptance}. Additionally, events that passed had their values of $m_{j_1 j_2}$ recorded in a histogram. A Gaussian distribution was then fit to the histogram and the standard deviation, $\sigma_G$, and mean, $m_G$, were determined. Finally, each search provided 95\% CL upper limits on the cross section times branching ratio times acceptance as a function of $m_G$ for different values of $\sigma_G/m_G$. We found that the vast majority of $\sigma_G/m_G$ values fell between 0.05 and 0.07. 
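The window cut, Gaussian fit and acceptance extraction described above can be sketched as follows. This is a minimal illustration operating on toy $m_{j_1 j_2}$ values (the resonance mass, resolution and low-mass tail below are invented for the example; the actual analysis uses the Delphes-level events):

```python
import random
import statistics

def gaussian_resonance_fit(m_jj, M):
    """Apply the 0.8M < m_jj < 1.2M window, estimate the Gaussian mean
    m_G and width sigma_G from the surviving events, and return them
    together with the acceptance of the window cut."""
    in_window = [m for m in m_jj if 0.8 * M < m < 1.2 * M]
    acceptance = len(in_window) / len(m_jj)
    m_G = statistics.fmean(in_window)
    sigma_G = statistics.pstdev(in_window, mu=m_G)
    return m_G, sigma_G, acceptance

# Toy sample: a 600 GeV resonance with a 6% mass resolution plus a
# low-mass tail from radiation (all numbers are illustrative).
rng = random.Random(0)
sample = [rng.gauss(600.0, 36.0) for _ in range(9000)]
sample += [rng.uniform(250.0, 550.0) for _ in range(1000)]
m_G, sigma_G, acc = gaussian_resonance_fit(sample, 600.0)
```

For this toy sample, $\sigma_G/m_G$ comes out close to the 0.05--0.07 range quoted above.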
\begin{figure}[t!] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 463]{acceptance_ATLAS_1407_1376.pdf} \caption{\cite{Aad:2014aqa}} \label{Fig:Acceptance:ATLAS_1407_1376} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 463]{acceptance_ATLAS_CONF_2016_030.pdf} \caption{\cite{ATLAS-CONF-2016-030}} \label{Fig:Acceptance:ATLAS_CONF_2016_030} \end{subfigure} ~ \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 463]{acceptance_ATLAS_CONF_2016_069.pdf} \caption{\cite{ATLAS-CONF-2016-069}} \label{Fig:Acceptance:ATLAS_CONF_2016_069} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.66\textwidth, bb = 0 0 584 463]{acceptance_CMS_1604_08907.pdf} \caption{\cite{Khachatryan:2016ecr}} \label{Fig:Acceptance:CMS_1604_08907} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.66\textwidth, bb = 0 0 584 463]{acceptance_CMS_PAS_EXO_16_032.pdf} \caption{\cite{CMS-PAS-EXO-16-032}} \label{Fig:Acceptance:CMS_PAS_EXO_16_032} \end{subfigure} \caption{Acceptances for the dijet searches recast in this analysis. Top row: ATLAS searches. Bottom row: CMS searches.}\label{fig:acceptance} \end{figure} Recasting CMS dijet searches followed a similar procedure up until applying the cuts. A major component of the cuts centres on reconstructing two ``wide jets''. The two leading jets served as the seeds for the two wide jets, and the four-momentum of any other jet was added to the closest leading jet if the two were separated by less than $\Delta R = 1.1$.
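This wide-jet reconstruction can be sketched as below. The fragment is a simplified stand-in (jets are plain $(p_x, p_y, p_z, E, \eta, \phi)$ records and calibration effects are ignored):

```python
import math

def delta_r(j1, j2):
    """Angular separation between two jets in (eta, phi)."""
    dphi = abs(j1["phi"] - j2["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(j1["eta"] - j2["eta"], dphi)

def wide_jets(jets, r_max=1.1):
    """Seed two wide jets with the two leading jets, then add each
    remaining jet's four-momentum to the closest seed if it lies within
    delta R < r_max of it. `jets` must be pT-ordered."""
    seeds = [jets[0], jets[1]]
    wide = [[s["px"], s["py"], s["pz"], s["E"]] for s in seeds]
    for j in jets[2:]:
        dr = [delta_r(j, s) for s in seeds]
        k = 0 if dr[0] < dr[1] else 1
        if dr[k] < r_max:
            for i, comp in enumerate(("px", "py", "pz", "E")):
                wide[k][i] += j[comp]
    return wide

def invariant_mass(p1, p2):
    e = p1[3] + p2[3]
    px, py, pz = (p1[i] + p2[i] for i in range(3))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def _jet(pt, eta, phi):  # massless jet four-vector from (pT, eta, phi)
    return {"px": pt * math.cos(phi), "py": pt * math.sin(phi),
            "pz": pt * math.sinh(eta), "E": pt * math.cosh(eta),
            "eta": eta, "phi": phi}

# Toy event: two back-to-back leading jets plus a soft jet near the first.
event = [_jet(300.0, 0.0, 0.0), _jet(200.0, 0.0, math.pi), _jet(50.0, 0.5, 0.2)]
J1, J2 = wide_jets(event)
m_JJ = invariant_mass(J1, J2)
```

In the toy event, the soft jet is merged into the leading wide jet, raising $m_{J_1 J_2}$ above the invariant mass of the two seeds alone.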
Then, for a stop with mass $M$, the cuts for each search were: \begin{alignat*}{2} &\text{\cite{Khachatryan:2016ecr}}: \ &&H_T = \sum_j {p_T}_j > 250 \ \text{GeV}, \ \Delta\phi_{j_1 j_2} > \pi/3, \ |\Delta \eta_{J_1 J_2}| < 1.3, \ m_{J_1 J_2} > 390 \ \text{GeV}, \\ &\text{\cite{CMS-PAS-EXO-16-032}}: \ &&H_T = \sum_j {p_T}_j > \begin{cases} 250 \ \text{GeV} \ \text{if} \ 0.6 \ \text{TeV} < M < 1.6 \ \text{TeV}, \\ 800 \ \text{GeV} \ \text{if} \ 1.6 \ \text{TeV} < M < 7.5 \ \text{TeV}, \end{cases} \\ & &&|\Delta \eta_{J_1 J_2}| < 1.3, \\ & &&m_{J_1 J_2} > \begin{cases} 453 \ \text{GeV} \ \text{if} \ 0.6 \ \text{TeV} < M < 1.6 \ \text{TeV}, \\ 1058 \ \text{GeV} \ \text{if} \ 1.6 \ \text{TeV} < M < 7.5 \ \text{TeV}, \end{cases} \end{alignat*} where $H_T$ is the scalar sum of the transverse momenta of all the jets, $\Delta\phi_{j_1 j_2}$ is the azimuthal angle between the two leading jets, and, for the two wide jets $J_1$ and $J_2$, $\Delta \eta_{J_1 J_2}$ is the difference between their pseudorapidities and $m_{J_1 J_2}$ is their invariant mass. Once again, the acceptance for a search is the fraction of the events to pass its cuts. The acceptances are shown in the bottom row of figure \ref{fig:acceptance}. Both CMS searches provided 95\% CL upper limits on the cross section times branching ratio times acceptance for dijets originating from two quarks. It is also possible to constrain the parameter space using the second process outlined at the beginning of this section, stop pair production with subsequent decay into four quarks. As a matter of fact, there have been several experimental searches looking for exactly this signature. These include searches \cite{Aad:2016kww, ATLAS-CONF-2016-022, ATLAS-CONF-2016-084} from ATLAS and \cite{Khachatryan:2014lpa} from CMS. We directly read off the limits on the cross section times branching ratio as a function of stop mass. 
For stops decaying only into quarks, the most powerful search is \cite{ATLAS-CONF-2016-084}, which is independent of the flavour of the final state quarks. That is, it does not explicitly require b-tagged jets. This is in contrast to the other searches: \cite{Aad:2016kww, ATLAS-CONF-2016-022} both require b-jets, while \cite{Khachatryan:2014lpa} provides different limits depending on whether or not b-jets are produced. As a result, the limits on the stop mass are the same for all three $\lambda''_{3ij}$, once again assuming a branching ratio of one. Combining the limits from the two types of searches considered in this section, we constrain the $\lambda''_{3ij}$ and stop mass parameter space. The result is shown in figure \ref{fig:stop_lSP}. The small white band of stop masses slightly above 400 GeV fails to be excluded due to an upward fluctuation of the data in the search \cite{ATLAS-CONF-2016-084}. Interestingly, for each $\lambda''_{3ij}$ exclusion curve resulting from the dijet searches, at least a portion of its left edge happens to fall directly in this small unexcluded range. Future searches will likely close this gap. Disregarding this feature for a moment, we see that stop masses up to 3870, 2910 and 1610 GeV are excluded for $\lambda''_{312}$, $\lambda''_{313}$ and $\lambda''_{323}$ set to one, respectively. A similar plot is also presented within Ref$.$ \cite{Monteux:2016gag}. For comparison, Ref$.$ \cite{Monteux:2016gag} found that stop masses up to 3150, 2830 and 1500 (plus a small region between 1730 and 1870) GeV are excluded for $\lambda''_{312}$, $\lambda''_{313}$ and $\lambda''_{323}$ set to one, respectively. \begin{figure}[t!] \centering \includegraphics[width=0.66\textwidth, bb = 0 0 584 458]{Stop_LSP.pdf} \caption{Exclusion plot for an LSP stop. The area above each curve is excluded by dijet searches. Additionally, the grey area is excluded by stop pair production with subsequent decay into four quarks.
As explained in the text, it applies equally to all $\lambda''_{3ij}$.} \label{fig:stop_lSP} \end{figure} \subsubsection{Neutralino LSP}\label{ssSec:stop_neu_lsp} If the LSP is a neutralino, additional phenomenological possibilities emerge. However, the stop is assumed to be right-handed, and as such couples only to the bino or the Higgsino-up. Therefore, we focus on two different possibilities: the LSP neutralino is essentially pure bino or essentially pure Higgsino-up. Naturally, for the Higgsino-up case, there is also an accompanying chargino with approximately the same mass. The next lightest neutralino or chargino is then taken to be heavier than the stop. This ensures that there are no cascade decays between neutralinos, which would complicate the possible decay topologies. This type of spectrum, while not necessarily the most general, allows us to investigate the parameter space in a fairly straightforward and intuitive manner. We mention here that throughout this section and the section in which we constrain squarks, \ref{ssSec:squark_neu_lsp}, we set $\tan\beta = 10$. In general, there are now three different possibilities for how the stop can decay: $\tilde{t}^* \rightarrow d_i d_j$, $\tilde{t}^* \rightarrow \bar{t} \chi^0$ or $\tilde{t}^* \rightarrow \bar{b} \chi^-$. The first decay mode occurs, as before, through the $\lambda''_{3ij}$ coupling. For the last two decay modes, $\chi^0$ refers to the lightest neutralino and $\chi^-$ is the lightest chargino. Of course, if the LSP is a bino neutralino, only the first two decays will have non-zero branching ratios. On the other hand, if the Higgsino-up neutralino is the LSP then all three decay modes will occur. For both cases, we compute the branching ratios for the stop into each of the possible final states. For example, figure \ref{fig:stop_BR_neu_lambda} presents the branching ratios for a $600$ GeV stop as a function of $\lambda''_{312}$, with the neutralino mass set to $200$ GeV.
As the figure shows, the stop decays mostly into dijets for $\lambda''_{312}$ of order one, while the other modes quickly begin to dominate for smaller values. \begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 458]{Stop_BR_neuB.pdf} \caption{Bino neutralino} \label{fig:stop_BR_neuB_lambda} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 458]{Stop_BR_neuHu.pdf} \caption{Higgsino-up neutralino} \label{fig:stop_BR_neuHu_lambda} \end{subfigure} \caption{Branching ratios for a $600$ GeV stop as a function of $\lambda''_{312}$. For both plots, the neutralino mass has been set to 200 GeV.}\label{fig:stop_BR_neu_lambda} \end{figure} We now discuss the decay modes for the neutralinos and charginos, starting with the latter. Along with the normal decay of the chargino into the neutralino and a $W$ boson, the $\lambda''_{3ij}$ coupling allows for an additional decay into three quarks through an off-shell stop. Specifically, this new decay is $\chi^- \rightarrow b \tilde{t}^* \rightarrow b d_i d_j$. For the type of spectrum under consideration, the splitting between the chargino and neutralino is quite small. It then follows that the decay of the chargino into a neutralino and an off-shell $W$ boson is highly phase space suppressed. As a result, for essentially all values of $\lambda''_{3ij}$ and stop masses considered in this analysis, the RPV decay for the chargino dominates. We explicitly checked this by computing the branching ratios for the chargino and confirmed that this is indeed the case. Unless otherwise stated, we consider the chargino to decay into three quarks with a branching ratio of one. Due to the $\lambda''_{3ij}$ coupling, the neutralino is also unstable and will decay into three quarks. This decay also occurs within the RPVMSSM but there is now an important difference.
As explained in section \ref{subsec:spectrum}, our model has Dirac neutralinos which split into two pseudo-Dirac states once the small $U(1)_R$ breaking is taken into account. For Dirac neutralinos, there is only a single decay mode, while for pseudo-Dirac neutralinos there are two. Specifically, the decay mode for Dirac neutralinos is $\chi^0 \rightarrow t \tilde{t}^* \rightarrow t d_i d_j$ (the antineutralino decay is $\bar{\chi}^0 \rightarrow \bar{t} \tilde{t} \rightarrow \bar{t} \bar{d}_i \bar{d}_j$). Pseudo-Dirac neutralinos can decay by $\chi^0 \rightarrow t \tilde{t}^* \rightarrow t d_i d_j$ or $\chi^0 \rightarrow \bar{t} \tilde{t} \rightarrow \bar{t} \bar{d}_i \bar{d}_j$. When the mass splitting, which is proportional to the scale of the $U(1)_R$ breaking, is larger than the width, the two decay modes of the pseudo-Dirac neutralinos become equally relevant. The neutralinos will then behave similarly to the standard Majorana neutralinos of the RPVMSSM. Conversely, for a mass splitting smaller than the width, the neutralino behaves as a purely Dirac state with a single decay mode. To demonstrate this feature, consider $600$ GeV stops decaying through approximately $200$ GeV binos with $\lambda''_{312} = 1$. In figure \ref{fig:stop_BR_neuB} we show the partial decay widths and corresponding branching ratios for the stop as a function of the bino Majorana mass term. The branching ratios for the two different decays become equal when the Majorana mass is about five times the decay width of the neutralinos. We thoroughly explore the parameter space and find equivalent behaviour for the opposite sign and same sign decay widths. However, we note that this result crucially depends on the neutralinos being produced on-shell. If the stops decay through off-shell neutralinos, then the propagators of the neutralinos are not inversely proportional to their widths.
In this case, the equality of branching ratios occurs when the mass splitting is comparable to the Dirac mass.\footnote{Although this discussion has been in terms of a bino LSP, similar results also hold for a Higgsino-up LSP. However, as a Majorana mass term for the Higgsino-up is not necessarily generated, the mass splitting between the two pseudo-Dirac Higgsino-up neutralinos results from a combination of Majorana gaugino masses for the bino and wino and mixing. For example, setting $\lambda''_{312} = 1$, $m_{\tilde{t}} = 600 \ \text{GeV}$, $\mu_u = 200 \ \text{GeV}$, $M_1^D = M_2^D = 10 \ \text{TeV}$ and $M_1$ and $M_2$ to their anomaly mediation masses, we find equal branching ratios for opposite sign and same sign tops resulting from stop decays for $m_{3/2} \gtrsim 7 \times 10^{-2} \ \text{GeV}$. If the Dirac masses for the bino and wino are lowered to $1 \ \text{TeV}$, then equal branching ratios occur for $m_{3/2} \gtrsim 3 \times 10^{-4} \ \text{GeV}$.} \begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 479]{Stop_width_neu_200.pdf} \caption{} \label{fig:stop_width_neuB_200} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 479]{Stop_BR_neu_200.pdf} \caption{} \label{fig:stop_BR_neuB_200} \end{subfigure} \caption{Partial decay widths (a) and branching ratios (b) for $600$ GeV stops decaying into opposite sign (OS) and same sign (SS) tops through approximately $200$ GeV bino neutralinos with $\lambda''_{312} = 1$.}\label{fig:stop_BR_neuB} \end{figure} To understand the phenomenological significance of this, suppose a $\tilde{t}^*$ is produced at the LHC and that it decays into a top and a neutralino. Then, for Dirac neutralinos, the final state quarks will always be $\bar{t} t d_i d_j$ while, for Majorana neutralinos, the final state quarks can either be $\bar{t} t d_i d_j$ or $\bar{t} \bar{t} \bar{d}_i \bar{d}_j$. 
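The transition between the Dirac-like and Majorana-like regimes can be illustrated with the time-integrated oscillation probability for an on-shell pseudo-Dirac state, $\chi = x^2/\left(2(1+x^2)\right)$ with $x = \Delta m/\Gamma$, the same functional form as in neutral meson mixing. This is only a rough sketch of the behaviour, not the full width computation behind figure \ref{fig:stop_BR_neuB}:

```python
def same_sign_fraction(delta_m, gamma):
    """Probability that an on-shell pseudo-Dirac neutralino decays through
    its 'oscillated' component (producing the same sign top), using the
    meson-mixing form chi = x^2 / (2 (1 + x^2)) with x = delta_m / gamma.
    Illustrative only: the analysis computes the actual partial widths."""
    x = delta_m / gamma
    return x * x / (2.0 * (1.0 + x * x))
```

In this form the same sign fraction is below $10^{-4}$ for $\Delta m = 0.01\,\Gamma$ (Dirac-like), reaches $25/52 \approx 0.48$ at $\Delta m = 5\,\Gamma$, and asymptotes to $1/2$ (Majorana-like), consistent with the branching ratios becoming roughly equal once the Majorana mass is about five times the width.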
Resonant stop production with Majorana neutralinos can lead to same sign tops whereas opposite sign tops are always produced for Dirac neutralinos. Same sign tops can potentially lead to same sign leptons, which is a powerful phenomenological signature for separating signal from background. In contrast, a final state of $\bar{t} t d_i d_j$ is difficult to distinguish from a background such as $t\bar{t}$ and jets. Similarly, stop pair production is also affected by whether the neutralino is Dirac or Majorana. If both stops decay into neutralinos, then a total of four tops will be produced. Dirac neutralinos will always result in two positively and two negatively charged tops. However, Majorana neutralinos will result in two positively and two negatively charged tops only half of the time. For the other half, three tops with the same sign will be produced, along with a single top with the opposite sign. Of note, the latter case has a larger probability of producing a same sign lepton pair. There is one more possible decay mode that we need to consider. If the Majorana mass term is large enough so that the mass splitting between the pseudo-Dirac neutralino states is non-negligible, then the decays $\chi_2^B \rightarrow \chi_1^B Z$ and $\chi_2^B \rightarrow \chi_1^B h$ potentially open up. In these decays the $Z$ and $h$ are off-shell for small mass splittings. However, for all neutralino masses and $\lambda''_{3ij}$ couplings considered in this analysis, the decay width for the neutralinos is relatively small (generally less than 1 GeV). As a result, only a modest Majorana mass term is needed to ensure that opposite sign and same sign tops are produced equally from stop decays. Thus, we make the following assumption.
If the $U(1)_R$ symmetry is broken, then the Majorana mass term is large enough such that the stops decay into opposite sign and same sign tops with equal branching ratios, while, at the same time, being small enough that the decays $\chi_2^B \rightarrow \chi_1^B Z$ and $\chi_2^B \rightarrow \chi_1^B h$ can be safely ignored. This also has the added benefit of making the analysis of the possible decay chains simpler. Finally, we note that under this assumption, the phenomenology of the MRSSM with a broken $U(1)_R$ symmetry is essentially identical to the RPVMSSM. Now that we have discussed the various decay modes for the stop, neutralino and chargino, we consider all processes involving stop production. First, consider the case where the $U(1)_R$ symmetry is strictly preserved. Then, enumerating all the possibilities, we get the following list: \begin{alignat*}{3} &(1) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \rightarrow d_i d_j \end{array} \quad \quad &&(2) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \rightarrow \bar{t} \chi^0 \rightarrow \bar{t} t d_i d_j \end{array} \quad \quad &&(3) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \rightarrow \bar{b} \chi^- \rightarrow \bar{b} b d_i d_j \end{array} \\[1.0ex] &(4) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \\ \phantom{p p} \rightarrow d_i d_j \bar{d}_i \bar{d}_j \end{array} \quad \quad &&(5) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow \bar{t} \chi^0 t \bar{\chi}^0 \\ \phantom{p p \rightarrow \tilde{t}^* \tilde{t}} \rightarrow \bar{t} t d_i d_j t \bar{t} \bar{d}_i \bar{d}_j \end{array} \quad \quad &&(6) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow \bar{b} \chi^- b \chi^+ \\ \phantom{p p \rightarrow \tilde{t}^* \tilde{t}} \rightarrow \bar{b} b d_i d_j b \bar{b} \bar{d}_i \bar{d}_j \end{array} \\[1.0ex] &(7) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow d_i d_j t \bar{\chi}^0 \\ \phantom{p p \rightarrow \tilde{t}^*
\tilde{t}} \rightarrow d_i d_j t \bar{t} \bar{d}_i \bar{d}_j \end{array} \quad \quad &&(8) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow d_i d_j b \chi^+ \\ \phantom{p p \rightarrow \tilde{t}^* \tilde{t}} \rightarrow d_i d_j b \bar{b} \bar{d}_i \bar{d}_j \end{array} \quad \quad &&(9) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow \bar{t} \chi^0 b \chi^+ \\ \phantom{p p \rightarrow \tilde{t}^* \tilde{t}} \rightarrow \bar{t} t d_i d_j b \bar{b} \bar{d}_i \bar{d}_j. \end{array} \end{alignat*} If, instead, the $U(1)_R$ symmetry is broken, then processes 2, 5, 7 and 9 need to be modified: \begin{alignat*}{3} &(2) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \rightarrow \bar{t} \chi^0 \rightarrow \begin{cases} \bar{t} t d_i d_j \\ \bar{t} \bar{t} \bar{d}_i \bar{d}_j \end{cases} \end{array} \quad \quad &&(5) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow \bar{t} \chi^0 t \chi^0 \rightarrow \begin{cases} \bar{t} t d_i d_j t \bar{t} \bar{d}_i \bar{d}_j \\ \bar{t} t d_i d_j t t d_i d_j \\ \bar{t} \bar{t} \bar{d}_i \bar{d}_j t \bar{t} \bar{d}_i \bar{d}_j \\ \bar{t} \bar{t} \bar{d}_i \bar{d}_j t t d_i d_j \end{cases} \end{array} \\[1.0ex] &(7) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow d_i d_j t \chi^0 \rightarrow \begin{cases} d_i d_j t \bar{t} \bar{d}_i \bar{d}_j \\ d_i d_j t t d_i d_j \end{cases} \end{array} \quad \quad &&(9) \ \begin{array}{l} p p \rightarrow \tilde{t}^* \tilde{t} \rightarrow \bar{t} \chi^0 b \chi^+ \rightarrow \begin{cases} \bar{t} t d_i d_j b \bar{b} \bar{d}_i \bar{d}_j \\ \bar{t} \bar{t} \bar{d}_i \bar{d}_j b \bar{b} \bar{d}_i \bar{d}_j. \end{cases} \end{array} \end{alignat*} Where appropriate, each process also includes its charge conjugated version. Processes 1 and 4 can be constrained by using the results for stop LSP (section \ref{ssSec:Stop_LSP}) with appropriate modifications to the branching ratios. 
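Given the three stop branching ratios, the relative weights of the pair production processes 4--9 follow from simple combinatorics; a minimal sketch (the branching ratio values in the example are arbitrary):

```python
def pair_process_fractions(br_dd, br_tchi, br_bchi):
    """Fractions of stop pair events falling into processes 4-9, given
    the branching ratios for stop -> d d, stop -> t chi0 and
    stop -> b chi+ (which must sum to one)."""
    assert abs(br_dd + br_tchi + br_bchi - 1.0) < 1e-9
    return {
        4: br_dd ** 2,              # both stops -> dijets
        5: br_tchi ** 2,            # both stops -> top + neutralino
        6: br_bchi ** 2,            # both stops -> bottom + chargino
        7: 2.0 * br_dd * br_tchi,   # mixed decays carry a factor of 2
        8: 2.0 * br_dd * br_bchi,
        9: 2.0 * br_tchi * br_bchi,
    }

fractions = pair_process_fractions(0.5, 0.3, 0.2)
```

For a bino LSP, `br_bchi` is simply zero and processes 6, 8 and 9 drop out.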
When presenting plots of the parameter space, the region ruled out by process 1 is referred to as the region excluded by dijet searches. Likewise, the region ruled out by process 4 is referred to as the region excluded by paired dijet searches. Further exclusion is possible if other types of experimental searches are considered. Our methodology for choosing which searches to recast is as follows. First, there have been several experimental searches featuring supersymmetric particles decaying through the $\lambda''_{3ij}$ couplings. We select three of the most recent searches of this variety. These are \cite{ATLAS-CONF-2016-037, ATLAS-CONF-2016-057, ATLAS-CONF-2016-094}, all of which are from ATLAS. Next, notice that many of the different possible final states contain either four tops or two same sign tops. We therefore examine searches that constrain these types of final states. This led us to choose searches \cite{ATLAS-CONF-2016-013, ATLAS-CONF-2016-032} from ATLAS and \cite{CMS-PAS-SUS-16-020} from CMS. The strategy of each search is briefly outlined in table \ref{table:searches}. The region of parameter space ruled out by these searches is referred to as the region excluded by neutralino LSP searches. \begin{table}[t!] 
\begin{center} \begin{tabular}{ |C{0.18\textwidth}|C{0.18\textwidth}|C{0.54\textwidth}| } \hline Collaboration & Search & Strategy \\ \hline ATLAS & \cite{ATLAS-CONF-2016-037} & \vspace{-9pt} \parbox{0.54\textwidth}{\centering 2 (potentially negative) same sign leptons,\\total number of leptons,\\jets with $p_T > 25$, $40$ or $50 $ GeV, b-jets,\\MET, $m_{\text{eff}} = \sum\limits_ {\substack{\text{jets}\\\text{leptons}}} {p_T} + \text{MET}$} \vspace{-3pt} \\ \hline ATLAS & \cite{ATLAS-CONF-2016-057} & \vspace{-9pt} \parbox{0.54\textwidth}{\centering large ($R = 1.0$) jets $J_i$,\\${{p_T}_{J_1}} > 440$ GeV, $|\Delta \eta_{J_1 J_2}| < 1.4$,\\$M_J^\Sigma = \sum\limits_{i=1}^4 m_{J_i}$, small ($R = 0.4$) b-jets} \vspace{-3pt} \\ \hline ATLAS & \cite{ATLAS-CONF-2016-094} & \vspace{-9pt} \parbox{0.54\textwidth}{\centering at least 1 lepton,\\jets with $p_T > 40$ or $60 $ GeV,\\b-jets with $p_T > 40$ or $60 $ GeV} \vspace{-3pt} \\ \hline ATLAS & \cite{ATLAS-CONF-2016-013} & \vspace{-9pt} \parbox{0.54\textwidth}{\centering exactly 1 lepton, jets, b-jets,\\mass-tagged jets = large ($R = 1.0$) jets with cuts, $m_{bb}^{\min\Delta R} = \text{invariant mass of closest b-jets}$,\\MET, MET + $M_T(\ell,\text{MET})$\\where $M_T = \text{transverse mass}$} \vspace{-3pt} \\ \hline ATLAS & \cite{ATLAS-CONF-2016-032} & \vspace{-9pt} \parbox{0.54\textwidth}{\centering 2 same sign leptons, jets, b-jets,\\MET, $H_T = \sum\limits_ {\substack{\text{jets}\\\text{leptons}}} {p_T}$} \vspace{-3pt} \\ \hline CMS & \cite{CMS-PAS-SUS-16-020} & \vspace{-9pt} \parbox{0.54\textwidth}{\centering 2 same sign leptons, jets, b-jets,\\$M_T^{\text{min}} = \min(M_T(\ell_1,\text{MET}), M_T(\ell_2,\text{MET}))$\\where $M_T = \text{transverse mass}$,\\MET, $H_T = \sum\limits_ {\text{jets}} {p_T}$} \vspace{-3pt} \\ \hline \end{tabular}% \caption{Neutralino LSP searches. 
For searches \cite{ATLAS-CONF-2016-057, ATLAS-CONF-2016-013}, we use FastJet \cite{Cacciari:2011ma, Cacciari:2005hq} for manipulating large ($R = 1$) jets. This mainly involves jet reclustering and jet trimming. Additionally, searches that feature missing transverse energy (MET) either have very lenient cuts on this quantity or also contain signal regions probing $R$-parity conserving (RPC) supersymmetry signatures.}\label{table:searches} \end{center} \end{table} The procedure used to recast these searches is similar to the procedure used to recast the dijet searches described above. First, the neutralino mass is set to 200 GeV and the stop mass is scanned between 200 and 1000 GeV. For all combinations, we use MadGraph, PYTHIA and Delphes to simulate 10000 events for each of the nine possible decay chains. This was done twice for processes 2, 5, 7 and 9, once with the $U(1)_R$ symmetry preserved and a second time with it broken. Code was implemented to simulate the cuts for each of the six searches. We verified our code by reproducing the results of each search with good accuracy. Using the simulated events, our code produced acceptances for every signal region described within each search. Then, within the stop mass and $\lambda''_{3ij}$ parameter space, the acceptances are combined with production cross sections and appropriate branching ratios to determine the number of expected signal events in each signal region. The 95\% CL upper limit for each signal region was then determined. Searches \cite{ATLAS-CONF-2016-037, ATLAS-CONF-2016-057, ATLAS-CONF-2016-094} explicitly provided these upper limits. Conversely, searches \cite{ATLAS-CONF-2016-013, ATLAS-CONF-2016-032, CMS-PAS-SUS-16-020} did not, and so we calculated the upper limits using the $CL_S$ technique \cite{Read:2002hq, Junk:1999kv}. A point in parameter space is then excluded if the expected number of signal events in any of the signal regions exceeds its upper limit.
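The $CL_S$ prescription for a single counting experiment can be sketched as follows. This is a deliberately simplified version (one signal region, pure Poisson statistics, no systematic uncertainties), whereas the actual recast folds in the reported backgrounds and uncertainties of each signal region:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu ** k / math.factorial(k)
               for k in range(n + 1))

def cl_s(s, b, n_obs):
    """CLs = CL_{s+b} / CL_b for expected signal s, background b and
    observed count n_obs."""
    return poisson_cdf(n_obs, s + b) / poisson_cdf(n_obs, b)

def cls_upper_limit(b, n_obs, cl=0.95, tol=1e-6):
    """Smallest expected signal excluded at the given CL, found by
    bisection (CLs is monotonically decreasing in s)."""
    lo, hi = 0.0, 10.0 * (n_obs + b) + 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cl_s(mid, b, n_obs) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For zero observed events this reduces to the textbook $s_{95} = \ln 20 \approx 3.0$ for any background, and for three observed events with no background it reproduces the familiar $s_{95} \approx 7.75$.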
The region of small $\lambda''_{3ij}$ can additionally be constrained by searches for displaced vertices. The efficiency for reconstructing a single displaced vertex is to good approximation only a function of the mass $m$ and decay length $c\tau$ of the particle involved. We make the further approximation that this function, which we call $f(m,c\tau)$, can be factorized as $f_1(m) f_2(c\tau)$. This is justified by the results of Ref$.$ \cite{Cui:2014twa}, which presents upper limits on the cross section for pair production of hadronically decaying neutralinos as a function of their mass for a fixed decay length. These results are based on the CMS search \cite{CMS:2014wda}. The function $f_1(m)$ can be read from Ref$.$ \cite{Cui:2014twa} up to a multiplicative factor that can be absorbed in $f_2(c\tau)$. The latter function can be extracted from Ref$.$ \cite{Liu:2015bma}, which presents exclusion limits on displaced vertices for a Higgsino LSP in the parameter space of Higgsino mass and decay length. These results are based on the same search and assume that the charged Higgsinos decay promptly to the almost degenerate neutral one. The Higgsinos are again assumed to be pair produced, with the lightest state decaying hadronically. Knowing the cross section involved and the upper limit on the signal, $f_2(c\tau)$ can be reconstructed everywhere except for very short and very long decay lengths. In these regions, the efficiency decreases exponentially as expected and we extrapolate this behaviour. This allows for a complete reconstruction of $f(m,c\tau)$, which is shown in figure \ref{DV:f1f2}. Next, note that the displaced vertices in our model result from neutralino decays. We consider both neutralino pair production and neutralinos produced from stop decays. Furthermore, in this part of the parameter space, we assume the Higgsino-up charginos decay dominantly into neutralinos. This is in contrast to our previous benchmark points where the RPV decay for charginos was assumed.
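The factorized efficiency and its use for pair produced neutralinos can be sketched as below. Both $f_1$ and $f_2$ here are placeholder shapes chosen only to mimic the qualitative behaviour (a plateau with falling efficiency at short and long decay lengths); they are not the curves extracted from Refs$.$ \cite{Cui:2014twa, Liu:2015bma}:

```python
import math

def f1(m):
    """Placeholder mass dependence: efficiency saturating at high mass."""
    return m / (m + 200.0)

def f2(ctau, r_in=0.01, r_out=0.5):
    """Placeholder decay-length dependence: probability that the decay
    occurs inside a fiducial radial range [r_in, r_out] (in metres),
    which falls off at both very short and very long decay lengths."""
    return math.exp(-r_in / ctau) - math.exp(-r_out / ctau)

def single_vertex_efficiency(m, ctau):
    """Factorized approximation f(m, ctau) = f1(m) * f2(ctau)."""
    return f1(m) * f2(ctau)

def event_efficiency(m, ctau):
    """For pair-produced neutralinos: probability that at least one of
    the two displaced decays is reconstructed."""
    f = single_vertex_efficiency(m, ctau)
    return 1.0 - (1.0 - f) ** 2
```

A limit then follows by comparing the production cross section times the event efficiency to the upper limit on the signal yield.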
For small values of $\lambda''_{3ij}$ the charginos will decay into neutralinos provided that the spectrum is not too degenerate. A large enough splitting can easily be generated provided that the wino is not exceptionally heavy. To compute the cross sections for Higgsino-up and bino pair production we use Prospino. The bino cross section depends on the masses of the first and second generations of squarks. We consider two different cases. For the first case, we decouple the squarks by setting their masses to 10 TeV. For the second case, we set their masses to 1 TeV. Combining these cross sections with the known $f(m,c\tau)$, limits from displaced vertices can easily be applied to our parameter space, once again using Ref$.$ \cite{CMS:2014wda}. \begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 455]{f1.pdf} \caption{} \label{fig:f1} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 455]{f2.pdf} \caption{} \label{fig:f2} \end{subfigure} \caption{The efficiency for reconstructing a single vertex $f(m,c\tau)$ for (a) fixed $m = 1$ TeV and (b) fixed $c\tau = 0.1$ m for the CMS search \cite{CMS:2014wda}.}\label{DV:f1f2} \end{figure} Combining all types of constraints discussed above, we present exclusion plots within the stop mass and $\lambda''_{3ij}$ parameter space. Figures \ref{fig:stop_exclusion_Bino_U1R} and \ref{fig:stop_exclusion_Hu_U1R} show the regions excluded when the $U(1)_R$ symmetry is strictly preserved, for a bino LSP and a Higgsino-up LSP, respectively. Similarly, figures \ref{fig:stop_exclusion_Bino_U1R_B} and \ref{fig:stop_exclusion_Hu_U1R_B} show the regions excluded when the $U(1)_R$ symmetry is broken, again for a bino LSP and a Higgsino-up LSP, respectively. Notice that the limits coming from the neutralino LSP searches (green area) do not extend into the smallest values of $\lambda''_{3ij}$ shown in the plots.
These searches rely on promptly decaying particles and so we conservatively cut off their exclusion capabilities when the neutralino's decay length becomes longer than 1 mm \cite{Liu:2015bma}. The green area excluded for $\lambda''_{3ij} \lesssim 0.1$ mostly results from stop pair production with subsequent decay into neutralinos. Larger stop masses are excluded for the bino than for the Higgsino-up in this range of $\lambda''_{3ij}$ coupling because there is no competing chargino decay. The green area for $\lambda''_{3ij} \gtrsim 0.1$ is mostly excluded by resonant stop production with subsequent decay through neutralinos. Note that considerably more of this parameter space is excluded when the $U(1)_R$ symmetry is broken. This is largely due to the production of same sign tops, which is absent when the $U(1)_R$ symmetry is preserved. Another interesting feature for this part of the parameter space is that approximately equal areas are excluded for $\lambda''_{312}$ and $\lambda''_{313}$. The reason for this is that while the cross section for resonant stop production is smaller for $\lambda''_{313}$, the efficiencies are generally larger than for $\lambda''_{312}$ due to the production of extra bottom quarks. These two effects are seen to approximately compensate one another. \begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_Stop_NeuB_200_312.pdf} \label{fig:stop_exclusion_U1R_bino_312} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_Stop_NeuB_200_313.pdf} \label{fig:stop_exclusion_U1R_bino_313} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_Stop_NeuB_200_323.pdf} \label{fig:stop_exclusion_U1R_bino_323} \end{subfigure} \caption{Exclusion plots for a 200 GeV bino neutralino LSP with the $U(1)_R$ symmetry strictly preserved.
The grey region on the left side of the plots ($m_{\tilde{t}} \lesssim 375$ GeV) is excluded by paired dijet searches. Next, consider the middle region of the plots. Starting from large $\lambda''_{3ij}$ couplings and working downwards, the blue region is excluded by dijet searches, the green region is excluded by neutralino LSP searches and the red regions are excluded by displaced vertices searches. Bino pair production, which contributes to the displaced vertices limits, depends on the masses of the first and second generations of squarks. Setting the masses of these squarks to 10 TeV results in the darker red region being excluded. Instead, setting the masses of these squarks to 1 TeV excludes both the darker red and lighter red regions.}\label{fig:stop_exclusion_Bino_U1R} \end{figure} \begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_Stop_NeuHu_200_312.pdf} \label{fig:stop_exclusion_U1R_Higgsino_up_312} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_Stop_NeuHu_200_313.pdf} \label{fig:stop_exclusion_U1R_Higgsino_up_313} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_Stop_NeuHu_200_323.pdf} \label{fig:stop_exclusion_U1R_Higgsino_up_323} \end{subfigure} \caption{Exclusion plots for a 200 GeV Higgsino-up neutralino LSP with the $U(1)_R$ symmetry strictly preserved. The grey region on the left side of the plots ($m_{\tilde{t}} \lesssim 205$ GeV) is excluded by paired dijet searches. Next, consider the middle region of the plots. Starting from large $\lambda''_{3ij}$ couplings and working downwards, the blue region is excluded by dijet searches, the green region is excluded by neutralino LSP searches and the red region is excluded by displaced vertices searches.}\label{fig:stop_exclusion_Hu_U1R} \end{figure} \begin{figure}[t!] 
\centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_B_Stop_NeuB_200_312.pdf} \label{fig:stop_exclusion_U1R_B_bino_312} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_B_Stop_NeuB_200_313.pdf} \label{fig:stop_exclusion_U1R_B_bino_313} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_B_Stop_NeuB_200_323.pdf} \label{fig:stop_exclusion_U1R_B_bino_323} \end{subfigure} \caption{Exclusion plots for a 200 GeV bino neutralino LSP with the $U(1)_R$ symmetry broken. The grey region on the left side of the plots ($m_{\tilde{t}} \lesssim 375$ GeV) is excluded by paired dijet searches. Next, consider the middle region of the plots. Starting from large $\lambda''_{3ij}$ couplings and working downwards, the blue region is excluded by dijet searches, the green region is excluded by neutralino LSP searches and the red regions are excluded by displaced vertices searches. Bino pair production, which contributes to the displaced vertices limits, depends on the masses of the first and second generations of squarks. Setting the masses of these squarks to 10 TeV results in the darker red region being excluded. Instead, setting the masses of these squarks to 1 TeV excludes both the darker red and lighter red regions.}\label{fig:stop_exclusion_Bino_U1R_B} \end{figure} \begin{figure}[t!] 
\centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_B_Stop_NeuHu_200_312.pdf} \label{fig:stop_exclusion_U1R_B_Higgsino_up_312} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_B_Stop_NeuHu_200_313.pdf} \label{fig:stop_exclusion_U1R_B_Higgsino_up_313} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{U1R_B_Stop_NeuHu_200_323.pdf} \label{fig:stop_exclusion_U1R_B_Higgsino_up_323} \end{subfigure} \caption{Exclusion plots for a 200 GeV Higgsino-up neutralino LSP with the $U(1)_R$ symmetry broken. The grey region on the left side of the plots ($m_{\tilde{t}} \lesssim 205$ GeV) is excluded by paired dijet searches. Next, consider the middle region of the plots. Starting from large $\lambda''_{3ij}$ couplings and working downwards, the blue region is excluded by dijet searches, the green region is excluded by neutralino LSP searches and the red region is excluded by displaced vertices searches.}\label{fig:stop_exclusion_Hu_U1R_B} \end{figure} Figures \ref{fig:stop_exclusion_Bino_U1R_B} and \ref{fig:stop_exclusion_Hu_U1R_B}, with the $U(1)_R$ symmetry broken, are similar to figures shown in Ref$.$ \cite{Monteux:2016gag}. (The figures within Ref$.$ \cite{Monteux:2016gag} are presented in an RPVMSSM context. However, as previously noted, the phenomenology of the MRSSM with the $U(1)_R$ symmetry broken is nearly identical to the RPVMSSM.) Our neutralino LSP searches exclude larger amounts of the parameter space than the corresponding searches considered by Ref$.$ \cite{Monteux:2016gag}. This is simply because we use more recent and, in particular, 13 TeV experimental searches. Our exclusion regions for displaced vertices searches are, on the other hand, significantly smaller than those of Ref$.$ \cite{Monteux:2016gag}.
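The reinterpretation of the displaced vertices limits used above can be sketched with a short numerical example. We assume, purely for illustration, that an event with two pair-produced long-lived neutralinos is selected if at least one vertex is reconstructed, each vertex independently with the single-vertex efficiency $f(m,c\tau)$ of figure \ref{DV:f1f2}; the efficiency shape, cross section, luminosity and signal upper limit below are placeholder values, not those extracted from the CMS search or used in our scans.

```python
import math

def single_vertex_efficiency(ctau_m, f_peak=0.3, ctau_peak_m=0.03):
    """Toy stand-in for the single-vertex efficiency f(m, ctau):
    a log-normal-shaped bump peaking at ctau_peak_m (placeholder
    shape, not the parametrization of the CMS search)."""
    x = math.log(ctau_m / ctau_peak_m)
    return f_peak * math.exp(-0.5 * x * x)

def expected_vertices(sigma_pb, lumi_ifb, f):
    """Expected number of events with at least one reconstructed
    displaced vertex, for pair-produced long-lived particles each
    reconstructed independently with efficiency f."""
    event_efficiency = 1.0 - (1.0 - f) ** 2
    return sigma_pb * 1e3 * lumi_ifb * event_efficiency  # 1 pb = 10^3 fb

# One point of a (mass, lambda'') scan: placeholder cross section,
# luminosity and 95% CL upper limit on signal events.
f = single_vertex_efficiency(ctau_m=0.1)
n_expected = expected_vertices(sigma_pb=0.01, lumi_ifb=18.5, f=f)
excluded = n_expected > 3.0
```

Repeating this for every point of the mass and $\lambda''_{3ij}$ grid, with $c\tau$ computed from the corresponding decay width, gives the schematic shape of the red exclusion regions.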
\subsection{Placing limits on first and second generation squarks}\label{sSec:squarks} \subsubsection{Squark production}\label{ssSec:squark_production} As previously mentioned, the label squarks refers only to the first and second generations. Explicitly, we consider the states $\tilde{d}_L$, $\tilde{u}_L$, $\tilde{s}_L$, $\tilde{c}_L$, $\tilde{d}_R$, $\tilde{u}_R$, $\tilde{s}_R$ and $\tilde{c}_R$ and their charge conjugates. The squarks are taken to be mass degenerate. As we are considering the $\lambda''_{3ij}$ couplings, squarks can only be produced in pairs. In general, squark pairs are produced either from initial state gluons or initial state quarks with a t-channel gluino propagator. (We again ignore potential squark production involving two $\lambda''_{3ij}$ couplings.) However, due to the Dirac nature of the gluino, some of the production mechanisms present in the MSSM are forbidden within the MRSSM. In particular, diagrams which require a gluino Majorana mass insertion are forbidden \cite{Heikinheimo:2011fk,Kribs:2012gx}. This prevents the production of $\tilde{q}_L \tilde{q}_L$, $\tilde{q}_R \tilde{q}_R$ and $\tilde{q}_L \tilde{q}_R^*$ and their charge conjugates. Additionally, breaking the $U(1)_R$ symmetry with a small Majorana gluino mass only reintroduces the forbidden diagrams at a negligible level. There is no enhancement comparable to stops decaying into same sign tops. As noted above, this enhancement requires the neutralino from the stop decay to be produced on-shell. Here, the four-momentum of the t-channel gluino is spacelike and thus the gluino is never on-shell. As a result, we are interested in the production of $\tilde{q}_L \tilde{q}_L^*$, $\tilde{q}_R \tilde{q}_R^*$, $\tilde{q}_L \tilde{q}_R$ and $\tilde{q}_L^* \tilde{q}_R^*$. We use MadGraph to calculate the LO cross sections for these final states.
To estimate higher order effects, we use NNLL-fast \cite{Beenakker:2016lwe, Beenakker:1996ch, Kulesza:2008jb, Kulesza:2009kq, Beenakker:2009ha, Beenakker:2011sf, Beenakker:2013mva, Beenakker:2014sma} to compute MSSM K-factors for squark-antisquark and squark-squark production as a function of the mass of the squarks. The gluino mass is set to 2 TeV for both steps. Below, we present plots using both the LO and the MSSM K-factor improved cross sections. \subsubsection{Neutralino LSP}\label{ssSec:squark_neu_lsp} To avoid long lived squarks, we require a neutralino LSP. We again only consider a bino neutralino or a Higgsino-up neutralino. Furthermore, we consider the stop heavier than the squarks but light enough so that the neutralinos and charginos decay promptly. Then, the possible decay chains are: \begin{alignat*}{3} &(1) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \\ \phantom{p p} \rightarrow d_{i/j} t \bar{d}_{i/j} \bar{t} \end{array} \quad &&(2) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow \bar{q} \chi^0 q \bar{\chi}^0 \\ \phantom{p p \rightarrow \tilde{q}^* \tilde{q}} \rightarrow \bar{q} t d_i d_j q \bar{t} \bar{d}_i \bar{d}_j \end{array} \quad &&(3) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow \bar{q}' \chi^+ q' \chi^- \\ \phantom{p p \rightarrow \tilde{q}^* \tilde{q}} \rightarrow \bar{q}' \bar{b} \bar{d}_i \bar{d}_j q' b d_i d_j \end{array} \\[1.0ex] &(4) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow d_{i/j} t q \bar{\chi}^0 \\ \phantom{p p \rightarrow \tilde{q}^* \tilde{q}} \rightarrow d_{i/j} t q \bar{t} \bar{d}_i \bar{d}_j \end{array} \quad &&(5) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow d_{i/j} t q' \chi^- \\ \phantom{p p \rightarrow \tilde{q}^* \tilde{q}} \rightarrow d_{i/j} t q' b d_i d_j \end{array} \quad &&(6) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow \bar{q} \chi^0 q' \chi^- \\ \phantom{p p \rightarrow \tilde{q}^* 
\tilde{q}} \rightarrow \bar{q} t d_i d_j q' b d_i d_j. \end{array} \end{alignat*} The decays involving the $\lambda''_{3ij}$ coupling can only occur for squarks $\tilde{d}_R$ and $\tilde{s}_R$ and their charge conjugates. The notation $d_{i/j}$ stands for $d_i$ or $d_j$. Whether the final state is $d_i$ or $d_j$ depends on which squark is decaying and which one of the $\lambda''_{3ij}$ is non-zero. The other squarks are required to decay through either a neutralino or chargino. If, instead, the $U(1)_R$ symmetry is broken, then processes 2, 4 and 6 are modified: \begin{alignat*}{2} &(2) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow \bar{q} \chi^0 q \chi^0 \rightarrow \begin{cases} \bar{q} t d_i d_j q \bar{t} \bar{d}_i \bar{d}_j \\ \bar{q} t d_i d_j q t d_i d_j \\ \bar{q} \bar{t} \bar{d}_i \bar{d}_j q \bar{t} \bar{d}_i \bar{d}_j \\ \bar{q} \bar{t} \bar{d}_i \bar{d}_j q t d_i d_j \end{cases} \end{array} \quad \quad &&(4) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow d_{i/j} t q \chi^0 \rightarrow \begin{cases} d_{i/j} t q \bar{t} \bar{d}_i \bar{d}_j \\ d_{i/j} t q t d_i d_j \end{cases} \end{array} \\[1.0ex] &(6) \ \begin{array}{l} p p \rightarrow \tilde{q}^* \tilde{q} \rightarrow \bar{q} \chi^0 q' \chi^- \rightarrow \begin{cases} \bar{q} t d_i d_j q' b d_i d_j \\ \bar{q} \bar{t} \bar{d}_i \bar{d}_j q' b d_i d_j. \end{cases} \end{array} \end{alignat*} Note that these are very similar to the decay chains for stop pair production. However, a major difference is that at most two tops are produced, whereas stop pair production could result in four tops. If the $U(1)_R$ symmetry is broken, then production of two same sign leptons can still potentially take place, but now with a much lower probability than in the stop scenario. A nearly identical procedure to the one described above is used to constrain the parameter space. Here, we scan the squark mass and neutralino mass parameter space simulating each of the decay chains above.
The acceptances are once again determined for the neutralino LSP searches of table \ref{table:searches}. We then compute the branching ratios for each of the squarks. Combining the cross sections, branching ratios and acceptances, we produce exclusion curves within the squark mass and neutralino mass parameter space. Figures \ref{fig:squark_exclusion_neuB} and \ref{fig:squark_exclusion_neuHu} present the exclusion curves for bino and Higgsino-up LSP, respectively. For each plot, the $\lambda''_{3ij}$ have been set to one. Note that curves are shown only for the case in which the $U(1)_R$ symmetry is preserved. As previously noted, breaking the symmetry only introduces a small probability of producing a same sign lepton pair. Consequently, the searches that require same sign leptons are less constraining than the searches that require multiple jets. The ATLAS search \cite{ATLAS-CONF-2016-094}, which does not rely on a same sign lepton pair, dominates the entire exclusion curve whether the $U(1)_R$ symmetry is preserved or broken. As a result, the exclusion curves for the two cases are the same. Additionally, the cases $\lambda''_{313}$ and $\lambda''_{323}$ are presented together. The only difference between these two cases is the production of down versus strange quarks, which is irrelevant to the searches involved. An interesting feature is that the excluded region prefers large neutralino masses. This follows from the decay chains as most of the final state quarks come from decaying neutralinos or charginos. This is in contrast to RPC MSSM searches with decaying squarks, which exclude light neutralino masses preferentially (see figure 11(a) of the ATLAS search \cite{ATLAS-CONF-2016-078} for example). Finally, these limits on squark production are presented in an MRSSM framework. However, the exclusion curves can also be seen as lower limits for the RPVMSSM as the major difference is the exclusion of some of the possible production cross sections.
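The combination of cross sections, branching ratios and acceptances described above can be illustrated schematically. Each produced squark decays through one of the available modes, and the weight of a given chain is the product of the two branching ratios. The numbers below are illustrative placeholders, not the branching ratios computed in our scan, and for brevity a common table is used for all squarks even though the direct $\lambda''_{3ij}$ mode is only open for $\tilde{d}_R$ and $\tilde{s}_R$.

```python
from itertools import product

# Illustrative per-squark branching ratios (placeholders): a squark can
# decay directly through lambda''_3ij, or through a neutralino or chargino.
BR = {"direct": 0.2, "neutralino": 0.5, "chargino": 0.3}

# Weight of each two-squark decay chain = product of the branching ratios.
chain_weights = {(a, b): BR[a] * BR[b] for a, b in product(BR, repeat=2)}

# The chain weights exhaust all decay possibilities of the squark pair.
total_weight = sum(chain_weights.values())

def excluded_cross_section(limit_fb, weight, acceptance):
    """Smallest pair production cross section (fb) excluded by a search
    with visible cross section limit limit_fb, for a chain with the
    given weight and search acceptance."""
    return limit_fb / (weight * acceptance)
```

Comparing the smallest excluded cross section over all chains and searches with the (K-factor improved) production cross section at each mass point then traces out the exclusion curve.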
\begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 459]{Squark_NeuB_312_lambdapp1.pdf} \caption{$\lambda''_{312}$} \label{fig:squark_exclusion_neuB_312} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{Squark_NeuB_323_lambdapp1.pdf} \caption{$\lambda''_{313} \ / \ \lambda''_{323}$} \label{fig:squark_exclusion_neuB_323} \end{subfigure} \caption{Exclusion curves within the neutralino-squark mass parameter space for a bino neutralino. Figure (a) presents $\lambda''_{312} = 1$ while (b) presents either $\lambda''_{313} = 1$ or $\lambda''_{323} = 1$. The dashed green curves assume leading order squark production whereas the solid blue curves include MSSM K-factors.}\label{fig:squark_exclusion_neuB} \end{figure} \begin{figure}[t!] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 459]{Squark_NeuHu_312_lambdapp1.pdf} \caption{$\lambda''_{312}$} \label{fig:squark_exclusion_neuHu_312} \end{subfigure} ~ \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 584 451]{Squark_NeuHu_323_lambdapp1.pdf} \caption{$\lambda''_{313} \ / \ \lambda''_{323}$} \label{fig:squark_exclusion_neuHu_323} \end{subfigure} \caption{Exclusion curves within the neutralino-squark mass parameter space for a Higgsino-up neutralino. Figure (a) presents $\lambda''_{312} = 1$ while (b) presents either $\lambda''_{313} = 1$ or $\lambda''_{323} = 1$. The dashed green curves assume leading order squark production whereas the solid blue curves include MSSM K-factors.}\label{fig:squark_exclusion_neuHu} \end{figure} \section{Breaking of \texorpdfstring{$U(1)_R$}{U(1)R} baryon number as an explanation of baryogenesis}\label{Sec:Baryogenesis} Supersymmetry with a $U(1)_R$ baryon number presents a unique possibility to address baryogenesis.
Indeed, the $U(1)_R$ symmetry is broken by the gravitino mass and the breaking could be transmitted to the SM sector through anomaly mediation or through Planck scale suppressed operators. This signifies a breaking of baryon number conservation, which can potentially lead to a matter-antimatter asymmetry. In this section, we discuss whether this can lead to successful baryogenesis. Essentially, the Dirac bino is split into two Majorana gauginos by the introduction of Majorana masses. The resulting binos decouple early and decay out of equilibrium. The heavier bino presents an asymmetry in its decays to baryons and antibaryons, while the lighter one can exhibit similar behaviour in the presence of light gluinos. This leads to a net baryon density. For the mechanism to be successful, a Mini-Split \cite{Arvanitaki:2012ps} spectrum is however required. The mechanism is therefore similar to previous works on baryogenesis within a Mini-Split scenario \cite{Cui:2013bta, Arcadi:2013jza, Arcadi:2015ffa}. See also \cite{Cui:2016rqt} for the LHC phenomenology of such models. This section is organized as follows. We first discuss the impact of $U(1)_R$ breaking on the gaugino sector and the assumptions on the parameter space. We then calculate the decay widths of the binos and their scattering cross sections. We finally discuss our calculation of the baryon relic density and present some results. \subsection{\texorpdfstring{$U(1)_R$}{U(1)R} breaking} The $U(1)_R$ breaking manifests itself in the gaugino sector mainly via the introduction of Majorana masses which modify the mass eigenstate structure of the gauginos. We consider the effect of this for both binos and gluinos.
For binos, the mass Lagrangian becomes: \begin{equation}\label{BG:Eq:Masses1} \mathcal{L}_{\text{masses}}=-\frac{1}{2}\left(\begin{matrix} \tilde{B} & \tilde{S} \end{matrix}\right)\left(\begin{matrix} M_1 & M_1^D \\ M_1^D & \rho_1 \end{matrix}\right)\left(\begin{matrix} \tilde{B} \\ \tilde{S} \end{matrix}\right)+\text{h.c.}, \end{equation} where $\rho_1$ is the singlino Majorana mass which we add for generality's sake. The Majorana masses cause the Dirac bino to split into two Majorana particles of different masses. We label the lighter one by $\chi_1^B$ and the heavier by $\chi_2^B$. We refer to their masses as $m_1^B$ and $m_2^B$ respectively. The bino and the singlino will then be linear combinations of the mass eigenstates: \begin{equation}\label{BG:Eq:Masses3} \begin{aligned} \tilde{B} &=a_1 \chi^B_{1}+a_2 \chi^B_{2},\\ \tilde{S} &=b_1 \chi^B_{1}+b_2 \chi^B_{2}. \end{aligned} \end{equation} For convenience, we refer to $\chi_1^B$ and $\chi_2^B$ as binos when the context is clear. Similar results hold for gluinos, where masses are instead labeled as $M_3^D$, $M_3$ and $\rho_3$. The result is two eigenstates $\chi_1^g$ and $\chi_2^g$ of mass $m_1^g$ and $m_2^g$ respectively. They are related to the gauge eigenstates by: \begin{equation}\label{BG:Eq:Masses4} \begin{aligned} \tilde{g} &=c_1 \chi^g_{1}+c_2 \chi^g_{2},\\ \tilde{O} &=d_1 \chi^g_{1}+d_2 \chi^g_{2}. \end{aligned} \end{equation} Other $U(1)_{R}$ breaking terms could also potentially affect our results. First, $A$-terms could be introduced, but their effects are typically suppressed by the scalar masses which are assumed large in Mini-Split. Even if they were important, they would not spoil the mechanism and could in fact be used for generating additional baryon asymmetry. Second, the $\mu$-term of the MSSM could reappear in the superpotential.
As will be further discussed in section \ref{sSec:BinoDecay}, this would spoil a mechanism that allows for the Higgsinos to be lighter than in simple Mini-Split versions of the MSSM. The $\mu$-term can however be naturally small as it is a coefficient in the superpotential. Finally, the most dangerous possibility is soft-terms that mix Higgses such as $H_u R_d+\text{h.c.}$ and $H_u^\dagger R_u+\text{h.c.}$. These terms can lead to the lightest Higgs containing parts of $R_u$ and $R_d$, which would also reintroduce the need for heavy Higgsinos. This effect can however be suppressed by $R_u$ and $R_d$ having large soft masses, which we assume from now on to be the case. The only $U(1)_R$ breaking that we consider is then the Majorana masses. One property of anomaly mediation worth mentioning is that the problematic terms are either not generated or are suppressed. \subsection{Assumptions on the parameter space} In the mechanism that we consider, the baryon asymmetry originates from the decay of the binos to three quarks through the $\lambda''_{ijk}$ coupling. However, the Nanopoulos-Weinberg theorem \cite{Nanopoulos:1979gx} states that, if a particle is to exhibit an asymmetry in its decay to baryons and antibaryons, it must also be able to decay via a baryon number conserving channel. In our case, this corresponds to decays to quarks and lighter gauginos. Nonetheless, it is necessary for the baryon number breaking decays to dominate. Otherwise, the baryon asymmetry will be suppressed by a small branching ratio. This will require some of the $\lambda''_{ijk}$ to be of $\mathcal{O}(0.1)$ or more. As explained in section \ref{sSec:BoundsOnLambdapp}, this is only possible for a few of them, though the fact that the optimal region of parameter space that we will obtain corresponds to a Mini-Split spectrum relaxes the constraints on several $\lambda''_{ijk}$. As such, we assume that a single $\lambda''_{ijk}$ is non-zero and refer to it as $\lambda''$.
The generalization to several non-zero $\lambda''_{ijk}$'s is trivial. We refer to the associated up quark as $u_1$ and the associated down quarks as $d_1$ and $d_2$. We choose to concentrate on the case of a single relevant right-handed sdown-type squark and take it to be $\tilde{d}_2$. Analytical results can easily be converted to multiple sdown-type squarks or right-handed sup-type squarks. We also assume that $\tilde{d}_2$ is considerably heavier than the binos. This is necessary for two reasons. First, binos are required to decay after they decouple to avoid washout effects. This is simply stating that the decay must be out of equilibrium to satisfy the Sakharov conditions \cite{Sakharov:1967dj}. We will use this fact to calculate would-be relic number densities of binos, which correspond to what the relic densities would be if the binos were stable. Second, the binos are also required to decouple early. This is because the baryon number density coming from bino decays is several orders of magnitude smaller than the bino would-be relic number density, so obtaining a sufficiently large would-be abundance requires early decoupling. We define $x=m_2^B/T$ and label the value of $x$ around which the bino decouples by $x_f$. A viable baryogenesis will typically require $x_f<5$ \cite{Cui:2013bta}. Additionally, we assume that winos are heavy. We also consider Higgsinos to be heavy, but not so heavy as to be irrelevant. This opens a decay channel to Higgses. Finally, we assume for simplicity's sake that $\chi_2^B$ is considerably heavier than the electroweak scale. \subsection{Bino decays}\label{sSec:BinoDecay} We now proceed to list the widths associated with all $\chi_1^B$ and $\chi_2^B$ decay channels. We neglect the mass of all quarks.
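The coefficients $a_i$ and $b_i$ entering the widths below follow from a Takagi diagonalization of the complex symmetric mass matrix of eq$.$ (\ref{BG:Eq:Masses1}). A minimal numerical sketch, for arbitrary illustrative mass values and assuming non-degenerate eigenvalues, also verifies the property discussed below that the CP phase $\sin\phi=\text{Im}\{a_1^{*2}a_2^2\}/|a_1 a_2|^2$ vanishes when the Majorana mass $\rho_1$ is set to zero:

```python
import numpy as np

def takagi(M):
    """Takagi diagonalization of a complex symmetric matrix M:
    returns (masses, U) with U^T M U = diag(masses) and masses real,
    non-negative and sorted. Assumes non-degenerate singular values."""
    _, W = np.linalg.eigh(M @ M.conj().T)
    T = W.conj().T @ M @ W.conj()          # diagonal for symmetric M
    t = np.diag(T)
    U = W.conj() @ np.diag(np.exp(-0.5j * np.angle(t)))
    return np.abs(t), U

def sin_phi(M):
    """CP phase sin(phi) = Im{a1*^2 a2^2} / |a1 a2|^2, with a_i the
    bino components of the mass eigenstates (first row of U)."""
    _, U = takagi(M)
    a1, a2 = U[0, 0], U[0, 1]
    return float(np.imag(np.conj(a1) ** 2 * a2 ** 2) / abs(a1 * a2) ** 2)

M1D = 1.0  # Dirac mass; all values illustrative
M_generic = np.array([[0.1 * np.exp(0.7j), M1D], [M1D, 0.05 * np.exp(1.2j)]])
M_rho1_zero = np.array([[0.1 * np.exp(0.7j), M1D], [M1D, 0.0]])

masses, U = takagi(M_generic)
sphi_generic = sin_phi(M_generic)      # non-zero for generic phases
sphi_rho1_zero = sin_phi(M_rho1_zero)  # vanishes: phases can be rotated away
```

With $\rho_1=0$ the phases of $M_1$ and $M_1^D$ can be absorbed by field redefinitions, so the computed $\sin\phi$ is zero to machine precision.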
\subsubsection{Baryon breaking decay at tree-level} The binos can decay to three quarks with a tree-level decay width of: \begin{equation}\label{BG:Eq:DecayWidthRPV} \Gamma_{\chi^B_{i}\to u_1 d_1 d_2}=\frac{g'^2Y_d^2|a_i \lambda''|^2}{512 \pi^3}\frac{(m_i^B)^5}{m_{\tilde{d}_2}^4}, \end{equation} where $Y_d=1/3$ is the weak hypercharge of $d_2$. The tree-level decay width to antiquarks is the same. \subsubsection{Decay of \texorpdfstring{$\chi_2^B$}{chi2B} to \texorpdfstring{$\chi_1^B$}{chi1B} and quarks} The baryon conserving decay of $\chi_2^B$ to $d_2$, $\overline{d}_2$ and $\chi_1^B$ corresponds to a decay width of: \begin{equation}\label{BG:Eq:DecayWidthB2B1bbar} \Gamma_{\chi^B_{2}\to \chi^B_{1} d_2 \overline{d}_2}=\frac{g'^4 Y_d^4}{256\pi^3}\left[|a_1 a_2|^2 f\left(\frac{m^B_1}{m^B_2}\right)+2 \text{Re}\{a_1^{2} a_2^{*2}\}\frac{m^B_1}{m^B_2}g\left(\frac{m^B_1}{m^B_2}\right)\right]\frac{(m^B_2)^5}{m_{\tilde{d}_2}^4}, \end{equation} where: \begin{equation}\label{BG:Eq:KinematicFunctions} \begin{aligned} f(x) &= (1-8x^2-12x^4\ln x^2+8x^6-x^8)\theta(1-x),\\ g(x) &= (1+9x^2+6x^2(1+x^2)\ln x^2-9x^4-x^6)\theta(1-x). \end{aligned} \end{equation} \subsubsection{Decay of \texorpdfstring{$\chi_i^B$}{chiiB} to \texorpdfstring{$\chi_j^g$}{chijg} and quarks} If a bino $\chi_i^B$ is heavier than a gluino $\chi_j^g$, it can also decay to this gluino and quarks with a decay width of: \begin{equation}\label{BG:Eq:DecayWidthBiGjbbar} \Gamma_{\chi^B_{i}\to \chi^g_{j} d_2 \overline{d}_2}=\frac{g'^2 g_s^2 Y_d^2}{192\pi^3}\left[|a_i c_j|^2 f\left(\frac{m_j^g}{m_i^B}\right)+2 \text{Re}\{a_i^{2} c_j^{*2}\}\frac{m_j^g}{m_i^B}g\left(\frac{m_j^g}{m_i^B}\right)\right]\frac{(m_i^B)^5}{m_{\tilde{d}_2}^4}. \end{equation} \subsubsection{Decay of \texorpdfstring{$\chi_2^B$}{chi2B} to \texorpdfstring{$\chi_1^B$}{chi1B} and Higgses} \begin{figure}[t!] 
\centering \includegraphics[width=0.33\textwidth, bb = 0 0 209 178]{BHiggses.pdf} \caption{Example of diagram contributing to the baryon conserving decays of $\chi_2^B$ to $\chi_1^B$ and Higgses.} \label{BG:Fig:BconservingDecayHH} \end{figure} An example of a diagram that leads to the decay of $\chi_2^B$ to $\chi_1^B$ and two Higgses is shown in figure \ref{BG:Fig:BconservingDecayHH}. This is the only decay width suppressed by just two powers of a superpartner mass; all other decay widths are suppressed by four powers. As such, the Higgsinos are in general required to be considerably heavier than the scalars. The masses of the Higgs doublets are then approximately given by: \begin{equation}\label{BG:Eq:HiggsesMasses} \mathcal{L}_{\text{masses}}=-\left(\begin{matrix} H_u^\dagger & \tilde{H}_d^\dagger \end{matrix}\right)\left(\begin{matrix} \mu_u^2 & B_\mu \\ B_\mu & \mu_d^2 \end{matrix}\right)\left(\begin{matrix} H_u \\ \tilde{H}_d \end{matrix}\right), \end{equation} where $\tilde{H}_d = i\sigma^2H_d^*$ and where we assumed that $\mu_u$, $\mu_d$ and $B_\mu$ are real. Requiring one of the Higgses to be light necessitates $B_\mu^2=\mu_u^2 \mu_d^2$. The resulting light Higgs $H_L$ is then given by: \begin{equation}\label{BG:Eq:LightHiggs} H_L = \frac{1}{\sqrt{\mu_u^2+\mu_d^2}}\left(\mu_d H_u + \mu_u \tilde{H}_d \right). \end{equation} In this limit, the corresponding decay width is given by: \begin{equation}\label{BG:Eq:DecayToHiggs} \Gamma_{\chi^B_{2}\to \chi^B_{1} H_L H_L^*}=\frac{1}{768\pi^3}\left[|C_{12}|^2 u\left(\frac{m_1^B}{m_2^B}\right)+3\text{Re}\{C_{12}^2\}\frac{m_1^B}{m_2^B}v\left(\frac{m_1^B}{m_2^B}\right)\right](m_2^B)^3, \end{equation} where: \begin{equation}\label{BG:Eq:u_and_v} u(x)=(1-x^2)^3\theta(1-x), \hspace{1cm} v(x)=(1+2x^2\ln x^2-x^4)\theta(1-x), \end{equation} and: \begin{equation}\label{BG:Eq:cH} C_{ij}=\frac{g'}{\mu_u^2+\mu_d^2}\left(\lambda^s_u\frac{\mu_d^2}{\mu_u}-\lambda^s_d\frac{\mu_u^2}{\mu_d}\right)(a_i b_j + a_j b_i).
\end{equation} A similar result exists for Mini-Split leptogenesis. In this case, the wino is required to be lighter than the bino for leptogenesis to occur. The bino can then decay to the wino and Higgses with a decay width of \cite{Cui:2013bta}: \begin{equation}\label{BG:Eq:SMleptogenesis} \Gamma_{\tilde{B}\to \tilde{W} H_L H_L^*}^{\text{MSSM}}=\frac{(Y_H g_1 g_2)^2}{384\pi^3}\frac{M_1^3}{\mu^2}, \end{equation} where $Y_H=1/2$ is the weak hypercharge of the Higgs doublet and the wino mass was neglected. The main difference of eq$.$ (\ref{BG:Eq:DecayToHiggs}) is the presence of $\lambda^s_u$ and $\lambda^s_d$, which is an effect of the extended Higgs sector. Being coefficients in the superpotential, $\lambda^s_u$ and $\lambda^s_d$ can naturally be small and the Higgsinos are not required to be as heavy as in the MSSM. As was alluded to earlier, this mechanism can however be spoiled by the presence of either a $\mu$-term or soft-terms like $H_u R_d+\text{h.c.}$. These terms would allow for diagrams that do not require $\lambda^s_u$ and $\lambda^s_d$. That is why we needed to make assumptions to limit their effects. We note that the width of this decay channel does not go to zero when $\lambda^s_u$ and $\lambda^s_d$ are zero, but that it would instead be suppressed by higher powers of $\mu_u$ and $\mu_d$. We also note that this result is not exact as the decay will typically take place after the electroweak phase transition (EWPT). The degrees of freedom involved will not be the same and the exact expression depends on the precise details of the scalar sector. As we are more interested in a proof of principle, we will be satisfied with this result. \subsubsection{Decay of \texorpdfstring{$\chi_2^B$}{chi2B} to \texorpdfstring{$\chi_1^B$}{chi1B} and a photon} Finally, $\chi_2^B$ can also decay to $\chi_1^B$ and a photon as shown in figure \ref{BG:Fig:PhotonN2}.
The decay width is: \begin{equation}\label{BG:Eq:DecayWidthB2B1Bboson} \begin{aligned} \Gamma_{\chi^B_{2}\to \chi^B_{1} \gamma} &=\frac{e^2 g'^4Y_d^4}{8192\pi^5} \left[|a_1 a_2|^2 +2 \text{Re}\{a_1^{2} a_2^{*2}\} \frac{m_1^B m_2^B}{(m_1^B)^2+(m_2^B)^2} \right]\\ &\hspace{1.9cm}\left(1+\left(\frac{m_1^B}{m_2^B}\right)^2\right)\left(1-\left(\frac{m_1^B}{m_2^B}\right)^2\right)^3\frac{(m_2^B)^5}{m_{\tilde{d}_2}^4}, \end{aligned} \end{equation} which is negligible. Note that it was assumed that the decay takes place after the EWPT, which will turn out to be the case in most of the successful region of parameter space. If the decay were to take place before the phase transition, it would instead involve a $B$ boson. The answer would change slightly but would still remain negligible. The decay to a $Z$ boson is also negligible. \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth, bb = 0 0 258 180]{PhotonN2.pdf} \caption{Example of diagram contributing to the decay of $\chi^B_{2}$ to $\chi^B_{1}$ and a photon.} \label{BG:Fig:PhotonN2} \end{figure} \subsubsection{Net decay width to baryons}\label{BG:Sec:NetDecay} In the case of heavy gluinos, only $\chi^B_{2}$ will present an asymmetry in its decay to baryons and antibaryons. The asymmetry comes from the interference between the tree-level diagram and the loop diagram of figure \ref{BG:Fig:LoopqqqN}. Other diagrams exist, but either do not lead to any baryon asymmetry or require a dangerous amount of flavour mixing \cite{Cui:2013bta}.
In this case, the net decay width to baryons is given by: \begin{equation}\label{BG:Eq:DecayWidthBiNetBino} \Delta \Gamma^B_{\chi_2^B}=\Gamma^B_{\chi^B_{2}\to u_1 d_1 d_2}-\Gamma^B_{\chi^B_{2}\to \overline{u}_1 \overline{d}_1 \overline{d}_2}=\frac{g'^4 Y_d^4| a_1 a_2\lambda''|^2\sin\phi}{2048\pi^4}f\left(\frac{m_1^B}{m_2^B}\right)\left(\frac{m_2^B}{m_{\tilde{d}_2}}\right)^6 m_1^B, \end{equation} where the $B$ superscript on $\Gamma$ means that we do not consider contributions from diagrams containing gluinos and: \begin{equation}\label{BG:Eq:SinPhi} \sin\phi=\frac{\text{Im}\{a_1^{*2}a_2^2 \}}{|a_1 a_2|^2}. \end{equation} We note that this result illustrates the Nanopoulos-Weinberg theorem. The net width $\Delta \Gamma^B$ would be zero if $m_1^B>m_2^B$ since $f(x)$ is 0 for $x\geq 1$. This also corresponds to the decay of $\chi_2^B$ to $\chi_1^B$ and quarks being forbidden. Obviously, this is not a problem because $m_1^B<m_2^B$ by assumption, but it does show that the decay of $\chi_1^B$ does not lead to baryon asymmetry without at least some new lighter particle. \begin{figure}[t!] \centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 230 148]{LoopqqqN.pdf} \caption{} \label{BG:Fig:LoopqqqN} \end{subfigure}% ~\qquad \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth, bb = 0 0 230 148]{Loopqqq.pdf} \caption{} \label{BG:Fig:LoopqqqG} \end{subfigure} \caption{(a) Loop baryon number breaking decay of $\chi^B_{2}$ via virtual $\chi^B_{1}$. (b) Loop baryon number breaking decay of $\chi^B_{i}$ via virtual $\chi^g_{j}$.}\label{BG:Fig:Loopqqq} \end{figure} An interesting property of $\sin\phi$ is that it is zero if at least one of the Majorana masses is zero. This can be understood as follows. First, consider the case of $\rho_1$ set to zero. 
The interference term between the tree-level diagram and the diagram of figure \ref{BG:Fig:LoopqqqN} can be factorized as a function of the coupling constants times a function depending only on the kinematics. To obtain a net baryon asymmetry, both of these functions must be complex \cite{Kolb:1990vq}.\footnote{It is assumed that the fields have been redefined such that their masses are real.} First, the kinematic function is complex because the loop diagram \ref{BG:Fig:LoopqqqN} can be cut to obtain a tree-level diagram contributing to the decay of $\chi_2^B$ to $\chi_1^B$ and quarks (see \cite{Cutkosky:1960sp}). Second, the phase of $M_1$ can be reabsorbed by a field redefinition of $\tilde{B}$. This effectively makes $g'$ complex. It also transfers a phase to $M_1^D$, which can then be removed by a field redefinition of $\tilde{S}$. Since none of the couplings associated with this field are involved in these diagrams, the phase will only appear in $g'$. Since $g'$ appears as $|g'|^4$ in the interference term, this would not lead to any asymmetry. The procedure obviously breaks down when $\rho_1$ is non-zero, as the field redefinition we did would lead to a complex $\rho_1$. A similar argument holds for $M_1$. In the case of light gluinos, both binos can potentially contribute to the baryon density: \begin{equation}\label{BG:Eq:DecayWidthBiNetBGluino} \Delta \Gamma^g_{\chi_i^B}=\Gamma^g_{\chi^B_{i}\to u_1 d_1 d_2}-\Gamma^g_{\chi^B_{i}\to \overline{u}_1 \overline{d}_1 \overline{d}_2}=\sum_j\frac{g'^2 g_s^2 Y_d^2| a_i c_j\lambda''|^2\sin\phi'_{ij}}{1536\pi^4}f\left(\frac{m_j^g}{m_i^B}\right)\left(\frac{m_i^B}{m_{\tilde{d}_2}}\right)^6 m_j^g, \end{equation} where the $g$ superscript on $\Gamma$ means that only loop diagrams containing gluinos are taken into account and: \begin{equation}\label{BG:Eq:SinPhiPrimeij} \sin\phi'_{ij}=\frac{\text{Im}\{a_i^{*2}c_j^2 \}}{|a_i c_j|^2}.
\end{equation} We mention that, if binos and gluinos are close in mass, it is possible that certain combinations of binos and gluinos do not lead to any baryon asymmetry. Also, the argument about requiring both Majorana masses to be non-zero for binos does not hold in this case. Finally, we define bino decay asymmetries as: \begin{equation}\label{} \epsilon^{CP}_{\chi_i^B}=\frac{\Delta \Gamma^B_{\chi_i^B}+\Delta \Gamma^g_{\chi_i^B}}{\Gamma^{\text{Total}}_{\chi_i^B}}. \end{equation} \subsection{Annihilation and conversion cross sections}\label{BG:Sec:AnnihilationCS} We now discuss the annihilation and conversion cross sections of binos. These enter the Boltzmann equations which will be used to calculate the would-be relic density of the binos. The two most important interactions are those with Higgses and quarks. \subsubsection{Interactions with Higgses} An example of annihilation to two Higgses is shown in figure \ref{BG:Fig:BBHH}. The total cross section is given by: \begin{equation}\label{BG:Eq:CrossSectionBiBjHLHL} \sigma_{\chi^B_{i}\chi^B_{j}\to H_L H_L^*}(s)=\frac{1}{32\pi}\frac{|C_{ij}|^2(s-(m_i^B)^2-(m_j^B)^2)-2\text{Re}\{C_{ij}^2\}m_i^B m_j^B}{\sqrt{(s-(m_i^B+m_j^B)^2)(s-(m_i^B-m_j^B)^2)}}, \end{equation} where $\sqrt{s}$ is the centre of mass energy and the cross section, like every other in this section, is averaged over all incoming degrees of freedom. As we are interested in binos of around a TeV or heavier, which decouple at very small $x$, freeze-out will typically take place before the electroweak phase transition. As such, $H_L$ is treated as a complex scalar doublet for calculating relic densities, i.e. no particle has been ``eaten'' yet by gauge bosons.
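As a quick numerical cross-check of the quantities above (a sketch only: the mixing parameters, the coupling $C_{ij}$ and the masses below are arbitrary placeholder values, not model predictions), the phase $\sin\phi$ of eq.~(\ref{BG:Eq:SinPhi}) and the cross section of eq.~(\ref{BG:Eq:CrossSectionBiBjHLHL}) can be implemented directly:

```python
import cmath
import math

def sin_phi(a1, a2):
    # CP phase of eq. (BG:Eq:SinPhi): Im{a1*^2 a2^2} / |a1 a2|^2
    return ((a1.conjugate() ** 2) * a2 ** 2).imag / abs(a1 * a2) ** 2

def sigma_BB_to_HH(s, m1, m2, C):
    # Bino pair annihilation to H_L H_L^* (eq. BG:Eq:CrossSectionBiBjHLHL),
    # valid above threshold, s > (m1 + m2)^2
    num = abs(C) ** 2 * (s - m1 ** 2 - m2 ** 2) - 2.0 * (C ** 2).real * m1 * m2
    den = math.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2))
    return num / (32.0 * math.pi * den)

# The phase vanishes for relatively real mixing parameters and equals
# sin(2*theta) for a relative phase theta, as required for the asymmetry.
print(sin_phi(1 + 0j, 2 + 0j))                       # 0.0
print(sin_phi(1 + 0j, cmath.exp(1j * math.pi / 8)))  # sin(pi/4)
```

For real $C_{ij}$ the numerator reduces to $|C_{ij}|^2(s-(m_i^B+m_j^B)^2)$, so the cross section is manifestly positive above threshold.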
\begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth, bb = 0 0 222 87]{BBHH.pdf} \caption{} \label{BG:Fig:BBHH} \end{subfigure}% ~ \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth, bb = 0 0 218 90]{BHBH.pdf} \caption{} \label{BG:Fig:BHBH} \end{subfigure} \caption{(a) Example of diagram contributing to bino annihilation to Higgses. (b) Example of diagram contributing to bino conversion via Higgs scattering.}\label{BG:Fig:2Bscattering} \end{figure} In addition to annihilation, one must also take into account conversion via scattering. An example is shown in figure \ref{BG:Fig:BHBH}. The associated cross section is: \begin{equation}\label{BG:Eq:CrossSectionBiHLBjHlsubleading} \sigma_{\chi^B_{j}H_L\to \chi^B_{i} H_L}=\frac{1}{64\pi s}\left(\frac{s-(m_i^B)^2}{s-(m_j^B)^2}\right)\left[|C_{ij}|^2\frac{(s+(m_i^B)^2)(s+(m_j^B)^2)}{s}+4 m_i^B m_j^B \text{Re}\{C_{ij}^2\}\right]. \end{equation} \subsubsection{Interactions with quarks} Annihilation and conversion can also take place via interactions with quarks. These interactions can either conserve baryon number or break it. For $\lambda''$ of $\mathcal{O}(0.1)$ or larger and heavy Higgsinos, baryon number breaking annihilation is expected to dominate because of the large multiplicity and lack of $p$-wave suppression \cite{Cui:2013bta}. Examples of these interactions are shown in figure \ref{BG:Fig:SingleAnnihilation}. The cross section is given by: \begin{equation}\label{BG:Eq:CrossSectionBiqqq} \sigma_{\chi^B_{i}q\to q q}(s)=\frac{g'^2Y_d^2|a_i\lambda''|^2}{48\pi m_{\tilde{d}_2}^4}(5s + (m_i^B)^2). 
\end{equation} \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth, bb = 0 0 213 126]{Single1.pdf} \caption{} \label{BG:Fig:Single1} \end{subfigure}% ~ \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth,bb=0 0 260 208]{Single3.pdf} \caption{} \label{BG:Fig:Single3} \end{subfigure} \caption{(a) Example of $s$-channel diagram contributing to annihilation of a single bino. (b) Example of $t$-channel diagram contributing to annihilation of a single bino.}\label{BG:Fig:SingleAnnihilation} \end{figure} Other interactions with quarks exist that preserve baryon number; these are usually subdominant. Annihilation of binos to quarks is shown in figure \ref{BG:Fig:BBqq}. The associated cross section is: \begin{equation}\label{Bg:Eq:CrossSectionBiBjbbbar} \begin{aligned} &\sigma_{\chi^B_{i}\chi^B_{j}\to d_2\overline{d}_2}(s)=\frac{g'^4 Y_d^4}{16 \pi m_{\tilde{d}_2}^4}\frac{1}{\sqrt{(s-(m_i^B+m_j^B)^2)(s-(m_i^B-m_j^B)^2)}}\\ &\hspace{1cm}\left[|a_i a_j|^2(2s^2-((m_i^B)^2+(m_j^B)^2)s-((m_i^B)^2-(m_j^B)^2)^2)-6\text{Re}\{a_i^2 a_j^{*2}\}m_i^B m_j^B s\right]. \end{aligned} \end{equation} In addition, conversion can take place via diagrams like the one of figure \ref{BG:Fig:BqBq}. The associated cross section is: \begin{equation}\label{BG:Eq:BiqBjq} \begin{aligned} &\sigma_{\chi^B_{j}d_2\to \chi^B_{i}d_2}(s)=\frac{g'^4 Y_d^4}{96\pi m_{\tilde{d}_2}^4}\frac{(s-(m_i^B)^2)^2}{s^3}\left[|a_i a_j|^2(8s^2+((m_i^B)^2+(m_j^B)^2)s+2(m_i^B)^2 (m_j^B)^2)\right.\\ &\hspace{6.7cm}\left.+6\text{Re}\{a_i^2 a_j^{*2}\}m_i^B m_j^B s\right].
\end{aligned} \end{equation} \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth, bb = 0 0 214 87]{BBqq.pdf} \caption{} \label{BG:Fig:BBqq} \end{subfigure}% ~ \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth, bb = 0 0 220 87]{BqBq.pdf} \caption{} \label{BG:Fig:BqBq} \end{subfigure} \caption{(a) Example of diagram contributing to bino pair annihilation to $d_2$ $\overline{d}_2$. (b) Example of a diagram contributing to bino conversion via scattering off quarks.}\label{BG:Fig:2Bscatteringquarks} \end{figure} Similar processes exist involving gluinos. Coannihilation is shown in figure \ref{BG:Fig:Bgqq} and corresponds to a cross section of: \begin{equation}\label{Bg:Eq:CrossSectionBigjbbbar} \begin{aligned} &\sigma_{\chi^B_{i}\chi^g_{j}\to d_2\overline{d}_2}(s)=\frac{g'^2g_s^2 Y_d^2}{24 \pi m_{\tilde{d}_2}^4}\frac{1}{\sqrt{(s-(m_i^B+m_j^g)^2)(s-(m_i^B-m_j^g)^2)}}\\ &\hspace{1cm}\left[|a_i c_j|^2(2s^2-((m_i^B)^2+(m_j^g)^2)s-((m_i^B)^2-(m_j^g)^2)^2)-6\text{Re}\{a_i^2 c_j^{*2}\}m_i^B m_j^g s\right]. \end{aligned} \end{equation} Conversion is shown in figure \ref{BG:Fig:Bqgq} and corresponds to a cross section of: \begin{equation}\label{BG:Eq:Biqgjq} \begin{aligned} &\sigma_{\chi^B_{j}d_2\to \chi^g_{i}d_2}(s)=\frac{g'^2g_s^2 Y_d^2}{18\pi m_{\tilde{d}_2}^4}\frac{(s-(m_j^B)^2)^2}{s^3}\left[|a_j c_i|^2(8s^2+((m_j^B)^2+(m_i^g)^2)s+2(m_j^B)^2 (m_i^g)^2)\right.\\ &\hspace{6.6cm}\left.+6\text{Re}\{a_j^2 c_i^{*2}\}m_j^B m_i^g s\right].
\end{aligned} \end{equation} \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth, bb = 0 0 214 87]{Bgqq.pdf} \caption{} \label{BG:Fig:Bgqq} \end{subfigure}% ~ \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=0.7\textwidth, bb = 0 0 220 87]{Bqgq.pdf} \caption{} \label{BG:Fig:Bqgq} \end{subfigure} \caption{(a) Example of diagram contributing to bino-gluino annihilation to $d_2$ $\overline{d}_2$. (b) Example of diagram contributing to bino conversion to gluino via scattering off quarks.}\label{BG:Fig:Bgscatteringquarks} \end{figure} \subsection{Calculation of the baryon relic density} To obtain estimates of the baryon relic density $\Omega_{\Delta B}$, we calculate would-be relic densities of binos. For this, we use the Boltzmann equation, conveniently rewritten as \cite{Edsjo:1997bg} (see also Ref$.$ \cite{Arcadi:2015ffa}): \begin{equation}\label{Bg:Eq:Boltzmann} \begin{aligned} \frac{dY_i}{dx} &=-\sqrt{\frac{ g_*\pi}{45 G}}\frac{m_2^B}{x^2}\left[\langle \sigma_{iX}v_{iX}\rangle(Y_i-Y_i^{\text{eq}})Y_X^{\text{eq}}+\sum_{j=1}^2 \langle \sigma_{ij}v_{ij}\rangle(Y_i Y_j-Y_i^{\text{eq}}Y_j^{\text{eq}})\right.\\ &\hspace{3.0cm}\left. -\sum_{j=1}^2\langle \sigma_{jX}v_{jX}\rangle\left(Y_j-\frac{Y_j^{\text{eq}}}{Y_i^{\text{eq}}}Y_i\right)Y_X^{\text{eq}}\right]. \end{aligned} \end{equation} The parameter $Y_i$ is given by $Y_i=n_i/s$, where $n_i$ is the number density of particle $i$ and $s$ the entropy density (not to be confused with the square root of the centre of mass energy). The parameter $g_*$ corresponds to the number of relativistic degrees of freedom. As we deal with particles with masses of the order of a few TeV and which decouple at small $x$, $g_*$ can safely be approximated by a constant, $g_*\approx 106.75$.
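The freeze-out behaviour encoded in this equation can be illustrated with a toy integration (a sketch only: a single species, dimensionless variables, a constant hypothetical annihilation strength \texttt{lam} in place of the thermally averaged cross sections, and order-one prefactors dropped from the equilibrium abundance). The abundance tracks equilibrium, departs from it and freezes out, with a larger annihilation strength giving a smaller relic abundance:

```python
import math

def y_eq(x):
    # Non-relativistic Maxwell-Boltzmann equilibrium abundance, with
    # order-one prefactors absorbed into the 0.145 (toy normalisation)
    return 0.145 * x ** 1.5 * math.exp(-x)

def freeze_out(lam, x0=1.0, x1=100.0, n=200000):
    """Integrate dY/dx = -(lam / x^2) (Y^2 - Yeq^2) with a midpoint rule.
    `lam` is a hypothetical, dimensionless annihilation strength."""
    dx = (x1 - x0) / n
    x, y = x0, y_eq(x0)
    for _ in range(n):
        k1 = -(lam / x ** 2) * (y ** 2 - y_eq(x) ** 2)
        xm, ym = x + 0.5 * dx, y + 0.5 * dx * k1
        k2 = -(lam / xm ** 2) * (ym ** 2 - y_eq(xm) ** 2)
        y += dx * k2
        x += dx
    return y

# Stronger annihilation gives a smaller relic abundance.
print(freeze_out(1.0e3), freeze_out(1.0e4))
```

The final abundance scales roughly as $x_{\rm f.o.}/\texttt{lam}$, the familiar inverse dependence of a relic density on the annihilation cross section.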
The parameter $Y_i^{\text{eq}}$ represents the equilibrium value of $Y_i$ and is given by \cite{Edsjo:1997bg}: \begin{equation}\label{BG:Eq:Yeq} Y_i^{\text{eq}}=\frac{45x^2}{4\pi^4 g_*}g_i \left(\frac{m_i}{m_2^B}\right)^2 K_2\left(x\frac{m_i}{m_2^B} \right), \end{equation} where $g_i$ is the number of degrees of freedom of the particle $i$ and $K_i(x)$ is a modified Bessel function of the second kind. The $\langle \sigma_{ij}v_{ij}\rangle$, $\langle \sigma_{iX}v_{iX}\rangle$ and $\langle \sigma_{jX}v_{jX}\rangle$ represent thermally averaged cross sections and can be obtained by combining the results of section \ref{BG:Sec:AnnihilationCS} with the following eq$.$ \cite{Edsjo:1997bg}: \begin{equation}\label{BG:Eq:ThermallyAveragedCrossSection} \langle \sigma_{ij}v_{ij}\rangle=\frac{\int_{(m_i+m_j)^2}^{\infty} \frac{1}{\sqrt{s}} (s-(m_i+m_j)^2)(s-(m_i-m_j)^2)K_1\left(\frac{\sqrt{s}}{T}\right)\sigma_{ij}(s)ds}{8T m_i^2 m_j^2 K_2\left(\frac{m_i}{T}\right)K_2\left(\frac{m_j}{T}\right)}, \end{equation} where the $i$ and $j$ indices can represent any particle. Annihilation of two binos contributes to $\langle \sigma_{ij}v_{ij}\rangle$, annihilation of a single bino contributes to $\langle \sigma_{iX}v_{iX}\rangle$ and conversion contributes to $\langle \sigma_{jX}v_{jX}\rangle$. Since gluinos can annihilate via diagrams that only involve themselves and gluons, they will remain in equilibrium until well after the binos have decoupled. Therefore, we approximate gluino densities by their equilibrium values when relevant. We also note that neutralinos can potentially decay to the heaviest gluino and that the latter can exhibit an asymmetry in its decay to baryons and antibaryons. We verified that this decay would generally take place long before the gluinos have decoupled and therefore should not contribute to the baryon relic density. We therefore do not consider this contribution.
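These thermal averages are straightforward to evaluate numerically. The sketch below (standard library only; the constant test cross section and parameter values are arbitrary, and the quadrature settings are only adequate for the arguments used here) implements $K_n(x)$ through its integral representation $K_n(x)=\int_0^\infty e^{-x\cosh t}\cosh(nt)\,dt$ and then eq.~(\ref{BG:Eq:ThermallyAveragedCrossSection}):

```python
import math

def bessel_k(n, x, tmax=12.0, steps=2000):
    # Modified Bessel function of the second kind, K_n(x), from its
    # integral representation, evaluated with a Simpson rule
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return total * h / 3.0

def thermal_avg(sigma, mi, mj, T, steps=400):
    # <sigma v> of eq. (BG:Eq:ThermallyAveragedCrossSection) for a cross
    # section sigma(s); the integral is truncated where K_1 has decayed away
    s0 = (mi + mj) ** 2
    smax = (mi + mj + 30.0 * T) ** 2
    h = (smax - s0) / steps
    num = 0.0
    for i in range(steps + 1):
        s = s0 + i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        rs = math.sqrt(s)
        num += (w * (s - (mi + mj) ** 2) * (s - (mi - mj) ** 2) / rs
                * bessel_k(1, rs / T) * sigma(s))
    num *= h / 3.0
    den = (8.0 * T * mi ** 2 * mj ** 2
           * bessel_k(2, mi / T) * bessel_k(2, mj / T))
    return num / den

# Constant cross section, equal masses, m/T = 20: close to the
# non-relativistic 4*sqrt(T/(pi*m)) ~ 0.50, minus an O(T/m) correction.
print(thermal_avg(lambda s: 1.0, 1.0, 1.0, 0.05))
```

In the non-relativistic limit this reduces to the cross section times the mean relative velocity, which is the check performed in the last line.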
Once the would-be relic densities of binos are obtained, $\Omega_{\Delta B}$ is approximated by: \begin{equation}\label{BG:Eq:OmegaDeltaB} \Omega_{\Delta B}=\left.\frac{m_p}{(\rho_c/s)_0}\left(\epsilon^{CP}_{\chi_1^B} Y_{\chi_1^B}+\epsilon^{CP}_{\chi_2^B} Y_{\chi_2^B}\right)\right|^{t\to\infty}, \end{equation} where $m_p$ is the mass of the proton and $(\rho_c/s)_0$ the current ratio of critical density to entropy density. This is a good approximation as long as the decay temperatures of the binos are considerably lower than their freeze-out temperatures. Also, note that baryon number breaking interactions that take place before freeze-out can lead to an additional source of baryon asymmetry. This was studied in Ref$.$ \cite{Arcadi:2015ffa} and found to be negligible because of washout effects. \subsection{Results and constraints}\label{BG:Sec:Result} We now proceed to discuss the relevant constraints and provide a few benchmark plots to illustrate different features. We stress that we do not aim to cover the full parameter space, but to show that it is indeed possible to obtain a baryon density compatible with the observed value of $\Omega_{\Delta B} \sim 0.05$ \cite{Ade:2015xua}. We first make a few simplifying assumptions for convenience. We relate parameters by setting $\mu_u=\mu_d\equiv\mu$ and $\lambda_u^s=-\lambda_d^s\equiv\lambda^s$. For decoupled gluinos, we set $\lambda''=0.2$, which is chosen to maximize $\Omega_{\Delta B}$. It is large enough for the $\epsilon^{CP}$'s not to be suppressed by large decay branching ratios to other channels, while not being so large as to make $B$-breaking scattering with quarks too strong. For light gluinos, we instead set it to $\lambda''=1$. Figure \ref{BG:Fig:DeltaBcontours} shows $\Omega_{\Delta B}$ as a function of $M_1$ and $M_1^D$ for heavy Higgsinos and gluinos. The mass $\rho_1$ is set to $1 \times \text{exp}(3i/4\pi)$ TeV and $m_{\tilde{d}_2}$ to 50 TeV.
As can be seen, it is possible to obtain a sufficient baryon relic density, but it requires the $U(1)_R$ breaking to be large. We see that $\Omega_{\Delta B}$ peaks in a region where $\chi_2^B$ is very close to being a pure singlino. In this limit, $\chi_2^B$ can easily decouple early when the Higgsinos are heavy as interactions with quarks are suppressed. It is then not necessary to have the squarks as heavy for it to decouple early and $\epsilon^{CP}$ does not need to be as suppressed. We note that $\Omega_{\Delta B}$ is optimized when $m_1^B/m_2^B$ is close to 0.25. This corresponds to the maximum of $x f(x)$, which controls the asymmetry (see eq$.$ (\ref{BG:Eq:DecayWidthBiNetBino})). \begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth, bb = 0 0 360 351]{DeltaBcontours.pdf} \caption{Contour plots of constant $\Omega_{\Delta B}$ for decoupled Higgsinos and gluinos. The blue region corresponds to $\chi_2^B$ decaying before the electroweak phase transition. The yellow region corresponds to $\chi_2^B$ decaying before freeze-out. The purple and pink regions correspond to $\chi_2^B$ having a decay temperature higher than $m_1^B$ and $m_2^B$, respectively.} \label{BG:Fig:DeltaBcontours} \end{figure} Figure \ref{BG:Fig:DeltaBcontours2} shows $\Omega_{\Delta B}$ as a function of $\mu/\lambda^s$ and $m_{\tilde{d}_2}$ for heavy gluinos. The masses $M_1^D$, $M_1$ and $\rho_1$ are set respectively to 0.02 TeV, 0.25 TeV and $1 \times \text{exp}(3i/4\pi)$ TeV. Obviously, Higgsinos can be made lighter by taking $\lambda^s$ small but eventually the subleading corrections inversely proportional to $\mu^4$ would become non-negligible. In addition, this shows that the requirement of correct $\Omega_{\Delta B}$ indeed leads to a Mini-Split spectrum. \begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth, bb = 0 0 360 361]{BaryonDensity2.pdf} \caption{Contour plots of constant $\Omega_{\Delta B}$ for decoupled gluinos.
The blue region corresponds to $\chi_2^B$ decaying before the electroweak phase transition. The yellow region corresponds to $\chi_2^B$ decaying before freeze-out. The purple region corresponds to $\chi_2^B$ having a decay temperature lower than $m_1^B$.} \label{BG:Fig:DeltaBcontours2} \end{figure} Figure \ref{BG:Fig:LightGluinos} shows $\Omega_{\Delta B}$ as a function of $M_1$ and $M_1^D$ for decoupled Higgsinos and light gluinos. The mass $\rho_1$ is set to 1 TeV. The gluino Dirac mass is set to 0.5 TeV and the Majorana masses are set to $M_3=0.5 \times \text{exp}(3i/4\pi)$ TeV and $\rho_3=0$ TeV. The mass of $\tilde{d}_2$ is set to 100 TeV. We observe that it is possible to obtain the correct baryon relic density, but that it once again requires the $U(1)_R$ symmetry to be badly broken. \begin{figure}[t!] \centering \includegraphics[width=0.6\textwidth, bb = 0 0 360 351]{LightGluinos.pdf} \caption{Contour plots of constant $\Omega_{\Delta B}$ for decoupled Higgsinos. The blue region corresponds to both binos decaying before the electroweak phase transition. The yellow region corresponds to both binos decaying before they have decoupled. The pink region corresponds to all binos having a decay temperature higher than at least one bino or gluino mass.} \label{BG:Fig:LightGluinos} \end{figure} A few additional constraints are also taken into account. The first one concerns washout. The relevant washout processes are carefully discussed in Ref$.$ \cite{Cui:2012jh}. They include inverse decays via an on-shell squark and $u_1 d_1 d_2 \to \overline{u}_1 \overline{d}_1 \overline{d}_2$ scattering, to name a few. In the decoupled gluino case, the end result is that these processes are suppressed as long as the decay temperature of $\chi_2^B$ is lower than the masses of the binos.
In the case of light gluinos, these can also mediate baryon number breaking interactions and as such we require that the decay temperature of at least one of the binos be smaller than the masses of any of the gluinos and binos. The second constraint is for the binos to decay after their freeze-out. This constraint is not absolute, as decays that take place slightly before freeze-out still lead to a relic baryon density, albeit a suppressed one. We estimate the freeze-out temperatures of the binos by setting their relic number densities equal to their equilibrium densities and solving for $x$. This defines a freeze-out temperature for each bino. For decoupled gluinos, we then include in the figures the region where $\chi_2^B$ decays before freeze-out. For light gluinos, we include the region where both binos decay before decoupling. Generally speaking, these constraints are far more important than washout. Finally, if the binos were to decay before the electroweak phase transition, some of the baryon relic density would be converted away by sphaleron effects. This would reduce the baryon relic density by a factor of $28/79$, which is sizable but does not change the qualitative features \cite{Chen:2007fv}. It also corresponds in our plots to a region that does not produce enough baryon relic density. We take the electroweak phase transition to take place at 100 GeV. For decoupled gluinos, we include the regions where the decay of $\chi_2^B$ takes place before the electroweak phase transition. For light gluinos, we show the region where both binos decay before the electroweak phase transition. Another effect to consider is the possibility of entropy dilution. Ref$.$ \cite{Arcadi:2015ffa} studied this and found that it is only relevant for very large scalar masses where the bino decouples while still relativistic.
In the case of heavy gluinos, this would lead to a suppression of $\Omega_{\Delta B}$ by a dilution factor of \cite{Arcadi:2015ffa,Baldes:2014rda}: \begin{equation}\label{BG:Eq:EntropyDilution} \xi_s=\text{Max}\left[1,1.8g_*^{1/4}\frac{Y_2(x_{f.o.})m_2^B}{\sqrt{\Gamma_{\chi^B_{2}}^{\text{total}}M_{\text{Pl}}}}\right], \end{equation} where $Y_2(x_{f.o.})$ is the value of $Y_2$ when $\chi_2^B$ freezes out, $\Gamma_{\chi^B_{2}}^{\text{total}}$ is the total decay width of $\chi_2^B$ and $M_{\text{Pl}}$ the Planck scale. We have verified that this factor is equal to one over the entire region shown in our plots. Similar results hold for both binos in the light gluino case. \section{Conclusions} In this paper, we studied the LHC phenomenology of a supersymmetric model with a $U(1)_R$ symmetry which is identified with the baryon number. We also examined how baryogenesis could be realized in this model through the late decay of a neutral gaugino. The model we considered is an extension of the MRSSM with the inclusion of an $R$-parity breaking term of the form $\lambda_{ijk}'' U_i^c D_j^c D_k^c$. Because of the non-standard baryon number assignment of the superpartners, such a term is baryon number conserving in this model. This relaxes the bounds on the $\lambda''$ couplings significantly compared to the RPVMSSM. In particular, the bounds from neutron-antineutron oscillation and from double nucleon decay are considerably loosened. However, they cannot be removed completely as the $U(1)_R$ will be broken by the gravitino mass and communicated to the superpartners of the Standard Model by anomaly mediation. Furthermore, the gravitino must be heavier than the proton to avoid proton decay to a gravitino and a kaon. Flavour physics also puts bounds on products of $\lambda''$ couplings which are the same in our model as in the RPVMSSM. The introduction of large $\lambda''$ couplings leads to a collider phenomenology that is significantly different from the MSSM and from the RPVMSSM with very small $\lambda''$ couplings.
We examined simplified models where a single one of the $\lambda''$ couplings involving the third generation is large. We looked at both single and pair production of stops and their subsequent decays for a bino or Higgsino-up LSP. When Majorana mass terms are included, a Dirac neutralino splits into two states close in mass. We showed that when this mass splitting is larger than the width of the neutralino, the stop can decay via this neutralino to two same-sign tops. On the other hand, when the mass splitting is smaller, the branching ratio of a stop decaying to two same-sign tops is highly suppressed. Because same-sign leptons are a powerful tool to reject background, the two cases present different phenomenology and in this work we presented results for both hypotheses. We also presented limits on the masses of the first and second generations of squarks, as their production cross section is altered in models with Dirac gluinos. We note that the structure of these models is quite rich and we did not explore the complete phenomenology of all sectors of the theory. For example, the model has extra scalars as part of the adjoint chiral superfields. Some of these fields could in fact be responsible for obtaining the correct Higgs mass in these models \cite{Fok:2012fb, Bertuzzo:2014bwa, Benakli:2012cy, Diessner:2014ksa}. The structure of the model also allows for baryogenesis to proceed through the late decay of neutralinos. Because of the extra field content, new diagrams contribute to these decays and allow for the conditions of the Nanopoulos-Weinberg theorem to be met. The results of the analysis of baryogenesis are presented in section \ref{Sec:Baryogenesis}. We find that with a Mini-Split spectrum, where the scalars are heavier than the gauginos, successful baryogenesis can be achieved. Such a mechanism also exists in the RPVMSSM with a split spectrum, but the extended Higgs sector present in our model allows for the Higgsinos to be lighter.
The mechanism, however, requires a large breaking of the $U(1)_R$ symmetry. \acknowledgments This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). HB acknowledges support from the Ontario Graduate Scholarship (OGS) and from FAPESP. KE acknowledges support from the Alexander Graham Bell Canada Graduate Scholarships Doctoral Program (NSERC CGS D). We would like to thank David London for discussions and collaboration at the early stage of the project. We would also like to thank Alejandro de la Puente for discussions about displaced vertices. \bibliographystyle{JHEP}
\section{Introduction} Investigations into the behaviour of stars in globular clusters (GCs) have unearthed peculiarities that are not consistent with the expected behaviour of bound stars. These include extended structure surrounding clusters (\citealt{Grillmair1995}; \citealt{Kuzma2016}) and unusual surface density profiles (\citealt{Cote2002}; \citealt{Carraro2009}; \citealt*{Kupper2011}), individual stars with velocities near or above the escape velocity (\citealt*{Meylan1991}; \citealt{Lutzgendorf2012}), and a flattening of the velocity dispersion profile at large radii. This flattening has been observed in an increasing number of clusters (\citealt{Drukier1998}; \citealt{Scarpa2007}; \citealt{Lane2010}), although there are many cases where self-consistent models (\citealt{King1966}; \citealt{Wilson1975}) have accurately fit the observed velocity dispersion profile of Milky Way and local group clusters (\citealt{McLaughlin2005}; \citealt{Barmby2009}). It is not understood why some clusters show this feature and others do not, or how many clusters would be expected to display it. Attempts to explain the flattening of the dispersion have ranged from the effects of extra-tidal stars to deviations from Newtonian gravity \citep{Scarpa2007}. In modified Newtonian dynamics (MOND, \citealt{Milgrom1983}) there is a transition into this regime from Newtonian dynamics if both the acceleration of the GC around the galaxy and the internal acceleration of stars fall below a threshold acceleration, which can correspond to the radial position where the velocity dispersion profiles begin to flatten \citep*{Hernandez2013}. Alternatively, the $\Lambda$ cold dark matter model, $\Lambda$CDM, and the hierarchical merger scenario for galaxy formation suggest that GCs formed in dark matter halos (\citealt{Peebles1984}; \citealt{Kravtsov2005}).
Although internal effects expel DM from the inside of clusters \citep{Baumgardt2008} and tidal interactions would possibly strip the DM halo (\citealt{Moore1996}; \citealt{Mashchenko2005}), GCs on large galactocentric orbits could still contain this DM component, which would then interact gravitationally with stars in the cluster and increase their velocity dispersion (\citealt{Ibata2013}). However, \citet{Kupper2010}, hereafter K10, showed that a flattening of the velocity dispersion profile occurs in simulations using purely Newtonian dynamics. This is due to the effect of potential escapers (PEs), which are stars that orbit inside of GCs but with an energy above the critical energy required for escape (\citealt{Fukushige&Heggie2000}; from now on FH00). If in models of cluster evolution the tidal truncation is approximated as an energy truncation at the critical energy, then the lifetimes are proportional to the half-mass relaxation time $t_{\rm rh}$, because stars gain energy on a relaxation time-scale. FH00 noted that if a tidal field is included the lifetimes show a weaker dependence on $t_{\rm rh}$. They found the cause to be a population of PEs which increases the dissolution time, $t_{\rm diss}$; this effect is more important for simulations with a lower number of stars, $N$. \citet{Baumgardt2001}, hereafter B01, showed with a model of the PEs' energy distribution that this delayed escape leads to a scaling of the lifetime of a cluster with $t_{\rm rh}^{3/4}$. A constant fraction of stars are scattered above the critical energy each $t_{\rm rh}$ \citep{Ambartsumian1985}, but they do not escape instantaneously, and it is possible that some can be on stable orbits if the cluster is on a circular orbit \citep{Henon1969}. Stars that gain a large energy kick from a single interaction can escape isotropically; however, the majority of stars gain energy gradually via many encounters, causing them to drift into the PE regime.
These PEs can then only escape via narrow apertures around the Lagrangian points (FH00). For circular orbits, the Lagrangian points L1 and L2 are along a line defined by connecting the centre of the cluster to that of the galaxy, where the radial derivative of the total potential (the sum of the cluster potential, the tidal potential and the centrifugal potential) is zero. These points also mark the furthest distance from the cluster centre reached by the last closed equipotential surface, or Jacobi surface (see e.g. Section 3.3 of \citealt{Binney1987}). B01 found the scaling of the lifetimes with $t_{\rm rh}^{3/4}$ to be consistent with direct $N$-body models of star clusters orbiting in a point-mass galactic potential. \citet{Tanikawa2010}, hereafter TF10, then studied the dynamical evolution of clusters in galaxies with different (power-law) density profiles with direct $N$-body simulations and confirmed the $t_{\rm rh}^{3/4}$ scaling of the lifetimes for clusters that are initially Roche-filling. They also showed that for clusters with the same $N$ and tidal radius, orbiting in different galactic potentials, those with the highest angular frequency (i.e. moving in flatter density profiles) live longest. For clusters orbiting in flatter galactic density profiles, the Jacobi surface is compressed (for the same Jacobi radius, $r_{\rm J}$), resulting in smaller escape annuli and therefore a larger $t_{\rm diss}$ (\citealt*{Renaud2011}; hereafter R11). This is contrary to what is found for clusters on different orbits in a given potential, because in that case $t_{\rm diss} \propto 1/\Omega$, where $\Omega$ is the angular velocity of the cluster orbit about the galaxy centre. Measurements of the kinematics of stars within globular clusters are mostly based on line-of-sight velocities. However, to properly characterise the velocity dispersion anisotropy and rotation of these systems, proper motion data are required.
Various proper motion measurements have recently become available including observations using the Hubble Space Telescope (HST) (\citealt{Bellini2014}; \citealt{Watkins2015}), and the first data release (DR1) of the ESA \textit{Gaia} mission \citep{Gaia1}. DR1 provided proper motions of many field stars in the Milky Way and also included open cluster stars, and future releases will provide proper motions of stars in the outer regions of GCs. Therefore, understanding the effects of PEs on the kinematics is paramount to correctly interpreting the new data, as current models have been shown to still have large biases when compared to projected data from simulations (\citealt{Shanahan2015}; \citealt{Sollima2015}), and will also help to develop a prescription for including their effects in a self-consistent model. The focus of this study is to use a series of simulations to investigate the properties of PEs, including their spatial and energy distribution, their kinematics and their effect on the kinematics of the cluster as a whole. We do this to determine if there are any aspects of the PEs which could be used to observationally constrain the external Galactic potential, or if there are observable features of PEs which can be used to discriminate between alternative predictions proposed by MOND and DM theories. The paper is organized as follows. In Section 2 we describe how the simulations were set up and what initial conditions were chosen. Section 3 investigates the number of PEs that exist in the simulations, and their distribution and dynamics. In Section 4 we derive a prediction for the velocity dispersion at the Jacobi surface and compare this to simulations and observational data. Finally we present our conclusions in Section 5.
\section{Description of the Simulations} All simulations were run using \texttt{\small{NBODY6TT}} \citep{Renaud2015}, a modified version of the direct $N$-body integrator \texttt{\small{NBODY6}} \citep{Aarseth2003} optimised for use with GPUs \citep{Nitadori2012}. \texttt{\small{NBODY6TT}} (mode B) allows any functional input for the galactic potential and avoids a linearised approximation of the tidal forces. We consider power-law mass profiles for our galactic potential, using the notation from \citet*{Innanen1983} and their equation A2 for the mass enclosed within a distance from the centre of the galaxy $R_{\rm{g}}$, \begin{equation} M(<R_{\rm{g}}) = M_{\rm 0}\left(\frac{R_{\rm{g}}}{R_{\rm 0}}\right)^{\lambda}, \label{eq:Gmass} \end{equation} where $M_{\rm 0}$ and $R_{\rm 0}$ are scale-factors. From this they obtain the potential in their equations A11 and A12 \begin{equation} \displaystyle \phi_{\rm g}(R_{\rm{g}})=\begin{cases} \displaystyle\frac{GM_{\rm 0}}{(\lambda-1)R_{\rm 0}}\left[\left(\frac{R_{\rm{g}}}{R_{\rm 0}}\right)^{\lambda-1}-1\right], & \text{if $\lambda \geq 0$, $\lambda \neq 1$ }.\\ \displaystyle\frac{GM_{\rm{0}}}{R_{\rm 0}}\ln{\frac{R_{\rm{g}}}{R_{\rm 0}}}, & \text{if $\lambda = 1$}. \label{eqn:phi} \end{cases} \end{equation} We consider three specific cases, using $\lambda$ = 0, 1 and 2, which correspond to a point mass, a singular isothermal sphere, and a $1/R_{\rm g}$ density profile \citep*[i.e. the density profile within the scale radius of a Navarro-Frenk-White profile;][]{Navarro1996}, respectively. In each potential we simulate clusters with an initial number of stars $N_0$=16384 of the same mass, or with masses distributed according to the \citet{Kroupa2001} mass function between $0.1\,M_{\odot}$ and $1\,M_{\odot}$. We also vary the eccentricities of the orbit, using $\epsilon$ = 0, 0.25, 0.5 and 0.75.
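As a sanity check of these expressions (a minimal sketch in units with $G=M_0=R_0=1$, unrelated to the actual \texttt{\small{NBODY6TT}} input files), the potential can be implemented with its two branches and verified for continuity at $\lambda=1$ and against the point-mass case at $\lambda=0$:

```python
import math

def phi_g(Rg, lam, M0=1.0, R0=1.0, G=1.0):
    # Power-law galactic potential (equations A11 and A12 of Innanen et al.),
    # with the logarithmic branch for lam = 1
    if abs(lam - 1.0) < 1e-12:
        return G * M0 / R0 * math.log(Rg / R0)
    return G * M0 / ((lam - 1.0) * R0) * ((Rg / R0) ** (lam - 1.0) - 1.0)

# lam -> 1 joins continuously onto the logarithmic case
print(phi_g(3.0, 1.0), phi_g(3.0, 1.0 + 1e-7))
# lam = 0: potential differences match the point-mass result GM0 (1/R1 - 1/R2)
print(phi_g(4.0, 0.0) - phi_g(2.0, 0.0))  # 0.25
```

The constant offset in the first branch only shifts the zero point of the potential; forces and the Jacobi energy are unaffected by it.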
The equations of motion are solved in a non-rotating reference frame that orbits the galactic centre, with the centre of mass of the cluster initially at the origin. For the analysis of the circular orbits we move the data to a corotating reference frame, where the $x$-axis joins the centre of the cluster and the galaxy, which is always located at $x=-R_{\rm{g}}$, and the $y$-axis is positive in the direction of the tangential component of the orbital velocity (FH00). This is required as it is only possible to explicitly identify PEs in the corotating frame using the Jacobi energy of the stars (see e.g. chapter 5 of \citealt{Spitzer1987}). For the eccentric orbits this is not possible and we therefore carry out our analysis in the non-rotating frame. In this paper we use $N$-body units \citep{Henon1971} where ${G}=1$, the initial total cluster mass $M_{\rm c}=1$ and initial total energy ${E_{\rm t}}=-1/4$. \subsection{Input parameters} \subsubsection{Circular orbits} We set up the simulations such that the clusters on circular orbits in each potential have the same initial half-mass radius, ${r_{\rm hm}}$, and $r_{\rm{J}}$. The initial conditions correspond to a King model with $W_{0} = 5$ \citep{King1966}\footnote{We use \texttt{\small{LIMEPY}} (https://github.com/mgieles/limepy, \citealt{Gieles2015}) to generate the initial positions and velocities of the stars.}. However, as the King model describes a spherical distribution of stars within the truncation radius $r_{\rm t}$, using it in a tidal potential will introduce an initial population of PEs outside the Jacobi surface. This is because a Jacobi surface with $r_{\rm J}=r_{\rm t}$ is triaxial and flatter in the $y$ and $z$ axes than in the $x$-axis.
Therefore we define our galactic potential such that $r_{\rm J} = 1.5\,r_{\rm t}$: in this way the King model will sit within the Jacobi surface and have no initial PEs\footnote{There will still be a small $\lambda$-dependent population of PEs due to the $z$-axis of the Jacobi surface becoming increasingly flattened as $\lambda$ increases.}. The filling factor is then ${r_{\rm hm}}/{r_{\rm J}} \simeq 0.125$. The Jacobi radius for circular orbits in a galaxy defined by equation~(\ref{eq:Gmass}) is \begin{equation} r_{\rmn{J}} = \left[{G M_{\rmn{c}} \over (3-\lambda)\Omega^{2}}\right]^{1/3} \label{eqn:tidalradius} \end{equation} from \citet{King1962}, where $\Omega$, for our galactic potential, is defined by \begin{equation} \Omega^2 = \frac{GM(<R_{\rm{g}})}{R_{\rm{g}}^3} = \frac{GM_0}{R_{\rm g}^3} \left(\frac{R_{\rm g}}{R_{\rm 0}}\right)^{\lambda}. \end{equation} \texttt{\small{NBODY6TT}} requires astrophysical units for the input values for the galactic potential and the orbit. We find values for $M_{0}$, $R_{0}$ and $R_{\rm g}$ in physical units that give us the desired $r_{\rm J}$. We keep $R_{\rm g}$ the same for the circular orbits in the different potentials and calculate the required circular velocity of the cluster as \begin{equation} V_{\rm c}(R_{\rm g}) = \Omega R_{\rm g} = \sqrt{\frac{GM_0}{R_{\rm g}}}\left(\frac{R_{\rm g}}{R_0}\right)^{\lambda/2}. \label{eqn:Vc} \end{equation} \subsubsection{Reference frame} To analyse the simulations in the corotating frame, the solid-body rotation of the cluster stars relative to the non-rotating frame needs to be removed.
To find the velocity components in the corotating reference frame we use $\boldsymbol{v_{\rm cr}} = \boldsymbol{v_{\rm nr}} - \boldsymbol{v_{\rm sb}}$, where $\boldsymbol{v_{\rm cr}}$ and $\boldsymbol{v_{\rm nr}}$ are the velocity vectors in the corotating and nonrotating reference frames respectively, and $\boldsymbol{v_{\rm sb}}$ is the solid-body rotation due to the choice of the frame, which corresponds to $(0,0,\Omega\sqrt{x^2+y^2})$ in spherical velocity components $(v_{r},v_{\theta},v_{\varphi})$, where $\varphi$ indicates the angle from the positive $x$-axis in the direction of the positive $y$-axis. The positions in the corotating frame are then found by rotating the Cartesian position vector in the nonrotating frame in the negative $\varphi$ direction through the angular offset between the two frames. \subsubsection{Eccentric orbits} \begin{table} \begin{center} \caption{Input values for our series of simulations. Columns from left to right are: name of the simulation, orbital eccentricity $\epsilon$, apocentre radius $R_{\rm a}$, apocentre velocity $V_{\rm a}$ (both in $N$-body units), initial mass function IMF and slope of the enclosed mass of the galaxy, $\lambda$.
All simulations have $N_0=16384$ particles.} \begin{tabular}{|l|l|l|l|l|l|} \hline Name & $\epsilon$ & $R_{\rm a}$ & $V_{\rm a}$ & IMF & $\lambda$\\ \hline $\lambda$0$\epsilon$0 & 0.00 & 2494.8 & 85.10 & Delta & 0 \\ $\lambda$0$\epsilon$0.25 & 0.25 & 3118.5 & 65.92 & Delta & 0 \\ $\lambda$0$\epsilon$0.5 & 0.50 & 3742.2 & 49.13 & Delta & 0 \\ $\lambda$0$\epsilon$0.75 & 0.75 & 4365.9 & 32.17 & Delta & 0 \\ $\lambda$0$\epsilon$0K & 0.00 & 2494.8 & 85.10 & Kroupa & 0 \\ $\lambda$0$\epsilon$0.25K & 0.25 & 3118.5 & 65.92 & Kroupa & 0 \\ $\lambda$0$\epsilon$0.5K & 0.50 & 3742.2 & 49.13 & Kroupa & 0 \\ $\lambda$0$\epsilon$0.75K & 0.75 & 4365.9 & 32.17 & Kroupa & 0 \\ $\lambda$1$\epsilon$0 & 0.00 & 2494.8 & 104.23 & Delta & 1 \\ $\lambda$1$\epsilon$0.25 & 0.25 & 3118.5 & 79.23 & Delta & 1 \\ $\lambda$1$\epsilon$0.5 & 0.50 & 3742.2 & 54.63 & Delta & 1 \\ $\lambda$1$\epsilon$0.75 & 0.75 & 4365.9 & 29.68 & Delta & 1 \\ $\lambda$1$\epsilon$0K & 0.00 & 2494.8 & 104.23 & Kroupa & 1 \\ $\lambda$1$\epsilon$0.25K & 0.25 & 3118.5 & 79.23 & Kroupa & 1 \\ $\lambda$1$\epsilon$0.5K & 0.50 & 3742.2 & 54.63 & Kroupa & 1 \\ $\lambda$1$\epsilon$0.75K & 0.75 & 4365.9 & 29.68 & Kroupa & 1 \\ $\lambda$2$\epsilon$0 & 0.00 & 2494.8 & 147.40 & Delta & 2 \\ $\lambda$2$\epsilon$0.25 & 0.25 & 3118.5 & 110.55 & Delta & 2 \\ $\lambda$2$\epsilon$0.5 & 0.50 & 3742.2 & 73.70 & Delta & 2 \\ $\lambda$2$\epsilon$0.75 & 0.75 & 4365.9 & 36.85 & Delta & 2 \\ $\lambda$2$\epsilon$0K & 0.00 & 2494.8 & 147.40 & Kroupa & 2 \\ $\lambda$2$\epsilon$0.25K & 0.25 & 3118.5 & 110.55 & Kroupa & 2 \\ $\lambda$2$\epsilon$0.5K & 0.50 & 3742.2 & 73.70 & Kroupa & 2 \\ $\lambda$2$\epsilon$0.75K & 0.75 & 4365.9 & 36.85 & Kroupa & 2 \\ \hline \end{tabular} \end{center} \end{table} The kinematics and other properties, such as the mass of the cluster, vary over the course of an eccentric orbit. This is because $r_{\rm J}$ will expand and contract, causing stars to effectively escape from the cluster and then be recaptured.
It can therefore be useful to approximate an eccentric orbit by a circular orbit that has the same dissolution time and mass evolution \citep{Cai2016}. This allows us to reduce these orbital variations by adopting an approximate $r_{\rm J}$, which we refer to as $r_{\rm J, circ}$, at any point in the eccentric orbit by using the angular velocity of a circular orbit with the same lifetime. To achieve this, we set up our eccentric orbit simulations with the same semi-major axis of the orbit, $a$, because then the lifetime is (to first order) independent of eccentricity (\citealt{Cai2016}; Bar-or et al. in prep). The semi-major axes of the eccentric and circular orbits are \begin{equation} a =\begin{cases} (R_{\rm a} + R_{\rm p})/2, & \text{if $\epsilon>0$}\\ R_{\rm g}, & \text{if $\epsilon=0$}, \end{cases} \end{equation} where $R_{\rm a}$ and $R_{\rm p}$ are the apocentre and pericentre distances respectively, and $\epsilon$ is the eccentricity of the orbit. By using the relation $R_{\rm p} = R_{\rm a}(1-\epsilon)/(1+\epsilon)$, we find \begin{equation} R_{\rm a} = (1+\epsilon)a. \label{eqn:Ra} \end{equation} This gives a simple relation for the apocentre value depending only on the eccentricity and is independent of the potential. To calculate the required initial apocentre velocity for the eccentric orbits, we use conservation of $E_{\rm t}$, and angular momentum of the orbit, $J$: \begin{equation} \begin{split} E_{\rm t} &= E_{\rm a} = E_{\rm p}\\ &= 0.5V_{\rm a}^2 + \phi_{\rm g}(R_{\rm a}) = 0.5V_{\rm p}^2 + \phi_{\rm g}(R_{\rm p}), \end{split} \label{eqn:consE} \end{equation} and \begin{equation} \begin{split} J &= J_{\rm a} = J_{\rm p} \\ &= R_{\rm a}V_{\rm a} = R_{\rm p}V_{\rm p}, \end{split} \label{eqn:consJ} \end{equation} where the subscripts again refer to apocentre and pericentre. 
By substituting $V_{\rm p}$ in equation~(\ref{eqn:consE}) by using equation~(\ref{eqn:consJ}), we find \begin{equation} V_{\rm {a}}^2 = \frac{2\left[\phi(R_{\rm p}) - \phi(R_{\rm a})\right]}{1-\left({R_{\rm a}}/{R_{\rm p}}\right)^2}. \label{eq:apoV} \end{equation} Then by using equations~(\ref{eqn:phi}),~(\ref{eqn:Vc}) and~(\ref{eqn:Ra}) we find the initial apocentre velocities for the eccentric orbits as \begin{equation} V_{\rm a}^2 =\begin{cases} \displaystyle \frac{2V_{\rm c}(R_{\rm a})^2}{(\lambda-1)} \left[\frac{\left(\frac{1-\epsilon}{1+\epsilon}\right)^{\lambda-1}-1}{1-\left(\frac{1+\epsilon}{1-\epsilon}\right)^2}\right], & \text{if $\lambda\neq1$}.\\ \displaystyle V_{\rm c}(R_{\rm a})^2 \frac{2\ln{\left(\frac{1-\epsilon}{1+\epsilon}\right)}}{1-\left(\frac{1+\epsilon}{1-\epsilon}\right)^2}, & \text{if $\lambda=1$}. \end{cases} \end{equation} Table 1 shows the input values for all the simulations. The names of the simulations specify the values of $\lambda$, $\epsilon$ and the type of mass function. \section{Properties of potential escapers} \subsection{Definition and identification} \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{Fractions_new.pdf} \caption{Ratio of the number of PEs to total number of stars remaining in the cluster (left) and the fraction of mass in PEs (right), for the circular orbits in each potential (each identified by a different colour, as indicated in the panel on the left plot) with equal-mass stars (solid lines) and Kroupa IMF (dashed lines).} \label{fig:Npe} \end{figure*} The Jacobi energy of a star is defined as (see e.g. page 2 of FH00) \begin{equation} E_{\rm J} = \frac{v^2}{2} + \phi_c + \frac{1}{2}\Omega^2\left[z^2 - (3-\lambda)x^2\right], \label{eq:Ej} \end{equation} where we include the dependence on $\lambda$ (see the derivation in R11). The third term is a combination of the tidal and the centrifugal potentials when working in a corotating reference frame. 
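As a minimal sketch, equation~(\ref{eq:Ej}) can be evaluated per star in the corotating frame. Here the cluster potential $\phi_c$ is approximated by a point mass for illustration (an assumption; the simulations of course use the full $N$-body potential), with $G=1$:

```python
import math

G = 1.0  # N-body units

def jacobi_energy(pos, vel, Mc, Omega, lam):
    """Jacobi energy E_J = v^2/2 + phi_c + (1/2) Omega^2 [z^2 - (3-lambda) x^2].
    pos = (x, y, z) and vel = (vx, vy, vz) in the corotating frame.
    phi_c is approximated as a point-mass cluster potential (assumption)."""
    x, y, z = pos
    r = math.sqrt(x * x + y * y + z * z)
    phi_c = -G * Mc / r
    v2 = sum(v * v for v in vel)
    tidal_centrifugal = 0.5 * Omega ** 2 * (z * z - (3.0 - lam) * x * x)
    return 0.5 * v2 + phi_c + tidal_centrifugal
```

Under this approximation, a star at rest at the Lagrange point $(r_{\rm J},0,0)$ recovers $E_{\rm J}=-3GM_{\rm c}/(2r_{\rm J})$, the critical energy.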
We also define $\hat{E} = (E_{\rm J} - E_{\rm crit})/|E_{\rm crit}|$, where $E_{\rm crit} = -3GM_{\rm c}/(2r_{\rm J})$ in a corotating reference frame. It is difficult to define exactly what constitutes a PE, as there can be some stars with an energy above the critical energy on stable orbits inside the cluster \citep{Henon1969}, and others with apocentres outside of the Jacobi surface of the cluster. At any moment there will also be unbound stars that are in the process of escaping from the cluster but are still found within the Jacobi surface. To proceed we adopt the following working definition: \textit{PEs are stars inside a sphere of radius $r_{\rm J}$ that have $\hat{E} > 0$}. The maximum extent of the Jacobi surface on the $y$-axis is $(2/3)r_{\rm J}$, and along the $z$-axis the maximum point of the surface is $\lambda$-dependent: $\sim 0.638\,r_{\rm J}$, $0.626\,r_{\rm J}$ and $0.596\,r_{\rm J}$ for $\lambda$=0, 1 and 2 respectively (see equation 14 of R11). This means that our definition of PEs using a sphere of radius $r_{\rm J}$ will include most of the stars which have the apocentre of their orbit outside of the Jacobi surface, but will also include some unbound stars that have escaped from the Jacobi surface. A similar approximation is used when dealing with observational data, as a circular projected tidal surface is usually assumed. \subsection{Properties and distribution} \subsubsection{Fraction of PEs and mass distribution} We begin our investigation by looking at the fraction of PEs relative to bound stars inside a sphere of radius $r_{\rm J}$. Fig.~\ref{fig:Npe} shows the ratio of the number of PEs to the total number of stars (left-hand panel) and the fraction of the total mass of stars in PEs (right-hand panel). Solid lines represent the simulations with equal-mass stars and dashed lines represent the simulations with a mass spectrum.
The later stages of the $\lambda$0$\epsilon$0 simulation are consistent with the evolution found in B01; however, there is a clear increase in the fraction when increasing $\lambda$. This increase is possibly due to the dependence of the escape time of individual stars, $t_{\rm e}$, on the galactic potential: R11 and TF10 derived a $\lambda$-dependent $t_{\rm e}$ based on the flux of orbits out of the Lagrange points, finding $t_{\rm e}(\lambda=2)/t_{\rm e}(\lambda=0)\sim$ 1.2 and 1.14 respectively\footnote{TF10 however found this ratio to be smaller than would be required from the differences in the dissolution times of their $N$-body simulations.}. The number of PEs also increases when introducing a mass spectrum. The creation of PEs is due to many minor interactions with other stars, and a roughly constant number of PEs is created per half-mass relaxation time-scale, \begin{equation} t_{\rm rh} \propto \frac{N_{\rm c}^{1/2}\,r_{\rm hm}^{3/2}}{\ln{\Lambda}\,{<m>}^{1/2}\,\psi}, \end{equation} where $m$ is the mass of the individual stars, $<\,>$ indicates a mean, $\psi = {<m^{5/2}>}/{<m>}^{5/2}$, which equals 1 when the masses of the stars are equal \citep{Spitzer1971}, $\ln{\Lambda}$ is the Coulomb logarithm with $\Lambda=0.11N_{\rm c}$ \citep{Giersz1994} and $N_{\rm c}$ is the number of stars inside the cluster. Therefore, systems with a spectrum of masses have a shorter $t_{\rm rh}$, resulting in a higher production rate of PEs compared to a system with equal-mass stars. Because the escape time does not depend on the mass function, more PEs build up in the simulations of clusters with a spectrum of masses. This increasing fraction of PEs for higher $\lambda$ (i.e. galaxies with flatter density profiles) corroborates the fraction of 0.35 found by \citet{Just2009}, who used a Salpeter IMF \citep{Salpeter1955} and a Miyamoto-Nagai disc for their galactic potential \citep{Miyamoto1975}.
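The effect of the mass spectrum can be illustrated numerically. The sketch below draws masses from a single power law of slope $-2.3$ between $0.1$ and $1\,M_{\odot}$ (a simplification: the full \citet{Kroupa2001} function flattens below $0.5\,M_{\odot}$) and evaluates the factor ${<m^{5/2}>}/{<m>}^{5/2}$, which equals 1 for equal masses and exceeds 1 for any mass spectrum, shortening $t_{\rm rh}$:

```python
import random

def sample_kroupa(n, m_lo=0.1, m_hi=1.0, alpha=2.3, seed=1):
    """Draw n masses from dN/dm ~ m^-alpha by inverse-transform sampling.
    A single slope is an assumption; the full Kroupa (2001) IMF has a
    break at 0.5 Msun."""
    rng = random.Random(seed)
    a = 1.0 - alpha
    lo, hi = m_lo ** a, m_hi ** a
    return [(lo + rng.random() * (hi - lo)) ** (1.0 / a) for _ in range(n)]

def mass_spectrum_factor(masses):
    """The factor <m^(5/2)>/<m>^(5/2) entering the relaxation time."""
    n = len(masses)
    mean_m = sum(masses) / n
    mean_m52 = sum(m ** 2.5 for m in masses) / n
    return mean_m52 / mean_m ** 2.5
```

By Jensen's inequality the factor is $\geq 1$, with equality only for equal masses, which is why a mass spectrum shortens $t_{\rm rh}$ and raises the PE production rate.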
There is an initial phase of rapid PE production where more PEs are produced than escape from the cluster. Although our initial value of $r_{\rm t}/r_{\rm J}$ avoided having any primordial PEs, there is a large number of stars that are very close to the critical energy and therefore take less time to be scattered above it. After this phase the gradient decreases, which is much more noticeable for the $\lambda$=2 and mass spectrum simulations, as the production and loss of PEs become closer to being balanced. In simulations with lower particle number (not shown in the figure), we found that by increasing the initial value of $r_{\rm t}/r_{\rm J}$ the same final fraction of PEs is reached, but the fraction is lower than in Fig.~\ref{fig:Npe} for much of the lifetime. There is also an $N$-dependence in the fraction of PEs (B01), which possibly reduces their effects in systems with larger particle numbers, but our simulations are directly comparable to the size of open cluster-like systems. The right-hand panel of Fig.~\ref{fig:Npe} shows that the fraction of mass in PEs is lower than the number fraction for each of the models. This means that PEs are predominantly low mass, possibly as it is easier to scatter them above the critical energy. Fig.~\ref{fig:cummass} further investigates the mass of the PEs compared to the bound stars inside the cluster. We plot the cumulative fraction of stars as a function of mass for PEs (dashed) and bound stars (solid) at three snapshots, when the remaining mass is $0.75M_{0}$ (blue), $0.5M_{0}$ (green) and $0.25M_{0}$ (red), where $M_{0}$ is the initial mass. The PEs have a much higher fraction of stars with low mass, as expected. Even when the mass remaining is $0.25M_{0}$, over $40\%$ of the PEs are below $0.3M_{\odot}$, which means that a large fraction of PEs may be below current observational limits. This could explain why the effects of PEs are ubiquitous in simulations yet the peculiarities in observations can vary.
\begin{figure} \centering \includegraphics[width=0.99\columnwidth]{cumulativeM.pdf} \caption{Cumulative mass functions for bound stars (solid lines) and PEs (dashed) at three different moments in the evolution of the $\lambda$1$\epsilon$0K simulation.} \label{fig:cummass} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{PEs_pos.pdf} \caption{Top panel: fraction of PEs relative to the total number of stars in spherical bins. Bottom panel: fraction of PEs in spherical bins relative to the total number of PEs. Each panel shows three different moments through the lifetime of the $\lambda$1$\epsilon$0K simulation, when the mass of the cluster is 0.75, 0.5 and 0.25 of the initial cluster mass.} \label{fig:cum} \end{figure} \subsubsection{Spatial distributions} The top panel of Fig.~\ref{fig:cum} shows the fraction of PEs relative to the total number of stars in spherical bins of increasing radius, plotted at three points over the lifetime of the $\lambda$1$\epsilon$0K simulation. At all moments there is roughly an equal number of PEs and bound stars at $\sim$0.5$r_{\rm J}$, suggesting that the effect of PEs on the kinematics should be seen far into the cluster, as found in K10. Beyond this location the PEs dominate, and beyond $\sim$0.8$r_{\rm J}$ approximately all stars are PEs, suggesting that few bound stars reach close to the Lagrange points, although there will be many PEs outside of the Jacobi surface in the outer spherical bins. The bottom panel shows the fraction of PEs in spherical bins relative to the total number of PEs at that time. This quantity also peaks at around $\sim$0.5$r_{\rm J}$, and the location of this peak moves outwards slowly with time. The behaviour in the $\lambda$=1 simulation shown here is similar to the behaviour of the circular orbits in the other potentials. \begin{figure*} \centering \includegraphics[width=0.99\textwidth]{PEs_EJ.pdf} \caption{Histogram of $\hat{J_{z}}$ (top) and $\hat{E}$ (bottom) of PEs normalised to the total number of PEs.
Solid lines are the equal-mass simulations, dashed lines are those with a Kroupa IMF. Both cases are shown at three different moments throughout the lifetime of the simulations, at 0.75$M_0$, 0.5$M_0$ and 0.25$M_0$. Left-hand panels are $\lambda$=0, middle panels are $\lambda$=1 and right-hand panels are $\lambda$=2.} \label{fig:PE_ehat} \end{figure*} \subsubsection{Energy and angular momentum} Fig.~\ref{fig:PE_ehat} shows the energy and angular momentum of the PEs as dimensionless quantities, scaled to properties of the cluster, to determine whether the distribution of PEs varies in time relative to the cluster. To do this we divide the $z$-component of the angular momentum by the angular momentum of a circular orbit at the Jacobi radius, $r_{\rm{J}}v_{\rm{c}}$, where $v_{\rm c}$ is the circular velocity of a fiducial star at the Jacobi radius, and call this quantity $\hat{J_{\rm{z}}}$ (top panels). For the energy we use $\hat{E}$ (bottom panels). Solid lines are the equal-mass clusters, dashed lines are the simulations using a Kroupa IMF. Both are displayed for three snapshots, when the mass remaining is 0.75$M_{\rm 0}$ (blue), 0.5$M_{\rm 0}$ (green) and 0.25$M_{\rm 0}$ (red). The panels from left to right represent the $\lambda$=0, 1 and 2 circular orbits respectively. There is minimal evolution in $\hat{J_{z}}$ for all simulations, and there is little difference between the equal-mass and mass spectrum clusters. There is a negative bias, which suggests retrograde motion in the corotating reference frame. The distribution in energy becomes wider with time (i.e. at lower $N$) for the clusters in each galactic potential, and this behaviour is more pronounced in the mass spectrum simulations. It is also evident that the distribution becomes wider with increasing $\lambda$, with a larger fraction of stars at higher energies.
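As a sketch of the normalisation used for $\hat{J_{\rm z}}$, taking $v_{\rm c}=\sqrt{GM_{\rm c}/r_{\rm J}}$ for the circular velocity of a fiducial star at the Jacobi radius (with $G=1$), a star on a prograde circular orbit at $r_{\rm J}$ in the orbital plane has $\hat{J_{\rm z}}=1$ and its retrograde counterpart $\hat{J_{\rm z}}=-1$:

```python
import math

def jz_hat(x, y, vx, vy, Mc, rJ):
    """z-component of the specific angular momentum, normalised by the
    angular momentum rJ*vc of a circular orbit at the Jacobi radius
    (G = 1 in N-body units)."""
    vc = math.sqrt(Mc / rJ)
    jz = x * vy - y * vx
    return jz / (rJ * vc)
```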
By solving an equation similar to the Fokker-Planck equation, which considers the production, via diffusion, and escape of PEs, B01 introduced a model for the distribution $N(\hat{E})$ of PEs \begin{equation} N(\hat{E}) \propto \hat{E}^{1/2}K_{1/4}\left[\frac{1}{2}\left(\frac{t_{\rm rh}}{k_{1}t_{\rm esc}}\right)^{1/2}\hat{E}^2\right], \label{eqn:Ne} \end{equation} where $K_{1/4}$ is a modified Bessel function, $t_{\rm esc}$ is the time for escape of a star with $\hat{E}=1$ and $k_{1}$ is a constant that corresponds to the fraction of mass scattered above $E_{\rm crit}$ over one $t_{\rm rh}$, the instantaneous half-mass relaxation time. Fig.~\ref{fig:PE_ehat_model} shows the normalised $N(\hat{E})$ distribution for the $\lambda0\epsilon0$, $\lambda1\epsilon0$ and $\lambda2\epsilon0$ simulations (blue, green and red histograms respectively) when the clusters have a remaining mass of $0.5M_{\rm 0}$. We can express the half-mass relaxation time as $t_{\rm rh} \propto (M_{\rm c}/{<m>}\ln{\Lambda})(r_{\rm hm}/r_{\rm J})^{3/2}(GM_{\rm c}/r_{\rm J}^3)^{-1/2}$ and we consider the following expression (from FH00) for the escape time: $t_{\rm esc} \propto (GM_{\rm c}/r_{\rm J}^3)^{-1/2} f(\lambda)$, where we have included a dependence on the galactic potential via $f(\lambda)$. We consider an empirical estimate of the function, $f(\lambda) = [3/(3-\lambda)]^\alpha$, and by fitting to the distribution for each potential (blue, green and red lines in the figure) we find $\alpha \sim 1$. This $\lambda$ dependence gives a variation in $t_{\rm esc}$ of $\sim$3 between $\lambda$=0 and $\lambda$=2, which is larger than the values found by R11 and TF10. However, our difference in dissolution times with $\lambda$ is consistent with the $N$-body simulations in TF10.
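Equation~(\ref{eqn:Ne}) is straightforward to evaluate numerically. The sketch below uses the integral representation $K_{\nu}(x)=\int_0^{\infty}e^{-x\cosh t}\cosh(\nu t)\,{\rm d}t$ with simple trapezoidal quadrature (adequate for illustration; \texttt{scipy.special.kv} would be the production choice), with the normalisation and the ratio $t_{\rm rh}/(k_1 t_{\rm esc})$ as free inputs:

```python
import math

def bessel_k(nu, x, t_max=30.0, n=30000):
    """Modified Bessel function K_nu(x), evaluated from its integral
    representation with the trapezoidal rule."""
    dt = t_max / n
    total = 0.5 * (math.exp(-x) +
                   math.exp(-x * math.cosh(t_max)) * math.cosh(nu * t_max))
    for i in range(1, n):
        t = i * dt
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * dt

def n_of_e(e_hat, trh_over_tesc, k1=1.0):
    """Unnormalised B01 distribution N(E_hat) of PE energies."""
    arg = 0.5 * math.sqrt(trh_over_tesc / k1) * e_hat ** 2
    return math.sqrt(e_hat) * bessel_k(0.25, arg)
```

The distribution is roughly flat at small $\hat{E}$ (where $K_{1/4}$ behaves as its argument to the power $-1/4$) and cuts off steeply at high $\hat{E}$, with the cut-off energy set by $t_{\rm rh}/(k_1 t_{\rm esc})$.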
It is important to note that $r_{\rm hm}/r_{\rm J}$ (and therefore $t_{\rm rh}$) also varies with $\lambda$: $r_{\rm hm}/r_{\rm J}$ will reduce to $\sim0.1$ at core collapse and then increase to 0.2 for $\lambda$=0 and 0.25 for $\lambda$=2. This evolution of the energy and the variation with $\lambda$ can be used to derive an expression for the velocity dispersion at the Jacobi surface of a cluster, which we discuss further in Section 4. \subsection{Dynamics of the potential escapers} \subsubsection{Velocity dispersion} \begin{figure} \centering \includegraphics[width=1\columnwidth]{E_dist_fitonly.pdf} \caption{Fraction of PEs as a function of their energy at $M=0.5M_0$ in the $\lambda0\epsilon0$, $\lambda1\epsilon0$ and $\lambda2\epsilon0$ simulations. Blue, green and red lines are fits to each distribution.} \label{fig:PE_ehat_model} \end{figure} We also explore the dynamics of the PEs and their effect on the kinematics of the cluster. The 1D velocity dispersion is calculated for each component of the velocity as \begin{equation} \sigma_{\rm 1D}^2 = <v_{\rm 1D}^2> - <v_{\rm 1D}>^2. \end{equation} The 3D dispersion is then calculated in spherical coordinates, where $r$ is the radial component, $\theta$ is the angle from the positive $z$-axis and $\varphi$ is the angle measured from the $x$-axis in the $xy$ plane, \begin{equation} \sigma_{\rm 3D} = \sqrt{\sigma_{r}^2 + \sigma_{\theta}^2 + \sigma_{\varphi}^2}. \end{equation} Fig.~\ref{fig:dispprofile} shows the radial profile of $\sigma_{\rm 3D}$ in spherical bins for stars with $\hat{E} < 0$ (bound stars) in blue, all stars within $r_{\rm J}$ in green and all stars in red, for the $\lambda$1$\epsilon$0 simulation when the remaining mass is 0.5 $M_0$. The difference between the stars within $r_{\rm J}$ and the bound stars shows the effect of the PEs. The bound stars show a much sharper drop, while the dispersion of all stars decreases less rapidly with distance from the centre.
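The two dispersion estimators above amount to the following minimal sketch (plain per-component variances; the binning into spherical shells is omitted):

```python
import math

def sigma_1d(v):
    """1D dispersion: sqrt(<v^2> - <v>^2) for one velocity component."""
    n = len(v)
    mean = sum(v) / n
    return math.sqrt(sum((vi - mean) ** 2 for vi in v) / n)

def sigma_3d(v_r, v_theta, v_phi):
    """3D dispersion: the three spherical components added in quadrature."""
    return math.sqrt(sigma_1d(v_r) ** 2 +
                     sigma_1d(v_theta) ** 2 +
                     sigma_1d(v_phi) ** 2)
```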
The difference between the PEs and the bound stars also increases with the eccentricity of the orbit, as shown by K10 with numerical simulations. Projected quantities from the simulations also show that the viewing angle affects the velocity dispersion profile, as including stars belonging to the tidal tails increases the dispersion. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{dispprofile_one.pdf} \caption{Radial profile of the dispersion of the $\lambda$1$\epsilon$0 simulation at 0.5 $M_0$. The blue lines represent stars within $r_{\rm J}$ with an energy below $E_{\rm crit}$, the green lines are all the stars within $r_{\rm J}$, and the red lines are all of the stars in the simulation.} \label{fig:dispprofile} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{Beta_profile_cylin.pdf} \caption{Radial profile for the anisotropy, $\beta$, in cylindrical bins of 0.2$r_{\rm J}$ width, for the $\lambda$0$\epsilon$0, $\lambda$1$\epsilon$0 and $\lambda$2$\epsilon$0 simulations. The left-hand plot is the mean of two orbits around the time when the remaining mass is 0.8$M_0$, and the right plot is the same at a remaining mass of 0.3$M_0$.} \label{fig:Betapot} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{16rot.pdf} \caption{Radial $<v_{\varphi}>$ profile, normalised to $\Omega r_{\rm J}$, for the circular orbits in each potential using a corotating frame. The left-hand plot is the mean of two orbits around the time when the remaining mass is 0.8$M_0$, and the right plot is the same at a remaining mass of 0.3$M_0$.
Solid lines are all stars, dashed lines are only the PEs.} \label{fig:vphi_norm} \end{figure} \subsubsection{Anisotropy of the dispersion and rotation} To analyse the anisotropy of the dispersion in our simulations we use the $\beta$ parameter defined as \begin{equation} \beta = 1 - \frac{\sigma_{\rm t}^2}{2\sigma_{\rm r}^2}, \end{equation} where $\sigma_{\rm t}^2$=$\sigma_{\rm \theta}^2+\sigma_{\rm \varphi}^2$; $0<\beta\le1$ corresponds to radial anisotropy, $\beta<0$ to tangential anisotropy and $\beta=0$ to isotropy. Fig.~\ref{fig:Betapot} shows the radial profile of $\beta$ for all the stars (solid lines) and only the PEs (dashed lines) in the $\lambda$0$\epsilon$0, $\lambda$1$\epsilon$0 and $\lambda$2$\epsilon$0 simulations at snapshots when the mass remaining is $0.8M_{\rm 0}$ (left-hand panel) and $0.3M_{\rm 0}$ (right-hand panel). We calculate $\beta$ in cylindrical bins, denoted by $R=\sqrt{x^2 + y^2}$, for each individual snapshot and take the mean for each bin over two orbits. We use cylindrical bins as the anisotropy for each bin is then the same in the corotating and nonrotating reference frames. The profiles in each potential are similar in the early snapshot (left-hand panel), where all are close to zero. The $\lambda$=0 and $\lambda$=1 simulations show some tangential anisotropy in the outer region, although the error bars are very large, whereas the $\lambda$=2 simulation appears to be isotropic, or slightly radially anisotropic. In the later snapshot a clearer difference between the potentials is visible, with the $\lambda$=0 and $\lambda$=1 simulations developing tangential anisotropy, whereas the $\lambda$=2 simulation is isotropic. For all the potentials the bound stars are consistent with isotropy, and the anisotropy that develops is contained mostly in the PEs.
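A minimal sketch of the $\beta$ estimator for the stars in one bin, from the sample variances of the three spherical velocity components:

```python
def _variance(v):
    """Population variance of one velocity component."""
    n = len(v)
    mean = sum(v) / n
    return sum((x - mean) ** 2 for x in v) / n

def beta(v_r, v_theta, v_phi):
    """Anisotropy parameter beta = 1 - sigma_t^2/(2 sigma_r^2),
    with sigma_t^2 = sigma_theta^2 + sigma_phi^2.
    beta = 0: isotropy; beta > 0: radial; beta < 0: tangential."""
    sigma_t2 = _variance(v_theta) + _variance(v_phi)
    return 1.0 - sigma_t2 / (2.0 * _variance(v_r))
```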
It has been shown that simulations of GCs with dense starting conditions develop radial anisotropy (\citealt{Sollima2015}; \citealt{Zocchi2016}), but those with larger initial $r_{\rm hm}/r_{\rm J}$, similar to our initial conditions, do not develop any radial anisotropy and instead show tangential anisotropy near the tidal radius \citep{Baumgardt2003}. This is thought to be due to the balance between the preferential production and preferential loss of radial orbits: two-body interactions predominantly scatter stars outwards on radial orbits, and these stars then escape more easily than those on other orbits (\citealt*{Takahashi97}, \citealt*{Tiongco2016}). Therefore for dense initial conditions more stars are scattered outwards than can escape, which causes radial orbits to build up, but for extended clusters these radial orbits can escape as fast as or faster than they are created, leading to tangential anisotropy. As it is harder to escape from the cluster when increasing $\lambda$, more stars on radial orbits will build up, which could explain why our $\lambda$=2 simulation develops radial anisotropy. It was also shown by \citet{Oh1992} that the interaction with the tidal field increases the angular momentum of stars in the outer regions of clusters, causing a reduction in the eccentricity of their orbits. For their simulations this led to a reduction of radial anisotropy towards isotropy; in our case, due to the extended initial conditions, this could lead to an increase in the tangential anisotropy. However, it is not known how this effect would change with $\lambda$, or whether it could explain the weaker tangential anisotropy with increasing $\lambda$. We then explore the rotation curve of the PEs, by looking at the $\varphi$ component of the velocity in spherical coordinates.
Fig.~\ref{fig:vphi_norm} shows the radial profile of $<v_{\varphi}>$ for all the stars (solid lines) and only the PEs (dashed) binned in cylindrical shells in the $xy$ plane, normalised to $\Omega r_{\rm J}$ to show the amount of rotation as a fraction of the total velocity at $r_{\rm J}$. The left-hand and right-hand panels are at the same moments considered in Fig.~\ref{fig:Betapot} and also take the mean of two orbits, as explained previously. The PEs have a negative, i.e. retrograde, rotation; since the bound stars have values of $<v_{\varphi}>/\Omega r_{\rm J}$ between 0 and $-0.1$, the rotation of the cluster becomes increasingly retrograde with distance from the centre as PEs come to dominate. This negative rotation is expected as retrograde orbits are more stable against escape (\citealt{Keenan1975}; \citealt{Weinberg1991}). The difference between the left-hand and right-hand panels of Fig.~\ref{fig:vphi_norm} shows that over time the fraction of retrograde rotation for the $\lambda$=0 and $\lambda$=1 simulations stays roughly constant at 0.5$\Omega r_{\rm J}$, as seen in \citet*{Tiongco2016b}, but the $\lambda$=2 simulation becomes more negative. In Section 4, we derive a relation for the velocity dispersion at the Jacobi surface, $\sigma_{\rm J}$. If we instead normalise $<v_{\varphi}>$ to $\sigma_{\rm J}$, the profile is almost identical to Fig.~\ref{fig:vphi_norm} and can be used to study the relationship between our expression for the velocity dispersion and the rotation in the cluster. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{IMF_3way.pdf} \caption{Comparison of the kinematics in the $\lambda$1$\epsilon$0 and $\lambda$1$\epsilon$0K simulations. From top to bottom: dispersion in a spherical bin of $0.9r_{\rm J}$ to $r_{\rm J}$ over the lifetime of the simulations, velocity dispersion anisotropy and $<v_{\varphi}>$ (both at a snapshot when there is 0.5 $M_0$).
These quantities have been calculated in the same way as Figs.~\ref{fig:dispprofile},~\ref{fig:Betapot}, and~\ref{fig:vphi_norm}, respectively.} \label{fig:imfcomp} \end{figure} \subsubsection{IMF dependence} Fig.~\ref{fig:imfcomp} compares the kinematics of the $\lambda$1$\epsilon$0 and $\lambda$1$\epsilon$0K simulations. In the top panel we show the mass-weighted velocity dispersion in a spherical bin between $0.9r_{\rm J}$ and $r_{\rm J}$, against cluster mass over the lifetime of the simulations. The middle and bottom panels show the $\beta$ and $<v_{\varphi}>$ profiles respectively, at $0.5M_{\rm 0}$, calculated in the same way as Figs.~\ref{fig:Betapot} and~\ref{fig:vphi_norm}. There are minimal differences between the two simulations, showing that for the same $<m>$ the shape of the IMF has little effect on these aspects of the kinematics. \subsection{Eccentric orbits} \begin{figure} \centering \includegraphics[width=1\columnwidth]{Mboundscale.pdf} \caption{Bound mass evolution for eccentricities of 0, 0.25, 0.5, and 0.75 in each potential. Top plots are from the simulations where we used the same semi-major axis and mean galactocentric distance to approximate the same lifetime. Bottom plots are after scaling the simulations to the same time to reach 0.1 of the initial mass remaining.} \label{fig:sis_mbound} \end{figure} We now consider the effect of introducing eccentricity to the orbits. Fig.~\ref{fig:sis_mbound} shows the total mass evolution for the equal-mass clusters in all potentials and eccentricities. The top panels show the actual evolution in the different potentials for each eccentricity. All orbits had the same semi-major axis, which ensures that the lifetimes are similar at low $\epsilon$, but for larger eccentricities additional scaling is required to match them.
\citet{Cai2016} compared $t_{\rm diss}$ of clusters in $\lambda=0$ and $\lambda=1$ galaxies, finding that the eccentricity dependence was smaller for $\lambda=1$. Here we confirm this and find that for $\lambda=2$ the effect of eccentricity is even less important. To achieve the same lifetimes we define a scale factor as the ratio of the dissolution time of the circular orbit to that of the eccentric orbit to be scaled, $T_* = {t_{\rm diss}(\epsilon=0)/{t_{\rm diss}(\epsilon>0)}}$, with $t_{\rm diss}$ taken to be the time when $M_{\rm c} = 0.1 M_{\rm 0}$, and find the scale parameters for position, velocity and angular velocity as $r_* = T_*^{2/3}$, $v_* = T_*^{-1/3}$ and $\Omega_* =T_*^{-1}$. The bottom panels of Fig.~\ref{fig:sis_mbound} show the scaled mass evolution as a function of scaled time. The early evolution of the $\lambda0\epsilon0.75$ simulation is quite different from the others, which is likely due to the rapid loss of stars at pericentre. The lower-eccentricity orbits match the circular orbit profile later in the lifetime of the simulations. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{disp_eccen_4_new.pdf} \caption{Velocity dispersion in the 0.9$r_{J}$ to $r_{J}$ region in the $\lambda$1$\epsilon$0, $\lambda$1$\epsilon$0.25, $\lambda$1$\epsilon$0.5 and $\lambda$1$\epsilon$0.75 simulations. Black lines are our prediction using the mass and angular velocity of the circular orbit (see Section 4).} \label{fig:sis_eccen_scaled} \end{figure} \subsubsection{Velocity dispersion and anisotropy} Fig.~\ref{fig:sis_eccen_scaled} shows the velocity dispersion for stars between $0.9r_{\rm J}$ and $r_{\rm J}$ for the $\lambda$=1 simulations for different $\epsilon$. As the dissolution times of the eccentric orbits have been scaled to be the same as that of the circular orbit, the $\Omega$ of the circular orbit can be used to approximate that of the eccentric orbits.
This gives a smoothly declining $r_{\rm J, circ}$ and mass of the cluster, which we use to calculate our prediction in Section 4 (black lines), and reduces the variation of the dispersion over one orbit. The dispersion is very similar for each simulation, but has an orbital variation that increases with eccentricity. The higher dispersion values are due to a sharp increase at pericentre, but the cluster actually spends most of its time at apocentre and therefore at the lower values of the dispersion. Fig.~\ref{fig:sis_eccen_scaled} shows that the black-line prediction matches well the orbit-averaged velocity dispersion at any point in the lifetime of the eccentric-orbit simulations. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{Beta_profile_eccen_cylin.pdf} \caption{Comparison of the radial profile of the anisotropy for the $\lambda$1$\epsilon$0.25, $\lambda$1$\epsilon$0.5 and $\lambda$1$\epsilon$0.75 eccentric orbit simulations to the circular orbit $\lambda$1$\epsilon$0 simulation, using the mean of snapshots for three orbits around the specified remaining mass for each radial bin.} \label{fig:betaeccen} \end{figure} Fig.~\ref{fig:betaeccen} shows the $\beta$ profile as a function of $R$ for all stars in the $\lambda$=1 simulations using the approximate value of the Jacobi radius, $r_{\rm J, circ}$. The panels are produced as in Fig.~\ref{fig:Betapot} but only showing the profile for all stars in each bin. Because the anisotropy in cylindrical shells does not depend on the reference frame, and because the majority of the anisotropy is due to the PEs (as shown in Fig.~\ref{fig:Betapot}), variations in the anisotropy profiles across the different eccentricities can be attributed to variations in the population of PEs, assuming bound stars have an isotropic velocity distribution.
Fig.~\ref{fig:betaeccen} shows some variation across the eccentricities for the snapshot later in the lifetime, with less tangential anisotropy when increasing $\epsilon$. The $\epsilon$=0.75 simulation has a very different profile but this is possibly due to the different mass evolution shown in Fig.~\ref{fig:sis_mbound}, as different values for the initial filling factor can lead to variations in the anisotropy as explained earlier. \subsubsection{Rotation} For the circular orbits we found that the $<v_{\varphi}>$ of stars near $r_{\rm J}$ is about $0.5\Omega r_{\rm J}$ and retrograde with respect to the orbit. This implies that in a non-rotating frame these stars are on prograde orbits. Fig.~\ref{fig:vphi_eccen} shows the $<v_{\varphi}>$ profile for all stars in the equal-mass $\lambda$=1 case (solid lines) and the mean rotation profile of the frame calculated as $\Omega r$ (dashed lines)\footnote{In the case of the eccentric orbit with $\epsilon =0.75$, the last bin shows a larger rotation than expected from extrapolating the solid-body rotation outwards. This is due to one snapshot not having any stars in that bin and being excluded from the mean. This snapshot corresponded to apocentre where the rotation is at a minimum, and therefore the rotation is higher by not including this snapshot.}. The profiles are calculated again using cylindrical shells in the $xy$ plane. Here however we consider radial positions divided by $r_{\rm J}$ calculated from equation~(\ref{eqn:tidalradius}) for each snapshot. We chose this normalisation because the features of the rotation are washed out when using $r_{\rm J, circ}$ as the cluster expands and contracts over the course of an orbit. From Fig.~\ref{fig:vphi_eccen} we see that the $<v_{\varphi}>$ profiles are similar for different $\epsilon$, and at $r_{\rm J}$ are close to the value of $0.5\Omega r_{\rm J}$ found in Fig.~\ref{fig:vphi_norm}.
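The statement that retrograde motion in the corotating frame corresponds to prograde motion in a non-rotating frame can be made explicit: adding the frame rotation $\Omega r$ to $<v_{\varphi}> \approx -0.5\Omega r_{\rm J}$ at $r = r_{\rm J}$ gives a positive (prograde) azimuthal velocity. A minimal sketch with purely illustrative numbers:

```python
# Convert the mean azimuthal velocity from the corotating frame to the
# inertial (non-rotating) frame: v_inertial = v_corot + Omega * r.
# The numerical values are illustrative, not taken from the simulations.
Omega = 0.05      # orbital angular velocity (arbitrary units)
r_J = 20.0        # Jacobi radius (same length unit)

v_corot = -0.5 * Omega * r_J        # retrograde in the corotating frame
v_inertial = v_corot + Omega * r_J  # = +0.5 * Omega * r_J, i.e. prograde

assert v_corot < 0 < v_inertial
```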
The left-hand panel shows that early in the simulation there is some variation with eccentricity, as the eccentric orbits have higher $<v_{\varphi}>$ than the circular orbit, but this variation seems to decrease with time. The solid-body rotation of the frame, $\Omega r$ (dashed lines), also varies, decreasing with increasing eccentricity. This means that, if we subtract the solid-body rotation of the frame from the $<v_{\varphi}>$ of the stars, to convert to a fiducial reference frame that rotates at $\Omega_{\rm circ}$, there would be less retrograde rotation in clusters on higher $\epsilon$ orbits. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{sis_rot_profile.pdf} \caption{Radial profiles of $<v_{\varphi}>$ for all the stars in the simulations with a $\lambda$=1 potential with different eccentricities. Left-hand panel is the mean of snapshots for three orbits around 0.8 $M_0$. Right-hand panel is the same for 0.3 $M_0$. } \label{fig:vphi_eccen} \end{figure} \section{Velocity dispersion at the Jacobi surface} \subsection{Derivation} B01 derived a relation for $N(\hat{E})$ (equation~\ref{eqn:Ne}). We can use this result to derive a relation for the velocity dispersion of the PEs. As $N(\hat{E})$ is a probability density function, the mean can be found from \begin{equation} \left\langle\hat{E}\right\rangle = \int_{0}^{\infty}\hat{E}N(\hat{E})d\hat{E} \propto \left(\frac{t_{\rm esc}}{t_{\rm rh}}\right)^{1/4}, \label{eq:meanEhat} \end{equation} including our extra $\lambda$ dependence from Section 3.2.3. If we relate the energy to velocity using $\hat{E} \propto v^2/|{E_{\rm crit}}|$ with $E_{\rm J} = (v^2/2) + E_{\rm crit}$ at the Jacobi surface, and assume that the velocity dispersion is related to $<v^2>$ as for a Maxwellian distribution, we can find \begin{equation} \sigma_{\rm J} \propto \sqrt{<v^2>} \propto (<\hat{E}> |E_{\rm crit}|)^{1/2}.
\label{eq:sig} \end{equation} By substituting equation~(\ref{eq:meanEhat}) into equation~(\ref{eq:sig}) and by using $t_{\rm esc}$ and $t_{\rm rh}$ as defined in Section 3.2.3, and $|E_{\rm crit}| \propto M_{\rm c}/r_{\rm J} \propto (3-\lambda)^{1/3}\Omega^{2/3}M_{\rm c}^{2/3}$, we find \begin{equation} \sigma_{\rm J} \propto (3-\lambda)^{-1/12}{M_{\rm c}^{5/24}}\Omega^{1/3}{(<m>\ln{\Lambda})^{1/8}} \left(\frac{r_{\rm hm}}{r_{\rm J}}\right)^{-3/16}. \label{eq:prediction} \end{equation} This can be compared to the MOND prediction, which has a $M_{\rm c}^{1/4}$ dependence, very close to the one obtained here. However, equation~(\ref{eq:prediction}) has further dependencies which provide a way of discriminating between the two predictions using observational data. \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{disponlypot.pdf} \caption{Velocity dispersion of stars between 0.9$r_{\rm J}$ and $r_{\rm J}$ against remaining mass of the cluster, $M_{\rm c}$, for the $\lambda$0$\epsilon$0, $\lambda$1$\epsilon$0 and $\lambda$2$\epsilon$0 simulations (coloured points). Black lines are the prediction from equation~(\ref{eq:prediction}), with the constant of proportionality fit to the $\lambda$=0 case.} \label{fig:circdisp} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{higher_M_rhrj.pdf} \caption{Comparison of our $\sigma_{\rm J}$ prediction to simulations with a larger number of particles. The dispersion has been divided by $\Omega^{1/3}$ for each simulation to remove the largest difference, so that the profiles can be compared more easily.} \label{fig:disp_largeN} \end{figure} \subsection{Comparison of the velocity dispersion prediction to simulations} To establish whether our derived scaling of $\sigma_{\rm J}$ in equation~(\ref{eq:prediction}) holds in our $N$-body simulations, we compare our prediction to the dispersion of the stars near $r_{\rm J}$ for the circular orbits in each potential.
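Equation~(\ref{eq:prediction}) fixes $\sigma_{\rm J}$ only up to a constant of proportionality, which must be fitted to a simulation. A minimal sketch of the scaling, with the constant set to unity and illustrative arguments:

```python
# Scaling of the velocity dispersion at the Jacobi surface from
# equation (eq:prediction). The constant C is not fixed by the derivation;
# here it defaults to 1. All arguments are in arbitrary consistent units.
def sigma_J(M_c, Omega, mean_m, ln_Lambda, rhm_over_rJ, lam, C=1.0):
    return (C * (3.0 - lam) ** (-1.0 / 12.0)
              * M_c ** (5.0 / 24.0)
              * Omega ** (1.0 / 3.0)
              * (mean_m * ln_Lambda) ** (1.0 / 8.0)
              * rhm_over_rJ ** (-3.0 / 16.0))

# Doubling the cluster mass at fixed orbit raises sigma_J by 2^(5/24),
# close to (but distinguishable from) a MOND-like M^(1/4) scaling.
ratio = sigma_J(2.0, 1.0, 1.0, 10.0, 0.1, 0) / sigma_J(1.0, 1.0, 1.0, 10.0, 0.1, 0)
```

The extra dependencies on $\Omega$, $<m>$ and $r_{\rm hm}/r_{\rm J}$ are what distinguish this prediction from a pure mass scaling.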
We focus on a spherical shell between $0.9r_{\rm J}$ and $r_{\rm J}$ as there will only be PEs in this region of the cluster. Fig.~\ref{fig:circdisp} shows the velocity dispersion of stars in this shell as a function of the mass of the cluster for the $\lambda$0$\epsilon$0, $\lambda$1$\epsilon$0 and $\lambda$2$\epsilon$0 simulations. The black lines reproduce our predictions from equation~(\ref{eq:prediction}), after finding the constant of proportionality by fitting to the $\lambda$=0 case. The velocity dispersion near $r_{\rm J}$ from the simulations increases with higher $\lambda$, which is well reproduced by our $\lambda$ dependence. The mass dependence also accurately reproduces the decline in $\sigma_{\rm J}$ as $M_{\rm c}$ decreases. There is a large amount of scatter in the values from the simulation that could accommodate a range of mass dependencies. We therefore also compare our prediction to simulations with a higher number of particles. Fig.~\ref{fig:disp_largeN} shows the dispersion, divided by $\Omega^{1/3}$ to remove the largest variation between simulations, against the remaining mass of the cluster over time for the $\lambda$1$\epsilon$0K simulation (blue), an $N_{0} = 10^{5}$ particle simulation run for the Gaia Challenge Workshop (green, http://bit.ly/241CBMJ, \citealt{Peuten2016}), a simulation of the cluster M4 (red, \citealt{Heggie2014}) and the $N=10^6$ particle Dragon simulations (magenta, \citealt{Wang2016}). The predictions for each simulation are plotted using the constant of the fit from Fig.~\ref{fig:circdisp} (solid lines). Our prediction slightly underestimates the dispersion in the 100k and Dragon simulations, but matches the M4 simulation extremely well, including the compact initial conditions and the subsequent expansion to fill the Roche volume. The difference between the predictions for each simulation shows the importance of including the $<m>$ dependence in our prediction.
Even though the M4 simulation has a $\lambda$=0 galactic potential and should therefore have a lower $\sigma_{\rm J}$, it has a higher mean mass, which increases $\sigma_{\rm J}$. This can also be seen in the difference between the predictions for the M4 and Dragon simulations: despite the latter also using a point-mass potential, it has a lower $<m>$ and a much more extended initial filling factor. The discrepancy between the value of $\sigma_{\rm J}$ and the prediction for the Dragon simulation is possibly due to the cluster having an initial population of remnants that are dynamically unevolved in these snapshots. We also over-plot the prediction of the velocity dispersion for a Plummer model (magenta line), $\sigma = \sqrt{GM_{\rm c}/(6\sqrt{r^2 +r_0^2})}$, at $r_{\rm J}$ and using $r_0 \sim r_{\rm hm}/1.3$ (see page 73 of \citealt{Heggie2003}): \begin{equation} \sigma_{\rm J} = \frac{2^{1/6}}{6^{1/2}}(GM_{\rm c}\Omega)^{1/3} \left[1+\left(\frac{r_{\rm hm}}{r_{\rm J}}\right)^2 \right]^{-1/4} \end{equation} and adopting $r_{\rm hm}/r_{\rm J}$ as $\sim 0.15$. This also has an $\Omega^{1/3}$ dependence like our prediction, and underpredicts the dispersion for most masses. Due to a steeper $M_{\rm c}^{1/3}$ dependence, this relation approaches $\sigma_{\rm J}$ of equation~(\ref{eq:prediction}) in the mass range of globular clusters ($M_{\rm c} \gtrsim 10^5 {\rm M}_\odot$). \subsection{Comparison of the velocity dispersion prediction to observational data} \begin{table*} \begin{center} \caption{Properties of the sample of Milky Way GCs (columns 1 and 2 are from \citealt{Baumgardt2017}).
Columns indicate: name of the cluster, velocity dispersion in outermost bin $\sigma_{\mathrm{lb}}$, radial position of outermost bin $r_{\mathrm{lb}}$, mass of the cluster $M_{\rm c}$, galactocentric radius of the orbit of the cluster $R_{\rm g}$, half-mass radius $r_{\rm hm}$, Jacobi radius $r_{\rm J}$, prediction of the velocity dispersion at the Jacobi surface, $\sigma_{\mathrm{J}}$, ratio of the position of the last bin to Jacobi radius $r_{\mathrm{lb}}/r_{\rm J}$ and ratio of the dispersion in the last bin to the prediction of the dispersion $\sigma_{\mathrm{lb}}/\sigma_{\mathrm{J}}$.} \begin{tabular}{|l|r|r|r|r|r|r|r|r||r|r} \hline Cluster & $\sigma_{\mathrm{lb}}$ & $r_{\mathrm{lb}}$ & $M_{\rm c}$ & $R_{\rm g}$ & $r_{\rm hm}$ & $r_{\rm J}$ & $\sigma_{\mathrm{J}}$ & $r_{\mathrm{lb}}/r_{\rm J}$ & $\sigma_{\mathrm{lb}}/\sigma_{\mathrm{J}}$\\ & $\mathrm{km\,s^{-1}}$ & pc & $10^5M_{\rm \odot}$ & kpc &pc&pc & km$\,s^{-1}$ & & \\ \hline NGC104 & $4.58^{+0.42}_{-0.36}$ & 54.27 & 10.02 & 7.40 & 6.82 & 117.580 & 1.55 & 0.46 & $2.96^{+0.27}_{-0.23}$\\ NGC288 & $1.77^{+0.20}_{-0.18}$ & 21.80 & 0.86 & 12.00 & 7.78 & 71.499 & 0.68 & 0.30 & $2.60^{+0.29}_{-0.26}$\\ NGC362 & $2.93^{+0.69}_{-0.51}$ & 17.63 & 4.00 & 9.40 & 2.24 & 101.764 & 1.40 & 0.17 & $2.09^{+0.61}_{-0.45}$\\ NGC1851 & $3.11^{+0.56}_{-0.44}$ & 35.38 & 3.67 & 16.60 & 2.46 & 144.184 & 1.19 & 0.25 & $2.61^{+0.38}_{-0.30}$\\ NGC1904 & $2.12^{+0.30}_{-0.25}$ & 33.85 & 2.38 & 18.80 & 3.55 & 135.607 & 0.96 & 0.25 & $2.21^{+0.31}_{-0.26}$\\ NGC2419 & $1.30^{+1.01}_{-3.62}$ & 160.08 & 10.02 & 89.90 & 23.27 & 621.370 & 0.73 & 0.26 & $1.78^{+1.38}_{-4.95}$\\ NGC2808 & $5.61^{+0.69}_{-0.57}$ & 23.12 & 9.75 & 11.10 & 2.58 & 152.660 & 1.69 & 0.15 & $3.31^{+0.41}_{-0.34}$\\ NGC3201 & $2.31^{+0.27}_{-0.23}$ & 38.43 & 1.63 & 8.80 & 7.94 & 72.083 & 0.87 & 0.53 & $2.66^{+0.31}_{-0.26}$\\ NGC4147 & $1.62^{+0.41}_{-0.30}$ & 19.28 & 0.50 & 21.40 & 2.99 & 87.994 & 0.62 & 0.22 & $2.61^{+0.66}_{-0.48}$\\ NGC4372 & 
$3.21^{+0.40}_{-0.33}$ & 14.61 & 2.23 & 7.10 & 8.08 & 69.345 & 0.99 & 0.21 & $3.24^{+0.40}_{-0.33}$\\ NGC4590 & $0.74^{+0.52}_{-0.40}$ & 25.63 & 1.52 & 10.20 & 4.48 & 77.609 & 0.92 & 0.33 & $0.81^{+0.57}_{-0.44}$\\ NGC4833 & $3.48^{+0.46}_{-0.38}$ & 8.03 & 3.17 & 7.00 & 4.91 & 77.193 & 1.20 & 0.10 & $2.89^{+0.38}_{-0.32}$\\ NGC5024 & $2.05^{+0.42}_{-0.32}$ & 72.49 & 5.21 & 18.40 & 7.01 & 173.536 & 1.06 & 0.42 & $1.94^{+0.40}_{-0.30}$\\ NGC5053 & $1.02^{+0.25}_{-0.20}$ & 40.37 & 0.87 & 17.80 & 13.51 & 93.280 & 0.57 & 0.43 & $1.80^{+0.44}_{-0.35}$\\ NGC5139 & $7.60^{+0.37}_{-0.34}$ & 58.00 & 21.73 & 6.40 & 9.31 & 138.133 & 1.87 & 0.42 & $4.06^{+0.20}_{-0.18}$\\ NGC5272 & $2.43^{+0.48}_{-0.36}$ & 54.48 & 6.10 & 12.00 & 8.06 & 137.499 & 1.18 & 0.40 & $2.06^{+0.41}_{-0.31}$\\ NGC5286 & $7.45^{+0.85}_{-0.71}$ & 6.06 & 5.36 & 8.90 & 1.89 & 107.921 & 1.59 & 0.06 & $4.69^{+0.53}_{-0.44}$\\ NGC5466 & $0.99^{+0.26}_{-0.20}$ & 50.67 & 1.06 & 16.30 & 10.91 & 94.110 & 0.64 & 0.54 & $1.56^{+0.41}_{-0.31}$\\ NGC5694 & $2.57^{+0.50}_{-0.39}$ & 33.05 & 2.32 & 29.40 & 3.42 & 181.027 & 0.87 & 0.18 & $2.94^{+0.57}_{-0.45}$\\ NGC5824 & $3.70^{+0.77}_{-0.59}$ & 30.11 & 5.93 & 25.90 & 3.39 & 227.533 & 1.17 & 0.13 & $3.16^{+0.66}_{-0.50}$\\ NGC5904 & $2.91^{+0.37}_{-0.31}$ & 26.31 & 5.72 & 6.20 & 3.19 & 86.651 & 1.58 & 0.30 & $1.84^{+0.24}_{-0.20}$\\ NGC5927 & $4.09^{+0.60}_{-0.49}$ & 8.27 & 2.28 & 4.60 & 1.47 & 52.242 & 1.50 & 0.16 & $2.73^{+0.40}_{-0.33}$\\ NGC6093 & $6.38^{+0.43}_{-0.39}$ & 1.99 & 3.35 & 3.80 & 0.67 & 52.324 & 2.01 & 0.04 & $3.17^{+0.21}_{-0.19}$\\ NGC6121 & $3.30^{+0.24}_{-0.22}$ & 17.98 & 1.29 & 5.90 & 7.43 & 50.981 & 0.89 & 0.35 & $3.70^{+0.27}_{-0.25}$\\ NGC6139 & $6.43^{+1.23}_{-0.96}$ & 3.57 & 3.78 & 3.60 & 0.89 & 52.527 & 2.00 & 0.07 & $3.22^{+0.62}_{-0.48}$\\ NGC6171 & $2.42^{+0.33}_{-0.28}$ & 8.47 & 1.21 & 3.30 & 1.66 & 33.873 & 1.31 & 0.25 & $1.85^{+0.25}_{-0.21}$\\ NGC6205 & $4.01^{+0.46}_{-0.39}$ & 24.97 & 4.50 & 8.40 & 4.13 & 97.956 & 1.32 & 0.25 & 
$3.04^{+0.35}_{-0.30}$\\ NGC6218 & $2.67^{+0.46}_{-0.37}$ & 74.17 & 1.44 & 29.80 & 15.34 & 155.712 & 0.57 & 0.48 & $4.65^{+0.80}_{-0.64}$\\ NGC6254 & $2.95^{+0.58}_{-0.46}$ & 9.15 & 1.68 & 4.50 & 2.55 & 46.521 & 1.25 & 0.20 & $2.37^{+0.20}_{-0.37}$\\ NGC6273 & $9.13^{+1.48}_{-1.19}$ & 3.41 & 7.67 & 4.60 & 1.77 & 78.346 & 2.04 & 0.04 & $4.47^{+0.72}_{-0.58}$\\ NGC6341 & $3.19^{+0.39}_{-0.33}$ & 4.24 & 3.29 & 1.70 & 0.50 & 30.419 & 2.50 & 0.14 & $1.28^{+0.16}_{-0.13}$\\ NGC6388 & $7.28^{+0.93}_{-0.77}$ & 16.59 & 9.93 & 9.60 & 1.45 & 139.431 & 1.95 & 0.12 & $3.73^{+0.48}_{-0.39}$\\ NGC6397 & $3.20^{+0.21}_{-0.19}$ & 8.18 & 0.77 & 3.10 & 2.62 & 28.038 & 1.08 & 0.29 & $2.98^{+0.20}_{-0.18}$\\ NGC6402 & $6.08^{+0.92}_{-0.24}$ & 10.15 & 7.47 & 6.00 & 2.27 & 92.671 & 1.83 & 0.11 & $3.33^{+0.50}_{-0.40}$\\ NGC6656 & $3.36^{+0.59}_{-0.54}$ & 17.68 & 4.30 & 4.00 & 3.91 & 58.823 & 1.54 & 0.30 & $2.19^{+0.38}_{-0.35}$\\ NGC6715 & $8.80^{+1.50}_{-1.50}$ & 21.66 & 16.79 & 4.90 & 1.17 & 106.082 & 2.71 & 0.20 & $3.24^{+0.55}_{-0.55}$\\ NGC6723 & $2.83^{+0.31}_{-0.50}$ & 20.92 & 2.32 & 18.90 & 8.41 & 134.840 & 0.81 & 0.16 & $3.50^{+0.79}_{-0.62}$\\ NGC6752 & $2.76^{+0.30}_{-0.28}$ & 9.09 & 2.11 & 2.60 & 1.44 & 34.847 & 1.66 & 0.26 & $1.66^{+0.19}_{-0.17}$\\ NGC6809 & $3.70^{+0.34}_{-0.27}$ & 11.76 & 1.82 & 5.20 & 4.28 & 52.664 & 1.12 & 0.22 & $3.29^{+0.27}_{-0.24}$\\ NGC6838 & $1.02^{+0.39}_{-0.23}$ & 6.58 & 0.30 & 3.90 & 1.89 & 23.817 & 0.83 & 0.28 & $1.23^{+0.41}_{-0.28}$\\ NGC7078 & $3.03^{+0.21}_{-0.19}$ & 14.71 & 8.11 & 6.70 & 1.95 & 102.540 & 1.88 & 0.14 & $1.61^{+0.11}_{-0.10}$\\ NGC7089 & $3.92^{+0.64}_{-0.51}$ & 26.27 & 7.00& 10.40 & 3.21 & 130.878 & 1.50 & 0.20 & $2.61^{+0.43}_{-0.34}$\\ NGC7099 & $2.12^{+0.25}_{-0.22}$ & 14.96 & 1.63 & 7.10 & 2.13 & 62.472 & 1.16 & 0.24 & $1.82^{+0.21}_{-0.19}$\\ Ter8 & $1.46^{+0.47}_{-0.40}$ & 17.35 & 0.18 & 19.40 & 5.36 & 58.800 & 0.42 & 0.30 & $3.44^{+1.11}_{-0.94}$\\ \hline \end{tabular} \end{center} \end{table*} \begin{figure} 
\centering \includegraphics[width=0.99\columnwidth]{fvf_holger.pdf} \caption{Ratio of the velocity dispersion in the last bin of the profiles from the \citet{Baumgardt2017} data, to our prediction $\sigma_{\rm J}$, as a function of the ratio of the position of the last bin to the calculated $r_{\rm J}$ (blue). The bins with the lowest values of velocity dispersion found in each of the profiles are also plotted in green.} \label{fig:obsfract} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\columnwidth]{sigma_Rg_Holger_log.pdf} \caption{Velocity dispersion of the last bin of data for the clusters from \citet{Baumgardt2017}, plotted against the galactocentric distance of the cluster. The size of the points is proportional to the mass of the clusters. The black lines are our prediction for the most massive and least massive clusters (solid and dashed respectively).} \label{fig:Rdep} \end{figure} It is also possible to directly compare our prediction of $\sigma_{\rm J}$ to observational data. \citet{Baumgardt2017} presented a compilation of line-of-sight velocities and proper motion data for stars in 50 Milky Way GCs from a wide range of data available in the literature, which was used to create combined velocity dispersion profiles. We can consider the outermost bin of each of these velocity dispersion profiles and compare it to the value obtained from our prediction for each cluster. To calculate our estimate of $\sigma_{\rm J}$ we approximate the mass of the cluster using the absolute visual magnitude from the Harris catalogue (\citealt{Harris1996}; 2010 edition) and the mass-to-light ratio from \citet{McLaughlin2005}. We also estimate the angular velocity of the clusters using $\Omega = V_{\rm c}/R_{\rm g}$, by assuming $V_{\rm c}=220\,\mathrm{km\,s^{-1}}$ and by taking $R_{\rm g}$ from the Harris catalogue.
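The tabulated $r_{\rm J}$ values are consistent with a Jacobi radius of the form $r_{\rm J} = \left(GM_{\rm c}/(3\Omega^2)\right)^{1/3}$ with $\Omega = V_{\rm c}/R_{\rm g}$; the factor of 3 here is an assumption inferred from the entries of Table 2 rather than stated explicitly in the text. A sketch of the estimate under that assumption:

```python
# Sketch of the r_J estimate behind Table 2, assuming
# r_J = (G M_c / (3 Omega^2))^(1/3) with Omega = V_c / R_g.
# The factor of 3 is inferred from the tabulated values (an assumption).
G = 4.301e-3          # gravitational constant in pc (km/s)^2 / Msun
V_c = 220.0           # km/s, assumed flat rotation curve

def jacobi_radius(M_c, R_g_kpc):
    """M_c in Msun, R_g in kpc; returns r_J in pc."""
    Omega = V_c / (R_g_kpc * 1.0e3)                 # (km/s) per pc
    return (G * M_c / (3.0 * Omega ** 2)) ** (1.0 / 3.0)

# NGC 104 (47 Tuc): M_c = 10.02e5 Msun, R_g = 7.4 kpc
# -> r_J close to the tabulated 117.580 pc.
rJ_104 = jacobi_radius(10.02e5, 7.4)
```

The same function reproduces the other tabulated radii to within a fraction of a parsec, e.g. NGC 288 ($0.86\times10^5\,{\rm M}_\odot$ at 12 kpc, tabulated as 71.499 pc).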
Table 2 includes the dispersion, $\sigma_{\rm lb}$, and radial position, $r_{\rm lb}$, of the last data point, the ratio of the position of the last bin to the Jacobi radius, $r_{\rm lb}/r_{\rm J}$, and the ratio of the dispersion in the last bin to the prediction of the dispersion, $\sigma_{\mathrm{lb}}/\sigma_{\mathrm{J}}$. Fig.~\ref{fig:obsfract} shows $\sigma_{\mathrm{lb}}/\sigma_{\mathrm{J}}$ against $r_{\mathrm{lb}}/r_{\rm J}$. It is clear that the data do not extend out to the approximate $r_{\rm J}$, which means that the observed dispersions are expected to be higher than the prediction for $\sigma_{\rm J}$. Most points follow an expected trend, with $\sigma_{\mathrm{lb}}/\sigma_{\mathrm{J}}$ decreasing with increasing $r_{\mathrm{lb}}/r_{\mathrm{J}}$. There are some points that appear not to follow this trend. This is possibly due to internal properties of the cluster, which affect the radial distance from the centre of the cluster at which the effects of PEs or of contamination from field or extra-tidal stars become significant. Moreover, we recall that the calculations of $\sigma_{\rm J}$ and $r_{\rm J}$ include some approximations that could be less accurate for some clusters than for others. Fig.~\ref{fig:Rdep} shows the velocity dispersion in the last bin of data against the galactocentric distance for each cluster in our sample, with the point size reflecting the mass of the cluster. We also show our prediction from equation~(\ref{eq:prediction}) for the least massive cluster of the sample (dashed line) and for the most massive (solid line). Although most of the points lie above our prediction, there seems to be an increase in the dispersion with decreasing $R_{\rm g}$, suggesting there is more than just a mass dependence in the velocity dispersion at the Jacobi surface. Our prediction is a lower limit for the dispersion profiles, so it is expected that none of the data points should be lower than our prediction.
This is because there are many reasons why the velocity dispersion of the outermost bins in the profile can be above our prediction, including projection effects and observational profiles not extending out to the Jacobi surface. This means that it is likely difficult to distinguish the effects that a DM halo may have on the outer regions of a velocity dispersion profile from the effects of PEs. The proper motion data provided by future releases of the \textit{Gaia} mission will allow for more rigorous selection criteria for cluster members, which can be followed up by further ground-based observations of line-of-sight velocities, making it possible to probe closer to the Jacobi surface of Milky Way star clusters. A combination of proper motions and radial velocity measurements for stars in the outer regions of GCs will also provide a way of inspecting the rotation and anisotropy of the dispersion, which may be required to discriminate between these scenarios, as the retrograde bias in the orbits of the PEs may not be present when there is the additional effect of a dark matter halo. \section{Conclusions} By running simulations of star clusters and varying the orbital eccentricities, initial mass function, and galactic (power-law) mass profiles, we have explored the distribution and behaviour of a population of energetically unbound stars within the Jacobi radius of a cluster, and found three properties of the PEs to vary with the slope of the enclosed galactic mass profile: 1) the fraction of PEs inside the Jacobi radius, 2) the velocity dispersion at the Jacobi surface, 3) the velocity anisotropy. For an equal-mass system in a point-mass galactic potential we found the fraction of PEs inside the Jacobi radius to be consistent with the value found by B01.
However, a mass spectrum and shallower galactic density profiles both cause an increase in the number of PEs, up to 40\% in a $1/R_{\rm g}$ density profile galaxy with a globular-cluster-type IMF between 0.1 and 1 $M_{\odot}$. At $r=0.5r_{\rm J}$ there are equal numbers of PEs and bound stars, and beyond this radius the PEs dominate. This suggests that PEs should have a large influence on cluster kinematics, especially in the outer parts. By inspecting the fraction of total mass in PEs and the evolution of the distribution of masses for the PEs, we found that a large fraction of PEs will be of low mass for most of the lifetime of the cluster, meaning that the majority of these stars could not currently be observed, but can contribute significantly to the total mass. The energy distribution of PEs becomes wider as $N$ decreases. This width is also larger for larger $\lambda$, and we introduce a $\lambda$ dependence to the model established in B01. We then investigated the effect of the PEs on the kinematics. The radial profiles of the anisotropy of the dispersion early in the simulations for the circular orbits are consistent with zero (i.e. isotropy). However, the simulations in the $\lambda$=0 and $\lambda$=1 potentials develop tangential anisotropy in time whereas the $\lambda$=2 simulation shows radial anisotropy. This is possibly due to two-body interactions scattering stars outwards on radial orbits, as these orbits also preferentially escape from the cluster. Therefore, in the clusters with the larger escape times, i.e., in the shallowest galactic density profiles, radial orbits are created faster than the stars can escape. Over their lifetimes, the clusters in the $\lambda$=1 and $\lambda$=2 simulations also show some radial anisotropy before $\beta$ decreases towards tangential anisotropy. This decrease in $\beta$ occurs faster in the $\lambda$=0 simulation.
The rotation profiles show a clear negative value for the mean of the $\varphi$ component of the velocity in the corotating reference frame, which is also seen in a negative bias of the $J_{z}$ distribution. This retrograde motion is expected as prograde orbits are less stable and preferentially lost from the cluster. The PEs cause the $<v_\varphi>$ profile to become increasingly negative with radius, as PEs dominate further from the centre of the cluster, and at the Jacobi radius they have around half of the circular velocity at $r_{\rm J}$. There is also a difference in the $\lambda$=2 simulation, which seems to develop more negative $<v_\varphi>$ over the lifetime, whereas the $\lambda$=0 and $\lambda$=1 simulations stay roughly constant. For the simulations of clusters with a mass spectrum there seems to be no substantial variation in the dynamics when comparing to the equal-mass simulations. Similarly, when using higher values of orbital eccentricity, there seems to be only minimal variation of the dynamics, but there is a suggestion of less tangential anisotropy and less retrograde rotation when increasing eccentricity. We then formulated a relation for the velocity dispersion at $r_{\rm J}$ due to the effect of PEs. From the model of the distribution of $\hat{E}$ of PEs developed in B01, we can approximate the velocity dispersion at the Jacobi surface $\sigma_{\rm J}$ as a function scaling as $(3-\lambda)^{-1/12}M_{\rm c}^{5/24}\Omega^{1/3}(<m>\ln{\Lambda})^{1/8}$. We compared our prediction to simulations and observational data. By scaling the constant of proportionality of the prediction to match the velocity dispersion between 0.9$r_{\rm J}$ and $r_{\rm J}$ over time in the $\lambda$0$\epsilon$0 simulation, the profile is well matched by the mass dependence of our prediction, and the $\lambda$ dependence reproduces the variation across the different potentials.
We also found our prediction to be close to the values of the velocity dispersion near the Jacobi radius in simulations with a much larger number of particles. This prediction is useful for testing the different theories that attempt to explain the flattening of the velocity dispersion. For example, some predictions using MOND find the flattened value of the velocity dispersion $\propto M_{\rm c}^{1/4}$, whereas our prediction contains an additional dependence on the orbit, suggesting a way to discriminate between the two scenarios. We show that there is a dependence of the velocity dispersion, anisotropy and rotation properties of PEs on the galactic mass profile (i.e. $\lambda$). This suggests that the PEs can be used as an independent method to determine properties of the underlying dark matter profile, which could be especially important in the core vs.\ cusp debate in dwarf galaxies (see e.g. \citealt{Walker2011}; \citealt*{Read2016}). For example, the increasing abundance of PEs with increasing $\lambda$ could lead to a higher mass-to-light ratio. Such a $\lambda$-dependent mass-to-light ratio could help explain why the metal-poor clusters Fornax 3 and Fornax 5 have an observed mass-to-light ratio higher than predicted by synthetic stellar population models (\citealt*{Larsen2012}; \citealt*{Strader2011}). This velocity dispersion prediction is also useful for generative models of tidal streams, which require releasing particles from a cluster with a chosen velocity dispersion (\citealt*{Fardal2015}; \citealt*{Erkal2016}). This dispersion affects the width of the stream, and therefore using the correct value is important for accurately using the streams to infer galactic properties. We compared our results to available observational data.
We used recently compiled velocity dispersion profiles from \citet{Baumgardt2017}, which contain a wide range of radial velocity and proper motion measurements from the literature, and showed that most of the observed values of the velocity dispersion in the outermost bins of data lie above our prediction. There are many reasons why the observational data would increase above our prediction, including the fact that the data do not extend close enough to $r_{\rm J}$, projection effects, and that a large fraction of clusters are still under-filling their Roche volumes. Despite this, we found some clusters to be close to our prediction and not to be consistent with a prediction that would only depend on the mass of the cluster, suggesting that there is a dependence on the galactocentric distance consistent with our $\Omega^{1/3}$ dependence. With the upcoming \textit{Gaia} data it will be possible to detect stars further from the centres of globular clusters than is currently possible. Accurately understanding the behaviour of PEs provides an independent way of inferring galactic properties and avoids the misidentification of other effects, such as the effects of a dark matter halo. \section{Acknowledgements} We are grateful to Justin Read, Florent Renaud, Holger Baumgardt, Douglas Heggie and Anna Lisa Varri for fruitful discussions and to the referee for useful suggestions. We are also grateful to Sverre Aarseth and Keigo Nitadori for making \texttt{\small{NBODY6}} publicly available. We also thank Mr. Dave Munro of the University of Surrey for hardware and software support. MG acknowledges financial support from the Royal Society (University Research Fellowship), AZ acknowledges financial support from the Royal Society (Newton International Fellowship). IC, MG and AZ acknowledge support from the European Research Council (ERC-StG-335936, CLUSTERS). \bibliographystyle{mn2e}
\section{Introduction} The problems of dislocation dynamics, especially nonlinear ones, and of defects in crystals are still of high interest \cite{Sudzuki,ChCh96,GI:FMM,Sugakov,Natsik}. Among them is the well-known effect of acoustic waves on ionic crystals (mainly semiconductors), which has been studied and investigated in detail (see, for example, \cite{MethUS,BGGKR91,KTP93,P93}). Sound and ultrasound (US) treatment results in changes of various important characteristics of semiconducting media, which, in turn, can depend on the amplitude of the acoustic waves. Of most interest here are those phenomena in which the changes induced by such waves have a threshold character, i.e., are observed only when the wave amplitude reaches a certain value. One of these distinctly threshold phenomena is sonoluminescence (SL), which was discovered by Ostrovskii et al.\ \cite{Disc} (see also \cite{Ostr:mon}) and represents a glow of ionic crystals subjected to a US load of overthreshold amplitude. The analysis of SL spectra has allowed to establish \cite{Ostr:mon} that a large role in SL excitation is played by the point defects of the crystal, the number of which increases essentially above the threshold. In such a situation it was natural to suppose that the generation of point defects, which can be caused by the motion of dislocations, is the reason for the threshold. As a whole, the sequence of processes can be as follows: US shakes the dislocations (edge and screw) present in a crystal, the amplitude of their motion being proportional to the amplitude of the US wave. In this regime, only the free segments of the dislocations between the pinning points oscillate, and no new defects can be generated by them. There are several ways known for point defects to be created \cite{Friedel}, one of which is the climbing of jogs on screw dislocations.
Other ways (for example, intersection with dislocations of the ``forest'') should be less effective under conditions of rather small dislocation density. In the present paper an attempt is made to consider the threshold phenomena connected with the generation of defects by a driven jog on a screw dislocation. The equation of such a motion is investigated for US amplitudes up to the threshold and beyond it. An experimental study of the amplitude dependence of US damping in ionic crystals is also conducted, and the results obtained are compared to the theory. \section{Model of Nonlinear Dynamics of Dislocation with a Jog} \subsection{Approach and Equations} The jogs on a screw dislocation can be regarded as a kind of pinning points which, however, in contrast to usual ones, can under certain conditions move together with their ``own'' dislocation. This motion is not free: any displacement of the jog between its initial position and the nearest final one is always accompanied by the creation of a point defect --- a vacancy or an interstitial. This already implies that such positions are nonequivalent, and consequently the potential $W_{\rm jog}(y_{\rm jog})$ of a jog proposed in Ref.~\cite{prepr:pd}, in contrast, for example, to the potential of the Peierls relief, is not symmetric under translations (see Figure 1). Just this potential determines the motion of the jogs in a crystal, which we shall consider below. We should only notice that the energy of point defect creation is usually a few eV, so the thermal overcoming of the corresponding barriers by a jog is improbable. Therefore the essential role here should be played by the forces due to the oscillating dislocation segments, or, in other words, by the forces of linear tension. Let us consider the simplest case of a segment of a screw dislocation of length $2L$ with a jog in the middle, the ends of which are fixed at so-called strong (i.e., immovable) pinning points.
Then (neglecting the Peierls relief, which is justified at relatively high temperatures) the motion of the free segments of a dislocation in the US field is described by the well-known model of an elastic string \cite{Granato}; the corresponding equation of motion may be written in the form: \begin{equation} M_{\rm dis}\frac{\partial^{2}y}{\partial t^{2}} + B_{\rm dis}\frac{\partial y}{\partial t} - T_{\rm dis}\frac{\partial^{2}y}{\partial x^{2}} = -\sigma_{zx}^{'}(t)\,b, \label{o1} \end{equation} where $x$ is a coordinate along the dislocation, $y(x,t)$ is the transversal displacement, $t$ is the time, $M_{\rm dis}$ is the dislocation mass per unit length, $B_{\rm dis}$ is a friction coefficient, $T_{\rm dis}$ is the linear tension of the dislocation string, $\bf b$ is the Burgers vector, and $\sigma_{zx}^{'}(t)$ is the appropriate (active) component of the stress deviator caused by US. Let the jog be placed at the point $x=x_{\rm jog}$. The condition of dislocation pinning (at small US amplitudes) is then \begin{equation} y(x_{\rm jog})=0. \label{o2} \end{equation} At large amplitudes of the external force (i.e., of the exciting US wave) the condition (2) should be replaced by an equation of jog motion, which can be written as \begin{equation} M_{\rm jog} \frac{\partial^{2}y_{\rm jog}}{\partial t^{2}} + B_{\rm jog} \frac{\partial y_{\rm jog}}{\partial t} + \frac{\partial W_{\rm jog}}{\partial y_{\rm jog}} =T_{\rm dis} \left( \left.\frac{\partial y}{\partial x}\right|_{x_{\rm jog}+0} - \left.\frac{\partial y}{\partial x}\right|_{x_{\rm jog}-0} \right), \label{o3} \end{equation} where $M_{\rm jog}$ and $B_{\rm jog}$ are the jog mass and the friction coefficient for it, correspondingly (they, as well as $M_{\rm dis}$, $B_{\rm dis}$, are phenomenological parameters), and $W_{\rm jog}(y_{\rm jog})$ is the Loktev-Khalack potential (see Figure~1). In the general case its form depends on all previous positions of the jog.
For example, if the latter has moved from position A in Figure~1 to position B, with a vacancy being created, the branch AB$^{'}$C$^{'}$D$^{'}$ ceases to exist. In other words, the jog can restore the ``initial geometry'' only by passing exactly to C$^{'}$ on the curve $W_{\rm jog}(y_{\rm jog})$, for which it needs to overcome the potential barrier corresponding to interstitial formation. Thus, it is essential that the formation energies of vacancies and interstitials are definitely different; this naturally makes the potential relief $W_{\rm jog}(y_{\rm jog})$ centrally asymmetric and is reflected in the jog motion under the action of the US wave. \subsection{Threshold characteristics} Let a US wave with acoustic displacement amplitude $u_{\rm ac}$ and frequency $\omega_{\rm ac}$ propagate in a crystal. Then the force per unit length exerted on the dislocation (see (1)) is \begin{equation} -\sigma_{zx}^{'}(t)b= f_{\rm or}\frac{\omega_{\rm ac}u_{\rm ac} b }{v_{\rm us}} \cos{\omega_{\rm ac}t}=\sigma_{\rm us}b \cos{\omega_{\rm ac}t}, \label{o4} \end{equation} where $f_{\rm or}$ is a factor dependent on the orientation of the US wave (on its polarization and the direction of propagation in the crystal), and $v_{\rm us}$ is the sound velocity. Under the action of the force (4) the free segments of a dislocation begin to bow out, and as $u_{\rm ac}$ increases a situation sets in when the amplitude of this bowing out (and consequently of the pulling forces of linear tension acting on the jog) becomes sufficient for the jog to overcome the potential relief. The amplitude of this force is determined (see the right-hand side of Eq.~(3)) by the condition \begin{equation} \left|\frac{\partial W_{\rm jog}}{\partial y_{\rm jog}} \right|_{\rm max} =T_{\rm dis} \left( \left.\frac{\partial y}{\partial x}\right|_{+0} - \left.\frac{\partial y}{\partial x}\right|_{-0} \right)_{\rm max}.
\label{o5} \end{equation} The solution of equation (1) together with (5) for the case of the external force (4) gives the following expression for the threshold amplitude of the US-produced displacement: \begin{equation} u_{\rm ac}^{\rm thr}(\omega_{\rm ac})= \frac{v_{\rm us}}{\omega_{\rm ac}} \frac{M_{\rm dis}L}{8 f_{\rm or}b T_{\rm dis}} \left[ I_{1}^{2}(\omega_{\rm ac})+I_2^2(\omega_{\rm ac}) \right]^{-1/2} \left| \frac{\partial W_{\rm jog}}{\partial y_{\rm jog}} \right|_{\rm max}, \label{o6} \end{equation} with the substitutions \begin{equation} I_1(\omega_{\rm ac})= \sum_{n=0}^{L/2b} \frac{ \Omega^{2}_{n} -\omega_{\rm ac}^{2} }{ \left( \Omega^{2}_{n} -\omega_{\rm ac}^{2} \right)^{2} +\left( \omega_{\rm ac}\Gamma_{\rm dis} \right)^{2} }, \label{o7} \end{equation} \begin{equation} I_2(\omega_{\rm ac})= \sum_{n=0}^{L/2b} \frac{ \omega_{\rm ac}\Gamma_{\rm dis} }{ \left( \Omega^{2}_{n} -\omega_{\rm ac}^{2} \right)^{2} +\left( \omega_{\rm ac}\Gamma_{\rm dis} \right)^{2} }, \label{o8} \end{equation} and \begin{equation} \Gamma_{\rm dis}=B_{\rm dis}/M_{\rm dis}, \label{o9} \end{equation} where \begin{equation} \Omega^{2}_{n} = \frac{\pi^{2}(2n+1)^{2}T_{\rm dis} }{M_{\rm dis}L^{2}} \label{o10} \end{equation} are the squared eigenfrequencies of the dislocation segments ($n=0,1,\ldots$). The expressions (7), (8) show that for $\Gamma_{\rm dis} < \Omega_0$ the threshold characteristic defined by (6) should have a resonant character (see Figure~2). \subsection{Point defect generation} The asymmetric form of the curve $W_{\rm jog}(y_{\rm jog})$ leads to the basic conclusion that the threshold condition (5) for jog motion with formation of vacancies begins to hold before the corresponding condition for motion with interstitial formation. Then it is easy to see that for amplitudes $u_{\rm ac}$ between these two threshold values the jog can move only during one half-period of each period of the external force.
This means that in the given range of US amplitudes the jog drifts in one direction only, each climb being accompanied by the creation of one vacancy. On physical grounds it is clear that the appearance of a vacancy gives rise to lattice relaxation in the vicinity of the defect, which must be accompanied by acoustic emission. It is important that such an emission occurs before the transition of the jog to an oscillatory mode. On the other hand, the transfer of the jog to a new position at the corresponding value of $u_{\rm ac}$ reduces the resultant force of linear tension along the $x$ axis. As a result, further jog climbing becomes impossible, and the jog stays in a new equilibrium position, shifted relative to the initial one. However, such a displacement also has a positive effect, namely a reduction of the threshold US amplitude necessary for interstitial formation. This threshold value is determined by the same formula (6), in which the derivative $\partial W_{\rm jog}/\partial y_{\rm jog} |_{\rm max}$ corresponding to interstitial formation is replaced by the half-sum of the corresponding derivatives for the two opposite directions; the observable threshold for fully oscillatory jog motion is given by \begin{equation} u_{\rm ac}^{\rm thr}(\omega_{\rm ac})= \frac{v_{\rm us}}{8\omega_{\rm ac}} \, \frac{M_{\rm dis}L}{ f_{\rm or}b T_{\rm dis}} \, \displaystyle \frac{ \left| \frac{\partial W_{\rm jog}}{\partial y_{\rm jog}} \right|_{\rm max}^{i}+ \left| \frac{\partial W_{\rm jog}}{\partial y_{\rm jog}} \right|_{\rm max}^{v}}{ \sqrt{ I_{1}^{2}(\omega_{\rm ac})+I_2^2(\omega_{\rm ac})}}. \label{o11} \end{equation} Thus, once large-amplitude oscillatory jog motion sets in (i.e., once the amplitude $u_{\rm ac}$ exceeds its threshold value (11)), continuous generation of defects of both types begins.
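The resonant threshold characteristic defined by Eqs.~(6)--(10) is straightforward to evaluate numerically. The following Python sketch uses arbitrary illustrative parameter values (hypothetical, chosen only so that $\Gamma_{\rm dis}<\Omega_0$, i.e.\ the resonant regime; they are not fitted to any of the samples discussed later):

```python
import math

# Illustrative parameters in arbitrary consistent units (hypothetical values):
# M_dis: dislocation mass per unit length, B_dis: friction coefficient,
# T_dis: line tension, L: segment half-length, b: Burgers vector magnitude,
# v_us: sound velocity, f_or: orientation factor,
# dW_max: |dW_jog/dy_jog|_max for the relevant defect type.
M_dis, B_dis, T_dis = 1.0e-16, 1.0e-7, 1.0e-9
L, b, v_us, f_or, dW_max = 1.0e-6, 3.0e-10, 5.0e3, 0.5, 1.0e-9

Gamma_dis = B_dis / M_dis                        # Eq. (9)

def Omega2(n):
    """Squared eigenfrequencies of the dislocation segments, Eq. (10)."""
    return math.pi**2 * (2*n + 1)**2 * T_dis / (M_dis * L**2)

def I1_I2(w):
    """The resonant sums of Eqs. (7) and (8), truncated at n = L/(2b)."""
    I1 = I2 = 0.0
    for n in range(int(L / (2 * b)) + 1):
        denom = (Omega2(n) - w**2)**2 + (w * Gamma_dis)**2
        I1 += (Omega2(n) - w**2) / denom
        I2 += (w * Gamma_dis) / denom
    return I1, I2

def u_thr(w):
    """Threshold US displacement amplitude, Eq. (6)."""
    I1, I2 = I1_I2(w)
    return (v_us / w) * (M_dis * L / (8 * f_or * b * T_dis)) \
        * dW_max / math.hypot(I1, I2)
```

Since $\Gamma_{\rm dis}<\Omega_0$ here, $u_{\rm ac}^{\rm thr}(\omega_{\rm ac})$ dips near the fundamental resonance $\omega_{\rm ac}=\Omega_0$, reproducing the resonant character of Figure~2; the oscillatory threshold (11) follows from the same routine with the derivative factor replaced by the corresponding combination for the two defect types.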
The total number of defects generated by a jog per US wave period above the threshold grows proportionally to the US amplitude. \subsection{US attenuation} Since real crystals contain dislocations of different lengths, let us consider a pure single crystal with a network of dislocations, as well as a certain quantity of point defects, which serve as weak pinning centers for the former. It is assumed also that the initial concentration of the point defects is small enough for the mean distance $L_{c}$ between them to be of the order of the network length $L_{N}$: $L_{c}^{(0)}\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}} L_{N}$. If an acoustic stress of small amplitude is applied to the crystal, the dislocations bow out between the pinning points, which results in amplitude-independent US attenuation. At higher stresses breakaway occurs, giving rise to hysteresis losses and consequently to an increase of US attenuation (see \cite{Granato}). But in the case under consideration ($L_{c}^{(0)}\raise.3ex\hbox{$>$\kern-.75em\lower1ex\hbox{$\sim$}} L_{N}$) the increase of attenuation due to dislocation unpinning from the weak pinning centers may be negligible, so that the amplitude dependence of the US attenuation is determined by the motion of the jogs on the screw dislocations. When the amplitude of the acoustic stress is sufficient for the creation of vacancies, the jog merely changes its equilibrium position, not causing hysteresis losses by itself; the attenuation is changed by the newly created vacancies.
The latter serve as additional weak pinning centers for the dislocations, so that the attenuation coefficient in the low-frequency range is given by the Granato-L\"{u}cke expression \cite{Granato} \begin{equation} \alpha_{\rm H} (\omega_{\rm ac}, \sigma_{\rm us}) = \frac{\omega_{\rm ac}}{2 v_{\rm us}} \frac{\Lambda L_{N}^3}{L_{c}} \frac{8\mu b^2}{\pi^4 T_{\rm dis}} \left[ \frac{\pi f_{m}}{4b \sigma_{\rm us}L_{c}}-1 \right] \exp{\left(-\frac{\pi f_{m}}{4b \sigma_{\rm us}L_{c}}\right)}, \label{012} \end{equation} where $\Lambda$ is the dislocation density, $\mu$ is the shear modulus, and $f_{m}$ is the maximum value of the binding force. Here the value of $L_{c}$ is no longer equal to the initial value $L_{c}^{(0)}$, but depends on the amplitude of the acoustic stress $\sigma_{\rm us}$. Under the assumption that the vacancies are uniformly distributed throughout the volume, $L_{c}\approx (N_{v}(\sigma_{\rm us})+(L_{N})^{-3})^{-1/3}$. The number of vacancies per unit volume is \begin{equation} N_{v}(\sigma_{\rm us}) = \int\limits_{L^{v}(\sigma_{\rm us})}^{L_{\rm max}} N_{\rm jog}(L) \frac{Y(L,\sigma_{\rm us})}{y_0} d L, \label{o13} \end{equation} where $N_{\rm jog}(L)$ is the distribution function of the dislocations with jogs, $Y(L,\sigma_{\rm us})$ is the displacement of a jog from its initial position in the US field, $y_0$ is the lattice spacing, and $L^{v}(\sigma_{\rm us})$ is the minimum half-length of a dislocation the jog on which can climb at a given value of $\sigma_{\rm us}$. If we assume that $N_{\rm jog}(L)=N_{\rm jog}={\rm const}$, then \begin{equation} N_{v}(\sigma_{\rm us}) = \frac{2 N_{\rm jog}L_{\rm max}^{3}}{3\pi^2 T_{\rm dis}} \frac{(\sigma_{\rm us}-\sigma_{\rm us}^{v})^2 (2\sigma_{\rm us}+\sigma_{\rm us}^{v})}{\sigma_{\rm us}^2}. \label{o14} \end{equation} For stresses high enough for the jog transition into an oscillatory mode (i.e., for $\sigma_{\rm us}>\sigma_{\rm us}^{\rm thr}$) the attenuation of US is determined by the losses due to the creation of point defects.
The attenuation coefficient for this case is \begin{equation} \alpha_{\rm jog}(\sigma_{\rm us}) = \frac{\omega_{\rm ac}}{2 v_{\rm us}} \mu \left(W_{v}+W_{i}\right) \int\limits_{L^{thr}(\sigma_{\rm us})}^{L_{\rm max}} N_{\rm jog}(L) \frac{\Delta y_{\rm jog}(L,\sigma_{\rm us})}{y_0 \sigma_{\rm us}^2} d L, \label{015} \end{equation} where $W_{v}$ and $W_{i}$ are the energies of creation of a vacancy and an interstitial, respectively, $\Delta y_{\rm jog}(L,\sigma_{\rm us})$ is the amplitude of the jog oscillations, and $L^{thr}(\sigma_{\rm us})$ is the minimum half-length of a dislocation the jog on which is oscillating. If we adopt the above assumption $N_{\rm jog}(L)={\rm const}$, the amplitude dependence of the attenuation is given by the factor \begin{equation} \frac{(\sigma_{\rm us}-\sigma_{\rm us}^{thr})^2 (2\sigma_{\rm us}+\sigma_{\rm us}^{thr})}{\sigma_{\rm us}^4} \label{o16} \end{equation} (account is taken of the proportionality of $\Delta y_{\rm jog}(L,\sigma_{\rm us})$ to the difference $\sigma_{\rm us}-\sigma_{\rm us}^{thr}$). A further increase of the acoustic stress can activate Frank-Read sources, as well as additional slip planes (in accordance with the factor $f_{\rm or}$ in (4)), the latter giving rise to new threshold-like peculiarities in the amplitude dependence of US attenuation. The behaviour of the attenuation during the unloading cycle is determined by the newly created defects. Additional dislocations lead to an increase of losses, while the point defects reduce the losses to some extent. It is noteworthy that the amplitude dependence of US attenuation below the threshold in the case of unloading is given by the expression (12) with a constant value of $L_{c}$, because the jogs are fixed at their new positions and cannot create any point defects. The qualitative form of the amplitude dependence of US attenuation during the loading and unloading cycles is shown in Figure~3.
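Equations (12)--(14) combine into an explicit amplitude dependence of the below-threshold attenuation. The Python sketch below implements them directly; every parameter value is hypothetical and in arbitrary units, serving only to exercise the formulas, not to model any of the samples studied later.

```python
import math

# Hypothetical parameters (arbitrary units): Lam - dislocation density,
# mu - shear modulus, f_m - maximum binding force, sigma_v - vacancy threshold.
Lam, mu, T_dis, b = 1.0e8, 1.0e10, 1.0e-9, 3.0e-10
L_N, f_m, omega_ac, v_us = 1.0e-6, 1.0e-10, 2 * math.pi * 2.5e6, 5.0e3
N_jog, L_max, sigma_v = 1.0e16, 1.0e-6, 1.0e4

def N_v(sigma):
    """Vacancies per unit volume created by jog drift, Eq. (14)."""
    if sigma <= sigma_v:
        return 0.0
    return (2 * N_jog * L_max**3 / (3 * math.pi**2 * T_dis)) \
        * (sigma - sigma_v)**2 * (2 * sigma + sigma_v) / sigma**2

def alpha_H(sigma):
    """Granato-Luecke hysteresis attenuation, Eq. (12), with L_c(sigma)."""
    L_c = (N_v(sigma) + L_N**-3) ** (-1.0 / 3.0)
    x = math.pi * f_m / (4 * b * sigma * L_c)
    return (omega_ac / (2 * v_us)) * (Lam * L_N**3 / L_c) \
        * (8 * mu * b**2 / (math.pi**4 * T_dis)) * (x - 1) * math.exp(-x)
```

Below $\sigma_{\rm us}^{v}$ the attenuation is amplitude independent ($N_v=0$, constant $L_c$); above it, $L_c$ decreases as vacancies accumulate and the attenuation (12) changes accordingly.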
\section{Experiment and Analysis} Monocrystalline samples of NaCl, KCl, and ZnS (sphalerite) were studied experimentally at room temperature. The initial dislocation density in the crystals under investigation, not subjected to US treatment, was about $10^4\,$cm$^{-2}$ for NaCl and about $5\cdot 10^3\,$cm$^{-2}$ for ZnS. Longitudinal US waves were excited by piezoceramic transducers of PZT type within the frequency range $1.5\,$MHz$\,\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}} \,(\omega_{\rm ac}/2\pi)\, \raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}} \,7\,$MHz. The US attenuation factor $\alpha$ was measured by two techniques: 1)~by comparing the exciting US rf-voltage $V$ with that picked up from the receiving transducer, and 2)~with the help of a probing pulse by an echo-pulse method. The US waves were excited by a continuous rf-voltage. A typical dependence of the US attenuation $\alpha$ on the US amplitude, taken from the sample KCl-1A, is shown in Figure~4. The abscissa gives the rf-voltage $V$ at the frequency $f=\omega_{\rm ac}/2\pi=2.5\,$MHz; the US amplitude is proportional to this voltage. It is seen that at low voltage $V$ the attenuation of the US wave is equal to some value (point A), which remains practically constant as $V$ increases up to point B ($V\approx 10$~V). After this value of $V$ the attenuation begins to decrease (part BC of the curve). This reduction can be explained by the threshold vacancy generation, as described in Section~2.4. Part CD of the curve corresponds to attenuation growth when the generation of vacancies as well as interstitials begins (cf.\ parts BC and CD of the theoretical curve in Figure~3). It is interesting to note that, according to the calculations, the very onset of vacancy generation is accompanied by a small increase of attenuation.
It can be interpreted as a result of competition between the growth of hysteresis losses due to dislocation breakaway from the ``old'' pinning centers and the generation of new ones (vacancies), which slow down the free dislocation segments. It seems that the observable peak in part AB of the experimental curve can be qualitatively ascribed to this effect. The relatively strong additional growth of attenuation (part DE) is possibly caused by the activation of the motion of dislocations with jogs in another slip plane, which results in one more threshold value \cite{Kh:UFZh}. Figure~5 represents the results of the same study for the sample KCl-2 under US of $f=1.75$~MHz. There were two loading cycles. In the first case the voltage was raised up to point C below the threshold (curve~1) and then lowered to zero (curve~2). In the second case $V$ was increased up to a value above the threshold (point D$^{'}$, curve~3). It is seen that for both samples a hysteretic character of the attenuation dependences is clearly observed. However, the concrete shape of the curves depends directly on the maximum amplitude of the acoustic wave propagating in the sample. If the latter is larger than the threshold one, then the US attenuation during unloading is greater than that during loading because of the additional dislocation density. But if the maximum US amplitude is less than the threshold one (though sufficient for vacancy generation), then the unloading curve lies below the loading one, in accordance with the greater number of weak pinning centers present at that moment. On the whole, the observed US attenuation is in rather good qualitative agreement with the predictions of the proposed model. One of the most important of these is the coincidence of the threshold of SL excitation with that of point defect generation.
Figure~6 confirms this supposition: such an equality indeed takes place (see curve~1 for SL and curve~2 for US attenuation, both of which change sharply at the same value of the US amplitude). As to the quantitative agreement of the experimental and theoretical curves, it should be emphasized that their shape depends strongly on the dislocation length distribution, the character of which is not known for the samples under investigation. The single quantity that can be derived exactly from these studies is the ratio of the threshold values of the US amplitude for continuous point defect generation and for vacancy creation. It gives the ratio of the activation energies of interstitial and vacancy creation by a jog by means of the following expression, easily obtained from (6) and (11): \begin{equation} \frac{W_{i}^{*}}{W_{v}^{*}}= 2\frac{u_{\rm ac}^{\rm thr}}{u_{\rm ac}^{v}}-1. \end{equation} The results of our experiments shown in Figures~4 and~5 give this ratio as about 3. Indeed, as seen from these figures, $u_{\rm ac}^{\rm thr}/u_{\rm ac}^{v} \sim 2$, where $u_{\rm ac}^{\rm thr}$ corresponds to 1 and $u_{\rm ac}^{v}$ to $\approx 0.5$ in relative units. The relation between $W_{i}^{*}$ and $W_{v}^{*}$ agrees well with the values found from radiation experiments \cite{Luschik}. \section{Conclusions} The main results of this work can be formulated as follows: 1.~A model is proposed for point defect generation by a jog on a screw dislocation moving under the action of a US wave. It predicts the threshold character of this motion, which is determined by the creation energies of the vacancy or interstitial. It must be noted, however, that in the simplest approximation above we restricted ourselves to one sort of vacancy and one sort of interstitial. In fact, jogs in real crystals can generate different point defects of one type, differing, for example, in their charges (valencies). This may be the reason for the observation of various impurity center spectra in SL.
2.~The nonlinear absorption of the US wave as a function of US amplitude reveals a below-threshold minimum, which was explained above by the supposition that new point defects (vacancies generated by jogs) become additional pinning centers for free dislocation segments. 3.~The experimental study of some semiconducting compounds showed that the observed dependences are in satisfactory agreement with the predictions of the model developed. It allows one to estimate some energy parameters of the crystals, which are in agreement with data obtained from independent investigations. \section*{Acknowledgement} We kindly acknowledge the Bundesministerium f\"ur Bildung, Wissenschaft, Forschung und Technologie, Deutschland, and the ISSEP Program of the International Soros Foundation for partial financial support of the project. \newpage
\section{Introduction and main results}\label{s1} \setcounter{equation}{0} \noindent A Frobenius manifold comes equipped locally with a potential. If one gives a definition which does not mention this potential explicitly, one nevertheless obtains it immediately by the following elementary fact: Let $z_i$ be the coordinates on ${\mathbb C}^n$ and $\partial_i=\frac{\partial}{\partial z_i}$ be the coordinate vector fields. Let $M$ be a convex open subset of ${\mathbb C}^n$ and ${\mathcal T}_M$ be the holomorphic tangent bundle of $M$. Let $A:{\mathcal T}_M^3\to{\mathcal O}_M$ be a symmetric map such that also $\partial_i A(\partial_j,\partial_k,\partial_l)$ is symmetric in $i,j,k,l$. Then a potential $F\in{\mathcal O}_M$ with $\partial_i\partial_j\partial_k F=A(\partial_i,\partial_j,\partial_k)$ exists. On Frobenius manifolds see \cite{D, M}. This paper is devoted to a nontrivial generalization of this fact. The generalization turns up in the theory of families of arrangements as in \cite[ch. 3]{V2}. The geometry there looks at first view similar to the geometry of Frobenius manifolds, but at second view, it is quite different. At first view, one finds in both cases data $(M,K,\nabla^K,C,S,\zeta)$ with the following properties. $M$ is an open subset of ${\mathbb C}^n$ (with coordinates $z_i$ and coordinate vector fields $\partial_i=\frac{\partial}{\partial z_i}$). $K\to M$ is a holomorphic vector bundle with a flat holomorphic connection $\nabla^K$. $C$ is a Higgs field, i.e. an ${\mathcal O}_M$-linear map \begin{eqnarray}\label{1.1} C:{\mathcal O}(K)\to \Omega_M^1\otimes {\mathcal O}(K) \end{eqnarray} such that all the endomorphisms $C_X:K\to K,$ $X\in{\mathcal T}_M$, commute: $C_XC_Y=C_YC_X$. 
And $C$ and $\nabla^K$ satisfy the integrability condition \begin{eqnarray}\label{1.2} \nabla^K_{\partial_i}C_{\partial_j} = \nabla^K_{\partial_j}C_{\partial_i} \qquad \textup{for all }i,j\in\{1,...,n\} \end{eqnarray} (which is equivalent to $\nabla^K(C)=0$, see remark \ref{t4.1}). $S$ is a $\nabla^K$-flat symmetric nondegenerate and Higgs field invariant pairing. $\zeta$ is a global nowhere vanishing section of $K$. At second view, one sees the differences. In the case of a Frobenius manifold, $M$ is the Frobenius manifold, $\rk K=n$, and (much stronger) $C_\bullet\zeta:{\mathcal T}_M\to {\mathcal O}(K)$ is an isomorphism and all the sections $C_{\partial_i}\zeta$ are $\nabla^K$-flat. One obtains an identification of $TM$ with $K$ and of the coordinate vector fields $\partial_i$ with the flat sections $C_{\partial_i}\zeta$. In the case of a family of arrangements, $\rk K\geq n$, and the $\nabla^K$-flat sections in $K$ have the following much more surprising form. Define $J:=\{1,...,n\}$. A family of arrangements in ${\mathbb C}^k$ with $k<n$ as in \cite[ch. 3]{V2} comes equipped with vectors $(v_i)_{i\in J}$ in $M(1\times k,{\mathbb C})=\{\textup{row vectors of length }k\textup{ with values in }{\mathbb C}\}$ such that $\langle v_1,\ldots,v_n\rangle =M(1\times k,{\mathbb C})$. A subset $\{i_1,...,i_k\}\subset J$ is called {\it maximal independent} if $v_{i_1},...,v_{i_k}$ is a basis of $M(1\times k,{\mathbb C})$. The sections $C_{\partial_{i_1}}...C_{\partial_{i_k}}\zeta$ in $K$ for such subsets $\{i_1,...,i_k\}$ are $\nabla^K$-flat. The purpose of this paper is to show that also in this situation a potential exists which resembles the potential of a Frobenius manifold. This is nontrivial. The proof combines the integrability condition \eqref{1.2} with intricate combinatorial considerations which are due to the complicated form of the $\nabla^K$-flat sections. Theorem \ref{t1.2} is the main result. Definition \ref{t1.1} gives the frame and the used notions.
The frame is in two mild aspects more general than the data above in the case of arrangements. First, $S$ is more general, and second, the maximal independent subsets $\{i_1,...,i_k\}\subset J$ are maximal independent with respect to an arbitrary matroid $(J,F)$ of rank $k$. See definition \ref{t2.1} for the notion of a matroid. \begin{definition}\label{t1.1} (a) A {\it Frobenius like structure of order} $(n,k,m)\in{\mathbb Z}_{>0}^3$ with $n\geq k$ is a tuple $(M,K,\nabla^K,C,S,\zeta,(J,F))$ with the following properties. $M,K,\nabla^K,C,\zeta$ and $J$ are as above. $S$ is a $\nabla^K$-flat $m$-linear form $S:{\mathcal O}(K)^m\to{\mathcal O}_M$, which is Higgs field invariant, i.e. \begin{eqnarray}\label{1.3} S(C_Xs_1,s_2,...,s_m)=S(s_1,C_Xs_2,...,s_m)=...= S(s_1,s_2,...,C_Xs_m) \end{eqnarray} for $s_1,s_2,...,s_m\in{\mathcal O}(K)$ and $X\in {\mathcal T}_M$. $(J,F)$ is a matroid with rank $r(J)=k$. For any maximal independent subset $\{i_1,...,i_k\}\subset J$ the section $C_{\partial_{i_1}}...C_{\partial_{i_k}}\zeta$ is $\nabla^K$-flat. \medskip (b) Some notations: For any subset $I=\{i_1,...,i_k\}\subset J$, the differential operator $\partial_I:=\partial_{i_1}...\partial_{i_k}$ and the endomorphism $C_I:=C_{\partial_{i_1}}...C_{\partial_{i_k}}:{\mathcal O}(K)\to{\mathcal O}(K)$ are well defined (they do not depend on the chosen order of the elements $i_1,...,i_k$). \medskip (c) In the situation of (a), a {\it potential of the first kind} is a function $Q\in {\mathcal O}_M$ with \begin{eqnarray}\label{1.4} \partial_{I_1}...\partial_{I_m}Q=S(C_{I_1}\zeta,...,C_{I_m}\zeta) \end{eqnarray} for any $m$ maximal independent subsets $I_1,...,I_m\subset J$. A {\it potential of the second kind} is a function $L\in{\mathcal O}_M$ with \begin{eqnarray}\label{1.5} \partial_i\partial_{I_1}...\partial_{I_m}L=S(C_{\partial_i}C_{I_1}\zeta,...,C_{I_m}\zeta) \end{eqnarray} for any $m$ maximal independent subsets $I_1,...,I_m\subset J$ and any $i\in J$. 
\end{definition} \begin{theorem}\label{t1.2} Let $(M,K,\nabla^K,C,S,\zeta,(J,F))$ be a Frobenius like structure of some order $(n,k,m)\in{\mathbb Z}_{>0}^3$. Then locally (i.e. near any $z\in M\subset {\mathbb C}^n$) potentials of the first and second kind exist. \end{theorem} Notice that by formulas (\ref{1.4}) and (\ref{1.5}) the potential of the first kind determines the matrix elements of the $m$-linear form $S$ on the flat sections $C_{\partial_{i_1}}...C_{\partial_{i_k}}\zeta$, and the potential of the second kind determines the matrix elements of the Higgs operators $C_{\partial_{i}}$ acting on the flat sections $C_{\partial_{i_1}}...C_{\partial_{i_k}}\zeta$. Thus all information on the $m$-linear form and the Higgs operators is packed into the two potential functions. At the end of the paper, several remarks discuss the case of arrangements and the relation to Frobenius manifolds. In the case of arrangements, one has a Frobenius like structure of order $(n,k,2)$, but also other ingredients, which lead to a richer geometry. In the case of a Frobenius manifold, one has a Frobenius like structure of order $(n,1,2)$. The potential $L$ above generalizes the potential of a Frobenius manifold. For generic arrangements, a global explicit construction of the potentials $Q$ and $L$ had been given in \cite{V3}. Recently this was generalized in \cite{PV} to all families of arrangements as in \cite[ch. 3]{V2}. Section \ref{s2} cites a nontrivial result of J. Edmonds \cite[4. Theorem]{E} on matroid partition and adds some considerations. Section \ref{s3} applies an implication of it to a combinatorial situation which in turn is needed in the proof of the main theorem \ref{t1.2} in section \ref{s4}. Section \ref{s4} concludes with some remarks. We thank a referee of an earlier version \cite{HV} of this paper for pointing us to the result on matroid partition. This led to the present version of the paper which uses matroids.
The second author thanks MPI in Bonn for hospitality during his visit in 2015-2016. \section{Matroid partition}\label{s2} \setcounter{equation}{0} \begin{definition}\label{t2.1} (E.g. \cite{E}) A {\it matroid} $(E,F)$ is a finite set $E$ together with a nonempty family $F\subset {\mathcal P}(E)$ of subsets of $E$, called {\it independent sets}, such that the following holds. \begin{list}{}{} \item[(i)] Every subset of an independent set is independent. \item[(ii)] For every subset $A\subset E$, all maximal independent subsets of $A$ have the same cardinality, called the rank $r(A)$ of $A$. \end{list} \end{definition} For example, if $V$ is a vector space and $(v_e)_{e\in E}$ is a tuple of elements which generates $V$, one obtains a matroid where a subset $B\subset E$ is independent if and only if the tuple $(v_b)_{b\in B}$ is a linearly independent tuple of vectors. In the case of a family of arrangements, such a matroid will be used. The following result on matroid partition was proved by J. Edmonds \cite{E}. \begin{theorem}\label{t2.2} \cite[4. Theorem]{E}. Let $(E,F_i)$, $i=1,...,m$, be matroids which are defined on the same set $E$. Let $r_i(A)$ be the rank of $A\subset E$ relative to $(E,F_i)$. The following two conditions are equivalent. \begin{list}{}{} \item[($\alpha$)] The set $E$ can be partitioned into a family $\{I_i\}_{i=1,...,m}$ of sets $I_i\in F_i$. \item[($\beta$)] Any set $A\subset E$ satisfies \begin{eqnarray}\label{2.1} |A|\leq \sum_{i=1}^m r_i(A). \end{eqnarray} \end{list} \end{theorem} The implication $(\alpha)\Rightarrow(\beta)$ is immediate: Suppose that $\{I_i\}_{i=1,...,m}$ is a partition of $E$ with $I_i\in F_i$. Then for any $A\subset E$ \begin{eqnarray*} A=\dot\bigcup_{i=1}^m A\cap I_i,\quad |A|=\sum_{i=1}^m |A\cap I_i|\leq \sum_{i=1}^m r_i(A). \end{eqnarray*} But the implication $(\beta)\Rightarrow(\alpha)$ is nontrivial. The proof in \cite{E} is an involved inductive algorithm. 
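The linear matroid of the example above, and Edmonds' criterion ($\beta$), can be made concrete in a few lines of Python. The sketch below computes ranks by Gaussian elimination over the rationals and checks the inequality \eqref{2.1} by brute force over all subsets; it is an exhaustive check for tiny ground sets, not Edmonds' algorithm, and the example vectors are illustrative choices.

```python
from itertools import combinations
from fractions import Fraction

def rank(vectors, A):
    """Rank of the subset A of indices in the linear matroid of the given
    row vectors, computed by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in vectors[a]] for a in A]
    if not rows:
        return 0
    r, cols = 0, len(rows[0])
    for c in range(cols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
        if r == len(rows):
            break
    return r

def edmonds_condition(vector_lists, E):
    """Criterion (beta) of Theorem 2.2: |A| <= sum_i r_i(A) for all A in E,
    for m linear matroids given by the tuples in vector_lists."""
    for s in range(len(E) + 1):
        for A in combinations(E, s):
            if len(A) > sum(rank(v, A) for v in vector_lists):
                return False
    return True
```

For two copies of the matroid of the vectors $(1,0),(0,1),(1,1),(1,0)$ the criterion holds (and indeed $\{0,1\},\{2,3\}$ is a partition into independent sets), while for two copies of three parallel vectors it fails.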
We are interested in the more special situation in theorem \ref{t2.6}. Before, two lemmata are needed. \begin{definition}\label{t2.3} \cite{E} (a) A minimal dependent set of elements of a matroid is called a {\it circuit}. (b) For any number $l\in {\mathbb Z}_{\geq 0}$ and any finite set $E$ with $|E|\geq l$, the set $F^{(l,E)}:=\{I\subset E\, |\, |I|\leq l\}$ defines obviously a matroid $(E,F^{(l,E)})$, the {\it uniform matroid} of rank $l$. \end{definition} \begin{lemma}\label{t2.4} \cite[Lemma 2]{E} The union of any independent set $I$ and any element $e$ of a matroid contains at most one circuit of the matroid. \end{lemma} \begin{lemma}\label{t2.5} Let $(E,F)$ be a matroid. Let $A_1,A_2\subset E$ be subsets. For $i=1,2$, let $I_i\subset A_i$ be a maximal independent subset of $A_i$. Suppose that $I_1\cup I_2$ is an independent set. Then $I_1\cup I_2$ is a maximal independent subset of $A_1\cup A_2$, and $I_1\cap I_2$ is a maximal independent subset of $A_1\cap A_2$. \end{lemma} {\bf Proof:} Suppose that for some element $b\in (A_1\cup A_2)-(I_1\cup I_2)$ the union $I_1\cup I_2\cup \{b\}$ is independent. Then for some $i\in\{1,2\}$, $b\in A_i$. But $I_i\cup\{b\}$ is a larger independent subset of $A_i$ than $I_i$, a contradiction. This proves that $I_1\cup I_2$ is a maximal independent subset of $A_1\cup A_2$. Suppose that for some element $b\in (A_1\cap A_2)-(I_1\cap I_2)$ the union $(I_1\cap I_2)\cup\{b\}$ is independent. If $b\in I_i$ then $b\notin I_j$ where $\{i,j\}=\{1,2\}$. Then $I_j\cup\{b\}$ is an independent subset of $A_j$, a contradiction to the maximality of $I_j$. Therefore $b\notin I_1\cup I_2$. Thus for $i=1,2$, the set $I_i\cup\{b\}\subset A_i$ is dependent as it is larger than $I_i$. Therefore it contains a circuit $C_i\subset I_i\cup\{b\}$. Obviously $C_i\cap (I_i-I_j)\neq\emptyset$ where $\{i,j\}=\{1,2\}$. Thus $C_1\neq C_2$. Both are circuits in $(I_1\cup I_2)\cup\{b\}$, a contradiction to lemma \ref{t2.4}. 
This proves that $I_1\cap I_2$ is a maximal independent subset of $A_1\cap A_2$. \hfill$\Box$ \begin{theorem}\label{t2.6} Let $(E,F_i)$, $i=1,...,m$, be matroids which are defined on the same set $E$ and which satisfy together $(\alpha)$ and $(\beta)$ in theorem \ref{t2.2}. Suppose that $F_m=F^{(l,E)}$ for some $l\in {\mathbb Z}_{\geq 0}$ with $l\leq |E|$. Suppose that the set \begin{eqnarray}\label{2.2} G:=\{A\subset E\, |\ |A|=l+\sum_{i=1}^{m-1} r_i(A)\} \end{eqnarray} contains the set $E$. (a) Then this set $G$ is closed under the operations union and intersection of sets. Especially, it contains a set called $A_{min}\subset E$ which is the unique minimal element of $G$ with respect to the partial order given by inclusion. Of course $A_{min}\neq\emptyset$ if and only if $l\geq 1$. (b) Now suppose $l\geq 1$. Then $A_{min}=A_{par}$ where $A_{par}$ is the set \begin{eqnarray}\label{2.3} A_{par}&:=&\{b\in E\, |\, \exists\ \textup{a partition } \{I_i\}_{i=1,...,m}\textup{ of }E\\ && \hspace*{2cm} \textup{such that }I_i\in F_i\textup{ and }b\in I_m\}. \nonumber \end{eqnarray} \end{theorem} {\bf Proof:} (a) Choose a partition $\{I_i\}_{i=1,...,m}$ of $E$ with $I_i\in F_i$. For any subset $A\subset E$, it induces a partition $A=\dot\bigcup_{i=1}^m A\cap I_i$ of $A$ into subsets $(A\cap I_i)\in F_i$. If $A\in G$, then by \eqref{2.2} each set $A\cap I_i$ is a maximal independent subset of $A$ with respect to the matroid $(E,F_i)$. As $|A|\geq l$, especially $|A\cap I_m|=l$. As $E$ itself is in $G$, $|I_m|=l$, and thus $A\cap I_m =I_m$ for any set $A\in G$. Let $A_1,A_2\in G$. For any $i=1,...,m$, lemma \ref{t2.5} applies to the maximal independent sets $A_1\cap I_i$ and $A_2\cap I_i$ of $A_1$ respectively $A_2$ relative to the matroid $(E,F_i)$, because also $(A_1\cup A_2)\cap I_i\in F_i$. 
Therefore $(A_1\cup A_2)\cap I_i$ is a maximal independent subset of $A_1\cup A_2$ relative to $(E,F_i)$, and $(A_1\cap A_2)\cap I_i$ is a maximal independent subset of $A_1\cap A_2$ relative to $(E,F_i)$. Also, $I_m=A_1\cap I_m=A_2\cap I_m$ shows \begin{eqnarray*} I_m=(A_1\cup A_2)\cap I_m =(A_1\cap A_2)\cap I_m. \end{eqnarray*} Now $A_1\cup A_2\in G$ and $A_1\cap A_2\in G$ are obvious. Therefore $G$ is closed under the operations union and intersection of sets. (b) $A_{par}\subset A_{min}$: Fix an arbitrary element $b\in A_{par}$. Choose a partition $\{I_i\}_{i=1,...,m}$ of $E$ with $I_i\in F_i$ and $b\in I_m$. Recall $A_{min}\cap I_m=I_m$. Thus $b\in A_{min}$. $A_{min}\subset A_{par}$: Fix an arbitrary element $b\in A_{min}$. Define $\widetilde E:=E-\{b\}$. Any set $A\subset\widetilde E$ does not contain $A_{min}$, because $b\in A_{min}$. Therefore any set $A\subset\widetilde E$ satisfies $A\notin G$ and \begin{eqnarray}\label{2.4} |A|\leq -1+l+\sum_{i=1}^{m-1}r_i(A). \end{eqnarray} Consider the matroids $(\widetilde E,\widetilde F_i)$, where $\widetilde F_i:=\{I\in F_i\, |\, b\notin I\}$ for $i\in\{1,...,m-1\}$ and $\widetilde F_m:=F^{(l-1,\widetilde E)}$. For $i\in\{1,...,m-1\}$ the rank of $A\subset\widetilde E$ relative to $(\widetilde E,\widetilde F_i)$ is equal to the rank $r_i(A)$ of $A$ relative to $(E,F_i)$. By \eqref{2.4} and theorem \ref{t2.2}, a partition $\{\widetilde I_i\}_{i=1,...,m}$ of $\widetilde E$ with $\widetilde I_i\in \widetilde F_i$ exists. Now the sets $I_i:=\widetilde I_i$ for $i=1,...,m-1$, and $I_m:=\widetilde I_m\cup\{b\}$ form a partition of $E$ with $I_i\in F_i$. This shows $b\in A_{par}$. \hfill$\Box$ \section{An equivalence between index systems}\label{s3} \setcounter{equation}{0} \noindent In this section we fix three positive integers $n,k,m\in{\mathbb Z}_{>0}$ with $n\geq k$ and a matroid $(J,F)$ with underlying set $J=\{1,...,n\}$, rank function $r:{\mathcal P}(J)\to{\mathbb Z}_{\geq 0}$ and rank $r(J)=k$. 
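The finite matroids considered here are small enough to experiment with directly. The following Python sketch is purely illustrative (the encoding is ours, not part of the construction): it represents a matroid by its family of independent sets, builds the uniform matroid $F^{(l,E)}$ of definition \ref{t2.3}(b), enumerates its circuits, and checks lemma \ref{t2.4} exhaustively on a small instance.

```python
from itertools import combinations

def powerset(E):
    s = sorted(E)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def uniform_matroid(E, l):
    # Independent sets of F^{(l,E)}: all subsets of E of size at most l.
    return {A for A in powerset(E) if len(A) <= l}

def circuits(E, F):
    # Circuits = minimal dependent sets (definition 2.3(a)):
    # dependent sets all of whose proper subsets are independent.
    return [C for C in powerset(E)
            if C not in F and all(C - {e} in F for e in C)]

E = {1, 2, 3, 4}
F = uniform_matroid(E, 2)
C = circuits(E, F)
# For the uniform matroid of rank 2 on 4 elements, the circuits
# are exactly the four 3-element subsets.
assert sorted(map(len, C)) == [3, 3, 3, 3]

# Lemma 2.4: for an independent set I and any element e,
# the union I ∪ {e} contains at most one circuit.
for I in F:
    for e in E:
        assert sum(c <= I | {e} for c in C) <= 1
```

The exhaustive check is of course no substitute for the proof; it merely makes the statements of definition \ref{t2.3} and lemma \ref{t2.4} concrete.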
\begin{notations}\label{t3.1} As usual ${\mathbb Z}^J:=\{\textup{maps}:J\to{\mathbb Z}\}$ and ${\mathbb Z}_{\geq 0}^J:=\{\textup{maps}:J\to{\mathbb Z}_{\geq 0}\}$. The set ${\mathbb Z}^J$ is an additive group, the set ${\mathbb Z}_{\geq 0}^J$ is an additive monoid. For $j\in J$ denote by $[j]\in{\mathbb Z}_{\geq 0}^J$ the map with $[j](j)=1$ and $[j](i)=0$ for any $i\neq j$. Then any map $T\in {\mathbb Z}^J$ can be written as $T=\sum_{j=1}^nT(j)\cdot [j]$. For $T\in{\mathbb Z}^J$ denote $|T|:=\sum_{j=1}^nT(j)\in{\mathbb Z}$. The support of $T\in {\mathbb Z}^J$ is $\supp T:=\{j\in J\, |\, T(j)\neq 0\}$. The map \begin{eqnarray}\label{3.1} d_H:{\mathbb Z}^J\times{\mathbb Z}^J\to{\mathbb Z}_{\geq 0},\quad (T_1,T_2)\mapsto \sum_{j\in J}|T_1(j)-T_2(j)| \end{eqnarray} is a metric on ${\mathbb Z}^J$. On ${\mathbb Z}^J$ one has the partial ordering $\leq$ with \begin{eqnarray}\label{3.2} S\leq T\iff S(j)\leq T(j)\quad\forall\ j\in J. \end{eqnarray} Any map $T\in {\mathbb Z}_{\geq 0}^J$ with $|T|=t\in{\mathbb Z}_{\geq 0}$ is called a {\it system of elements of $J$} or simply a {\it system} or a {\it $t$-system}. If $S$ and $T$ are systems with $S\leq T$, then $S$ is a {\it subsystem} of $T$. \end{notations} \begin{definition}\label{t3.2} Here $l\in{\mathbb Z}_{\geq 0}$. Here all systems are systems of elements of $J$. \begin{list}{}{} \item[(a)] A system $T\in{\mathbb Z}_{\geq 0}^J$ is a {\it base} if $\supp T\in F$ and $|T|=k$ (so the support $\supp T$ is a maximal independent subset of $J$ and all $T(a)\in\{0;1\}$). \item[(b)] A {\it strong decomposition} of an $(mk+l)$-system $T$ is a decomposition $T=T^{(1)}+...+T^{(m+1)}$ into $m$ $k$-systems $T^{(1)},...,T^{(m)}$ and one $l$-system $T^{(m+1)}$ such that $T^{(1)},...,T^{(m)}$ are bases (and $T^{(m+1)}$ is an arbitrary $l$-system; e.g. if $l=0$ then $T^{(m+1)}=0$ automatically). \item[(c)] An $(mk+l)$-system is {\it strong} if it admits a strong decomposition. 
\item[(d)] A {\it good decomposition} of an $N$-system $T$ with $N\geq mk+1$ is a decomposition $T=T_1+T_2$ into two systems such that $T_2$ is a strong $(mk+1)$-system of elements of $J$. \item[(e)] Two good decompositions $T_1+T_2=T$ and $S_1+S_2=T$ of an $N$-system $T$ with $N\geq mk+1$ are {\it locally related}, notation: $(S_1,S_2)\sim_{loc} (T_1,T_2)$, if there are strong decompositions $S^{(1)}_2+...+S^{(m+1)}_2=S_2$ of $S_2$ and $T^{(1)}_2+...+T^{(m+1)}_2=T_2$ of $T_2$ with $S^{(j)}_2=T^{(j)}_2$ for $1\leq j\leq m$. Of course, $\sim_{loc}$ is a reflexive and symmetric relation. \item[(f)] Two good decompositions $T_1+T_2=T$ and $S_1+S_2=T$ of an $N$-system $T$ with $N\geq mk+1$ are {\it equivalent}, notation: $(S_1,S_2)\sim (T_1,T_2)$, if there is a sequence $\sigma_1,\sigma_2,...,\sigma_r$ for some $r\in{\mathbb Z}_{\geq 1}$ of good decompositions of $T$ such that $\sigma_1=(S_1,S_2)$, $\sigma_r=(T_1,T_2)$ and $\sigma_j\sim_{loc}\sigma_{j+1}$ for $j=1,...,r-1$. Of course, $\sim$ is an equivalence relation. \end{list} \end{definition} The main result of this section is the following theorem \ref{t3.3}. \begin{theorem}\label{t3.3} Let $T\in{\mathbb Z}_{\geq 0}^J$ be an $N$-system for some $N\geq mk+1$ which has good decompositions. Then all its good decompositions are equivalent. \end{theorem} The theorem will be proved after the proofs of corollary \ref{t3.4} and lemma \ref{t3.5}. Corollary \ref{t3.4} is a corollary of theorem \ref{t2.6}. \begin{corollary}\label{t3.4} Fix a strong $(mk+l)$-system $T\in{\mathbb Z}_{\geq 0}^J$ with $l\in{\mathbb Z}_{\geq 0}$. Then for any $B\subset J$ \begin{eqnarray}\label{3.3} \sum_{j\in B}T(j)\leq l+m\cdot r(B). \end{eqnarray} The set \begin{eqnarray}\label{3.4} G(T)&:=& \{B\subset \supp T\, |\, \sum_{j\in B}T(j) = l+m \cdot r(B)\} \end{eqnarray} contains $\supp T$ and is closed under the operations union and intersection of sets. 
Especially, it contains a set called $A_{min}(T)\subset \supp T$ which is the unique minimal element with respect to inclusion. In the case $l\geq 1$, define the set \begin{eqnarray}\label{3.5} A_{dec}(T)&:=&\{b\in J\, |\, \exists\ \textup{a strong decomposition }\\ &&T=T^{(1)}+...+T^{(m+1)}\textup{ with }b\in \supp T^{(m+1)}\} .\nonumber \end{eqnarray} Then $A_{min}(T)=A_{dec}(T)$. \end{corollary} {\bf Proof:} We will construct from $T$ certain lifts of the matroids $(J,F)$ and $(J,F^{(l,J)})$ to matroids on the set $E:=\{1,2,...,mk+l\}$ and apply theorem \ref{t2.6} to them. Choose a map $f:E\to J$ with $|f^{-1}(j)|=T(j)$. Define the sets \begin{eqnarray*} F_1=...=F_m&:=& \{A\subset E\, |\, f|_A:A\to J \textup{ injective,}\ f(A)\in F\}\subset{\mathcal P}(E), \\ F_{m+1}&:=& F^{(l,E)}\subset{\mathcal P}(E). \end{eqnarray*} Then $(E,F_i)$ for $i\in\{1,...,m+1\}$ is a matroid. Together they satisfy $(\alpha)$ in theorem \ref{t2.2} (with $m+1$ instead of $m$) because $T$ is a strong $(mk+l)$-system. We apply theorem \ref{t2.6} with $m+1$ instead of $m$. That $T$ is a strong $(mk+l)$-system also gives $E\in G$ and \eqref{3.3}. Therefore the set $A_{min}$ in theorem \ref{t2.6} is well defined. The set $A_{par}$ is well defined in any case. One sees easily \begin{eqnarray*} r_1(A)=...=r_m(A)&=&r(f(A))\quad\textup{for }A\subset E,\\ G&=&\{f^{-1}(B)\, |\, B\in G(T)\}. \end{eqnarray*} Therefore $G(T)$ contains $\supp T$ and is closed under the operations union and intersection of sets. Now one sees also easily \begin{eqnarray*} A_{min}&=&f^{-1}(A_{min}(T)),\quad A_{par}=f^{-1}(A_{dec}(T)), \end{eqnarray*} and thus $A_{min}(T)=A_{dec}(T)$. \hfill$\Box$ \begin{lemma}\label{t3.5} Let $S$ and $T\in {\mathbb Z}_{\geq 0}^J$ be two strong $(mk+1)$-systems. At least one of the following two alternatives holds. \begin{list}{}{} \item[$(\alpha)$] $T$ has a strong decomposition $T=T^{(1)}+...+T^{(m+1)}$ with $T^{(m+1)}=[i]$ for some $i\in \supp T$ with $T(i)>S(i)$. 
\item[$(\beta)$] For any strong decomposition $S=S^{(1)}+...+S^{(m+1)}$ a strong decomposition $T=T^{(1)}+...+T^{(m+1)}$ with $T^{(m+1)}=S^{(m+1)}$ exists. \end{list} \end{lemma} {\bf Proof:} Suppose that $(\alpha)$ does not hold. Then $S(i)\geq T(i)$ for any $i\in A_{dec}(T)$. Especially \begin{eqnarray*} \sum_{i\in A_{dec}(T)}S(i)&\geq& \sum_{i\in A_{dec}(T)}T(i) = 1+m\cdot r(A_{dec}(T)). \end{eqnarray*} The equality uses $A_{dec}(T)=A_{min}(T)\in G(T)$. Now \eqref{3.3} for $S$ instead of $T$ shows that $\geq$ can be replaced by $=$. Therefore $A_{dec}(T)\in G(S)$. Any element of $G(S)$ contains $A_{min}(S)$. This and the equality $A_{dec}(S)=A_{min}(S)$ give $$A_{dec}(S)=A_{min}(S)\subset A_{dec}(T).$$ Thus $(\beta)$ holds. \hfill$\Box$ \bigskip {\bf Proof of theorem \ref{t3.3}:} Let $(S_1,S_2)$ and $(T_1,T_2)$ be two different good decompositions of an $N$-system $T$ of elements of $J$ (with $N\geq mk+1$). Then $S_2$ and $T_2$ are strong $(mk+1)$-systems of elements of $J$. At least one of the two alternatives $(\alpha)$ and $(\beta)$ in lemma \ref{t3.5} holds for $S_2$ and $T_2$. \medskip {\bf First case, $(\alpha)$ holds:} Let $T_2=T_2^{(1)}+...+T_2^{(m+1)}$ be a strong decomposition with $T_2^{(m+1)}=[i]$ for some $i\in\supp T_2$ with $T_2(i)>S_2(i)$. Then, since $|T_2|=|S_2|$ and $T_2(i)>S_2(i)$, there exists a $j\in \supp T$ with $T_1(j)>S_1(j)$ and $T_2(j)<S_2(j)$. The decomposition \begin{eqnarray}\label{3.6} T=R_1+R_2\quad\textup{with }R_1=T_1-[j]+[i],\quad R_2=T_2+[j]-[i] \end{eqnarray} is a good decomposition of $T$ because $T_2^{(1)}+...+T_2^{(m)}+[j]$ is a strong decomposition of $R_2$. The good decompositions $(R_1,R_2)$ and $(T_1,T_2)$ are locally related, $(R_1,R_2)\sim_{loc}(T_1,T_2)$, and thus equivalent, \begin{eqnarray}\label{3.7} (R_1,R_2)\sim(T_1,T_2). \end{eqnarray} Furthermore, \begin{eqnarray}\label{3.8} d_H(R_2,S_2)=d_H(T_2,S_2)-2. 
\end{eqnarray} \medskip {\bf Second case, $(\beta)$ holds:} Let $T_2=T_2^{(1)}+...+T_2^{(m+1)}$ and $S_2=S_2^{(1)}+...+S_2^{(m+1)}$ be strong decompositions of $T_2$ and $S_2$ with $T_2^{(m+1)}=S_2^{(m+1)}=[a]$ for some $a\in \supp T$. There exist two elements $b,c\in\supp T$ with $T_1(b)>S_1(b)$, $T_2(b)<S_2(b)$, and $T_1(c)<S_1(c)$, $T_2(c)>S_2(c)$. Consider the decompositions of $T$, \begin{eqnarray}\label{3.9} T&=&R_1+R_2\quad\textup{with }R_1=T_1-[b]+[a],R_2=T_2+[b]-[a],\\ T&=&Q_1+Q_2\quad\textup{with }Q_1=S_1-[c]+[a],Q_2=S_2+[c]-[a].\label{3.10} \end{eqnarray} They are good decompositions because $R_2$ has the strong decomposition $R_2=T_2^{(1)}+...+T_2^{(m)}+[b]$ and $Q_2$ has the strong decomposition $Q_2=S_2^{(1)}+...+S_2^{(m)}+[c]$. The local relations \begin{eqnarray*} (R_1,R_2)\sim_{loc} (T_1,T_2)\quad\textup{and}\quad (Q_1,Q_2)\sim_{loc}(S_1,S_2) \end{eqnarray*} and the equivalences \begin{eqnarray}\label{3.11} (R_1,R_2)\sim (T_1,T_2)\quad\textup{and}\quad (Q_1,Q_2)\sim (S_1,S_2) \end{eqnarray} hold. Furthermore \begin{eqnarray}\label{3.12} d_H(R_2,Q_2)=d_H(T_2,S_2)-2. \end{eqnarray} \medskip The properties \eqref{3.7}, \eqref{3.8}, \eqref{3.11} and \eqref{3.12} show that in both cases the equivalence classes of $(S_1,S_2)$ and $(T_1,T_2)$ contain good decompositions whose second members are closer to one another with respect to the metric $d_H$ than $T_2$ and $S_2$. By induction on $d_H(T_2,S_2)$, this shows that $(S_1,S_2)$ and $(T_1,T_2)$ are in one equivalence class. \hfill$\Box$ \section{Potentials of the first and second kind}\label{s4} \setcounter{equation}{0} The main part of this section is devoted to the proof of theorem \ref{t1.2}. At the end some remarks on the relation to families of arrangements and Frobenius manifolds are made. \begin{remark}\label{t4.1} Here a coordinate-free formulation of the integrability condition \eqref{1.2} will be given. 
For $M,\nabla^K$ and $C$ as in the introduction, $\nabla^K(C)\in \Omega^2_M\otimes{\mathcal O}(\textup{End}(K))$ is the 2-form on $M$ with values in $\textup{End}(K)$ such that for $X,Y\in{\mathcal T}_M$ \begin{eqnarray}\label{4.1} \nabla^K(C)(X,Y)&=& \nabla^K_X(C_Y)-\nabla^K_Y(C_X)-C_{[X,Y]}. \end{eqnarray} Now \eqref{1.2} is equivalent to $\nabla^K(C)=0$. \end{remark} {\bf Proof of theorem \ref{t1.2}:} Let $(M,K,\nabla^K,C,S,\zeta,(J,F))$ be a Frobenius like structure of some order $(n,k,m)\in{\mathbb Z}_{>0}^3$. We need some notations. If $T\in{\mathbb Z}_{\geq 0}^J$ is a system of elements of $J$, then \begin{eqnarray*} (z-x)^T&:=&\prod_{i\in J}(z_i-x_i)^{T(i)}\quad\textup{for any }x\in{\mathbb C}^n,\\ T!:=\prod_{i\in J}T(i)!,\quad \partial_T&:=&\prod_{i\in J}\partial_{z_i}^{T(i)}, \quad C_T:=\prod_{i\in J}C_{\partial_{z_i}}^{T(i)}. \end{eqnarray*} Thus, if $S$ and $T$ are systems of elements of $J$, then \begin{eqnarray} \partial_T(z-x)^S=\left\{\begin{array}{ll} 0&\textup{ if }T\not\leq S,\\ \frac{S!}{(S-T)!}\cdot (z-x)^{S-T}& \textup{ if }T\leq S, \end{array}\right. \label{4.2} \end{eqnarray} for any $x\in{\mathbb C}^n$. \medskip The existence of a (not just local, but even global) potential $Q$ of the first kind is trivial. The function \begin{eqnarray}\label{4.3} Q&:=& \sum_{T\textup{ with }(*)}\frac{1}{T!}\cdot S(C_T \zeta,\zeta,...,\zeta)\cdot z^T \quad(m\textup{ times }\zeta), \hspace*{1cm}\\ (*)&:& T\in{\mathbb Z}_{\geq 0}^J \textup{ is a strong }mk\textup{-system (definition \ref{t3.2}(c))}.\nonumber \end{eqnarray} works. It is a homogeneous polynomial of degree $mk$ and contains only monomials which are relevant for \eqref{1.2}. In fact, one can add to this $Q$ an arbitrary linear combination of the monomials $z^T$ for the $mk$-systems $T$ which are not strong, so which are not relevant for \eqref{1.2}. \medskip The existence of a potential $L$ of the second kind is not trivial. Let some $x\in M$ be given. 
We make the power series ansatz \begin{eqnarray}\label{4.4} L&:=& \sum_{T\in{\mathbb Z}_{\geq 0}^J} a_T\cdot (z-x)^T, \end{eqnarray} where the coefficients $a_T$ have to be determined. If $T$ satisfies $|T|\leq mk$ or if it satisfies $|T|\geq mk+1$, but does not admit a good decomposition (definition \ref{t3.2} (d)), then the conditions \eqref{1.3} are empty for $a_T(z-x)^T$ because of \eqref{4.2}, so $a_T$ can be chosen arbitrarily, e.g. $a_T:=0$ works. Now consider $T$ with $|T|\geq mk+1$ which admits good decompositions. Then each good decomposition $T=T_1+T_2$ gives via \eqref{1.3} a candidate \begin{eqnarray}\label{4.5} a_T(T_1,T_2)&:=& \frac{1}{T!}\cdot \left(\partial_{T_1} S(C_{T_2}\zeta,\zeta,...,\zeta)\right)(x), \end{eqnarray} for the coefficient $a_T$ of $(z-x)^T$ in $L$. We have to show that the candidates $a_T(T_1,T_2)$ for all good decompositions $(T_1,T_2)$ of $T$ coincide. Suppose that two good decompositions $(T_1,T_2)$ and $(S_1,S_2)$ are locally related, $(T_1,T_2)\sim_{loc}(S_1,S_2)$ (definition \ref{t3.2} (e)), but not equal. Then there are strong decompositions $T_2=T_2^{(1)}+...+T_2^{(m)}+[a]$ and $S_2=T_2^{(1)}+...+T_2^{(m)}+[b]$ with $a\neq b$, and thus also $T_1-[b]=S_1-[a]\in{\mathbb Z}_{\geq 0}^J$ holds. Because any $T_2^{(j)}$, $j\in\{1,...,m\}$, is independent, $C_{T_2^{(j)}}\zeta$ is $\nabla^K$-flat. This and \eqref{1.2} give \begin{eqnarray} &&\partial_{z_b}S(C_{T_2}\zeta,\zeta,...,\zeta)\nonumber\\ &=& \partial_{z_b}S(C_{\partial_{z_a}}C_{T_2^{(1)}}\zeta, C_{T_2^{(2)}}\zeta,..., C_{T_2^{(m)}}\zeta)\nonumber\\ &=& S(\nabla^K_{\partial_{z_b}}(C_{\partial_{z_a}})C_{T_2^{(1)}}\zeta, C_{T_2^{(2)}}\zeta,..., C_{T_2^{(m)}}\zeta)\nonumber\\ &=& S(\nabla^K_{\partial_{z_a}}(C_{\partial_{z_b}})C_{T_2^{(1)}}\zeta, C_{T_2^{(2)}}\zeta,..., C_{T_2^{(m)}}\zeta)\nonumber\\ &=& \partial_{z_a}S(C_{\partial_{z_b}}C_{T_2^{(1)}}\zeta, C_{T_2^{(2)}}\zeta,..., C_{T_2^{(m)}}\zeta)\nonumber\\ &=&\partial_{z_a}S(C_{S_2}\zeta,\zeta,...,\zeta). 
\label{4.6} \end{eqnarray} This implies \begin{eqnarray}\label{4.7} a_T(T_1,T_2) = a_T(S_1,S_2), \end{eqnarray} so the locally related good decompositions $(T_1,T_2)$ and $(S_1,S_2)$ give the same candidate for $a_T$. Thus all equivalent (definition \ref{t3.2} (f)) good decompositions give the same candidate for $a_T$. By theorem \ref{t3.3}, all good decompositions of $T$ are equivalent. Therefore they all give the same candidate for $a_T$. Thus a potential $L$ of the second kind exists as a formal power series as in \eqref{4.4}. It is in fact a convergent power series because of the following. There are finitely many strong $(mk+1)$-systems $T_2$. Each determines the coefficients $a_T$ for all $T\geq T_2$. We put $a_T:=0$ for $T$ which do not admit good decompositions. The part of $L$ in \eqref{4.4} which is determined by some strong $(mk+1)$-system $T_2$ is a convergent power series. Thus $L$ is the {\it union} of finitely many overlapping convergent power series. It is easy to see that it is itself convergent. This finishes the proof of theorem \ref{t1.2}. \hfill$\Box$ \begin{remark}\label{t4.2} In \cite[ch. 3]{V2} families of arrangements are considered which give rise to Frobenius like structures $(M,K,\nabla^K,C,S,\zeta,(J,F))$ of order $(n,k,2)$, see the special case of generic arrangements in \cite{V1,V3}. Start with two positive integers $k$ and $n$ with $k<n$ and with a matrix $B:=(b_i^j)_{i=1,...,n;j=1,...,k}\in M(n\times k,{\mathbb C})$ with $\rank B=k$. Define $J:=\{1,...,n\}$. Here the matroid $(J,F)$ is the {\it vector matroid} (also called {\it linear matroid}) of the tuple $(v_i)_{i\in J}$ of row vectors $v_i:= (b^j_i)_{j=1,...,k}$ of the matrix $B$. More precisely, a subset $A\subset J$ is independent if the tuple $(v_i)_{i\in A}$ is a linearly independent system of vectors. 
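The vector matroid can be implemented directly: independence of $A\subset J$ amounts to the rows $(v_i)_{i\in A}$ having rank $|A|$. The following Python sketch (with an arbitrarily chosen small matrix $B$, not one from the cited families) computes the rank function by exact Gaussian elimination over ${\mathbb Q}$ and checks its submodularity, a standard property of matroid rank functions.

```python
from fractions import Fraction
from itertools import combinations

def row_rank(rows):
    # Rank over Q of a list of row vectors, by Gaussian elimination.
    M = [[Fraction(x) for x in row] for row in rows]
    rank = 0
    ncols = len(M[0]) if M else 0
    for col in range(ncols):
        piv = next((i for i in range(rank, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(len(M)):
            if i != rank and M[i][col] != 0:
                f = M[i][col] / M[rank][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

# Example matrix B with n = 4 rows and k = 2 columns (illustrative only).
B = [(1, 0), (0, 1), (1, 1), (2, 2)]
J = range(len(B))

def rk(A):
    # Rank function of the vector matroid of the rows of B.
    return row_rank([B[i] for i in A])

def independent(A):
    return rk(A) == len(A)

assert rk(J) == 2                   # the matroid has rank k = 2
assert independent({0, 2}) and not independent({2, 3})  # rows 2, 3 are parallel

# Submodularity: rk(X ∪ Y) + rk(X ∩ Y) <= rk(X) + rk(Y) for all X, Y.
subsets = [set(c) for s in range(len(B) + 1) for c in combinations(J, s)]
assert all(rk(X | Y) + rk(X & Y) <= rk(X) + rk(Y)
           for X in subsets for Y in subsets)
```

Exact rational arithmetic is used so that the independence test is not affected by floating-point error.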
Consider ${\mathbb C}^n\times {\mathbb C}^k$ with the coordinates $(z,t)=(z_1,...,z_n,t_1,...,t_k)$ and with the projection $\pi:{\mathbb C}^n\times {\mathbb C}^k\to{\mathbb C}^n$. Define the functions \begin{eqnarray}\label{4.8} g_i:=\sum_{j=1}^kb_i^j\cdot t_j,\quad f_i:=g_i+z_i \quad\textup{for }i\in J \end{eqnarray} on ${\mathbb C}^n\times {\mathbb C}^k$. We obtain on ${\mathbb C}^n\times {\mathbb C}^k$ the arrangement ${\mathcal C}=\{H_i\}_{i\in J}$, where $H_i$ is the zero set of $f_i$. Let $U({\mathcal C}):={\mathbb C}^n\times {\mathbb C}^k-\bigcup_{i\in J}H_i$ be the complement. For every $x\in{\mathbb C}^n$, the arrangement ${\mathcal C}$ restricts to an arrangement ${\mathcal C}(x)$ on $\pi^{-1}(x)\cong{\mathbb C}^k$. For almost all $x\in{\mathbb C}^n$ the arrangement ${\mathcal C}(x)$ is {\it essential} (definition in \cite{V2}) with normal crossings. The subset $\Delta\subset{\mathbb C}^n$ where this does not hold is a hypersurface and is called the {\it discriminant}; see \cite[3.2]{V2}. Define $M:={\mathbb C}^n-\Delta$. A set $I=\{i_1,...,i_k\}\subset J$ is maximal independent, i.e. $(v_{i_1},...,v_{i_k})$ is a basis of $M(1\times k,{\mathbb C})$, if and only if for some (or equivalently for any) $x\in {\mathbb C}^n$ the hyperplanes $H_{i_1}(x),...,H_{i_k}(x)$ are transversal. Let $a=(a_1,...,a_n)\in({\mathbb C}^*)^n$ be a system of {\it weights} such that for any $x\in M$ the weighted arrangement $({\mathcal C}(x),a)$ is {\it unbalanced}; see \cite{V2} for the definition of {\it unbalanced}. For example, any $a\in{\mathbb R}_{>0}^n$ is unbalanced, and so is a generic system of weights. The {\it master function} of the weighted arrangement $({\mathcal C},a)$ is \begin{eqnarray}\label{4.9} \Phi_a(z,t):=\sum_{i\in J}a_i\log f_i. \end{eqnarray} Several deep facts are related to this master function. We use some of them in the following. See \cite{V2} for references. 
For $z\in M$ all critical points of $\Phi_{a}$ are isolated, and the sum $\mu$ of their Milnor numbers is independent of the unbalanced weight $a$ and the parameter $z\in M$. The bundle \begin{eqnarray}\label{4.10} K:=\bigcup_{z\in M}K_z\quad\textup{with } K_z:={\mathcal O}(U({\mathcal C})\cap\pi^{-1}(z))/\left(\frac{\partial\Phi_a}{\partial t_j}\, |\, j=1,...,k\right) \end{eqnarray} over $M$ is a vector bundle of $\mu$-dimensional algebras. It comes equipped with the section $\zeta$ of unit elements $\zeta(z)\in K_z$, a Higgs field $C$, a {\it combinatorial connection} $\nabla^K$ and a pairing $S$. The Higgs field $C:{\mathcal O}(K)\to \Omega^1_M\otimes {\mathcal O}(K)$ is defined with the help of the period map \begin{eqnarray}\label{4.11} \Psi:TM\to K,\quad \partial_{z_i}\mapsto \left[\frac{\partial\Phi_a}{\partial z_i}\right] =\left[\frac{a_i}{f_i}\right]=:p_i \end{eqnarray} by \begin{eqnarray}\label{4.12} C_{\partial_{z_i}}(h):=p_i\cdot h\qquad\textup{ for }h\in K_z. \end{eqnarray} Because of \begin{eqnarray}\label{4.13} 0=\left[\frac{\partial\Phi_a}{\partial t_j}\right] =\sum_{i=1}^n b^j_i p_i, \end{eqnarray} the Higgs field vanishes on the vector fields $X_j:=\sum_{i=1}^n b^j_i\partial_i$, $j\in\{1,...,k\}$, \begin{eqnarray}\label{4.14} C_{X_j}=0\qquad\textup{for }j\in\{1,...,k\}. \end{eqnarray} In fact the whole geometry of the family of arrangements is invariant with respect to the flows of these vector fields. The sections $\det(b_i^j)_{i\in I,j=1,...,k}\cdot C_I\zeta$ for all maximal independent sets $I=\{i_1,...,i_k\}\subset J$ generate the bundle $K$, and they satisfy only relations with constant coefficients in ${\mathbb Z}$. The combinatorial connection $\nabla^K$ is the unique flat connection such that the sections $C_I\zeta$ for $I\subset J$ maximal independent are $\nabla^K$-flat. The sections $\det(b_i^j)_{i\in I,j=1,...,k}\cdot C_I\zeta$ for $I\subset J$ maximal independent generate a $\nabla^K$-flat ${\mathbb Z}$-lattice structure on $K$. 
The pairing $S$ comes from the Grothendieck residue with respect to the volume form \begin{eqnarray}\label{4.15} \frac{dt_1\land...\land dt_k}{\prod_{j=1}^k \frac{\partial\Phi_a}{\partial t_j}}. \end{eqnarray} It is symmetric, nondegenerate, $\nabla^K$-flat, multiplication invariant and Higgs field invariant. \smallskip The existence of potentials of the first and second kind for families of arrangements was conjectured in \cite{V1}. If all the $k\times k$ minors of the matrix $B=(b_i^j)$ are nonzero, the potentials were constructed in \cite{V1}, cf. \cite{V3}. In \cite{PV} this was generalized to all cases in this remark \ref{t4.2}. The potentials are given by explicit formulas in terms of the linear functions defining the hyperplanes in ${\mathbb C}^n$ composing the discriminant. \end{remark} \begin{remarks}\label{t4.3} (i) The situation in remark \ref{t4.2} is in several aspects richer than a Frobenius like structure of type $(n,k,m)$. The bundle $K$ is a bundle of algebras. The sections $C_I\zeta$ for maximal independent sets $I\subset J$ generate the bundle. The sections $\det(b_i^j)_{i\in I,j=1,...,k}\cdot C_I\zeta$ generate a flat ${\mathbb Z}$-lattice structure in $K$. The Higgs field vanishes on the vector fields $X_1,...,X_k$. The $m$-linear form $S$ is a pairing ($m=2$) and is nondegenerate. We will not discuss the ${\mathbb Z}$-lattice structure, but we will discuss some logical relations between the other enrichments and some implications of them. \medskip (ii) Let $(M,K,\nabla^K,C,S,\zeta,V,(v_1,...,v_n))$ be a Frobenius like structure of order $(n,k,m)$. Suppose that it satisfies the {\it generation condition} \begin{eqnarray}\label{4.16} \text{(GC)}&& \textup{The sections }C_I\zeta \textup{ for maximal independent sets } I\subset J\\ &&\textup{generate the bundle }K. \nonumber \end{eqnarray} Let $\mu$ be the rank of $K$. 
Then for any $x\in M$, the endomorphisms $C_X,X\in T_xM$, generate a $\mu$-dimensional commutative subalgebra $A_x\subset\textup{End}(K_x)$. And any endomorphism which commutes with them is contained in this subalgebra. This gives a rank $\mu$ bundle $A$ of commutative algebras. And the map \begin{eqnarray}\label{4.17} A\to K,\quad B\mapsto B\zeta, \end{eqnarray} is an isomorphism of vector bundles and induces a commutative and associative multiplication on $K_x$ for any $x\in M$, with unit field $\zeta(x)$. Therefore the special section $\zeta$ and the generation condition (GC), which exist and hold in remark \ref{t4.2}, give the multiplication on the bundle $K$ there. \medskip (iii) In the situation in (ii) with the condition (GC), the $m$-linear form is multiplication invariant because it is Higgs field invariant. The condition (GC) also implies that it is symmetric: \begin{eqnarray*} S(C_{I_1}\zeta,C_{I_2}\zeta,...,C_{I_m}\zeta) =S(C_{I_{\sigma(1)}}\zeta,C_{I_{\sigma(2)}}\zeta,...,C_{I_{\sigma(m)}}\zeta) \end{eqnarray*} for any maximal independent sets $I_1,...,I_m$ and any permutation $\sigma\in S_m$. \medskip (iv) The following special case gives rise to Frobenius manifolds without Euler fields. Consider a Frobenius like structure $(M,K,\nabla^K,C,S,\zeta,(J,F))$ of order $(n,1,2)$ with nondegenerate pairing $S$, $\nabla^K$-flat section $\zeta$, the uniform matroid $(J,F)=(J,F^{(1,J)})$ and the condition that the map $C_\bullet \zeta:TM\to K$ is an isomorphism. Then the sections $C_{\partial_i}\zeta$ generate the bundle $K$ and are $\nabla^K$-flat. Here $M$ becomes a Frobenius manifold (without Euler field) whose flat structure is the naive flat structure of ${\mathbb C}^n\supset M$. The potential $L$ is the potential of the Frobenius manifold. \end{remarks}
\section{Conclusions}\label{sec:conc} \begin{table*} \mbox{}\hfill \scalebox{0.9}{\input{resume}} \hfill\mbox{} \caption{Summary of the known complexities of query containment for several Datalog fragments; sources for each claim are shown in square brackets, using $\setminus$ to separate sources for lower and upper complexity bounds, respectively \label{fig_contcompl}} \end{table*} We have studied the most expressive fragments of Datalog for which query containment is still known to be decidable today, and we have provided exact complexities for most of their query answering and query containment problems. While containment for nested queries tends to be non-elementary for unbounded nesting depth, we have shown tight exponential complexity hierarchies for the main cases that we studied. As part of our results, we have also settled a number of open problems for known query languages: the complexity of query containment for \mqlang\xspace and \nestedq{\mqlang}\xspace, the complexity of query containment of \qlang{Dlog}\xspace in \gdatalog, and the expressivity of nested \lindatalog. Moreover, we have built on the recent ``flag~\& check'' approach of monadically defined queries to derive various natural extensions, which lead to new query languages with interesting complexity results. In most cases, we observed that the extension from monadic to frontier-guarded Datalog does not affect any of the complexities, whereas it might have an impact on expressivity. In contrast, the restriction to linear Datalog has the expected effects, both for query answering and for query containment. The only case for which our results for containment complexity are not tight is when we restrict rules to be both linear and monadic: while small variations in the involved query languages lead to the expected tight bounds, this particular combination eludes our analysis. 
This case could be studied as part of a future program for analyzing the behavior of (nested) conjunctive regular path queries, which are also a special form of monadic, linear Datalog. Another interesting open question is the role of constants. Our hardness proofs, especially in the nested case, rely on the use of constants to perform certain checks more efficiently. Without this, it is not clear how an exponential blow-up of our encoding (or the use of additional nesting levels) could be avoided. Of course, constants can be simulated if we have either predicates of higher arity or special constants as in ``flag \& check'' queries. However, for the case of (linear) monadic Datalog without constants, we conjecture that containment complexities are reduced by one exponential each when omitting constants. An additional direction of future research is to study problems where we ask for the \emph{existence} of a containing query of a certain type rather than merely check containment of two given queries. The most prominent instance of this scenario is the \emph{boundedness} problem, which asks whether a given Datalog program can be expressed by some (yet unknown) \qlang{UCQ}\xspace. It has been shown that this problem can be studied using tree-automata-based techniques as for query containment \cite{Cosmadakis88}, though other approaches have been applied as well \cite{BaranyCO12}. Besides boundedness, one can also ask more general questions of \emph{rewritability}, e.g., whether some Datalog program can be expressed in monadic Datalog or in a regular path query. \section{Introduction}\label{sec:intro} Query languages and their mutual relationships are a central topic in database research and a continued focus of intensive study. It has long been known that first-order logic expressions over the database relations (represented by \emph{extensional database predicates}, EDBs) lack the expressive power needed in many scenarios. 
Higher-order query languages have thus been introduced, which allow for the recursive definition of new predicates (so-called \emph{intensional database predicates}, IDBs). Most notably, Datalog has been widely studied as a very expressive query language with tractable query answering (w.r.t.\ the size of the database). On the other hand, Datalog has been shown to be too expressive a language for certain tasks which are of crucial importance in database management. In particular, the \emph{query containment problem}, which, given two queries $Q_1$ and $Q_2$, asks whether every answer to $Q_1$ is an answer to $Q_2$ in every possible database, is undecidable for full Datalog \cite{Shm87}. However, checking query containment is an essential task facilitating query optimization, information integration and exchange, as well as database integrity checking. It comes in handy for utilizing databases with materialized views, and, as part of an offline preprocessing technique, it may help accelerate online query answering. This motivates the search for Datalog fragments that are still expressive enough to satisfy their purposes but exhibit decidable query containment. Moreover, once decidability is established, the precise complexity of deciding containment provides further insights. The pursuit of these issues has led to a productive and well-established line of research in database theory, which has already produced numerous results for a variety of Datalog fragments. 
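For plain conjunctive queries, containment is decided by the classical homomorphism criterion: $Q_1$ is contained in $Q_2$ iff $Q_2$ maps homomorphically into the canonical database of $Q_1$. The following Python sketch implements this brute-force test for Boolean \qlang{CQ}\xspace{}s (the atom encoding is ours; the exhaustive search is exponential, in line with the NP-completeness of the test).

```python
from itertools import product

def cq_contained(q1, q2):
    """Decide q1 ⊆ q2 for Boolean conjunctive queries via the
    homomorphism criterion: q1 ⊆ q2 iff q2 maps homomorphically into
    the canonical database of q1. Queries are sets of atoms
    (predicate, (term, ...)); variables are strings, constants ints."""
    def is_var(t):
        return isinstance(t, str)
    # Canonical database of q1: its atoms with variables frozen as constants.
    canon = {(p, tuple(args)) for (p, args) in q1}
    vars2 = sorted({t for (_, args) in q2 for t in args if is_var(t)})
    dom = sorted({t for (_, args) in canon for t in args}, key=repr)
    for image in product(dom, repeat=len(vars2)):
        h = dict(zip(vars2, image))
        if all((p, tuple(h.get(t, t) for t in args)) in canon
               for (p, args) in q2):
            return True  # found a homomorphism h : q2 -> canon(q1)
    return False

# q1(): edge(x, y), edge(y, z)   -- a path of length 2
# q2(): edge(u, v)               -- a single edge
q1 = {('edge', ('x', 'y')), ('edge', ('y', 'z'))}
q2 = {('edge', ('u', 'v'))}
assert cq_contained(q1, q2)       # every 2-path contains an edge
assert not cq_contained(q2, q1)   # a single edge need not extend to a 2-path
```

The Datalog fragments discussed below can be seen as successively relaxing the static, recursion-free shape of such queries while trying to keep a decidable containment test.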
\begin{figure*} \graphicspath{{figures/}} \def\svgwidth{13cm} \mbox{}\hfill\scalebox{0.9}{\input{figures/querycompl.pdf_tex}}\hfill\mbox{} \caption{Query languages and complexities; languages higher up in the graph are more expressive\label{fig_querycompl}} \end{figure*} \paragraph*{Non-recursive Datalog and unions of conjunctive queries} A non-recursive Datalog program does not use any (direct or indirect) recursion and is equivalent to a union of conjunctive queries (\qlang{UCQ}\xspace), and thus expressible in first-order logic. The problem of containment of a Datalog program (in the following referred to as \qlang{Dlog}\xspace) in a union of conjunctive queries is {\complclass{2ExpTime}}-complete \cite{ChaudhuriV97}. Due to the succinctness of non-recursive Datalog compared to \qlang{UCQ}\xspace{}s, the problem of containment of \qlang{Dlog}\xspace in non-recursive Datalog is \complclass{3ExpTime}-complete \cite{ChaudhuriV97}. Some restrictions for decreasing the complexity of these problems have been considered. Containment of a \emph{linear} Datalog program (\lindatalog), i.e., one whose rule bodies contain at most one IDB atom, in a \qlang{UCQ}\xspace is \complclass{ExpSpace}-complete; the complexity further decreases to \complclass{PSpace} when the linear Datalog program is monadic (\linmdatalogconst, see below) \cite{ChaudhuriV94,ChaudhuriV97}. The techniques used to prove the upper bounds in these results are based on reductions to the containment problem for tree automata in the general case, and for word automata in the linear case. \paragraph*{Monadic Datalog} A monadic Datalog (\mdatalogconst) program is a program containing only unary intensional predicates. The problem of containment for \mdatalogconst is \complclass{2ExpTime}-complete. The upper bound has been well known since the 1980s \cite{Cosmadakis88}, while the lower bound has been established only recently \cite{BenediktBS12}. 
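As a concrete illustration of monadic Datalog (the program and encoding below are ours, not taken from the cited works), the following Python sketch evaluates the two-rule program $\mathit{reach}(x) \leftarrow \mathit{start}(x)$ and $\mathit{reach}(y) \leftarrow \mathit{reach}(x), \mathit{edge}(x,y)$ bottom-up to a fixed point; the loop makes the polynomial data complexity of Datalog query answering visible.

```python
def naive_eval(edges, start):
    """Bottom-up evaluation to a fixed point of the monadic program
         reach(x) :- start(x).
         reach(y) :- reach(x), edge(x, y).
       (A toy example program, not one from the paper.)"""
    reach = set(start)           # first rule: copy the start facts
    changed = True
    while changed:               # iterate the second rule to a fixed point
        changed = False
        for x, y in edges:
            if x in reach and y not in reach:
                reach.add(y)
                changed = True
    return reach

edge_facts = {(1, 2), (2, 3), (3, 1), (4, 5)}
assert naive_eval(edge_facts, {1}) == {1, 2, 3}
assert naive_eval(edge_facts, {4, 5}) == {4, 5}
```

Each iteration only adds facts, and there are at most as many unary facts as database elements, so the loop terminates after polynomially many rounds in the size of the data.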
Finally, the containment of \qlang{Dlog}\xspace in \mdatalogconst is also decidable. It is a straightforward application of Theorem~5.5 of \cite{Courcelle91}.\footnote{We thank Michael Benedikt for this observation.} So far, however, tight bounds have not been known for this result. \paragraph*{Guarded Datalog} Guarded Datalog (\gdatalog) allows the use of intensional predicates with unrestricted arities; however, for each rule, the variables of the head must appear in a single extensional atom in the body of the rule. While this notion of (frontier-)guarded rules has been known for a while \cite{CaliGK08,BLMS11:decline}, the first use of \gdatalog as a query language seems to be only recent \cite{BaranyCO12}. \gdatalog is a proper extension of \mdatalogconst, since monadic rules can always be rewritten into guarded rules \cite{BaranyCO12}. It is known that query containment for \gdatalog is \complclass{2ExpTime}-complete, a result based on the decidability of the satisfiability of the guarded negation fixed point logic \cite{BaranyCS11}. \paragraph*{Navigational Queries} Conjunctive two-way regular path queries (\qlang{C2RPQ}\xspace{}s) generalize conjunctive queries (\qlang{CQ}\xspace{}s) by regular expressions over binary predicates \cite{regularpathqueries1,regularpathqueries2}. Variants of this type of queries are used, e.g., by the XPath query language for querying semi-structured XML data. Recent versions of the SPARQL~1.1 query language for RDF also support a form of regular expressions that can be evaluated under a similar semantics. Intuitively, a \qlang{C2RPQ}\xspace is a conjunction of atoms of the form $xLy$ where $L$ is a two-way regular expression. A pair of nodes $\tuple{n_1,n_2}$ is a valuation of the pair $\tuple{x,y}$ if and only if there exists a path between $n_1$ and $n_2$ matching $L$. 
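The semantics of a single regular-path atom $xLy$ can be sketched operationally: compile $L$ into a finite automaton and search the product of the database graph with the automaton. The following Python sketch is illustrative only: the graph and the NFA (given explicitly rather than compiled from a regular expression, and without the inverse-edge steps that full two-way expressions additionally allow) are our own hypothetical data.

```python
from collections import deque

def rpq_matches(edges, nfa, start, accept, n1, n2):
    """Check whether some path from n1 to n2 in the edge-labelled graph
    spells a word accepted by the NFA. BFS over the product of graph
    nodes and NFA states; edges are triples (source, label, target)."""
    seen = {(n1, start)}
    queue = deque(seen)
    while queue:
        node, state = queue.popleft()
        if node == n2 and state in accept:
            return True  # accepting path found
        for (src, label, dst) in edges:
            if src != node:
                continue
            for nxt in nfa.get((state, label), ()):
                if (dst, nxt) not in seen:
                    seen.add((dst, nxt))
                    queue.append((dst, nxt))
    return False

# Graph: a -knows-> b -knows-> c; automaton for the expression knows+.
edges = [('a', 'knows', 'b'), ('b', 'knows', 'c')]
nfa = {(0, 'knows'): {1}, (1, 'knows'): {1}}  # accepts one or more 'knows'
```

The product construction runs in time polynomial in the graph and the automaton, which is why evaluating a single regular-path atom is tractable even though containment of whole \qlang{C2RPQ}\xspace{}s is much harder.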
The containment of queries in this language was shown to be \complclass{ExpSpace}-complete \cite{regularpathqueries1,CalvaneseGLV03,AbiteboulV99,Deutsch2001}. The containment of \qlang{Dlog}\xspace in \qlang{C2RPQ}\xspace is \complclass{2ExpTime}-complete \cite{CalvaneseGV05}. \paragraph*{Monadically Defined Queries} More recently, Monadically Defined Queries (\mqlang\xspace{}s) and their nested version (\nestedq{\mqlang}\xspace{}s) have been introduced \cite{RK13:flagcheck} as a proper generalization of \mdatalogconst which also captures (unions of) \qlang{C2RPQ}\xspace{}s. At the same time, they are conveniently expressible both in \qlang{Dlog}\xspace and monadic second-order logic. Yet, as opposed to these two, \mqlang\xspace{}s and \nestedq{\mqlang}\xspace{}s have been shown to have a decidable containment problem, but no tight bounds were known so far. \medskip In spite of these continued efforts, the complexity of query containment is still unclear for many well-known Datalog fragments, especially for the most expressive ones. In this paper, we thus study a variety of known and new query languages in more detail. Figure~\ref{fig_querycompl} gives an overview of all Datalog fragments we consider, together with their respective query-answering complexities. We provide a detailed complexity analysis of the mutual containment between queries of the aforementioned (and some new) formalisms. This analysis is fine-grained in the sense that---in the case of query formalisms that allow for nesting---precise complexities depending on the nesting depth are presented. Moreover, we consider the case where the used rules are restricted to linear Datalog. \begin{itemize} \item We introduce \emph{guarded queries} (\gqlang\xspace{}s) and their nested versions (\nestedq{\gqlang}\xspace{}s), Datalog fragments that properly generalize \mqlang\xspace{}s and \nestedq{\mqlang}\xspace{}s, respectively, while featuring the same data and combined complexities for query answering. 
On the other hand, already unnested \gqlang\xspace{}s subsume \gdatalog. We also consider the restrictions of all these queries to the linear Datalog case and observe that this drops data complexities to \complclass{NLogSpace} whereas it does not affect combined complexities. \item By means of sophisticated automata-based techniques involving iterated transformations on alternating two-way automata, we show a generic upper bound stating that containment of \qlang{Dlog}\xspace in nested guarded queries of depth $k$ (\kgq{k}) can be decided in \kExpTime{$(k+2)$}. Additionally we show that going down to \gdatalog on the containment's right-hand side allows deciding it in \complclass{2ExpTime}. \item Inductively defining alternating Turing machine simulations on tapes of $(k+1)$-exponential size, we provide a matching generic lower bound by showing that containment of \mdatalogconst in \kmq{k} is \kExpTime{$(k+2)$}-hard. Together with the upper bound, this provides precise complexities for all cases, where the left-hand side of the containment is any fragment between \mdatalogconst and \qlang{Dlog}\xspace (cf. Fig.~\ref{fig_querycompl}) and the right-hand side is any of \mqlang\xspace, \gqlang\xspace, \kmq{k}, \kgq{k}, \nestedq{\mqlang}\xspace, \nestedq{\gqlang}\xspace. In particular, this solves the respective open questions from \cite{RK13:flagcheck}: \mqlang\xspace containment is \kExpTime{$3$}-complete and \nestedq{\mqlang}\xspace containment is \complclass{NonElementary}. \item We next investigate the situation in case only linear rules are allowed in the definition of the Datalog fragment used on the left hand side of the containment problem (this distinction generally makes no difference for the right-hand side). We find that in most of these cases, the complexities mentioned above drop to \kExpSpace{$(k+1)$}. 
\end{itemize} In summary, our results settle open problems for (nested) MQs, and they paint a comprehensive and detailed picture of the state of the art in Datalog query containment. \section{Linear Datalog}\label{sec:linearity} Not only query answering, but also containment checking is often slightly simpler in fragments of linear Datalog. Intuitively, this is so because derivations can be represented as words rather than as trees. Thus, the automata-theoretic techniques that we have used in Section~\ref{sec_gqcontainmentup} can be applied with automata on words, where some operations are easier. In particular, containment of (nondeterministic) automata on words can be checked in polynomial space rather than in exponential time. This allows us to establish the following theorems, which reduce the \kExpTime{$2$} upper bound of Theorem~\ref{theo_membdlgdl} to \complclass{ExpSpace} and the \kExpTime{$(k+2)$} upper bound of Theorem~\ref{theo_membdlgqk} to \kExpSpace{$(k+1)$}. \begin{theorem}\label{theo_memblindlgdl} \containmentMembershipStatement{\lindatalog}{\gdatalog}{\complclass{ExpSpace}} \end{theorem} \begin{theorem}\label{theo_memblindlklingq} \containmentMembershipStatement{\lindatalog}{\kgq{k}}{\kExpSpace{$(k+1)$}} \end{theorem} Establishing matching lower bounds for the complexity turns out to be more difficult. In general, we lose the power of alternation, which explains the reduction in complexity. The general approach of encoding (non-alternating) Turing machines is the same as in Section~\ref{sec_atmencoding}, where Definition~\ref{def_atmencode} is slightly simplified since we do not need to consider universal states, so that configuration trees turn into configuration sequences. Moreover, Lemma~\ref{lemma_atmquasienctoenc} applies to this case as well, since it only requires linear queries. Likewise, our general inductive step in Lemma~\ref{lemma_atmencodenesting} uses deterministic (non-alternating) TMs to construct exponentially long tapes. 
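The gain from word-shaped derivations noted above can be made concrete: deciding $L(A)\subseteq L(B)$ for word automata amounts to searching $A$ in product with the determinized complement of $B$. The following Python sketch is illustrative only (it materializes the subset construction explicitly, hence uses exponential space, rather than enumerating subsets on the fly as the polynomial-space procedure would; the automata encodings and example automata are ours):

```python
from itertools import chain

def nfa_contained(a, b, alphabet):
    """Decide L(A) <= L(B) for NFAs given as (start, accept_set, delta)
    with delta[(state, symbol)] = set of successor states. Explores A in
    product with the determinized complement of B; reaching a pair that
    is accepting in A but non-accepting in B witnesses a counterexample."""
    (sa, fa, da), (sb, fb, db) = a, b
    def b_step(states, sym):
        return frozenset(chain.from_iterable(db.get((q, sym), ()) for q in states))
    start = (sa, frozenset({sb}))
    seen, stack = {start}, [start]
    while stack:
        qa, qbs = stack.pop()
        if qa in fa and not (qbs & fb):
            return False  # some word is accepted by A but not by B
        for sym in alphabet:
            for qa2 in da.get((qa, sym), ()):
                nxt = (qa2, b_step(qbs, sym))
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return True

# A accepts exactly 'ab'; B accepts a*b: L(A) <= L(B) but not conversely.
A = (0, {2}, {(0, 'a'): {1}, (1, 'b'): {2}})
B = (0, {1}, {(0, 'a'): {0}, (0, 'b'): {1}})
```

Only the current subset of $B$-states is needed at each step, which is exactly what the on-the-fly version exploits to stay within polynomial space.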
Moreover, it turns out that the construction of an initial exponential space TM in Lemma~\ref{lemma_mducqexpatm} leads to linear queries if the TM has no universal states. Yet it is challenging to lift the exact encodings of Lemma~\ref{lemma_mducqexpatm} and Lemma~\ref{lemma_atmencodenesting}. The same-cell query that we constructed in Lemma~\ref{lemma_atmencodenesting} for our inductive argument is non-linear. As explained in Section~\ref{sec_mqcontainmentlow}, the use of two IDBs to mark both sequences of tape cells is essential there to ensure correctness. The main problem is that we must not lose the connection to either of the sequences during our checks. As an alternative to using IDBs on both sequences, one could use the $\textsf{ConfCell}$ query to ensure that the compared cells belong to the right configurations. This leads to the following same-cell query: \noindent{\small% \begin{align*} \textsf{State}_q(\lambda_1)\wedge\textsf{FirstCell}(\lambda_1,x)\wedge\textsf{Symbol}(x,z)\wedge\textsf{Head}(x,v)\wedge{}\\[-0.7ex] \textsf{State}_q(\lambda_2)\wedge\textsf{FirstCell}(\lambda_2,y)\wedge\textsf{Symbol}(y,z)\wedge\textsf{Head}(y,v) &\to\mathtt{U}(y)\\[-0.7ex] & \phantom{{}\to{}}\text{for all $q\in Q$}\\ \mathtt{U}(y)\wedge\textsf{ConfCell}(\lambda_1,x)\wedge\textsf{SameCell}(x,y)\wedge{}\\[-0.7ex] \textsf{NextCell}(x,x')\wedge\textsf{Symbol}(x',z)\wedge\textsf{Head}(x',v)\wedge{}\\[-0.7ex] \textsf{NextCell}(y,y')\wedge\textsf{Symbol}(y',z)\wedge\textsf{Head}(y',v) &\to\mathtt{U}(y')\\ \mathtt{U}(y)\wedge \textsf{LastCell}(y)&\to\mathsf{hit} \end{align*}} While this works in principle, it has the problem that the $\textsf{ConfCell}$ query of Lemma~\ref{lemma_mducqexpatm} is a \linmqlang\xspace, not a \qlang{UCQ}\xspace. 
Therefore, if we construct a same-cell query for the \kExpSpace{$2$} case, we obtain \klinmq{2} queries, which yields the following result: \begin{theorem}\label{theo_hardlinmqklinmq} \containmentHardnessStatement{\linmdatalogconst}{\klinmq{k}}{\kExpSpace{$k$}} \end{theorem} In order to do better, one can try to express $\textsf{ConfCell}$ as a \qlang{UCQ}\xspace. In general, this is not possible on the database instances that the left-hand query in Lemma~\ref{lemma_mducqexpatm} recognizes, since cells may have an exponential distance to their configuration while \qlang{UCQ}\xspace{}s can only recognize local structures. To make $\textsf{ConfCell}$ local, we can modify the left-hand query to ensure that every cell is linked directly to its configuration with a binary predicate $\textsf{inConf}$. Using binary IDB predicates, we can do this with the following set of frontier-guarded rules: \noindent{\small% \begin{align*} \textsf{firstConf}(x,y)\wedge\mathtt{U}_{\textit{conf}}(y) &\to\mathtt{U}_{\textit{goal}}(x)\\ \textsf{state}_q(x)\wedge\textsf{nextCell}(x,y)\wedge{}&\\[-0.7ex] \textsf{inConf}(y,x)\wedge\mathtt{U}_{\textit{bit}_1}(y,x) &\to\mathtt{U}_{\textit{conf}}(x) & \text{for $q\in Q$}\displaybreak[0]\\ \textsf{bit}_{i-1}(x,0)\wedge\mathtt{U}_{\textit{bit}_i}(y,z)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{bit}_{i-1}}(x,z) & \text{for $i\in\{2,\ldots,\ell\}$}\displaybreak[0]\\ \textsf{bit}_{i-1}(x,1)\wedge\mathtt{U}_{\textit{bit}_i}(y,z)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{bit}_{i-1}}(x,z) & \text{for $i\in\{2,\ldots,\ell\}$}\displaybreak[0]\\ \textsf{symbol}(x,c_\sigma)\wedge\mathtt{U}_{\textit{symbol}}(x,z)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{bit}_\ell}(x,z) & \text{for $\sigma\in\Sigma$}\displaybreak[0]\\ \textsf{head}(x,h)\wedge\mathtt{U}_{\textit{head}}(x,z)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{symbol}}(x,z) &\displaybreak[0]\\ 
\textsf{head}(x,l)\wedge\mathtt{U}_{\textit{head}}(x,z)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{symbol}}(x,z) &\displaybreak[0]\\ \textsf{head}(x,r)\wedge\mathtt{U}_{\textit{head}}(x,z)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{symbol}}(x,z) & \displaybreak[0]\\ \textsf{nextCell}(x,y)\wedge\mathtt{U}_{\textit{bit}_1}(y,z)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{head}}(x,z) & \displaybreak[0]\\ \textsf{nextConf}_\delta(x,y)\wedge\mathtt{U}_{\textit{conf}}(y)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{head}}(x,z) & \text{for $\delta\in\Delta$} \displaybreak[0]\\ \textsf{lastConf}(x)\wedge\textsf{inConf}(x,z) &\to\mathtt{U}_{\textit{head}}(x,z) \\ \end{align*}% }% Structures matched by this query provide direct links from each element to its configuration element, and we can thus formulate $\textsf{ConfCell}$ as a \qlang{UCQ}\xspace and obtain the following. \begin{theorem}\label{theo_hardlingqklinmq} \containmentHardnessStatement{\lingdatalog}{\klinmq{k}}{\kExpSpace{$(k+1)$}} \end{theorem} It is not clear if this result can be extended to containments of \linmqlang\xspace in \klinmq{k}; the above approach does not suggest any suitable modification. In particular, the propagation of $\textsf{inConf}$ in the style of a transitive closure does not work, since elements may participate in many $\textsf{inConf}$ relations. On the other hand, the special constants $\lambda$ in \linmqlang\xspace{}s cannot be used to refer to the current configuration, since there can be an unbounded number of configurations but only a bounded number of special constants. It is possible, however, to formulate a \linmqlang\xspace $\textsf{Config}[x]$ that generates the required structure for a single configuration, since one can then represent the configuration by $\lambda$. 
We can generate arbitrary sequences of such structures by using $\textsf{Config}[x]$ as a nested query that matches the regular expression $\textsf{firstConf}~(\textsf{Config}~\textsf{NextConf})^\ast~\textsf{Config}~\textsf{lastConf}$, where we use $\textsf{NextConf}$ to express the disjunction of all $\textsf{nextConf}_\delta$ relations. This proves the following statement. \begin{theorem}\label{theo_hardtwolinmqklinmq} \containmentHardnessStatement{\klinmq{2}}{\klinmq{k}}{\kExpSpace{$(k+1)$}} \end{theorem} Finally, we can also continue to use the same approach for encoding $\textsf{SameCell}$ as in Section~\ref{sec_mqcontainmentlow}, without using $\textsf{ConfCell}$, while still restricting to linear Datalog (and thus to non-alternating TMs) on the left-hand side. This leads us to the following result. \begin{theorem}\label{theo_hardlinmdlmq} \containmentHardnessStatement{\linmdatalogconst}{\kmq{k}}{\kExpSpace{$(k+1)$}} \end{theorem} We have thus established tight complexity bounds for the containment of nested \gqlang\xspace{}s, while there remains a gap (of one exponential or one nesting level) for \mqlang\xspace{}s. \section*{Proofs for Section~\ref{sec_atmencoding}} \begin{figure*}[t!] 
\[ \begin{array}{r@{~~~}l} \multicolumn{2}{l}{\textbf{(1)~~Unique head marker and correct left/right head markers:}}\\[0.5ex] \textsf{Head}(y,p_1)\wedge\textsf{NextCell}(y,z)\wedge\textsf{Head}(z,p_2) & \text{where $\tuple{p_1,p_2}\in\{\tuple{h,h},\tuple{h,l},\tuple{r,h},\tuple{r,l}\}$}\\[0.5ex] \textsf{Head}(y,h)\wedge\textsf{Head}(y,p) & \text{where $p\in\{r,l\}$}\\[1ex] \multicolumn{2}{l}{\textbf{(2)~~Unique start configuration:}}\\[0.5ex] \textsf{FirstConf}(x,y)\wedge\textsf{State}_q(y) & \text{where $q\neq q_s$}\\[0.5ex] \textsf{FirstConf}(x,y)\wedge\textsf{FirstCell}(y,z)\wedge\textsf{Head}(z,p)& \text{where $p\in\{l,r\}$}\\[0.5ex] \textsf{FirstConf}(x,y)\wedge\textsf{ConfCell}(y,z)\wedge\textsf{Symbol}(z,c_\sigma) & \text{where $\sigma\neq \square$}\\[1ex] \multicolumn{2}{l}{\textbf{(3)~~Valid, uniquely defined transitions:}}\\[0.5ex] \textsf{State}_q(y)\wedge \textsf{Head}(z,h)\wedge\textsf{ConfCell}(y,z)\wedge\textsf{Symbol}(z,c_\sigma)\wedge \textsf{NextConf}_\delta(y,y')\wedge{} &\text{where $\delta=\tuple{q_1,\sigma_1,q_2,\sigma_2,d}$}\\ \textsf{State}_{q'}(y')\wedge\textsf{ConfCell}(y',z')\wedge\textsf{SameCell}(z',z)\wedge\textsf{Symbol}(z',c_{\sigma'}) &\text{with $q_1\neq q$ or $\sigma_1\neq\sigma$ or $q_2\neq q'$ or $\sigma_2\neq\sigma'$}\\[1ex] \multicolumn{2}{l}{\textbf{(4)~~Unique end state:}}\\[0.5ex] \textsf{LastConf}(y)\wedge\textsf{State}_q(y) & \text{where $q\neq q_e$}\\[1ex] \multicolumn{2}{l}{\textbf{(5)~~Memory:}}\\[0.5ex] \textsf{ConfCell}(y_1,x_1)\wedge\textsf{Head}(x_1,r)\wedge\textsf{Symbol}(x_1,c_\sigma)\wedge\textsf{NextConf}_\delta(y_1,y_2)\wedge{} &\\ \textsf{ConfCell}(y_2,x_2)\wedge\textsf{SameCell}(x_1,x_2)\wedge\textsf{Symbol}(x_2,c_{\sigma'}) &\text{where $\sigma\neq\sigma'$}\\[0.5ex] \textsf{ConfCell}(y_1,x_1)\wedge\textsf{Head}(x_1,l)\wedge\textsf{Symbol}(x_1,c_\sigma)\wedge\textsf{NextConf}_\delta(y_1,y_2)\wedge{} &\\ \textsf{ConfCell}(y_2,x_2)\wedge\textsf{SameCell}(x_1,x_2)\wedge\textsf{Symbol}(x_2,c_{\sigma'}) 
&\text{where $\sigma\neq\sigma'$}\\[1ex] \multicolumn{2}{l}{\textbf{(6)~~Head movement:}}\\[0.5ex] \textsf{ConfCell}(y_1,x_1)\wedge\textsf{Head}(x_1,h)\wedge\textsf{NextConf}_\delta(y_1,y_2)\wedge{} &\text{where $\delta=\tuple{q_1,\sigma_1,q_2,\sigma_2,\text{right}}$}\\ \textsf{ConfCell}(y_2,x_2)\wedge\textsf{SameCell}(x_1,x_2)\wedge\textsf{NextCell}(x_2,x_2')\wedge\textsf{Head}(x_2',p) &\text{and $p\in\{r,l\}$}\\[0.5ex] \textsf{ConfCell}(y_1,x_1)\wedge\textsf{Head}(x_1,h)\wedge\textsf{NextConf}_\delta(y_1,y_2)\wedge{} &\text{where $\delta=\tuple{q_1,\sigma_1,q_2,\sigma_2,\text{right}}$}\\ \textsf{ConfCell}(y_2,x_2)\wedge\textsf{SameCell}(x_1,x_2)\wedge\textsf{LastCell}(x_2)\wedge\textsf{Head}(x_2,p) &\text{and $p\in\{r,l\}$}\\[0.5ex] \textsf{ConfCell}(y_1,x_1)\wedge\textsf{Head}(x_1,h)\wedge\textsf{NextConf}_\delta(y_1,y_2)\wedge{} &\text{where $\delta=\tuple{q_1,\sigma_1,q_2,\sigma_2,\text{left}}$}\\ \textsf{ConfCell}(y_2,x_2)\wedge\textsf{SameCell}(x_1,x_2)\wedge\textsf{NextCell}(x_2',x_2)\wedge\textsf{Head}(x_2',p) &\text{and $p\in\{r,l\}$}\\[0.5ex] \textsf{ConfCell}(y_1,x_1)\wedge\textsf{Head}(x_1,h)\wedge\textsf{NextConf}_\delta(y_1,y_2)\wedge{} &\text{where $\delta=\tuple{q_1,\sigma_1,q_2,\sigma_2,\text{left}}$}\\ \textsf{ConfCell}(y_2,x_2)\wedge\textsf{SameCell}(x_1,x_2)\wedge\textsf{FirstCell}(z,x_2)\wedge\textsf{Head}(x_2,p) &\text{and $p\in\{r,l\}$}\\[0.5ex] \end{array} \] \caption{Queries to construct a containment encoding as in Lemma~\ref{lemma_atmquasienctoenc}\label{fig_atmquasienctoenc}} \end{figure*} \begin{replemma}{lemma_atmquasienctoenc} Consider an ATM $\mathcal{M}$, and queries as in Definition~\ref{def_atmencode}, including $\textsf{SameCell}[x,y]$, that are \kmq{k} queries for some $k\geq 0$. There is a \kmq{k} query $P[x]$, polynomial in the size of $\mathcal{M}$ and the given queries, such that the following hold. 
\begin{itemize} \item For every accepting run of $\mathcal{M}$ in space $s$, there is some database instance $\Inter$ with some element $c$ that encodes the run, such that $c\notin P^\Inter$. \item If an element $c$ of $\Inter$ encodes a tree of quasi-configurations of $\mathcal{M}$ in space $s$, and if $c\notin P^\Inter$, then $c$ encodes an accepting run of $\mathcal{M}$ in space $s$. \end{itemize} Moreover, if all input queries are in \klinmq{k}, then so is $P$. \end{replemma} \begin{proof} We construct $P$ from all (polynomially many) positive queries obtained by instantiating the query patterns in Figure~\ref{fig_atmquasienctoenc}. Since $P$ needs to be a unary query with variable $x$, we extend every positive query that does not contain $x$ with the atom $\textsf{FirstConf}[x,x']$ (omitted for space reasons in Figure~\ref{fig_atmquasienctoenc}). By Proposition~\ref{prop_posfcqs} we can express the disjunctions of all the positive queries in Figure~\ref{fig_atmquasienctoenc} as a \klinmq{k} $P[x]$ of polynomial size (for $k=0$ it is a \qlang{UCQ}\xspace). If an element $c$ in a database instance $\Inter$ encodes an accepting run of $\mathcal{M}$ in space $s$, and $\Inter$ contains no other structures, then none of the queries in Figure~\ref{fig_atmquasienctoenc} matches. Hence $c\notin P^\Inter$. Conversely, assume that $c$ encodes a tree of $\mathcal{M}$ quasi-configurations in space $s$ and $c\notin P^\Inter$. If none of the queries in Figure~\ref{fig_atmquasienctoenc} (1) match, the head positions of every configuration must form a sequence $l,\ldots,l,h,r,\ldots,r$; hence all quasi-configurations are actually configurations. Queries (2)--(4) ensure that the first and last configuration are in the start and end state, respectively, and that each transition is matched by suitable state and tape modifications. Queries (5) ensure that tape cells that are not at the head of the TM are not modified between configurations. 
Queries (6) ensure that the movement of the head is consistent with the transitions, and especially does not leave the prescribed space. Note that the queries allow transitions that try to move the head beyond the tape and require that the head stays in its current position in this case. This allows the ATM to recognize the end of the tape, which is important for the Turing machines that we consider below. With all these restrictions observed, $c$ must encode a run of $\mathcal{M}$ in space $s$. \end{proof} \section*{Proofs for Section~\ref{sec_mqcontainmentlow}} \begin{replemma}{lemma_mducqexpatm} For any ATM $\mathcal{M}$, there is an \mdatalogconst query $P_1[x]$, a \linmqlang\xspace $P_2[x]$, queries as in Definition~\ref{def_atmencode} that are \linmqlang\xspace{}s, and a same-cell query that is a \qlang{UCQ}\xspace, such that $P_1[x]$ and $P_2[x]$ containment-encode accepting runs of $\mathcal{M}$ in exponential space. \end{replemma} \begin{proof} Let $\mathcal{M}=\tuple{Q,\Sigma,\Delta,q_s,q_e}$ with $Q$ partitioned into existential states $Q_\exists$ and universal states $Q_\forall$. In order to use Lemma~\ref{lemma_atmquasienctoenc}, we first construct queries $P'_1$ and $P'_2$ that containment-encode quasi-configuration trees of $\mathcal{M}$ in space $2^\ell$ for some $\ell$ that is linear in the size of the queries (w.r.t.\ suitable queries as in Definition~\ref{def_atmencode}). Our signature contains the binary predicates (distinguished from the queries of Definition~\ref{def_atmencode} by using lowercase letters) $\textsf{firstConf}$, $\textsf{nextConf}_\delta$ for all $\delta\in\Delta$, $\textsf{firstCell}$, $\textsf{nextCell}$, $\textsf{bit}_i$ for all $i\in\{1,\ldots,\ell\}$, $\textsf{symbol}$, $\textsf{head}$, as well as the unary predicates $\textsf{lastConf}$, and $\textsf{state}_q$ for all $q\in Q$. 
We define $P'_1$ to be the following \mdatalogconst query that has the goal predicate $\mathtt{U}_{\textit{goal}}$ and uses two further constants $0$ and $1$: \noindent{\small% \begin{align*} \textsf{firstConf}(x,y)\wedge\mathtt{U}_{\textit{conf}}(y) &\to\mathtt{U}_{\textit{goal}}(x)\\ \textsf{state}_q(x)\wedge\textsf{firstCell}(x,y)\wedge\mathtt{U}_{\textit{bit}_1}(y) &\to\mathtt{U}_{\textit{conf}}(x) & \text{for $q\in Q$}\displaybreak[0]\\ \textsf{bit}_{i-1}(x,0)\wedge\mathtt{U}_{\textit{bit}_i}(x) &\to\mathtt{U}_{\textit{bit}_{i-1}}(x) & \text{for $i\in\{2,\ldots,\ell\}$}\displaybreak[0]\\ \textsf{bit}_{i-1}(x,1)\wedge\mathtt{U}_{\textit{bit}_i}(x) &\to\mathtt{U}_{\textit{bit}_{i-1}}(x) & \text{for $i\in\{2,\ldots,\ell\}$}\displaybreak[0]\\ \textsf{symbol}(x,c_\sigma)\wedge\mathtt{U}_{\textit{symbol}}(x) &\to\mathtt{U}_{\textit{bit}_\ell}(x) & \text{for $\sigma\in\Sigma$}\displaybreak[0]\\ \textsf{head}(x,h)\wedge\mathtt{U}_{\textit{head}}(x) &\to\mathtt{U}_{\textit{symbol}}(x) &\displaybreak[0]\\ \textsf{head}(x,l)\wedge\mathtt{U}_{\textit{head}}(x) &\to\mathtt{U}_{\textit{symbol}}(x) &\displaybreak[0]\\ \textsf{head}(x,r)\wedge\mathtt{U}_{\textit{head}}(x) &\to\mathtt{U}_{\textit{symbol}}(x) & \displaybreak[0]\\ \textsf{nextCell}(x,y)\wedge\mathtt{U}_{\textit{bit}_1}(y) &\to\mathtt{U}_{\textit{head}}(x) & \displaybreak[0]\\ \textsf{nextConf}_\delta(x,y)\wedge\mathtt{U}_{\textit{conf}}(y) &\to\mathtt{U}_{\textit{head}}(x) & \text{for $\delta=\tuple{q,\sigma,q',\sigma',d}$}\\[-0.7ex] && \text{with $q\in Q_\exists$}\displaybreak[0]\\ \textsf{nextConf}_{\delta_1}(x,y_1)\wedge\mathtt{U}_{\textit{conf}}(y_1) \wedge{}& &\text{for $\delta_1=\tuple{q,\sigma,q',\sigma',d}$, }\\[-0.7ex] \textsf{nextConf}_{\delta_2}(x,y_2)\wedge\mathtt{U}_{\textit{conf}}(y_2) &\to\mathtt{U}_{\textit{head}}(x) & \text{$q\in Q_\forall$, and $\delta_1\neq\delta_2$} \displaybreak[0]\\ \textsf{lastConf}(x) &\to\mathtt{U}_{\textit{head}}(x) & \displaybreak[0]\\ \end{align*}% }% $P'_1$ encodes 
structures that resemble configuration trees, but with each configuration ``tape'' consisting of an arbitrary sequence of ``cells'' of the form $\textsf{bit}_1(x,v_1),\ldots,\textsf{bit}_\ell(x,v_\ell),\textsf{symbol}(x,c_\sigma),\textsf{head}(x,p)$, where each $v_i$ is either $0$ or $1$. The values for the bit sequence encode a binary number of length $\ell$. We provide a query $P'_2$ which ensures that each sequence of cells encodes an ascending sequence of binary numbers from $00\ldots0$ to $11\ldots1$. More precisely, $P'_2$ checks if there are any consecutive cells that violate this rule, i.e., the structures matched by $P'_1$ but not by $P'_2$ are those where each configuration contains $2^\ell$ cells. The following query checks whether bit $i$ is the rightmost bit containing a $0$ and bit $i$ in the successor cell also contains a $0$, which is a situation that must not occur if the bit sequences encode a binary counter: \noindent{\small% \begin{align*} \textsf{bit}_i(y,0)\wedge\textsf{bit}_{i+1}(y,1)\wedge\ldots\wedge\textsf{bit}_\ell(y,1)\wedge{} \textsf{nextCell}(y,z)\wedge\textsf{bit}_i(z,0) \end{align*}% }% In a similar way, we can ensure that every bit to the right of the rightmost $0$ is changed to $0$, every bit that is left of a $0$ remains unchanged, the first number is $0\ldots0$, and the last number is $1\ldots1$. The query $P'_2$ is the union of all of these (polynomially many) conditions, each with a new atom $\textsf{firstConf}(x,y)$ added and all variables other than $x$ existentially quantified; this ensures that we obtain a unary query that matches the same elements as $P'_1$ if it matches at all. We claim that the elements matching $P'_1$ but not $P'_2$ encode quasi-configuration trees of $\mathcal{M}$ in space $2^\ell$. Indeed, it is easy to specify the queries required by Definition~\ref{def_atmencode}. 
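The counter discipline enforced by $P'_2$ can be mirrored procedurally: if bit $i$ is the rightmost $0$ of a cell, then in the successor cell bit $i$ must flip to $1$, all bits to its right must reset to $0$, and all bits to its left must stay unchanged. The following Python sketch is illustrative only (it covers the consecutive-cell conditions, not the first- and last-number checks) and flags violating positions:

```python
def counter_violations(cells):
    """Return indices j of consecutive cells violating the binary-counter
    discipline: with i the rightmost 0 of cells[j], cells[j+1] must have
    a 1 at bit i, zeros right of i, and identical bits left of i. Bit
    vectors are lists with the most significant bit first (bit_1)."""
    bad = []
    for j in range(len(cells) - 1):
        cur, nxt = cells[j], cells[j + 1]
        try:
            i = max(k for k, b in enumerate(cur) if b == 0)  # rightmost 0
        except ValueError:
            bad.append(j)  # all-ones may only occur as the last cell
            continue
        ok = (nxt[i] == 1
              and all(b == 0 for b in nxt[i + 1:])
              and nxt[:i] == cur[:i])
        if not ok:
            bad.append(j)
    return bad

good = [[0, 0], [0, 1], [1, 0], [1, 1]]   # counts 0, 1, 2, 3
bad_ = [[0, 0], [0, 1], [1, 1], [1, 0]]   # skips 2, then decreases
```

Each violation corresponds to a match of one of the (polynomially many) conditions unioned into $P'_2$, which is why a counter of exponential length $2^\ell$ can be verified by queries of size polynomial in $\ell$.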
The most complicated query is $\textsf{ConfCell}[x,y]$, which can be defined by the following \linmqlang\xspace{}: \noindent{\small% \begin{align*} \textsf{state}_q(\lambda_1)\wedge\textsf{firstCell}(\lambda_1,y) &\to \mathtt{U}(y) & \text{for all $q\in Q$}\\ \mathtt{U}(y)\wedge\textsf{nextCell}(y,z) &\to \mathtt{U}(z)\\ \mathtt{U}(\lambda_2) &\to\mathsf{hit} \end{align*}% }% \noindent The remaining queries are now easy to specify, where we use $\textsf{ConfCell}[x,y]$, knowing that a conjunctive query over \linmqlang\xspace{}s can be transformed into a single \linmqlang\xspace{} using Proposition~\ref{prop_posfcqs}: \noindent{\small% \begin{align*} \textsf{FirstConf}[x,y] \coloneqq{}& \textsf{firstConf}(x,y)\\ \textsf{NextConf}_\delta[x,y] \coloneqq{}& \exists z.\textsf{ConfCell}(x,z)\wedge\textsf{nextConf}_{\delta}(z,y)\\ \textsf{LastConf}[x] \coloneqq{}& \exists z.\textsf{ConfCell}(x,z)\wedge\textsf{lastConf}(z)\displaybreak[0]\\ \textsf{State}_q[x] \coloneqq{}& \textsf{state}_q(x)\displaybreak[0]\\ \textsf{Head}[x,y] \coloneqq{}& \textsf{head}(x,y) \displaybreak[0]\\ \textsf{FirstCell}[x,y] \coloneqq{}& \textsf{firstCell}(x,y)\displaybreak[0]\\ \textsf{NextCell}[x,y] \coloneqq{}& \textsf{nextCell}(x,y)\displaybreak[0]\\ \textsf{LastCell}[x] \coloneqq{}& \textsf{lastConf}(x)\vee \exists z.\textsf{nextConf}(x,z)\displaybreak[0]\\ \textsf{Symbol}[x,y] \coloneqq{}& \textsf{symbol}(x,y)\\ \textsf{SameCell}[x,y] \coloneqq{}& \exists v_1,\ldots,v_\ell.\textsf{bit}_1(x,v_1)\wedge\textsf{bit}_1(y,v_1)\wedge{}\\ &\ldots\wedge\textsf{bit}_\ell(x,v_\ell)\wedge\textsf{bit}_\ell(y,v_\ell) \end{align*}% }% Using these queries, we can construct a \linmqlang\xspace $P$ as in Lemma~\ref{lemma_atmquasienctoenc} such that $P_1=P'_1$ and $P_2=P'_2 \vee P$ containment-encode accepting runs of $\mathcal{M}$. 
\end{proof} \begin{replemma}{lemma_atmencodenesting} Assume that there is some space bound $s$ such that, for every DTM $\mathcal{M}$, there is a \mdatalogconst query $P_1[x]$ and an \kmq{k+1} query $P_2[x]$ with $k\geq 0$, such that $P_1[x]$ and $P_2[x]$ containment-encode accepting runs of $\mathcal{M}$ in $s$, where the queries required by Definition~\ref{def_atmencode} are \kmq{k+1} queries. Moreover, assume that there is a suitable same-cell query that is in \kmq{k}. Then, for every ATM $\mathcal{M}'$, there is a \mdatalogconst query $P_1'[x]$, an \kmq{k+1} $P_2'[x]$, and \kmq{k+1} queries as in Definition~\ref{def_atmencode}, such that $P_1'[x]$ and $P_2'[x]$ containment-encode an accepting run of $\mathcal{M}'$ in space $s'\geq 2^s$. Moreover, the size of the queries for this encoding is polynomial in the size of the queries for the original encoding. \end{replemma} \begin{proof} There is a TM $\mathcal{M}=\tuple{Q,\Sigma,\Delta,q_s,q_e}$ that counts from $0$ to $2^s$ in binary (using space $s$) and then halts. $\mathcal{M}$ can be small (constant size) since our formalization of (A)TMs allows the TMs to recognize the last tape position to ensure that the maximal available space is used. The computation will necessarily take $s'>2^s$ steps to complete since multiple steps are needed to increment the counter by $1$. Let $P_1[x]$ and $P_2[x]$ be queries that containment-encode accepting runs of $\mathcal{M}$ in $s$, and let $\textsf{ConfCell}$, $\textsf{SameCell}$, etc.\ denote the respective \klinmq{k} as in Definition~\ref{def_atmencode}. Let $\mathcal{M}'=\tuple{Q',\Sigma',\Delta',q_s',q_e'}$ be an arbitrary ATM. We use the signature of $P_1$, extended by additional binary predicates $\textsf{firstConf}'$, $\textsf{nextConf}'_\delta$ for all $\delta\in\Delta'$, $\textsf{symbol}'$, $\textsf{head}'$, as well as unary predicates $\textsf{lastConf}'$, and $\textsf{state}'_q$ for all $q\in Q'$. All of these are assumed to be distinct from predicates in $P_1$. 
Let $\mathtt{U}_\text{goal}$ be the goal predicate of $P_1$, and let $\mathtt{U}_\text{tape}$ be a new unary IDB predicate. We construct the program $\bar{P}_1$ from $P_1$ as follows. For every rule of $P_1$ that does not contain an IDB atom in its body we add the atom $\mathtt{U}_\text{tape}(x)$ to the body, where $x$ is any variable that occurs in the rule. Intuitively speaking, the IDBs $\mathtt{U}_\text{tape}$ and $\mathtt{U}_\text{goal}$ mark the start and end of tapes of $\mathcal{M}'$, which are represented by runs of $\mathcal{M}$. Moreover, we modify $\bar{P}_1$ to ``inject'' additional state and head information for $\mathcal{M}'$ into configurations of $\mathcal{M}$, i.e., we extend $P_1$ to ensure that every element $e$ with $\textsf{state}_q(e)$ also occurs in some $\textsf{symbol}'(e,c'_{\sigma'})$ and in some relation $\textsf{head}'(e,p)$. This can always be achieved by adding a linear number of IDB predicates and rules. Now $P'_1$ is defined to be a \mdatalogconst query with goal predicate $\mathtt{U}'_\text{goal}$ (assumed, like all IDB predicates of form $\mathtt{U}'$ below, to be distinct from any IDB predicate in $\bar{P}_1$), which is obtained as the union of $\bar{P}_1$ with the following rules: \noindent{\small% \begin{align*} \textsf{firstConf}'(x,y)\wedge\mathtt{U}'_{\textit{conf}}(y) &\to\mathtt{U}'_{\textit{goal}}(x)\\ \textsf{state}'_q(x)\wedge\mathtt{U}_{\textit{goal}}(x) &\to\mathtt{U}'_{\textit{conf}}(x) & \text{for $q\in Q'$}\displaybreak[0]\\ \textsf{nextCell}'(x,y)\wedge\mathtt{U}_{\textit{goal}}(y) &\to\mathtt{U}_{\textit{tape}}(x) & \displaybreak[0]\\ \textsf{nextConf}'_\delta(x,y)\wedge\mathtt{U}'_{\textit{conf}}(y) &\to\mathtt{U}_{\textit{tape}}(x) & \text{for $\delta=\tuple{q,\sigma,q',\sigma',d}$}\\[-0.7ex] && \text{with $q\in Q_\exists$}\displaybreak[0]\\ \textsf{nextConf}'_{\delta_1}(x,y_1)\wedge\mathtt{U}'_{\textit{conf}}(y_1) \wedge{} & &\text{for $\delta_1=\tuple{q,\sigma,q',\sigma',d}$, }\\[-0.7ex] 
\textsf{nextConf}'_{\delta_2}(x,y_2)\wedge\mathtt{U}'_{\textit{conf}}(y_2) &\to\mathtt{U}_{\textit{tape}}(x) & \text{$q\in Q_\forall$, and $\delta_1\neq\delta_2$} \displaybreak[0]\\ \textsf{lastConf}'(x) &\to\mathtt{U}_{\textit{tape}}(x) & \displaybreak[0]\\ \end{align*}% }% \noindent $P'_1$ encodes trees of trees of $\mathcal{M}$ quasi-configurations in space $s$. The structures matched by $P'_1$ but not by $P_2$ encode trees of accepting runs of $\mathcal{M}$ in space $s$ (note that these runs are linear, since $\mathcal{M}$ is not alternating). Every such run consists of the same number $s'\geq 2^s$ of configurations; these configurations represent the tape cells of our encoding of $\mathcal{M}'$ sequences. This encoding is formalized by queries as follows. The queries $\textsf{FirstConf}'[x,y]$, $\textsf{State}'_q[x]$, $\textsf{Head}'[x,y]$, and $\textsf{Symbol}'[x,y]$ are directly expressed by singleton \qlang{CQ}\xspace{}s that use the eponymous predicates $\textsf{firstConf}'(x,y)$, etc. To access cells of $\mathcal{M}'$, we can use the analogous queries to access configurations of $\mathcal{M}$: $\textsf{FirstCell}'[x,y]=\textsf{FirstConf}(x,y)$, $\textsf{NextCell}'[x,y]=\textsf{NextConf}(x,y)$, and $\textsf{LastCell}'[x]=\textsf{LastConf}(x)$. The remaining queries can be expressed as \linmqlang\xspace queries. To present these queries in a more readable way, we specify them in regular expression syntax rather than giving many rules for each. It is clear that regular expressions over unary and binary predicates can be expressed in \linmqlang\xspace (it was already shown that \mqlang\xspace{}s can express regular path queries, which is closely related \cite{RK13:flagcheck}). 
We use the abbreviation $\textsf{P1SYMBOL}$ for the regular expression that is a disjunction of all predicate symbols that occur in $P_1$ (this allows us to skip over any structures generated by $P_1$; with the specific forms of $P_1$ that can occur in our proofs, one could make this more specific to use only certain binary predicates, but our formulation does not depend on internals of $P_1$). Moreover, let $\textsf{STATE}$ be the disjunction of all atoms $\textsf{state}'_q(x)$, and let $\textsf{HEAD}$ be the query $\exists y.\textsf{head}'(x,y)$ (both are unary). \noindent{\small% \begin{align*} \textsf{NextConf}'_\delta[x,y] \coloneqq{}& \textsf{STATE}~\textsf{P1SYMBOL}^\ast~\textsf{nextConf}'_{\delta}\\ \textsf{LastConf}'[x] \coloneqq{}& \textsf{STATE}~\textsf{P1SYMBOL}^\ast~\textsf{lastConf}'\\ \textsf{ConfCell}'[x,y] \coloneqq{}& \textsf{STATE}~\textsf{P1SYMBOL}^\ast~\textsf{HEAD} \end{align*}% }% The unary query $\textsf{LastConf}'[x]$ uses the variable at the beginning of the expression as its answer. It is easy to verify that the elements accepted by $P'_1$ but not by $P_2$ encode sequences of quasi-configurations of $\mathcal{M}'$ in space $s'$ with respect to these queries. To apply Lemma~\ref{lemma_atmquasienctoenc}, we need to specify an additional $\textsf{SameCell}'$ query for this encoding.
$\textsf{SameCell}'$ is expressed by an \kmq{k+1} query that cannot, in general, be expressed by a \kmq{k} query: \noindent{\small% \begin{align*} \textsf{FirstCell}(\lambda_1,x) &\to\mathtt{U}_1(x)\\ \mathtt{U}_1(x)\wedge\textsf{NextCell}(x,x') &\to\mathtt{U}_1(x')\\ \textsf{State}_q(\lambda_1)\wedge\textsf{FirstCell}(\lambda_1,x)\wedge\textsf{Symbol}(x,z)\wedge\textsf{Head}(x,v)\wedge{}\\[-0.7ex] \textsf{State}_q(\lambda_2)\wedge\textsf{FirstCell}(\lambda_2,y)\wedge\textsf{Symbol}(y,z)\wedge\textsf{Head}(y,v) &\to\mathtt{U}_2(y)\\[-0.7ex] & \phantom{{}\to{}}\text{for all $q\in Q$}\\ \mathtt{U}_1(x)\wedge\mathtt{U}_2(y)\wedge\textsf{SameCell}(x,y)\wedge{}\\[-0.7ex] \textsf{NextCell}(x,x')\wedge\textsf{Symbol}(x',z)\wedge\textsf{Head}(x',v)\wedge{}\\[-0.7ex] \textsf{NextCell}(y,y')\wedge\textsf{Symbol}(y',z)\wedge\textsf{Head}(y',v) &\to\mathtt{U}_2(y')\\ \mathtt{U}_2(y)\wedge \textsf{LastCell}(y)&\to\mathsf{hit} \end{align*}}% where $\textsf{FirstCell}$, $\textsf{Symbol}$, $\textsf{SameCell}$, and $\textsf{LastCell}$ are the queries for which $P_1$ and $P_2$ containment-encode runs of $\mathcal{M}$. Note that our constructions already ensure that the sequences of $\mathcal{M}$-cells compared by $\textsf{SameCell}'$ are of the same length. To complete the proof, we apply Lemma~\ref{lemma_atmquasienctoenc} to construct an \kmq{k+1} $\bar{P}_2$. The \kmq{k+1} $P_2'$ is obtained by expressing the disjunction of $P_2$ and $\bar{P}_2$ as an \kmq{k+1} using Proposition~\ref{prop_posfcqs}. Then $P_1'$ and $P_2'$ containment-encode accepting runs of $\mathcal{M}'$ in space $s'$. \end{proof} \begin{reptheorem}{theo_hardmdlmqk} \containmentHardnessStatement{\mdatalogconst}{\kmq{k}}{\kExpTime{$(k+2)$}} \end{reptheorem} \begin{proof} The claim is shown by induction on $k$. For the base case, we show that deciding containment of \mqlang\xspace queries is \complclass{3ExpTime}-hard.
By Lemma~\ref{lemma_mducqexpatm}, for any DTM $\mathcal{M}^0$, there is a \mdatalogconst query $P^0_1$, a \linmqlang\xspace $P^0_2$, \linmqlang\xspace{}s as in Definition~\ref{def_atmencode}, and a same-cell query that is a \qlang{UCQ}\xspace with respect to which $P^0_1$ and $P^0_2$ containment-encode accepting runs of $\mathcal{M}^0$ in exponential space $s$. By applying Lemma~\ref{lemma_atmencodenesting}, we obtain, for an arbitrary ATM $\mathcal{M}^1$, a \mdatalogconst query $P^1_1$, an \mqlang\xspace $P^1_2$, and \mqlang\xspace queries as in Definition~\ref{def_atmencode} (including a same-cell query), that containment-encode accepting runs of $\mathcal{M}^1$ in space $s'\geq 2^s$. The induction step for $k>1$ is immediate from Lemma~\ref{lemma_atmencodenesting}. \end{proof} \section{Lower Bounds}\label{sec:mainlowerbounds} \section{Simulating Alternating Turing Machines}\label{sec_atmencoding} To show the hardness of query containment problems, we generally provide direct encodings of Alternating Turing Machines (ATMs) with a fixed space bound \cite{ATM}. To simplify this encoding, we assume without loss of generality that every universal ATM configuration leads to exactly two successor configurations. The next definition formalizes ATM encodings. Rather than requiring concrete structures to encode ATMs, we abstract the encoding by means of queries that find suitable structures in a database instance; this allows us to apply the same definition for increasingly complex encodings. The following definition is illustrated in Figure~\ref{fig_atmencode}.
\begin{figure*} \graphicspath{{figures/}} \mbox{}\hfill\scalebox{0.85}{\input{figures/tm.pdf_tex}}\hfill\mbox{} \caption{Illustration of the ATM encoding of Definition~\ref{def_atmencode}: shaded configurations (top) are used within the configuration tree (bottom); $\textsf{ConfCell}$ queries are omitted for clarity\label{fig_atmencode}} \end{figure*} \begin{definition}\label{def_atmencode} Consider an ATM $\mathcal{M}=\tuple{Q,\Sigma,\Delta,q_s,q_e}$ and queries $\textsf{FirstConf}[x,y]$, $\textsf{NextConf}_\delta[x,y]$ for all $\delta\in\Delta$, $\textsf{LastConf}[x]$, $\textsf{State}_q[x]$ for all $q\in Q$, $\textsf{Head}[x,y]$, $\textsf{ConfCell}[x,y]$, $\textsf{FirstCell}[x,y]$, $\textsf{NextCell}[x,y]$, $\textsf{LastCell}[x]$, and $\textsf{Symbol}[x,y]$. To refer to tape symbols, we consider constants $c_\sigma$ for all $\sigma\in\Sigma$, and to refer to positions of the head, we use constants $h$ (here), $l$ (left), and $r$ (right). With respect to these queries, an element $c\in\domain{\Inter}$ in a database instance $\Inter$ \emph{encodes an $\mathcal{M}$ quasi-configuration of size $s$} if $\Inter$ contains a structure\medskip \noindent{\small% \[% \begin{array}{@{}l} \textsf{State}_q(c), \textsf{FirstCell}(c,d_1), \\\textsf{ConfCell}(c,d_1), \textsf{Symbol}(d_1,c_{\sigma_1}), \textsf{Head}(d_1,p_1), \textsf{NextCell}(d_1,d_2),\\ \textsf{ConfCell}(c,d_2), \textsf{Symbol}(d_2,c_{\sigma_2}), \textsf{Head}(d_2,p_2),\ldots, \textsf{NextCell}(d_{s-1},d_s),\\ \textsf{ConfCell}(c,d_s), \textsf{Symbol}(d_s,c_{\sigma_s}), \textsf{Head}(d_s,p_s),\textsf{LastCell}(d_s), \end{array} \]}% where $q\in Q$, $\sigma_i\in\Sigma$, and $p_i\in\{h,l,r\}$. We say that $c$ \emph{encodes an $\mathcal{M}$ configuration of size $s$} if, in addition, the sequence $(p_i)_{i=1}^s$ has the form $l,\ldots,l,h,r,\ldots,r$ with zero or more occurrences of $l$ and $r$, respectively.
An element $c$ in $\Inter$ \emph{encodes a (quasi-)configuration tree of $\mathcal{M}$ in space $s$} if \begin{itemize} \item $\Inter\models\textsf{FirstConf}(c,d_1)$ for some $d_1$, \item $d_1$ is the root of a tree with edges defined by $\textsf{NextConf}_\delta$, \item every node in this tree encodes an $\mathcal{M}$ \mbox{(quasi-)}\allowbreak{}configuration of size $s$, \item if there is a transition $\Inter\models\textsf{NextConf}_{\delta_1}(e,e_1)$, where $\delta_1=\tuple{q,\sigma,q',\sigma',d}$ and $q$ is a universal state, then there is also a transition $\Inter\models\textsf{NextConf}_{\delta_2}(e,e_2)$ with $\delta_1\neq\delta_2$, \item if $e$ is a leaf node, then $\Inter\models\textsf{LastConf}(e)$. \end{itemize} If the tree is an accepting run, then $c$ encodes an accepting run (of $\mathcal{M}$ in space $s$). A \emph{same-cell query} is a query $\textsf{SameCell}[x,y]$ such that, if $c_1,c_2\in\domain{\Inter}$ encode two quasi-configurations, and $d_1,d_2\in\domain{\Inter}$ represent the same tape cell in the encodings $c_1$ and $c_2$, respectively, then $\tuple{d_1,d_2}\in\textsf{SameCell}^\Inter$. Two queries $P_1[x]$ and $P_2[x]$ \emph{containment-encode} accepting runs of $\mathcal{M}$ in space $s$ if, for every database instance $\Inter$ and element $c\in P_1^\Inter\setminus P_2^\Inter$, $c$ encodes an accepting run of $\mathcal{M}$ in space $s$, and every accepting run of $\mathcal{M}$ in space $s$ is encoded by some $c\in P_1^\Inter\setminus P_2^\Inter$ for some $\Inter$. \end{definition} Note that elements $c$ may encode more than one configuration (or configuration tree). This is not a problem in our arguments. The conditions that ensure that a quasi-configuration tree is an accepting run can be expressed by a query, based on the queries given in Definition~\ref{def_atmencode}. More specifically, one can construct a query that accepts all elements that encode a quasi-configuration tree that is \emph{not} a run.
Together with a query that accepts only encodings of quasi-configuration trees, this allows us to containment-encode accepting runs of an ATM. Only linear queries, possibly nested, will be needed to perform the required checks, even in the case of ATMs. To simplify the statements, we use \klinmq{0} as a synonym for \qlang{UCQ}\xspace. \begin{lemma}\label{lemma_atmquasienctoenc} Consider an ATM $\mathcal{M}$, and queries as in Definition~\ref{def_atmencode}, including $\textsf{SameCell}[x,y]$, that are \kmq{k} queries for some $k\geq 0$. There is a \kmq{k} query $P[x]$, polynomial in the size of $\mathcal{M}$ and the given queries, such that the following hold. \begin{itemize} \item For every accepting run of $\mathcal{M}$ in space $s$, there is some database instance $\Inter$ with some element $c$ that encodes the run, such that $c\notin P^\Inter$. \item If an element $c$ of $\Inter$ encodes a tree of quasi-configurations of $\mathcal{M}$ in space $s$, and if $c\notin P^\Inter$, then $c$ encodes an accepting run of $\mathcal{M}$ in space $s$. \end{itemize} Moreover, if all input queries are in \klinmq{k}, then so is $P$. \end{lemma} The previous result allows us to focus on the encoding of quasi-configuration trees and the definition of queries as required in Definition~\ref{def_atmencode}. Indeed, the main challenge below will be to enforce a sufficiently large tape for which we can still find a correct same-cell query. \section{Hardness of Monadic Query Containment}\label{sec_mqcontainmentlow} We can now prove our first major hardness result: \begin{theorem}\label{theo_hardmdlmqk} \containmentHardnessStatement{\mdatalogconst}{\kmq{k}}{\kExpTime{$(k+2)$}} \end{theorem} Note that the statement includes the \complclass{3ExpTime}-hardness for containment of \mqlang\xspace{}s as a special case. To prove this result, we first construct an \complclass{ExpSpace} ATM that we then use to construct tapes of double exponential size.
\begin{lemma}\label{lemma_mducqexpatm} For any ATM $\mathcal{M}$, there is an \mdatalogconst query $P_1[x]$, a \linmqlang\xspace $P_2[x]$, queries as in Definition~\ref{def_atmencode} that are \linmqlang\xspace{}s, and a same-cell query that is a \qlang{UCQ}\xspace, such that $P_1[x]$ and $P_2[x]$ containment-encode accepting runs of $\mathcal{M}$ in exponential space. \end{lemma} Figure~\ref{fig_mducqexpatm} illustrates the encoding that we use to prove Lemma~\ref{lemma_mducqexpatm}. While it resembles the structure of Figure~\ref{fig_atmencode}, the labels are now EDB predicates rather than (abstract) queries. The encoding of tapes attaches to each cell an $\ell$-bit address (where bits are represented by constants $0$ and $1$). We can use these bits to count from $0$ to $2^\ell-1$ to construct tapes of length $2^\ell$. The query on the left-hand side can only enforce that there are cells with bit addresses, not that they actually count; even the exact length of the tape is unspecified. The query on the right-hand side of the containment then checks that consecutive cells (in all tapes that occur in the configuration tree) represent successor addresses, and that the first and last address is as expected. Another difference from Figure~\ref{fig_atmencode} is that we now treat configurations as linear structures, with a beginning and an end. In our representation of the configuration tree, the next configuration therefore connects to the last cell of the previous configuration's tape, rather than its start. We do this to ensure that the encoding works well even when restricting to linear queries. Indeed, the only non-linear rules in $P_1$ are used to enforce multiple successor configurations for universal states of an ATM. For normal TMs, even $P_1$ is in \linmdatalogconst{}.
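To illustrate the division of labour between the two sides of the containment, the following Python sketch (purely illustrative; the tuple-of-bits representation of addresses is ours, not part of the construction) implements the check that the right-hand-side query performs: it searches for a violation of the counting discipline among the bit addresses of consecutive cells.

```python
def successor_violation(tape, ell):
    """Return True iff the cell chain violates the counting discipline
    that the right-hand-side query detects: the first address must be
    0, the last must be 2^ell - 1, and every address must be the
    binary successor of the previous one.  `tape` is a list of
    addresses, each an ell-tuple of bits (most significant first).
    """
    def value(bits):
        # interpret a bit tuple as a number
        return int("".join(map(str, bits)), 2)

    if tape[0] != (0,) * ell:        # wrong first address
        return True
    if tape[-1] != (1,) * ell:       # wrong last address
        return True
    for prev, nxt in zip(tape, tape[1:]):
        if value(nxt) != value(prev) + 1:
            return True              # consecutive cells do not count
    return False

good = [(0, 0), (0, 1), (1, 0), (1, 1)]
bad = [(0, 0), (1, 0), (1, 1)]       # skips address 01
print(successor_violation(good, 2), successor_violation(bad, 2))
# prints: False True
```

A structure passes the containment check only if no such violation exists, which forces the tape to have exactly $2^\ell$ cells even though the left-hand-side query never counts.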
The rules of $P_1$ are as follows:\smallskip \noindent{\small% \begin{align*} \textsf{firstConf}(x,y)\wedge\mathtt{U}_{\textit{conf}}(y) &\to\mathtt{U}_{\textit{goal}}(x)\\ \textsf{state}_q(x)\wedge\textsf{firstCell}(x,y)\wedge\mathtt{U}_{\textit{bit}_1}(y) &\to\mathtt{U}_{\textit{conf}}(x) & \text{for $q\in Q$}\displaybreak[0]\\ \textsf{bit}_{i-1}(x,0)\wedge\mathtt{U}_{\textit{bit}_i}(x) &\to\mathtt{U}_{\textit{bit}_{i-1}}(x) & \text{for $i\in\{2,\ldots,\ell\}$}\displaybreak[0]\\ \textsf{bit}_{i-1}(x,1)\wedge\mathtt{U}_{\textit{bit}_i}(x) &\to\mathtt{U}_{\textit{bit}_{i-1}}(x) & \text{for $i\in\{2,\ldots,\ell\}$}\displaybreak[0]\\ \textsf{symbol}(x,c_\sigma)\wedge\mathtt{U}_{\textit{symbol}}(x) &\to\mathtt{U}_{\textit{bit}_\ell}(x) & \text{for $\sigma\in\Sigma$}\displaybreak[0]\\ \textsf{head}(x,p)\wedge\mathtt{U}_{\textit{head}}(x) &\to\mathtt{U}_{\textit{symbol}}(x) &\text{for $p\in\{h,r,l\}$}\displaybreak[0]\\ \textsf{nextCell}(x,y)\wedge\mathtt{U}_{\textit{bit}_1}(y) &\to\mathtt{U}_{\textit{head}}(x) & \displaybreak[0]\\ \textsf{nextConf}_\delta(x,y)\wedge\mathtt{U}_{\textit{conf}}(y) &\to\mathtt{U}_{\textit{head}}(x) & \text{for $\delta=\tuple{q,\sigma,q',\sigma',d}$}\\[-0.7ex] && \text{with $q\in Q_\exists$}\displaybreak[0]\\ \textsf{nextConf}_{\delta_1}(x,y_1)\wedge\mathtt{U}_{\textit{conf}}(y_1) \wedge{}& &\text{for $\delta_1=\tuple{q,\sigma,q',\sigma',d}$, }\\[-0.7ex] \textsf{nextConf}_{\delta_2}(x,y_2)\wedge\mathtt{U}_{\textit{conf}}(y_2) &\to\mathtt{U}_{\textit{head}}(x) & \text{$q\in Q_\forall$, and $\delta_1\neq\delta_2$} \displaybreak[0]\\ \textsf{lastConf}(x) &\to\mathtt{U}_{\textit{head}}(x) \end{align*}% }% Note that we do not enforce any structure to define the query $\textsf{ConfCell}$; this query is implemented by a \linmqlang\xspace{} that navigates over an arbitrary number of cells within one configuration. This is the main reason why we need \linmqlang\xspace{}s rather than \qlang{UCQ}\xspace{}s here. \begin{figure*}[t!]
\graphicspath{{figures/}} \mbox{}\hfill\scalebox{0.85}{\input{figures/bittm.pdf_tex}}\hfill\mbox{} \caption{Illustration of the ATM encoding of Lemma~\ref{lemma_mducqexpatm}: shaded configurations (top) are used within the configuration tree (bottom)\label{fig_mducqexpatm}} \end{figure*} We now use the exponential space ATM of Lemma~\ref{lemma_mducqexpatm} to encode the tape of a \kExpSpace{2} ATM. The following result shows that one can always obtain an exponentially larger tape by nesting linear queries on the right-hand side. \begin{lemma}\label{lemma_atmencodenesting} Assume that there is some space bound $s$ such that, for every DTM $\mathcal{M}$, there is a \mdatalogconst query $P_1[x]$ and an \kmq{k+1} query $P_2[x]$ with $k\geq 0$, such that $P_1[x]$ and $P_2[x]$ containment-encode accepting runs of $\mathcal{M}$ in space $s$, where the queries required by Definition~\ref{def_atmencode} are \kmq{k+1} queries. Moreover, assume that there is a suitable same-cell query that is in \kmq{k}. Then, for every ATM $\mathcal{M}'$, there is a \mdatalogconst query $P_1'[x]$, an \kmq{k+1} $P_2'[x]$, and \kmq{k+1} queries as in Definition~\ref{def_atmencode}, such that $P_1'[x]$ and $P_2'[x]$ containment-encode accepting runs of $\mathcal{M}'$ in space $s'\geq 2^s$. Moreover, the size of the queries for this encoding is polynomial in the size of the queries for the original encoding. \end{lemma} We show this result by using a deterministic space-$s$ Turing machine $\mathcal{M}$ to count from $0$ to $2^s$, which takes a fixed number $s'>2^s$ of steps. We then use the encodings of accepting runs of $\mathcal{M}$ as encodings for tapes of the ATM $\mathcal{M}'$, where every configuration of $\mathcal{M}$ becomes a cell of $\mathcal{M}'$. All tapes simulated in this way are of equal length $s'$.
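The counting argument can be made concrete with a small Python simulation. The machine model below is our own simplification (a head that sweeps left to propagate carries and returns right), not the formal construction; the point it illustrates is that a deterministic $s$-bit counter visits a fixed number of configurations $s'>2^s$, depending only on $s$.

```python
def counting_run_length(s):
    """Count the configurations of a simple deterministic machine that
    increments an s-bit tape from 0...0 to 1...1.  Since the machine
    is deterministic, the run length is a fixed number s' > 2^s,
    which is what lets runs of M serve as equal-length tapes of M'.
    """
    tape = [0] * s            # tape[s-1] is the least significant bit
    head = s - 1
    mode = "inc"              # "inc": carry sweep left, "ret": return right
    confs = 1                 # count the initial configuration
    while tape != [1] * s or mode != "inc":
        if mode == "inc":
            if tape[head] == 1:
                tape[head] = 0
                head -= 1     # carry continues to the left
            else:
                tape[head] = 1
                mode = "ret"
        else:                 # return to the least significant bit
            if head < s - 1:
                head += 1
            else:
                mode = "inc"
        confs += 1            # every step yields one new configuration
    return confs

print(counting_run_length(4), 2 ** 4)
```

For every $s$, the simulated run length exceeds $2^s$, and it is the same for every run of the machine, as the lemma requires.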
Some queries required by Definition~\ref{def_atmencode} are easy to obtain: for example, the new query $\textsf{NextCell}'[x,y]$ is the query $\textsf{NextConf}[x,y]$ of the encoding of $\mathcal{M}$. The most difficult to express is the new same-cell query, for which we use the following \kmq{k+1}: \noindent{\small% \begin{align*} \textsf{FirstCell}(\lambda_1,x) &\to\mathtt{U}_1(x)\\ \mathtt{U}_1(x)\wedge\textsf{NextCell}(x,x') &\to\mathtt{U}_1(x')\\ \textsf{State}_q(\lambda_1)\wedge\textsf{FirstCell}(\lambda_1,x)\wedge\textsf{Symbol}(x,z)\wedge\textsf{Head}(x,v)\wedge{}\\[-0.7ex] \textsf{State}_q(\lambda_2)\wedge\textsf{FirstCell}(\lambda_2,y)\wedge\textsf{Symbol}(y,z)\wedge\textsf{Head}(y,v) &\to\mathtt{U}_2(y)\\[-0.7ex] & \phantom{{}\to{}}\text{for all $q\in Q$}\\ \mathtt{U}_1(x)\wedge\mathtt{U}_2(y)\wedge\textsf{SameCell}(x,y)\wedge{}\\[-0.7ex] \textsf{NextCell}(x,x')\wedge\textsf{Symbol}(x',z)\wedge\textsf{Head}(x',v)\wedge{}\\[-0.7ex] \textsf{NextCell}(y,y')\wedge\textsf{Symbol}(y',z)\wedge\textsf{Head}(y',v) &\to\mathtt{U}_2(y')\\ \mathtt{U}_2(y)\wedge \textsf{LastCell}(y)&\to\mathsf{hit} \end{align*}}% where $\textsf{FirstCell}$, $\textsf{Symbol}$, $\textsf{SameCell}$, and $\textsf{LastCell}$ are the queries from the encoding of $\mathcal{M}$. The first two rules simply mark the tape starting at $\lambda_1$ with $\mathtt{U}_1$. The next two rules then compare the two (potentially very long) tapes from configurations of $\mathcal{M}$ to check if they contain exactly the same symbols at each position, and the last rule derives $\mathsf{hit}$ once the end of the tape is reached. Since the tapes are not connected in any known way, we have to be careful to ensure never to lose the connection to either of the tapes, to avoid comparing random cells from other parts of the database. Indeed, the last two rules do not mention $\lambda_1$ or $\lambda_2$ at all. We need two IDB predicates to achieve this, which carefully mark the two tapes cell by cell.
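The saturation behaviour of these rules can be pictured with the following Python sketch. It is our own abstraction, not the formal query: tapes are lists of symbol/head pairs, $\textsf{SameCell}$ is simply position equality, and the function mimics how $\mathtt{U}_2$ advances along both tapes only while their cells agree, so that $\mathsf{hit}$ is derived exactly when the two encodings describe the same content.

```python
def same_tape(conf1, conf2, state1, state2):
    """Saturation sketch of the SameCell' rules.  conf1/conf2 are the
    cell sequences (symbol, head) of two M-runs used as M'-tapes;
    SameCell is modeled as position equality.  Returns True iff the
    rule set would derive `hit`, i.e. the two encodings agree.
    """
    if state1 != state2:               # the State_q side conditions
        return False
    n = len(conf1)
    if n != len(conf2):                # SameCell never relates these
        return False
    if conf1[0] != conf2[0]:           # rule with the FirstCell premises
        return False
    u2 = 0                             # rightmost position carrying U_2
    while u2 + 1 < n and conf1[u2 + 1] == conf2[u2 + 1]:
        u2 += 1                        # rule with the SameCell premise
    return u2 == n - 1                 # LastCell rule derives `hit`

a = [("0", "l"), ("1", "h"), ("0", "r")]
b = [("0", "l"), ("1", "h"), ("0", "r")]
c = [("0", "l"), ("1", "h"), ("1", "r")]
print(same_tape(a, b, "q1", "q1"), same_tape(a, c, "q1", "q1"))
# prints: True False
```

The lockstep advance of `u2` corresponds to the two IDB predicates marking both tapes cell by cell without ever losing hold of either tape.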
Another important point is that the query $\textsf{SameCell}$ is used only once, in a single rule. Indeed, if we were using it twice, then the length of our queries would grow exponentially when applying the construction inductively. This is the reason why we encode symbols and head positions with constants, rather than using unary predicates like for states. In the latter case, we would need many rules, one for each predicate, as can be seen in the third rule above. One could try to avoid the use of constants by more complex encodings that encode information using paths of different lengths as done by Bj\"{o}rklund et al.~\cite{BMS08:xmlschemacont}. However, some additional device is needed to ensure that database instances are sufficiently closely connected in this case, which may again require constants, IDBs of higher arity, or a greater nesting level of \linmqlang\xspace queries to navigate larger distances. With the previous results, Theorem~\ref{theo_hardmdlmqk} can be proved by an easy induction: for the base case $k=1$ we apply Lemma~\ref{lemma_atmencodenesting} to the result of Lemma~\ref{lemma_mducqexpatm}; for the induction step we use Lemma~\ref{lemma_atmencodenesting} again.
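The bookkeeping behind this induction is elementary and can be checked with a few lines of Python. The size recurrence below is a toy model of our own (fixed per-level overhead, `reuse` copies of the previous same-cell query), not the formal size analysis; it shows both the tower of tape bounds and why a single occurrence of $\textsf{SameCell}$ keeps the queries polynomial.

```python
def tape_bound(n, k):
    """Space bound after k applications of the nesting lemma,
    starting from the exponential tape of the base encoding."""
    s = 2 ** n
    for _ in range(k):
        s = 2 ** s        # each nesting exponentiates the tape length
    return s

def query_size(k, base=10, reuse=1):
    """Toy size recurrence: each nesting level embeds the previous
    same-cell query `reuse` times plus fixed overhead.  With reuse=1
    the size grows linearly in k; with reuse=2 it grows exponentially,
    which is why SameCell must occur in exactly one rule."""
    size = base
    for _ in range(k):
        size = reuse * size + base
    return size

print(tape_bound(3, 2))   # 2^(2^(2^3)), a triple exponential
print(query_size(5), query_size(5, reuse=2))
```

The $k$-fold nesting thus yields a $(k+1)$-fold exponential tape with only polynomially large queries, matching the \kExpTime{$(k+2)$} bound of the theorem.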
\section*{Proofs for Section~\ref{sec_gqcontainmentup}} \begin{repproposition}{prop_ruleauto} There is an automaton $\aautomaton_{P,\arule}$ that accepts exactly the annotated matching trees for $\arule$ and $\aprogram$, and which is exponential in the size of $\arule$ and $\aprogram$. \end{repproposition} \begin{proof} We first construct an automaton $\aautomaton'_{P,\arule}$ that accepts matching trees where each node is additionally annotated by a partial mapping of the form $\mathsf{Var}(\arule)\to\mathcal{V}_{\aprogram}$ (called \emph{$\mathsf{Var}(\arule)$-label}), such that: every special variable $x\in\mathsf{Var}(\arule)$ occurs in at least one $\mathsf{Var}(\arule)$-label, and whenever a variable $x\in\mathsf{Var}(\arule)$ occurs in two such labels, it is mapped to the same variable by both and the two variable occurrences are connected. Note that this is essentially the same condition that we imposed for $\vec{\lambda}$-annotations. The intersection of tree automata can be computed in polynomial time. We can therefore construct automata to check part of the conditions for (annotated) matching trees to simplify the definitions. We first construct an automaton $\aautomaton_x$ for checking the condition on $\mathsf{Var}(\arule)$-labels for one variable $x\in\mathsf{Var}(\arule)$. We define $\aautomaton_x=\tuple{\Sigma,Q_x,Q^s_x,\delta_x,Q^e_x}$, where the alphabet $\Sigma$ consists of quadruples of proof-tree labels (from $\mathcal{R}_{\aprogram}$), $\vec{\lambda}$-labels, $p$-labels, and $\mathsf{Var}(\arule)$-labels. The state set $Q_x$ is $\{a,b,\textsf{accept}\}\cup\{q_v\mid v\in\mathcal{V}_{\aprogram}\}$, signifying that the current node is \emph{a}bove the first node annotated with a mapping for $x$, \emph{b}elow or \emph{b}eside any nodes that were annotated with a mapping for $x$, or at a node where $x$ is mapped to a variable $v$. The start-state set is $Q^s_x=\{a\}\cup\{q_v\mid v\in\mathcal{V}_{\aprogram}\}$; the end-state set is $Q^e_x=\{\textsf{accept}\}$.
Consider a rule $\arule'\in \mathcal{R}_{\aprogram}$ of the form $r_1(\vec{v_1})\wedge\ldots\wedge r_n(\vec{v_n})\wedge h_1(\vec{w_1})\wedge\ldots\wedge h_m(\vec{w_m}) \to h(\vec{v})$, where $r_i$ are EDB predicates and $h_{(i)}$ are IDB predicates. For the case that $m>0$, there is a transition $\tuple{q_1,\ldots,q_m}\in\delta_x(q, \tuple{\arule',\_,\_,\nu})$ exactly if the following conditions are satisfied: \begin{itemize} \item if $q=a$ and $\nu(x)$ is undefined, then $q_i=a$ for one $1\leq i\leq m$ and $q_j=b$ for all $1\leq j\leq m$ with $i\neq j$; \item if $q=q_v$ and $\nu(x)=v$, then $q_i=q_v$ for all $1\leq i\leq m$ such that $v$ occurs in $\vec{w_i}$ and $q_i=b$ for all other $i$; \item if $q=b$ and $\nu(x)$ is undefined, then $q_i=b$ for all $1\leq i\leq m$. \end{itemize} For the case $m=0$, there is a transition $\tuple{\textsf{accept}}\in\delta_x(q, \tuple{\arule',\_,\_,\nu})$ exactly if one of the following holds: \begin{itemize} \item $q=q_v$ and $\nu(x)=v$; \item $q=b$ and $\nu(x)$ is undefined. \end{itemize} It is easy to check that the automaton $\aautomaton_x$ satisfies the required condition. Now an automaton for checking the condition on $\mathsf{Var}(\arule)$-labels can be constructed as the intersection $\aautomaton'_{\mathsf{Var}(\arule)}=\bigcap_{x\in\mathsf{Var}(\arule)}\aautomaton_x$. The automaton $\aautomaton'_{\vec{\lambda}}$ for checking the condition on $\vec{\lambda}$-labels is constructed in a similar fashion. Likewise, an automaton $\aautomaton'_p$ for checking the condition on $p$-labels is easy to define. It remains to construct an automaton for checking the conditions (a)--(d) of Definition~\ref{defn_annotree}. To do this, we interpret the $\mathsf{Var}(\arule)$-labels and $\vec{\lambda}$-labels as partial specifications of the required mapping $\nu$. Condition (a) further requires that $\nu(\vec{x})=\vec{v}$, i.e., that the $\mathsf{Var}(\arule)$-label at the unique node annotated with $p(\vec{v})$ contains this mapping.
It is easy to verify this with an automaton $\aautomaton'_{(a)}$. Together, $\aautomaton'_{(a)}$, $\aautomaton'_{\vec{\lambda}}$, and $\aautomaton'_{\mathsf{Var}(\arule)}$ provide a consistent variable mapping that respects the $p$-label (a) and the connectedness of variable occurrences, i.e., (c) and (d). To check the remaining condition (b), we use an automaton $\aautomaton'_{(b)}$. The automaton for (b) will use auxiliary markers to record which atoms have been matched in the current node and how exactly this was done. We record such a match as a partial function from atoms $q(\vec{z})\in\varphi$ to instances $q(\vec{w})$ of such atoms using variables $\vec{w}\subseteq\mathcal{V}_{\aprogram}$. The set of all such partial functions is denoted $\mathsf{Match}_{\varphi,\aprogram}$. Note that this set is exponential (not double exponential). We now define $\aautomaton'_{(b)}=\tuple{\Sigma,Q,Q_s,\delta,Q_e}$ where $\Sigma$ is as for $\aautomaton_x$ above. The set of states $Q$ is $\{\textsf{accept}\}\cup (2^\varphi\times \mathsf{Match}_{\varphi,\aprogram})$, where elements from $2^\varphi$ encode the subset of $\varphi$ that should be witnessed at or below the current node, and the elements from $\mathsf{Match}_{\varphi,\aprogram}$ encode atoms that must be matched at the current node with their respective instantiations. The start-state set $Q_s$ is $\{\tuple{\varphi,\mu}\mid \mu\in\mathsf{Match}_{\varphi,\aprogram}\}$; the end-state set $Q_e$ is $\{\textsf{accept}\}$. The transition function $\delta$ is defined as follows. Consider a rule $\arule'\in \mathcal{R}_{\aprogram}$ of the form $r_1(\vec{v_1})\wedge\ldots\wedge r_n(\vec{v_n})\wedge h_1(\vec{w_1})\wedge\ldots\wedge h_m(\vec{w_m}) \to h(\vec{v})$, where $r_i$ are EDB predicates and $h_{(i)}$ are IDB predicates. 
For the case $m>0$, there is a transition $\tuple{\tuple{\beta_1,\mu_1},\ldots,\tuple{\beta_m,\mu_m}}\in\delta( \tuple{\beta,\mu}, \tuple{\arule',\nu_{\vec{\lambda}},\_,\nu_{\mathsf{Var}(\arule)}})$ exactly if the set $\beta\subseteq\varphi$ can be partitioned into sets $\beta',\beta_1,\ldots,\beta_m$ such that $(\nu_{\vec{\lambda}}\cup \nu_{\mathsf{Var}(\arule)})(\beta')=\mu(\beta')$ and $\mu(\beta')\subseteq\{r_1(\vec{v_1}),\ldots, r_n(\vec{v_n})\}$. The element $\mu_i$ of successor states can be chosen freely; the validity of the choice will be checked later. For the case $m=0$, there is a transition $\tuple{\textsf{accept}}\in\delta( \tuple{\beta,\mu}, \tuple{\arule',\nu_{\vec{\lambda}},\_,\nu_{\mathsf{Var}(\arule)}})$ exactly if $(\nu_{\vec{\lambda}}\cup \nu_{\mathsf{Var}(\arule)})(\beta)=\mu(\beta)$ and $\mu(\beta)\subseteq \{r_1(\vec{v_1}),\ldots, r_n(\vec{v_n})\}$. In fact, the information from $\mathsf{Match}_{\varphi,\aprogram}$ is not strictly necessary to define the transition, since the relevant elements $\mu$ are always determined by other choices in the transition. However, having this information explicit will be important in later proofs. The automaton $\aautomaton'_{P,\arule}$ is obtained as the intersection $\aautomaton'_{\mathsf{Var}(\arule)}\cap \aautomaton'_{\vec{\lambda}}\cap \aautomaton'_p\cap \aautomaton'_{(a)}\cap \aautomaton'_{(b)}$. It is easy to verify that it accepts exactly the $\mathsf{Var}(\arule)$-annotated matching trees. Note that $\aautomaton'_{P,\arule}$ is exponential in size, already due to the exponentially large alphabet $\Sigma$. Now the required automaton $\aautomaton_{P,\arule}$ is obtained by ``forgetting'' the $\mathsf{Var}(\arule)$-label in transitions of $\aautomaton'_{P,\arule}$.
This projection operation for tree automata is possible with a polynomial increase in size: every state of $\aautomaton_{P,\arule}$ is a pair of a state of $\aautomaton'_{P,\arule}$ and a $\mathsf{Var}(\arule)$-label; transitions of $\aautomaton_{P,\arule}$ are defined as for $\aautomaton'_{P,\arule}$, but keeping $\mathsf{Var}(\arule)$-label information in states and introducing transitions for all possible $\mathsf{Var}(\arule)$-labels in child nodes. \end{proof} \begin{repproposition}{prop_ruleautoplus} There is an alternating 2-way tree automaton $\aautomaton^+_{P,\arule,\vec{v}}$ that is polynomial in the size of $\aautomaton_{P,\arule}$ such that, whenever $\aautomaton_{P,\arule}$ accepts a matching tree $T$ that has the $p$-annotation $p(\vec{v})$ on node $e$, then $\aautomaton^+_{P,\arule,\vec{v}}$ has an accepting run that starts from the corresponding node $e'$ on the tree $T'$ that is obtained by removing the $p$-annotation from $T$. \end{repproposition} \begin{proof} Using alternating 2-way automata, we can traverse a tree starting from any node, visiting each node once. To control the direction of the traversal, we create multiple copies of each state $q$: states $q_{\mathsf{down}}$ are processed like normal states in $\aautomaton_{P,\arule}$, states $q_{\mathsf{up}}$ use an inverted transition of $\aautomaton_{P,\arule}$ to move up the tree into an auxiliary state $q_{\sigma,i,q'}$; these auxiliary states are used to check that the label of the upper node is actually $\sigma$, to remember the state $q'$ assigned to the child ($i$) that we came from, and to start new downwards processes for all other child nodes. To ensure that the constructed automaton $\aautomaton^+_{P,\arule,\vec{v}}$ simulates the behavior of $\aautomaton_{P,\arule}$ in case the annotation $p(\vec{v})$ is found, we eliminate all transitions that mention other $p$-annotations.
Moreover, we assume without loss of generality that the states of $\aautomaton_{P,\arule}$ that allow a transition mentioning $p(\vec{v})$ cannot be left through any other transition; this can always be ensured by duplicating states and using them exclusively for one kind of transition. Let $Q_p$ be the set of states of $\aautomaton_{P,\arule}$ that admit (only) transitions mentioning $p(\vec{v})$. Let $\aautomaton'_{P,\arule}=\tuple{\Sigma',Q,Q_s,\delta',Q_e}$ denote the automaton over the alphabet $\Sigma'$ of $\vec{\lambda}$-annotated proof trees (without $p$-annotations), with the same (start/end) states as $\aautomaton_{P,\arule}$, and where $\delta'$ is defined based on the transition function $\delta$ of $\aautomaton_{P,\arule}$ as follows: $\delta'(\tuple{\arule',M})$ is the union of all sets of the form $\delta(\tuple{\arule',M,p\text{-label}})$ where $p\text{-label}$ is either $p(\vec{v})$ or empty. By this construction, there is a correspondence between the accepting runs of $\aautomaton_{P,\arule}$ over trees where one node $e$ is annotated with $p(\vec{v})$ and accepting runs of $\aautomaton'_{P,\arule}$ (on trees without $p$-annotations) for which the node $e$ is visited in some state of $Q_p$. Let $s$ be the maximal out-degree of proof trees for $P$, i.e., the maximal number of IDB atoms in bodies of $P$. The state set $Q^+$ of $\aautomaton^+_{P,\arule,\vec{v}}$ is given by the disjoint union $\{q_{\mathsf{up}}\mid q\in Q\}\cup\{q_{\sigma,i,q'}\mid q,q'\in Q,\sigma\in\Sigma, 1\leq i\leq s\}\cup\{q_{\mathsf{down}}\mid q\in Q\}\cup\{\mathsf{start},\mathsf{accept}\}$. The start-state set is $Q^+_s=\{\mathsf{start}\}$ and the end-state set is $Q^+_e=\{\mathsf{accept}\}\cup\{q_{\mathsf{down}}\mid q\in Q_e\}$.
Transitions of $\aautomaton^+_{P,\arule,\vec{v}}$ are defined as follows: \begin{itemize} \item For all $\sigma\in\Sigma$, let $\delta^+(\mathsf{start},\sigma)$ be the disjunction of all formulae $\tuple{0,q_{\mathsf{up}}}\wedge\tuple{0,q_{\mathsf{down}}}$ where $q\in Q_p$. \item For states $q_{\mathsf{down}}$ and $\sigma\in\Sigma$, let $\delta^+(q_{\mathsf{down}},\sigma)$ be the disjunction of all formulae $\tuple{1,q^1_{\mathsf{down}}}\wedge\ldots\wedge\tuple{m,q^m_{\mathsf{down}}}$ for which $\aautomaton'_{P,\arule}$ has a transition $\tuple{q^1,\ldots,q^m}\in\delta'(q,\sigma)$. \item For states $q_{\mathsf{up}}$ and $\sigma\in\Sigma$, let $\delta^+(q_{\mathsf{up}},\sigma)$ be the disjunction of all formulae $\tuple{-1,q'_{\sigma',i,q}}$ for which $\aautomaton'_{P,\arule}$ has a transition $\tuple{q^1,\ldots,q^{i-1},q,q^{i+1},\ldots,q^m}\in\delta'(q',\sigma')$ and the current node is the $i$th child of its parent (we can assume that this information is encoded in the labels $\sigma$, even for basic proof trees, which increases the alphabet only linearly; we omit this in our definitions since it would clutter all other parts of our proof without need). \item For states $q_{\sigma,i,q'}$, let $\delta^+(q_{\sigma,i,q'},\sigma)$ be the disjunction of all formulae $\tuple{0,q_{\mathsf{up}}}\wedge\tuple{1,q^1_{\mathsf{down}}}\wedge\ldots\wedge\tuple{i-1,q^{i-1}_{\mathsf{down}}}\wedge\tuple{i+1,q^{i+1}_{\mathsf{down}}}\wedge\ldots\wedge\tuple{m,q^m_{\mathsf{down}}}$ for which $\aautomaton'_{P,\arule}$ has a transition $\tuple{q^1,\ldots,q^{i-1},q',q^{i+1},\ldots,q^m}\in\delta'(q,\sigma)$. \item For all starting states $q\in Q_s$ of $\aautomaton'_{P,\arule}$ and $\sigma\in\Sigma$, we additionally add the disjunct $\tuple{0,\mathsf{accept}}$ to $\delta^+(q_{\mathsf{up}},\sigma)$. \end{itemize} It is not hard to verify that $\aautomaton^+_{P,\arule,\vec{v}}$ has the required properties.
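The traversal scheme underlying $\aautomaton^+_{P,\arule,\vec{v}}$ can be pictured with a simple recursive Python sketch (our own illustration of the state discipline, not the automaton itself): starting from an arbitrary node, one process descends into the node's subtree while another climbs towards the root, restarting downward processes on all siblings passed along the way, so every node is processed exactly once.

```python
def traverse_from(tree, parent, start):
    """Visit every node of a rooted tree exactly once, starting at an
    arbitrary node, mimicking the q_down / q_up state discipline:
    `tree` maps node -> list of children, `parent` maps node -> parent
    (None at the root).  Returns the list of visited nodes.
    """
    visited = []

    def down(v):                 # q_down: ordinary top-down processing
        visited.append(v)
        for c in tree.get(v, []):
            down(c)

    def up(v):                   # q_up: invert a transition of the parent
        p = parent[v]
        if p is None:
            return               # reached a start state: accept
        visited.append(p)        # the auxiliary state handles p itself
        for c in tree.get(p, []):
            if c != v:
                down(c)          # restart downward runs on the siblings
        up(p)

    down(start)                  # the <0, q_down> conjunct
    up(start)                    # the <0, q_up> conjunct
    return visited

t = {1: [2, 3], 2: [4, 5], 3: []}
par = {1: None, 2: 1, 3: 1, 4: 2, 5: 2}
print(sorted(traverse_from(t, par, 4)))
# prints: [1, 2, 3, 4, 5]
```

The two initial conjuncts of $\delta^+(\mathsf{start},\sigma)$ correspond to the two calls at the bottom, and the sibling restarts correspond to the downward conjuncts of the auxiliary states.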
\end{proof} \begin{repproposition}{prop_containmentaltauto} For a \qlang{Dlog}\xspace query $P$ and a \gqlang\xspace query $P'$ with special constants $\vec{\lambda}$, there is an alternating 2-way automaton $\aautomaton^+_{P\sqsubseteq P'}$ of exponential size that accepts the $\vec{\lambda}$-annotated proof trees of $P$ that encode expansion trees with $\vec{\lambda}$ assignments for which $P'$ has a match. \end{repproposition} \begin{proof} Let $P'$ be the set $\{\arule_1,\ldots,\arule_\ell\}$. For every IDB predicate $p$, let $P'_p$ denote the set of rules in $P'$ with head predicate $p$ (possibly $\mathsf{hit}$). Without loss of generality, we assume that distinct rules use distinct sets of variables. For every frontier-guarded rule $\arule'$, let $\mathsf{guard}(\arule')$ be a fixed EDB atom that acts as a guard in this rule, i.e., an atom that refers to all variables in the head of $\arule'$. Consider a rule $\arule'\in P'$ with IDB atoms $q_1(\vec{t_1}),\ldots,q_m(\vec{t_m})$ in its body. We construct new rules from $\arule'$ by replacing each atom $q_i(\vec{t_i})$ with a guard atom $\mathsf{guard}(\arule'_i)$, suitably unified. Formally, assume that there are rules $\arule'_i\in P'_{q_i}$ with head $q_i(\vec{s_i})$ and a most general substitution $\theta$ with $\vec{t_i}\theta = \vec{s_i}\theta$, for all $i\in\{1,\ldots,m\}$, that maps every variable in $\arule'_i$ that does not occur in the head to a globally fresh variable. Then the \emph{guard expansion} of $\arule'$ for $(\arule'_i)_{i=1}^m$ and $\theta$ is the rule that is obtained from $\arule'\theta$ by replacing each body atom $q_i(\vec{t_i})\theta$ by $\mathsf{guard}(\arule'_i)\theta$. By construction, two distinct atoms $\mathsf{guard}(\arule'_i)\theta$ and $\mathsf{guard}(\arule'_j)\theta$ do not share variables, except at positions that correspond to head variables in rules $\arule'_i$ and $\arule'_j$. 
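To make the unification step in guard expansions concrete, the following Python sketch (an illustration with our own encoding; predicate and variable names are invented) computes a most general unifier for lists of argument tuples and applies it to a guard atom:

```python
def unify(problems):
    """Most general unifier for pairs of equally long term tuples.
    Terms are plain strings; names starting with an uppercase letter are
    variables, everything else is a constant (our convention for this sketch)."""
    subst = {}

    def walk(t):
        # Follow substitution chains to the current representative term.
        while t in subst:
            t = subst[t]
        return t

    for lhs, rhs in problems:
        if len(lhs) != len(rhs):
            return None
        for s, t in zip(lhs, rhs):
            s, t = walk(s), walk(t)
            if s == t:
                continue
            if s[:1].isupper():
                subst[s] = t
            elif t[:1].isupper():
                subst[t] = s
            else:
                return None  # two distinct constants clash: not unifiable
    return subst

def substitute(subst, atom):
    """Apply a substitution to an atom of the form (predicate, args)."""
    def walk(t):
        while t in subst:
            t = subst[t]
        return t
    pred, args = atom
    return (pred, tuple(walk(a) for a in args))

# Unify the IDB body atom q(X, a) with the head q(U, V) of a chosen rule,
# then replace the body atom by that rule's guard atom edge(U, V, W):
theta = unify([(("X", "a"), ("U", "V"))])
replacement_guard = substitute(theta, ("edge", ("U", "V", "W")))
```

As in the definition above, two replacement guards obtained this way share variables only at positions that were unified with head variables; all other variables of the chosen rules are fresh.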
The atoms $\mathsf{guard}(\arule'_i)\theta$ in a guard expansion are called \emph{replacement guards}. We consider two guard expansions to be equivalent if they only differ in the choice of the most general unifier. Let $\mathsf{Guard}(\arule')$ be the set of all guard expansions of $\arule'\in P'$, i.e., a set containing one representative of each class of equivalent guard expansions. $\mathsf{Guard}(\arule')$ is exponential since there are up to $|P'|^m$ non-equivalent guard expansions for a rule with $m$ IDB atoms. The automaton $\aautomaton^+_{P\sqsubseteq P'}$ is constructed as follows. For every guard expansion $\arule_g\in \bigcup_{\arule'\in P'}\mathsf{Guard}(\arule')$ and every list $\vec{v}$ of proof-tree variables of the arity of the head of $\arule_g$, consider the alternating 2-way tree automaton $\aautomaton^+_{P,\arule_g,\vec{v}}$ of Proposition~\ref{prop_ruleautoplus}. We assume w.l.o.g.\ that the state sets of these automata are mutually disjoint. Let $\aautomaton^+_{P\sqsubseteq P'}=\tuple{\Sigma,Q,Q_s,\delta,Q_e}$. As before, $\Sigma$ consists of pairs of a rule instance from $\mathcal{R}_{\aprogram}$ and a partial mapping of $\vec{\lambda}$ to $\mathcal{V}_{\aprogram}$. The state set $Q$ is the disjoint union of all state sets of the automata of form $\aautomaton^+_{P,\arule_g,\vec{v}}$. The start-state set $Q_s$ is the disjoint union of all start-state sets of automata $\aautomaton^+_{P,\arule_g,\vec{v}}$ for which $\arule_g$ is a guard expansion of a rule with head $\mathsf{hit}$ (and $\vec{v}$ is the empty list). The end-state set $Q_e$ is the disjoint union of all end-state sets of automata $\aautomaton^+_{P,\arule_g,\vec{v}}$. The transition function $\delta$ is defined as follows. 
By the construction in Proposition~\ref{prop_ruleauto}, each state $q$ in the automaton $\aautomaton_{P,\arule}$ encodes a partial mapping $\mathsf{match}(q)$ from body atoms of $\arule$ to instantiated atoms that use variables from $\mathcal{V}_{\aprogram}$, which are matched at the current tree node. This information is preserved through alphabet projections, intersections, and even through the construction in Proposition~\ref{prop_ruleautoplus}. We can therefore assume that each state $q$ of $\aautomaton^+_{P\sqsubseteq P'}$ is associated with a partial mapping $\mathsf{match}(q)$. For every state $q\in Q_{P,\arule_g,\vec{v}}$ and every $\sigma\in\Sigma$, we define $\delta(q,\sigma) = \delta_{P,\arule_g,\vec{v}}(q,\sigma)\wedge \psi$, where $\psi$ is defined as follows. For every replacement guard atom $\alpha$ of $\arule_g$ for which $\mathsf{match}(q)(\alpha)$ is defined, we consider the formula $\psi_\alpha=\tuple{0,q_1}\vee\ldots\vee\tuple{0,q_\ell}$, where \begin{itemize} \item $\alpha=\mathsf{guard}(\arule')\theta$ for some rule $\arule'$ and substitution $\theta$; \item $\mathsf{match}(q)(\alpha)=\alpha\theta'$ for some substitution $\theta'$; \item $q_1,\ldots,q_\ell$ are the start states of the automaton $\aautomaton^+_{P,\arule',\vec{z}\theta\theta'}$ where $p(\vec{z})$ is the head of $\arule'$. \end{itemize} Now $\psi$ is the conjunction of all formulae $\psi_\alpha$ thus defined. \end{proof} \section{Upper Bounds}\label{sec:mainupperbounds} \section{Deciding Query Containment with Automata}\label{sec_containmentautomata} We first recall a general technique of reducing query containment to the containment problem for (tree) automata \cite{ChaudhuriV97}, on which we build our proofs. An introduction to tree automata is included in the appendix. A common way to describe the answers of a \qlang{Dlog}\xspace query $P=\tuple{\aprogram,p}$ is to consider its \emph{expansion trees}. 
Intuitively speaking, the goal atom $p(\vec{x})$ can be rewritten by applying rules of $\aprogram$ in a backward-chaining manner until all IDB predicates have been eliminated, resulting in a \qlang{CQ}\xspace. The answers of $P$ coincide with the (infinite) union of answers to the \qlang{CQ}\xspace{}s obtained in this fashion. The rewriting itself gives rise to a tree structure, where each node is labeled by the instance of the rule that was used in the rewriting, and the leaves are instances of rules that contain only EDB predicates in their body. The set of all expansion trees provides a regular description of $P$ that we exploit to decide containment. To formalize this approach, we describe the set of all expansion trees as a tree language, i.e., as a set of trees with node labels from a finite alphabet. The number of possible labels of nodes in expansion trees is unbounded, since rules are instantiated using fresh variables. To obtain a finite alphabet of labels, one limits the number of variables and thus the overall number of possible rule instantiations \cite{ChaudhuriV97}. \begin{definition}\label{defn_prooftree} Given a \qlang{Dlog}\xspace query $P=\tuple{\aprogram,p}$, $\mathcal{R}_{\aprogram}$ is the set of all instantiations of rules of $\aprogram$ using only the variables $\mathcal{V}_{\aprogram}=\{v_1,\ldots,v_n\}$, where $n$ is twice the maximal number of variables occurring in any rule of $\aprogram$. A \emph{proof tree} for $P$ is a tree with labels from $\mathcal{R}_{\aprogram}$, such that (a) the root is labeled by a rule with $p$ as its head predicate; (b) if a node is labeled by a rule $\arule$ with an IDB atom $B$ in its body, then it has a child node that is labeled by a rule $\arule'$ with head atom $B$. The label of a node $e$ is denoted $\pi(e)$. Consider two nodes $e_1$ and $e_2$ in a proof tree with lowest common ancestor $e$. 
Two occurrences of a variable $v$ in $\pi(e_1)$ and $\pi(e_2)$ are \emph{connected} if $v$ occurs in the head of $\pi(f)$ for all nodes $f$ on the shortest path between $e_1$ and $e_2$, with the possible exception of $e$. \end{definition} A proof tree encodes an expansion tree where we replace every set of mutually connected variable occurrences by a fresh variable. Conversely, every expansion tree is represented by a proof tree that replaces fresh body variables by variables that do not occur in the head; this is always possible since proof trees can use twice as many variables as any rule of $\aprogram$. The set of proof trees is a regular tree language that can be described by an automaton. \begin{proposition}[Proposition 5.9 \cite{ChaudhuriV97}]\label{prop_taprooftree} For a \qlang{Dlog}\xspace query $P=\tuple{\aprogram,p}$, there is a tree automaton $\aautomaton_P$ of size exponential in $P$ that accepts exactly the set of all proof trees of $P$. \end{proposition} In order to use $\aautomaton_P$ to decide containment of $P$ in another query $P'$, we construct an automaton $\aautomaton_{P\sqsubseteq P'}$ that accepts all proof trees of $P$ that are ``matched'' by $P'$. Indeed, every proof tree induces a \emph{witness}, i.e., a minimal matching database instance, and one can check whether or not $P'$ can produce the same query answer on this instance. If this is the case for all proof trees of $P$, then containment is shown. \section{Deciding Guarded Query Containment}\label{sec_gqcontainmentup} Our first result provides the upper bound for deciding containment of \gqlang\xspace queries. In fact, the result extends to arbitrary \qlang{Dlog}\xspace queries on the left-hand side. \begin{theorem}\label{theo_membdlgq} \containmentMembershipStatement{\qlang{Dlog}\xspace}{\gqlang\xspace}{\complclass{3ExpTime}} \end{theorem} To prove this, we need to construct the tree automaton $\aautomaton_{P\sqsubseteq P'}$ for an arbitrary \gqlang\xspace $P'$. 
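The witness check sketched above can be illustrated in a few lines of Python (our own ad-hoc encoding, not part of the formal construction): the witness of a proof tree collects the EDB atoms of its rule instances, and a brute-force homomorphism search tests whether a conjunctive query matches it.

```python
from itertools import product

def witness(rule_instances):
    """The witness database of a proof tree: all EDB atoms occurring in the
    bodies of its rule instances.  For readability we assume that connected
    variable occurrences have already been renamed to common global names
    (for real proof trees this renaming is part of decoding the tree)."""
    return {atom for _head, body in rule_instances
                 for kind, atom in body if kind == "edb"}

def cq_matches(cq_atoms, db):
    """Brute-force homomorphism test: does the CQ match the database?
    Atoms are (pred, args); argument names starting uppercase are variables."""
    variables = sorted({t for _, args in cq_atoms for t in args if t[:1].isupper()})
    domain = sorted({t for _, args in db for t in args})
    for values in product(domain, repeat=len(variables)):
        h = dict(zip(variables, values))
        if all((p, tuple(h.get(t, t) for t in args)) in db
               for p, args in cq_atoms):
            return True
    return False

# A tiny proof tree consisting of one rule instance with two EDB body atoms:
tree = [(("goal", ()), [("edb", ("edge", ("a", "b"))),
                        ("edb", ("edge", ("b", "c")))])]
db = witness(tree)
```

Checking all proof trees in this explicit manner is of course impossible, since there are infinitely many; this is precisely what the automaton $\aautomaton_{P\sqsubseteq P'}$ accomplishes symbolically.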
As a first step, we construct an alternating 2-way tree automaton $\aautomaton^+_{P\sqsubseteq P'}$ that accepts the proof trees that we would like $\aautomaton_{P\sqsubseteq P'}$ to accept, but with nodes additionally being annotated with information about the choice of $\lambda$ values to guide the verification. We first construct automata to verify the match of a single, non-recursive rule that may refer to $\lambda$ constants. The rule does not have to be monadic or frontier-guarded. Our construction is inspired by a similar construction for \qlang{CQ}\xspace{}s by Chaudhuri and Vardi \cite{ChaudhuriV97}, with the main difference that the answer variables in our case are not taken from the root of the tree but rather from one arbitrary node that is marked accordingly. To define this formally, we introduce trees with additional annotations besides their node labels. Clearly, such trees can be viewed as ordinary labelled trees by considering annotations to be components of one label; our approach, however, leads to a more readable presentation. \begin{definition}\label{defn_annotree} Consider a Datalog program $\aprogram$, a rule $\arule=\varphi\to p(\vec{x})$, and $n\geq 0$ special constants $\vec{\lambda}=\lambda_1,\ldots,\lambda_n$. The proof-tree variables $\mathcal{V}_{\aprogram}$ used in $\mathcal{R}_{\aprogram}$ are as in Definition~\ref{defn_prooftree}. A proof tree for $\aprogram$ is \emph{$\vec{\lambda}$-annotated} if every node has an additional \emph{$\vec{\lambda}$-label} that is a partial mapping $\{\lambda_1,\ldots,\lambda_n\}\to\mathcal{V}_{\aprogram}$, such that: every special constant $\lambda_i$ occurs in at least one $\vec{\lambda}$-label, and whenever a constant $\lambda_i$ occurs in two $\vec{\lambda}$-labels, it is mapped to the same variable and both variable occurrences are connected. 
A proof tree for $\aprogram$ is \emph{$p$-annotated} if exactly one node has an additional \emph{$p$-label} of the form $p(\vec{v})$, where $\vec{v}$ is a list of variables from $\mathcal{V}_{\aprogram}$. A \emph{matching tree} $T$ for $\arule$ and $\aprogram$ is a $\vec{\lambda}$-annotated and $p$-annotated proof tree for $\aprogram$ for which there is a mapping $\nu:\mathsf{Var}(\arule)\cup\{\lambda_1,\ldots,\lambda_n\}\to\mathcal{V}_{\aprogram}$ such that \begin{enumerate}[(a)] \item $\nu(p(\vec{x}))=p(\vec{v})$; \item for every atom $\alpha$ of $\varphi$, there is a node $e_\alpha$ in $T$ such that the rule instance that $e_\alpha$ is labeled with contains the EDB atom $\nu(\alpha)$ in its body; \item if $\lambda_i$ occurs in $\alpha$, then the $\vec{\lambda}$-label of $e_\alpha$ maps $\lambda_i$ to the occurrence of $\nu(\lambda_i)$ in $e_\alpha$; \item if $\alpha,\alpha'\in\varphi$ share a variable $x$, then the occurrences of $\nu(x)$ in $e_\alpha$ and $e_{\alpha'}$ are connected. \end{enumerate} \end{definition} \begin{proposition}\label{prop_ruleauto} There is an automaton $\aautomaton_{P,\arule}$ that accepts exactly the annotated matching trees for $\arule$ and $\aprogram$, and which is exponential in the size of $\arule$ and $\aprogram$. \end{proposition} We want to use the automata $\aautomaton_{P,\arule}$ to verify the entailment of a single rule within a Datalog derivation, but ultimately we need an automaton that checks whether a whole derivation is possible. Unfortunately, we cannot check these derivations using automata of the form $\aautomaton_{P,\arule}$, each of which needs to be run on a $p$-annotated tree that has the unique entailment of the rule marked. The length of a derivation is unbounded, and we would not be able to distinguish an unbounded number of $p$-markers. To overcome this problem, we create a modified automaton $\aautomaton^+_{P,\arule,\vec{v}}$ that simulates the behavior of $\aautomaton_{P,\arule}$ on a tree with annotation $p(\vec{v})$. 
For $\aautomaton^+_{P,\arule,\vec{v}}$ to know which node the annotation $p(\vec{v})$ refers to, it has to be started at this node. This is a non-standard notion of run, where we do not start at the root of the tree. Moreover, starting in the middle of the tree makes it necessary to consider both nodes below and above the current position, and $\aautomaton^+_{P,\arule,\vec{v}}$ therefore needs to be an \emph{alternating 2-way tree automaton}. \begin{proposition}\label{prop_ruleautoplus} There is an alternating 2-way tree automaton $\aautomaton^+_{P,\arule,\vec{v}}$ that is polynomial in the size of $\aautomaton_{P,\arule}$ such that, whenever $\aautomaton_{P,\arule}$ accepts a matching tree $T$ that has the $p$-annotation $p(\vec{v})$ on node $e$, then $\aautomaton^+_{P,\arule,\vec{v}}$ has an accepting run that starts from the corresponding node $e'$ on the tree $T'$ that is obtained by removing the $p$-annotation from $T$. \end{proposition} Using the automata $\aautomaton^+_{P,\arule,\vec{v}}$, we can now obtain the claimed alternating 2-way automaton $\aautomaton^+_{P\sqsubseteq P'}$ for a \gqlang\xspace $P'$. Intuitively speaking, $\aautomaton^+_{P\sqsubseteq P'}$ concatenates the automata $\aautomaton^+_{P,\arule,\vec{v}}$ using alternation: whenever a derivation requires a (recursive) IDB atom, a suitable process $\aautomaton^+_{P,\arule,\vec{v}}$ is initiated, starting from a node in the middle of the tree. The construction relies on guardedness, which ensures that we can always find a suitable start node (corresponding to the node that was $p$-annotated earlier), by finding a suitable guard EDB atom in the tree. 
\begin{proposition}\label{prop_containmentaltauto} For a \qlang{Dlog}\xspace query $P$ and a \gqlang\xspace query $P'$ with special constants $\vec{\lambda}$, there is an alternating 2-way automaton $\aautomaton^+_{P\sqsubseteq P'}$ of exponential size that accepts the $\vec{\lambda}$-annotated proof trees of $P$ that encode expansion trees with $\vec{\lambda}$ assignments for which $P'$ has a match. \end{proposition} We are now ready to prove Theorem~\ref{theo_membdlgq}. The automaton $\aautomaton^+_{P\sqsubseteq P'}$ allows us to check the answers of $P'$ on a proof tree that is $\vec{\lambda}$-annotated to assign values for answer constants. We can transform this alternating 2-way automaton into a tree automaton $\aautomaton'_{P\sqsubseteq P'}$ that is exponentially larger, i.e., doubly exponential in the size of the input. To remove the need for $\vec{\lambda}$-labels, we modify the automaton $\aautomaton'_{P\sqsubseteq P'}$ so that it can only perform a transition from its start state if it finds that the constants in $\vec{\lambda}$ are assigned to the answer variables of $P$ in the root. Finally, we obtain $\aautomaton_{P\sqsubseteq P'}$ by projecting to the alphabet $\mathcal{R}_{\aprogram}$ without $\vec{\lambda}$-annotations; this is again possible with polynomial effort. The containment problem $P\sqsubseteq P'$ is equivalent to deciding the containment of $\aautomaton_P$ in $\aautomaton_{P\sqsubseteq P'}$, which is possible in exponential time w.r.t.\ the size of the automata. Since $\aautomaton_P$ is exponential and $\aautomaton_{P\sqsubseteq P'}$ is doubly exponential, we obtain the claimed triple exponential bound. Our proof of Theorem~\ref{theo_membdlgq} can be used to obtain another interesting result for the case of frontier-guarded Datalog. 
If $P$ is a \gdatalog query, which does not use any special constants $\lambda$, then the $\vec{\lambda}$-annotations are not relevant and $\aautomaton^+_{P\sqsubseteq P'}$ can be constructed as an alternating 2-way automaton on proof trees. For this, we merely need to modify the construction in Proposition~\ref{prop_containmentaltauto} to start in start states of automata for rules that entail the goal predicate of $P'$ with the expected binding of variables to answer variables of $P$. We can then omit the projection step, which required us to convert $\aautomaton^+_{P\sqsubseteq P'}$ into a tree automaton earlier. Instead, we can construct from $\aautomaton^+_{P\sqsubseteq P'}$ a complement tree automaton $\bar{\aautomaton}_{P\sqsubseteq P'}$ that is only exponentially larger than $\aautomaton^+_{P\sqsubseteq P'}$, i.e., doubly exponential overall \cite[Theorem~A.1]{Cosmadakis88}. Containment can then be decided by checking the emptiness of $\aautomaton_P\cap \bar{\aautomaton}_{P\sqsubseteq P'}$, which is possible in polynomial time, leading to a \complclass{2ExpTime}{} algorithm overall. \begin{theorem}\label{theo_membdlgdl} \containmentMembershipStatement{\qlang{Dlog}\xspace}{\gdatalog}{\complclass{2ExpTime}} \end{theorem} This generalizes an earlier result of Cosmadakis et al.\ for monadic Datalog \cite{Cosmadakis88} using an alternative, direct proof. Finally, we can lift our results to the case of nested queries. Using Proposition~\ref{prop_posfcqs}, we can make the simplifying assumption that rules with some nested query in their body contain only one nested query and a guard atom as the only other atom. Thus all rules with nested queries have the form $g(\vec{s})\wedge Q(\vec{t})\to p(\vec{u})$, where $g$ is an EDB predicate, $Q$ is a nested query, and the variables $\vec{u}$ occur in $\vec{s}$. 
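The polynomial-time emptiness test invoked above can be sketched as follows: for nondeterministic tree automata (presented bottom-up, which is equivalent for this purpose), one saturates the set of states that can label the root of some accepted subtree. The encoding below is ours.

```python
def nonempty(transitions, final_states):
    """Bottom-up emptiness test for a nondeterministic tree automaton.
    transitions: list of (tuple_of_child_states, symbol, state); leaf rules
    use the empty tuple ().  The automaton accepts some tree iff a final
    state is reachable.  Runs in time O(|states| * |transitions|)."""
    reachable = set()
    changed = True
    while changed:
        changed = False
        for children, _symbol, state in transitions:
            if state not in reachable and all(c in reachable for c in children):
                reachable.add(state)
                changed = True
    return bool(reachable & set(final_states))
```

For the containment check one would run this test on the product of $\aautomaton_P$ and $\bar{\aautomaton}_{P\sqsubseteq P'}$, whose transitions can themselves be computed in polynomial time from the two input automata.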
In Proposition~\ref{prop_ruleautoplus}, we constructed alternating 2-way automata $\aautomaton^+_{P,\arule,\vec{v}}$ that can check the entailment of a particular atom $p(\vec{v})$ starting from a node within the tree. Analogously, we now construct automata $\aautomaton^+_{P,Q,\theta}$ that check that the nested query $Q$ matches partially, where $\theta$ is a substitution that interprets query variables in terms of proof-tree variables on the current node of the tree. Only the variables that occur in $g(\vec{s})$ and $Q(\vec{t})$ are mapped by $\theta$; the remaining variables can be interpreted arbitrarily, possibly in distant parts of the proof tree. To construct $\aautomaton^+_{P,Q,\theta}$, we use the alternating 2-way automaton $\aautomaton^+_{P\sqsubseteq Q}$, constructed in Proposition~\ref{prop_containmentaltauto} (assuming, for a start, that $Q$ is not nested). This automaton is extended to an alternating 2-way automaton $\aautomaton^+_{P,Q}$ that accepts trees with a unique annotation of the form $\tuple{Q,\theta}$, for which we check that it is consistent with the $\vec{\lambda}$-annotation (i.e., for each query variable $x$ mapped by $\theta$, the corresponding constant $\lambda$ is assigned to $\theta(x)$ at the node that is annotated with $\tuple{Q,\theta}$). We then obtain a (top-down) tree automaton $\aautomaton_{P,Q}$ by transforming $\aautomaton^+_{P,Q}$ into a tree automaton (exponential), and projecting away the $\vec{\lambda}$-annotations (polynomial). The automaton $\aautomaton_{P,Q}$ is analogous to the tree automaton $\aautomaton_{P,\arule}$ of Proposition~\ref{prop_ruleauto}. Using the same transformation as in Proposition~\ref{prop_ruleautoplus}, we obtain an alternating 2-way automaton $\aautomaton^+_{P,Q,\theta}$ for each $\theta$. 
The automaton $\aautomaton^+_{P\sqsubseteq P'}$ for a nested query $P'$ is constructed as in Proposition~\ref{prop_containmentaltauto}, but using the automata $\aautomaton^+_{P,Q,\theta}$ instead of automata $\aautomaton^+_{P,\arule,\vec{v}}$ to check the entailment of a subquery $Q$. The size of $\aautomaton^+_{P\sqsubseteq P'}$ is increased by one exponential, since constructing each $\aautomaton^+_{P,Q,\theta}$ involves the exponential transformation of $\aautomaton^+_{P,Q}$ into a tree automaton before the $\vec{\lambda}$-labels for $Q$ are projected away. Applying this construction inductively, we obtain the following result. \begin{theorem}\label{theo_membdlgqk} \containmentMembershipStatement{\qlang{Dlog}\xspace}{\kgq{k}}{\kExpTime{$(k+2)$}} \end{theorem} \section{Monadic Datalog without Constants}\label{sec:noconstants} \begin{theorem} \containmentMembershipStatement{\mdatalog}{\gqlang\xspace}{\complclass{2ExpTime}} \end{theorem} \begin{theorem} \containmentMembershipStatement{\mdatalog}{\kgq{k}}{\kExpTime{$(k+1)$}} \end{theorem} \begin{theorem}[\todo{known result by Vardi; add reference}] \containmentHardnessStatement{\mdatalog}{\qlang{UCQ}\xspace}{\complclass{2ExpTime}} \end{theorem} \begin{theorem} \containmentHardnessStatement{\mdatalog}{\kctworpq{k}}{\kExpTime{$(k+1)$}} \end{theorem} \begin{theorem} \containmentMembershipStatement{\linmdatalog}{\mqlang\xspace}{\complclass{ExpSpace}} \end{theorem} \begin{proof} \input{linmdatalog-linMQ} \end{proof} \todo{The following is an optional remark (based on Vardi's construction for the non-nested case).} \containmentMembershipStatement{\qlang{Dlog}\xspace}{\kgq{k}}{\kExpTime{$(k+1)$}} \section{References} {\small \bibliographystyle{abbrv} \section{Preliminaries}\label{sec:prelims} We consider a standard language of first-order predicate logic, based on an infinite set \Ilang{} of \emph{constant symbols}, an infinite set \Plang{} of \emph{predicate symbols}, and an infinite set \Vlang{} of first-order \emph{variables}. 
Each predicate $p\in\Plang$ is associated with a natural number $\arity(p)$ called the \emph{arity} of $p$. The list of predicates and constants forms the language's \emph{signature} $\mathscr{S}=\tuple{\Plang,\Ilang}$. We generally assume $\mathscr{S}=\tuple{\Plang,\Ilang}$ to be fixed, and only refer to it explicitly if needed. \paragraph*{Formulae, Rules, and Queries} A \emph{term} is a variable $x\in\Vlang$ or a constant $c\in\Ilang$. We use symbols $s,t$ to denote terms, $x,y,z,v,w$ to denote variables, $a,b,c$ to denote constants. Expressions like $\vec{t}$, $\vec{x}$, $\vec{c}$ denote finite lists of such entities. We use the standard predicate logic definitions of \emph{atom} and \emph{formula}, using symbols $\varphi$, $\psi$ for the latter. Datalog queries are defined over an extended signature with additional predicate symbols, called \emph{IDB predicates}; all other predicates are called \emph{EDB predicates}. A \emph{Datalog rule} is a formula of the form $\forall\vec{x},\vec{y}.\varphi[\vec{x},\vec{y}] \to\psi[\vec{x}]$ where $\varphi$ and $\psi$ are conjunctions of atoms, called the \emph{body} and \emph{head} of the rule, respectively, and where $\psi$ only contains IDB predicates. We usually omit universal quantifiers when writing rules. Sets of Datalog rules will be denoted by symbols $\mathbb{P},\mathbb{R},\mathbb{S}$. A set of Datalog rules $\mathbb{P}$ is \begin{itemize} \item \emph{monadic} if all IDB predicates are of arity one; \item \emph{frontier-guarded} if the body of every rule contains an atom $p(\vec{t})$ such that $p$ is an EDB predicate and $\vec{t}$ contains all variables that occur in the rule's head; \item \emph{linear} if every rule contains at most one IDB predicate in its body. 
\end{itemize} A \emph{conjunctive query} (\qlang{CQ}\xspace) is a formula $Q[\vec{x}]= \exists\vec{y}.\psi[\vec{x},\vec{y}]$ where $\psi[\vec{x},\vec{y}]$ is a conjunction of atoms; a \emph{union of conjunctive queries} (\qlang{UCQ}\xspace) is a disjunction of such formulae. A \emph{Datalog query} $\tuple{\mathbb{P},Q}$ consists of a set of Datalog rules $\mathbb{P}$ and a conjunctive query $Q$ over IDB or EDB predicates ($Q$ could be expressed as a rule in Datalog, but not in all restrictions of Datalog we consider). We write \qlang{Dlog}\xspace for the language of Datalog queries. A monadic Datalog query is one where $\mathbb{P}$ is monadic, and similarly for other restrictions. We use the query languages \mdatalogconst (monadic), \gdatalog (frontier-guarded), \lindatalog (linear), and \linmdatalogconst (linear, monadic). \paragraph*{Databases and Semantics} We use the standard semantics of first-order logic (FOL). A \emph{database instance} $\Inter$ consists of a set $\Delta^\Inter$ called \emph{domain} and a function $\cdot^\Inter$ that maps constants $c$ to domain elements $c^\Inter\in\Delta^\Inter$ and predicate symbols $p$ to relations $p^\Inter\subseteq (\Delta^\Inter)^{\arity(p)}$, where $p^\Inter$ is the \emph{extension} of $p$. Given a database instance $\Inter$ and a formula $\varphi[\vec{x}]$ with free variables $\vec{x}=\tuple{x_1,\ldots,x_m}$, the \emph{extension} of $\varphi[\vec{x}]$ is the subset of $(\Delta^\Inter)^m$ containing all those tuples $\tuple{\delta_1,\ldots,\delta_m}$ for which $\Inter,\{x_i \mapsto \delta_i \mid 1 \leq i \leq m\} \models \varphi[\vec{x}]$. We denote this by $\tuple{\delta_1,\ldots,\delta_m}\in\varphi^\Inter$ or by $\Inter\models\varphi(\delta_1,\ldots,\delta_m)$; a similar notation is used for all other types of query languages. Two formulae $\varphi[\vec{x}]$ and $\psi[\vec{x}]$ are called \emph{equivalent} if their extensions coincide for every database instance $\Inter$. 
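The following Python sketch (our own encoding, not part of the formal development) illustrates these query languages on a small example: a naive bottom-up evaluator that applies all rules over the active domain until a fixpoint is reached, run on a program that is both linear and monadic.

```python
from itertools import product

def naive_eval(rules, edb_facts):
    """Naive bottom-up evaluation of a Datalog program: apply all rules over
    the active domain until no new facts are derived.  Facts and atoms are
    (pred, args); argument names starting uppercase are variables.  This is
    an illustrative sketch, not an efficient algorithm."""
    facts = set(edb_facts)
    domain = sorted({c for _, args in facts for c in args})
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            atoms = body + [head]
            variables = sorted({t for _, args in atoms
                                  for t in args if t[:1].isupper()})
            for values in product(domain, repeat=len(variables)):
                h = dict(zip(variables, values))
                ground = lambda a: (a[0], tuple(h.get(t, t) for t in a[1]))
                if (all(ground(a) in facts for a in body)
                        and ground(head) not in facts):
                    facts.add(ground(head))
                    changed = True
    return facts

# A linear and monadic program: nodes reachable from the constant a.
rules = [(("reach", ("X",)), [("edge", ("a", "X"))]),
         (("reach", ("Y",)), [("reach", ("X",)), ("edge", ("X", "Y"))])]
facts = naive_eval(rules, {("edge", ("a", "b")), ("edge", ("b", "c"))})
```

The second rule is frontier-guarded as well, since the EDB atom `edge(X, Y)` covers the head variable `Y`.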
The set of answers of a \qlang{UCQ}\xspace $Q[\vec{x}]$ over $\Inter$ is its extension. The set of answers of a Datalog query $\tuple{\mathbb{P},Q}$ over $\Inter$ is the intersection of the extensions of $Q$ over all extended database instances $\Inter'$ that interpret IDB predicates in such a way that all rules of $\mathbb{P}$ are satisfied. The semantics of Datalog \cite{Alice} can also be defined via the least fixpoint of the inflationary evaluation of $\mathbb{P}$ on $\Inter$. Note that we do not require database instances to have a finite domain, since all of our results are valid in either case. This is due to the fact that every entailment of a Datalog program has a finite witness, and that all of our query languages are positive, i.e., that their answers are preserved under homomorphisms of database instances. \section*{Proofs for Section~\ref{sec_nesting}} \begin{reptheorem}{theo_lindatalognesting} \lindatalog = \nestedq{\linq{\qlang{Dlog}\xspace}}. \end{reptheorem} \begin{proof} We will prove that any \nestedq{\linq{\qlang{Dlog}\xspace}} query can be rewritten into a \lindatalog query of polynomial size. We make simplifying assumptions on the structure of the nested query, which can easily be ensured by polynomial transformations and make the presentation easier: first, we assume that every rule body of any query occurring at any nesting depth contains at most one subquery atom (using, e.g., Proposition~\ref{prop_posfcqs}); second, we assume that all variables and IDB predicates that are not in the same scope are appropriately renamed apart. In order to prove our claim, we will first show that any \knestedq{\linq{\qlang{Dlog}\xspace}}{2} query can be rewritten into an equivalent \lindatalog query. Applying the rewriting iteratively inside-out (and observing that even repeated application can be done in polynomial total time) then allows us to conclude that there is a polynomial rewriting of any \nestedq{\linq{\qlang{Dlog}\xspace}} query of arbitrary depth into a \lindatalog query. 
Consider a \knestedq{\linq{\qlang{Dlog}\xspace}}{2} query $P=\tuple{\aprogram,p}$ and assume w.l.o.g. that every rule body of the rules contains at most one \knestedq{\linq{\qlang{Dlog}\xspace}}{1} subquery. Now, going through all rules of $\aprogram$, we produce the rules $\aprogram'$ of the unnested but equivalent version. Consider a rule $\rho\in \aprogram$ having the shape $$ Q(x_1,\ldots,x_n) \wedge p(y_1,\ldots,y_\ell) \wedge B_1 \wedge \ldots \wedge B_k \to H $$ where $p$ is the body IDB predicate and where $Q=\tuple{\mathbb{Q},q}$ is a \knestedq{\linq{\qlang{Dlog}\xspace}}{1} query. For any $k$-ary IDB predicate $r$ inside $\mathbb{Q}$ we increase its arity by $\ell$ and let $\aprogram'$ contain all rules of $\mathbb{Q}'$, which is obtained from the rules $\rho'$ of $\mathbb{Q}$ by \begin{itemize} \item replacing any (head or body) IDB atom $r(z_1,\ldots,z_k)$ of $\rho'$ by $r(z_1,\ldots,z_k,y_1,\ldots,y_\ell)$ and \item in case $\rho'$ does not contain any IDB body atom, adding $p(y_1,\ldots,y_\ell)$ to the body. \end{itemize} Further we let $\aprogram'$ contain the rule $$ q(x_1,\ldots,x_n,y_1,\ldots,y_\ell) \wedge B_1 \wedge \ldots \wedge B_k \to H. $$ In case of a rule $\rho\in \aprogram$ having the shape $$ Q(x_1,\ldots,x_n) \wedge B_1 \wedge \ldots \wedge B_k \to H $$ we add $\mathbb{Q}$ to $\aprogram'$ without change and let $\aprogram'$ contain the rule $$ q(x_1,\ldots,x_n) \wedge B_1 \wedge \ldots \wedge B_k \to H. $$ In case a rule $\rho\in \aprogram$ does not contain a subquery atom, we simply add $\rho$ to $\aprogram'$. It can now easily be verified that $\tuple{\aprogram,p}$ and $\tuple{\aprogram',p}$ are equivalent: first, it is straightforward that $\tuple{\aprogram,p}$ is equivalent to $\tuple{\aprogram^\flat,p}$ where $\aprogram^\flat$ is obtained from $\aprogram$ by replacing every $Q(x_1,\ldots,x_n)$ by $q(x_1,\ldots,x_n)$ (that is, the corresponding goal predicate) and then adding all rules from $\mathbb{Q}$ with no changes made to them. 
Second, one can show that there is a direct correspondence between proof trees of $\tuple{\aprogram^\flat,p}$ and linearized proof trees of $\tuple{\aprogram',p}$, which yields the desired result. \end{proof} \begin{repproposition}{prop_posfcqs} Let $P$ be a positive query, i.e., a Boolean expression of disjunctions and conjunctions, of \klinmq{k} queries with $k\geq 1$. Then there is a \klinmq{k} query $P'$ of size polynomial in $P$ that is equivalent to $P$. Analogous results hold when replacing \klinmq{k} by \kmq{k}, \kgq{k}, or \klinmq{k} queries. \end{repproposition} \begin{proof} We show the claim by induction, by expressing the innermost disjunctions and conjunctions of $P$ with equivalent \klinmq{k} queries of linear size. We consider positive queries without existential quantifiers (i.e., where all variables are answer variables), but the inner \klinmq{k} may use existential quantifiers. Let $P[\vec{x}]=P_1[\vec{x_1}]\vee \ldots\vee P_n[\vec{x_n}]$ be a disjunction of \klinmq{k} queries. Each query $P_i$ is of the form $\exists\vec{z_i}.P'_i[\vec{x'_i}]$, where $\vec{x'_i}$ is the list of free variables of $P'_i$ (corresponding to constants $\lambda$), and $\vec{z_i}$ contains exactly those variables of $\vec{x'_i}$ that do not occur in $\vec{x_i}$. We assume without loss of generality that $\vec{z_i}$ is disjoint from $\vec{z_j}$ if $i\neq j$, and that each $P'_i$ uses a unique set of IDBs that does not occur in other queries. We consider queries $\bar{P}_i$ obtained by replacing the special constant that represents a variable $x_j\in\vec{x}$ by the special constant $\lambda_j$ (assumed to not occur in $P$ yet). Thus, the queries $\bar{P}_i$ share special constants exactly where the queries $P_i$ share variables. 
We can now define the \klinmq{k} $P'$ as $\exists\vec{z_1}\ldots\vec{z_n}.\bar{P}_1\cup\ldots\cup \bar{P}_n$, where we assume that the correspondence of special constants to free variables is such that the existential quantifiers refer to the same variables as before. Let $P[\vec{x}]=P_1[\vec{x_1}]\wedge \ldots\wedge P_n[\vec{x_n}]$ be a conjunction of \klinmq{k} queries. Let $P_i=\exists\vec{z_i}.P'_i[\vec{x'_i}]$ as before, and let $\mathtt{U}_i$ for $i\in\{1,\ldots,n-1\}$ be fresh IDB predicates. The queries $\bar{P}_i$ are defined as before by renaming special constants to reflect shared variables. For each $i\in\{1,\ldots,n\}$, the set of rules $\hat{P}_i$ is obtained from $\bar{P}_i$ as follows: if $i<n$, then every rule $\varphi\to\mathsf{hit}\in \bar{P}_i$ is replaced by the rule $\varphi\to\mathtt{U}_i(\lambda_1)$, where $\lambda_1$ is a fixed special constant in the queries; if $i>1$, then every rule $\varphi\to \psi\in\bar{P}_i$ where $\varphi$ does not contain an IDB predicate is replaced by the rule $\varphi\wedge\mathtt{U}_{i-1}(\lambda_1)\to \psi$, where $\lambda_1$ is as before. The \klinmq{k} $P'$ is defined as $\exists\vec{z_1}\ldots\vec{z_n}.\hat{P}_1\cup\ldots\cup \hat{P}_n$. These constructions lead to equivalent \klinmq{k} queries of linear size, so the claim follows by induction. The cases for \kmq{k}, \kgq{k}, and \klinmq{k} follow from the same constructions (note that, without the requirement of linearity, a simpler construction is possible in the case of conjunctions). \end{proof} \begin{reptheorem}{theo_gqquerycomlpexity} The combined complexity of evaluating \gqlang\xspace queries over a database instance is \complclass{NP}-complete. The same holds for \gdatalog queries. The combined complexity of evaluating \nestedq{\gqlang}\xspace queries is \complclass{PSpace}-complete. The data complexity is \complclass{P}-complete for \gdatalog, \gqlang\xspace, and \nestedq{\gqlang}\xspace. 
\end{reptheorem} \begin{proof} \input{evaluation-complexity} \end{proof} \begin{reptheorem}{theo_linquerycomlpexity} The combined complexity of evaluating \linmqlang\xspace queries over a database instance is \complclass{NP}-complete. The same holds for \lingdatalog{} and \lingqlang\xspace. The combined complexity of evaluating \nestedq{\linmqlang}\xspace queries is \complclass{PSpace}-complete. The same holds for \nestedq{\lingqlang}\xspace. The data complexity is \complclass{NLogSpace}-complete for all of these query languages. \end{reptheorem} \begin{proof} The claimed \complclass{NP}-completeness is immediate. Hardness follows from the hardness of \qlang{CQ}\xspace query answering. Membership follows from the membership of \gqlang\xspace. The claimed membership in \complclass{PSpace} follows from the \complclass{PSpace}-membership of \lindatalog; note that this uses Theorem~\ref{theo_lindatalognesting}. Hardness for \nestedq{\lingqlang}\xspace follows from the hardness for \nestedq{\linmqlang}\xspace, which we show by modifying the \complclass{PSpace}-hardness proof for monadically defined queries from \cite{RK13:flagcheck}. \newcommand{\predstyle}[1]{\mathit{#1}} We show the result by providing a reduction from the validity problem of quantified Boolean formulae (QBFs). We recall that for any QBF, it is possible to construct in polynomial time an equivalent QBF that has the specific shape $$Q_1 x_1 Q_2 x_2 \ldots Q_n x_n \bigvee_{L\in\mathcal{L}} \bigwedge_{\ell\in L} \ell,$$ with $Q_1,\ldots,Q_n \in \{\exists,\forall\}$ and $\mathcal{L}$ being a set of sets of literals over the propositional variables $x_1,\ldots,x_n$. In words, we assume our QBF to be in prenex form with the propositional part of the formula in disjunctive normal form.
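As a sanity check for this normal form, validity of such a prenex QBF with DNF matrix can be decided by a naive recursion over the quantifier prefix. The following Python sketch is purely illustrative (the encoding and all names are ours): literals are nonzero integers, with $i$ encoding $x_i$ and $-i$ encoding $\neg x_i$.

```python
def eval_qbf(quantifiers, clauses, assignment=()):
    """Evaluate Q1 x1 ... Qn xn  OR_{L in clauses} AND_{l in L} l.

    quantifiers: 'E' (exists) or 'A' (forall), one per variable x1..xn.
    clauses:     iterable of literal sets; literal i > 0 stands for x_i,
                 and i < 0 for the negation of x_i (1-based indices).
    """
    i = len(assignment)
    if i == len(quantifiers):
        # DNF matrix: true iff some clause is fully satisfied.
        return any(all(assignment[abs(l) - 1] == (l > 0) for l in L)
                   for L in clauses)
    branches = [eval_qbf(quantifiers, clauses, assignment + (v,))
                for v in (False, True)]
    return any(branches) if quantifiers[i] == 'E' else all(branches)
```

For instance, $\forall x \exists y.\, (x\wedge y)\vee(\neg x\wedge\neg y)$ is valid, while $\exists x \forall y.\, (x \wedge y)$ is not.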
For every literal set $L = \{x_{k_1},\ldots,x_{k_i}, \neg x_{k_{i+1}},\ldots, \neg x_{k_j}\}$, we now define the $n$-ary FCP $\predstyle{p}_L = \{ \predstyle{t}(\lambda_{k_1})\wedge\ldots\wedge\predstyle{t}(\lambda_{k_i})\wedge \predstyle{f}(\lambda_{k_{i+1}})\wedge\ldots\wedge\predstyle{f}(\lambda_{k_j}) \to \mathsf{hit}\}$. Moreover, we define the $n$-ary FCP $\predstyle{p}_\mathcal{L} = \{ \predstyle{p}_L(\lambda_1,\ldots,\lambda_n)\to \mathsf{hit} \mid L\in\mathcal{L} \}$. Letting $\predstyle{p}_n = \predstyle{p}_\mathcal{L}$, we now define FCPs $\predstyle{p}_{n-1},\ldots,\predstyle{p}_{0}$ in descending order. If $Q_i = \exists$, then the $(i{-}1)$-ary FCP $\predstyle{p}_{i-1}$ is defined as the singleton rule set $\{ \predstyle{p}_{i}(\lambda_1,\ldots,\lambda_{i-1},y) \to \mathsf{hit} \}$. In case $Q_i = \forall$, we let $\predstyle{p}_{i-1}$ contain the rules \begin{align*} \predstyle{f}(x) & \to \mathtt{U}_{?}(x)\\ \mathtt{U}_{!}(x) \wedge \predstyle{f}(x) \wedge \predstyle{t}(y)& \to \mathtt{U}_{?}(y)\\ \mathtt{U}_{!}(x) \wedge \predstyle{t}(x) & \to \mathsf{hit}\\ \mathtt{U}_{?}(x) \wedge \predstyle{p}_{i}(\lambda_1,\ldots,\lambda_{i-1},x) & \to \mathtt{U}_{!}(x) \end{align*} Note that $\predstyle{p}_{0}$ is a Boolean \nestedq{\linmqlang}\xspace query whose size is polynomial in the size of the input QBF. Now, let $D$ be the database containing the two individuals $0$ and $1$ as well as the facts $\predstyle{f}(0)$ and $\predstyle{t}(1)$. We now show that the considered QBF is true exactly if $D\models \predstyle{p}_{0}()$. To this end, we first note that, by construction, the extension of $\predstyle{p}_L$ contains exactly those $n$-tuples $\tuple{\delta_1,\ldots,\delta_n}$ for which the corresponding truth value assignment $val$, sending $x_i$ to $\mathbf{true}$ iff $\delta_i = 1$, makes the formula $\bigwedge_{\ell\in L} \ell$ true.
In the same way, the extension of $\predstyle{p}_\mathcal{L}$ represents the set of truth value assignments satisfying $\bigvee_{L\in\mathcal{L}} \bigwedge_{\ell\in L} \ell$. Then, by descending induction, we can show that the extensions of $\predstyle{p}_i$ encode the assignments to free propositional variables of the subformula $Q_{i+1} x_{i+1} \ldots Q_n x_n \bigvee_{L\in\mathcal{L}} \bigwedge_{\ell\in L} \ell$ that make this formula true. Consequently, $\predstyle{p}_0$ has a nonempty extension exactly if the entire considered QBF is true. Finally, the \complclass{NLogSpace}-completeness for data complexity is again immediate, where the upper bound is obtained from \lindatalog, and the lower bound follows from the well-known hardness of reachability queries, which can be expressed in \linmdatalogconst. \end{proof} \section{Guarded Queries}\label{sec:queries} Monadically defined queries have been introduced in \cite{RK13:flagcheck} as a generalization of monadic Datalog (\mdatalogconst) and conjunctive two-way regular path queries (\qlang{C2RPQ}\xspace{}s) for which query containment is still decidable.\footnote{The queries were called $\qlang{MODEQ}$ in \cite{RK13:flagcheck}; we shorten this to \mqlang\xspace.} The underlying idea of this approach is that candidate query answers are checked by evaluating a monadic Datalog program, i.e., in contrast to the usual evaluation of Datalog queries, we start with a ``guessed'' answer that is the input to a Datalog program. To implement this, the candidate answer is represented by special constants $\lambda$ that the Datalog program can refer to. This mechanism was called \emph{flag~\& check}, since the special constants act as flags to indicate the answer that should be checked. \begin{example}\label{ex_transitivity} A query that computes the transitive closure over a relation $p$ can be defined as follows.
\begin{align*} p(\lambda_1,y) & \to\mathtt{U}(y)\\ \mathtt{U}(y)\wedge p(y,z) & \to\mathtt{U}(z)\\ \mathtt{U}(\lambda_2) & \to \mathsf{hit} \end{align*} One defines the answer of the query to contain all pairs $\tuple{\delta_1,\delta_2}$ for which the rules entail $\mathsf{hit}$ when interpreting $\lambda_1$ as $\delta_1$ and $\lambda_2$ as $\delta_2$. \end{example}% The approach used monadic Datalog for its close relationship to monadic second-order logic, which was the basis for showing decidability of query containment. In this work, however, we develop new techniques for showing the decidability (and exact complexity) of this problem directly. It is therefore natural to consider other types of Datalog programs to implement the ``check'' part. The following definition introduces the general technique for arbitrary Datalog programs, and defines interesting fragments by imposing further restrictions. \begin{definition}\label{def_fcq} Consider a signature $\mathscr{S}$. An FCP (``flag \& check program'') of arity $m$ is a set of Datalog rules $\mathbb{P}$ with $k\geq 0$ IDB predicates $\mathtt{U}_1,\ldots,\mathtt{U}_k$, that may use the additional constant symbols $\lambda_1,\ldots,\lambda_m\notin\mathscr{S}$ and an additional nullary predicate symbol $\mathsf{hit}$. An FCQ (``flag \& check query'') $P$ is of the form $\exists\vec{y}.\mathbb{P}(\vec{z})$, where $\mathbb{P}$ is an FCP of arity $|\vec{z}|$ and all variables in $\vec{y}$ occur in $\vec{z}$. The variables $\vec{x}$ that occur in $\vec{z}$ but not in $\vec{y}$ are the \emph{free variables} of $P$. Let $\Inter$ be a database instance over $\mathscr{S}$.
The \emph{extension} $\mathbb{P}^\Inter$ of $\mathbb{P}$ is the set of all tuples $\tuple{\delta_1,\ldots,\delta_m}\in(\Delta^\Inter)^m$ such that every database instance $\Inter'$ that extends $\Inter$ to the signature of $\mathbb{P}$ and that satisfies $\tuple{\lambda_1^{\Inter'},\ldots,\lambda_m^{\Inter'}}=\tuple{\delta_1,\ldots,\delta_m}$ also entails $\mathsf{hit}$. The semantics of FCQs is defined in the obvious way based on the extension of FCPs. A \gqlang\xspace is an FCQ $\exists\vec{y}.\mathbb{P}(\vec{z})$ such that $\mathbb{P}$ is frontier-guarded. Similarly, we define \mqlang\xspace (monadic), \linmqlang\xspace (linear, monadic), and \lingqlang\xspace (linear, frontier-guarded) queries. \end{definition} In contrast to \cite{RK13:flagcheck}, we do not define monadic queries as conjunctive queries of FCPs, but we merely allow existential quantification to project some of the FCP variables. Proposition~\ref{prop_posfcqs} below shows that this does not reduce expressiveness. We generally consider monadic Datalog as a special case of frontier-guarded Datalog, even though monadic Datalog rules do not have to be frontier-guarded. A direct way to obtain a suitable guard is to assume that there is a unary $\textsf{domain}$ predicate that contains all (relevant) elements of the domain of the database instance. However, it already suffices to require \emph{safety} of Datalog rules, i.e., that the variable in the head of a rule must also occur in the body. Then every element that is inferred to belong to an IDB relation must also occur in some EDB relation. We can therefore add single EDB guard atoms to each rule in all possible ways without modifying the semantics. This is a polynomial operation, since all variables in the guards are fresh, other than the single head variable that we want to guard. We therefore find, in particular, that \gqlang\xspace captures the expressiveness of \mqlang\xspace. The converse is not true, as the following example illustrates.
\begin{example}\label{ex_gqvsmq} The following $4$-ary \lingqlang\xspace generalizes Example~\ref{ex_transitivity} by checking for the existence of two parallel $p$-chains of arbitrary length, where each pair of elements along the chains is connected by a relation $q$, like the steps of a ladder. \begin{align*} q(\lambda_1,\lambda_2) & \to\mathtt{U}_q(\lambda_1,\lambda_2)\\ \mathtt{U}_q(x,y)\wedge p(x,x')\wedge p(y,y')\wedge q(x',y') & \to\mathtt{U}_q(x',y')\\ \mathtt{U}_q(\lambda_3,\lambda_4) & \to \mathsf{hit} \end{align*} One might assume that the following \mqlang\xspace is equivalent: \begin{align*} q(\lambda_1,\lambda_2) & \to\mathtt{U}_1(\lambda_1)\\ q(\lambda_1,\lambda_2) & \to\mathtt{U}_2(\lambda_2)\\ \mathtt{U}_1(x)\wedge \mathtt{U}_2(y)\wedge p(x,x')\wedge p(y,y')\wedge q(x',y') & \to\mathtt{U}_1(x')\\ \mathtt{U}_1(x)\wedge \mathtt{U}_2(y)\wedge p(x,x')\wedge p(y,y')\wedge q(x',y') & \to\mathtt{U}_2(y')\\ \mathtt{U}_1(\lambda_3)\wedge\mathtt{U}_2(\lambda_4) & \to \mathsf{hit} \end{align*} However, the latter query also matches structures that are not ladders. For example, the following database yields the answer $\tuple{a,b,c,d}$, although there is no corresponding ladder structure: $\{q(a,b),p(a,c),p(b,e),q(c,e),p(a,e'),p(b,d),q(e',d)\}$. One can extend the \mqlang\xspace to avoid this case, but any such fix is ``local'' in the sense that a sufficiently large ladder-like structure can trick the query. \end{example}% It has been shown that monadically defined queries can be expressed both in Datalog and in monadic second-order logic \cite{RK13:flagcheck}. While we lose the connection to monadic second-order logic with \gqlang\xspace{}s, the expressibility in Datalog remains. The encoding is based on the intuition that the choice of the candidate answers for $\vec{\lambda}$ ``contextualizes'' the inferences of the Datalog program. To express this without special constants, we can store this context information in predicates of suitably increased arity.
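The spurious answer in Example~\ref{ex_gqvsmq} can be checked mechanically by computing the least fixpoints of both rule sets over the database given there. The following Python sketch is our own illustration (with \texttt{e2} standing for $e'$):

```python
# Database of Example ex_gqvsmq; "e2" stands for e'.
p = {("a", "c"), ("b", "e"), ("a", "e2"), ("b", "d")}
q = {("a", "b"), ("c", "e"), ("e2", "d")}

def gq_hits(l1, l2, l3, l4):
    """Least fixpoint of the binary (frontier-guarded) ladder query."""
    uq = {(l1, l2)} if (l1, l2) in q else set()
    changed = True
    while changed:
        changed = False
        for (x, y) in list(uq):
            for (x1, x2) in p:
                for (y1, y2) in p:
                    if (x1, y1) == (x, y) and (x2, y2) in q and (x2, y2) not in uq:
                        uq.add((x2, y2))
                        changed = True
    return (l3, l4) in uq

def mq_hits(l1, l2, l3, l4):
    """Least fixpoint of the monadic variant, which loses the pairing
    between the two chains."""
    u1 = {l1} if (l1, l2) in q else set()
    u2 = {l2} if (l1, l2) in q else set()
    changed = True
    while changed:
        changed = False
        for x in list(u1):
            for y in list(u2):
                for (x1, x2) in p:
                    for (y1, y2) in p:
                        if (x1, y1) == (x, y) and (x2, y2) in q:
                            if x2 not in u1:
                                u1.add(x2)
                                changed = True
                            if y2 not in u2:
                                u2.add(y2)
                                changed = True
    return l3 in u1 and l4 in u2
```

On this database the monadic query reports the answer $\tuple{a,b,c,d}$, while the guarded query correctly rejects it.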
\begin{example}\label{ex_gqvsmqdatalog} The $4$-ary \lingqlang\xspace of Example~\ref{ex_gqvsmq} can be expressed with the following Datalog query. For brevity, let $\vec{y}$ be the variable list $\tuple{y_1,y_2,y_3,y_4}$, which provides the context for the IDB facts we derive. \begin{align*} q(y_1,y_2) & \to\mathtt{U}^+_q(y_1,y_2,\vec{y})\\ \mathtt{U}^+_q(x,y,\vec{y})\wedge p(x,x')\wedge p(y,y')\wedge q(x',y') & \to\mathtt{U}^+_q(x',y',\vec{y})\\ \mathtt{U}^+_q(y_3,y_4,\vec{y}) & \to \mathsf{goal}(\vec{y}) \end{align*} This result is obtained by a straightforward extension of the translation algorithm for \mqlang\xspace{}s \cite{RK13:flagcheck}, which may not produce the most concise representation. Also note that the first rule in this program is not safe, since $y_3$ and $y_4$ occur in the head but not in the body. According to the semantics we defined, such variables can be bound to any element in the active domain of the given database instance (i.e., they behave as if bound by a unary $\mathsf{domain}$ predicate). \end{example}% This observation justifies that we consider \mqlang\xspace{}s, \gqlang\xspace{}s, etc.\ as Datalog fragments. It is worth noting that the translation does not change the number of IDB predicates in the body of rules, and thus preserves linearity. The relation to (linear) Datalog also yields some complexity results for query answering; we will discuss these at the end of the next section, after introducing nested variants of our query languages. \section{Nested Queries}\label{sec_nesting} Every query language gives rise to a nested language, where we allow nested queries to be used as if they were predicates. Sometimes, this does not lead to a new query language (like for \qlang{CQ}\xspace and \qlang{Dlog}\xspace), but often it affects complexities and/or expressiveness. It has been shown that both are increased when moving from \mqlang\xspace{}s to their nested variants \cite{RK13:flagcheck}.
We will see that nesting also has strong effects on the complexity of query containment. \begin{definition} We define $k$-nested FCPs inductively. A $1$-nested FCP is an FCP. A $k+1$-nested FCP is an FCP that may use $k$-nested FCPs of arity $m$ instead of predicate symbols of arity $m$ in rule bodies. The semantics of nested FCPs is immediate based on the extension of FCPs. A $k$-nested FCQ $P$ is of the form $\exists\vec{y}.\mathbb{P}(\vec{z})$, where $\mathbb{P}$ is a $k$-nested FCP of arity $|\vec{z}|$ and all variables in $\vec{y}$ occur in $\vec{z}$. A $k$-nested \gqlang\xspace query is a $k$-nested frontier-guarded FCQ. For the definition of \emph{frontier-guarded}, we still require EDB predicates in guards: subqueries cannot be guards. The language of $k$-nested \gqlang\xspace queries is denoted \kgq{k}; the language of arbitrarily nested \gqlang\xspace queries is denoted \nestedq{\gqlang}\xspace. Similarly, we define languages \kmq{k} and \nestedq{\mqlang}\xspace (monadic), \klinmq{k} and \nestedq{\linmqlang}\xspace (linear, monadic), and \klingq{k} and \nestedq{\lingqlang}\xspace (linear, frontier-guarded). \end{definition} Note that nested queries can use the same additional symbols (predicates and constants); this does not lead to any semantic interactions, however, as the interpretation of the special symbols is ``private'' to each query. To simplify notation, we assume that distinct (sub)queries always contain distinct special symbols. The relationships of the query languages we introduced here are summarized in Figure~\ref{fig_querycompl}, where upwards links denote increased expressiveness. An interesting observation that is represented in this figure is that linear Datalog is closed under nesting: \begin{theorem}\label{theo_lindatalognesting} \lindatalog = \nestedq{\linq{\qlang{Dlog}\xspace}}. \end{theorem} Another kind of nesting that does not add expressiveness is the nesting of FCQs in \qlang{UCQ}\xspace{}s. 
Indeed, it turns out that (nested) FCQs can internalize arbitrary conjunctions and disjunctions of FCQs (of the same nesting level). This even holds when restricting to linear rules. \begin{proposition}\label{prop_posfcqs} Let $P$ be a positive query, i.e., a Boolean expression of disjunctions and conjunctions, of \klinmq{k} queries with $k\geq 1$. Then there is a \klinmq{k} query $P'$ of size polynomial in $P$ that is equivalent to $P$. Analogous results hold when replacing \klinmq{k} by \kmq{k}, \kgq{k}, or \klingq{k} queries. \end{proposition} Query answering for \mqlang\xspace{}s has been shown to be \complclass{NP}-complete (combined complexity) and \complclass{P}-complete (data complexity). For \nestedq{\mqlang}\xspace, the combined complexity increases to \complclass{PSpace} while the data complexity remains the same. These results can be extended to frontier-guarded queries. We also note the query complexity for frontier-guarded Datalog, for which we are not aware of any published result. \begin{theorem}\label{theo_gqquerycomlpexity} The combined complexity of evaluating \gqlang\xspace queries over a database instance is \complclass{NP}-complete. The same holds for \gdatalog queries. The combined complexity of evaluating \nestedq{\gqlang}\xspace queries is \complclass{PSpace}-complete. The data complexity is \complclass{P}-complete for \gdatalog, \gqlang\xspace, and \nestedq{\gqlang}\xspace. \end{theorem} The lower bounds in the previous theorem are immediate from known results for monadically defined queries. In particular, the hardness proof for nested \mqlang\xspace{}s also shows that queries of a particular fixed nesting level can encode the validity problem for quantified Boolean formulae with a certain number of quantifier alternations; this explains why we show the combined complexity of \kmq{k} to be in the Polynomial Hierarchy in Figure~\ref{fig_querycompl}.
A modification of this hardness proof from \cite{RK13:flagcheck} allows us to obtain the same results for the combined complexities in the linear cases; matching upper bounds follow from Theorem~\ref{theo_gqquerycomlpexity}. \begin{theorem}\label{theo_linquerycomlpexity} The combined complexity of evaluating \linmqlang\xspace queries over a database instance is \complclass{NP}-complete. The same holds for \lingdatalog{} and \lingqlang\xspace. The combined complexity of evaluating \nestedq{\linmqlang}\xspace queries is \complclass{PSpace}-complete. The same holds for \nestedq{\lingqlang}\xspace. The data complexity is \complclass{NLogSpace}-complete for all of these query languages. \end{theorem} \subsection{Related Work} \paragraph{Datalog} Datalog is the classical language for defining recursive queries over relational databases. The static analysis of this language and its fragments, in particular the problem of containment of two programs, has been studied in depth for more than two decades. Unfortunately, the containment problem for two datalog programs is undecidable~\cite{Shm87}. Two main restrictions that make the containment problem decidable have been studied: monadic datalog and nonrecursive datalog. {\bf Monadic Datalog} A monadic datalog program is a program containing only unary intentional predicates. The containment problem for monadic datalog programs is \complclass{2ExpTime}-complete. The upper bound is well known and proven in \cite{Cosmadakis88}. The hardness was recently proven in \cite{BenediktBS12}. Finally, the containment of a datalog program in a monadic datalog program is also decidable. It is a straightforward application\footnote{We thank Michael Benedikt for pointing out this result to us.} of Theorem 5.5 of \cite{Courcelle91}. However, the tight bounds for this last result are not known.
{\bf Non-recursive datalog programs and unions of conjunctive queries} A nonrecursive datalog program has no recursion and is equivalent to a union of conjunctive queries. The problem of containment of a datalog program in a union of conjunctive queries is \complclass{2ExpTime}-complete~\cite{ChaudhuriV97}. Due to the succinctness of nonrecursive datalog compared to unions of conjunctive queries, the problem of containment of a datalog program in a nonrecursive datalog program is \complclass{3ExpTime}-complete~\cite{ChaudhuriV97}. Some restrictions that decrease the complexity of these problems have been considered in \cite{ChaudhuriV94} and \cite{ChaudhuriV97}. A linear datalog program has rules whose bodies contain at most one intentional predicate. The problem of containment of a linear datalog program in a union of conjunctive queries is \complclass{ExpSpace}-complete, and the complexity decreases to \complclass{PSpace} when the linear datalog program is monadic. Intuitively, the techniques used to prove the upper bounds in the previous results are based on reductions to the containment problem for tree automata in the general case, and for word automata when the datalog programs are linear. The key ideas are the following: \begin{itemize} \item If there exists a witness showing that a program $P_1$ is not contained in another program $P_2$, then there exists a witness with a tree-like structure, obtained from a proof tree of $P_1$. \item Proof trees can be abstracted to trees, and the set of these abstractions is regular; we denote it by $A_1$. The satisfaction of $P_2$ over a proof tree can be reduced to the acceptance of the abstraction of the proof tree by a tree automaton built from $P_2$.
\end{itemize} {\bf Extensions of monadic datalog} Different proposals have been made to extend the previous results for monadic programs: adding inequality predicates, extending to more general intentional predicates, and queries based on monadic datalog programs. {\bf Inequalities} First, \cite{LevyMSS93} extends monadic datalog by allowing inequality predicates or negated predicates in the bodies of rules. Unfortunately, this extension leads to the undecidability of the containment problem for this language. This shows that the closure of witnesses of non-containment under homomorphisms\footnote{I.e., if $I$ is a witness of the non-containment of $P_1$ in $P_2$, $I'$ satisfies $P_1$, and there exists a homomorphism from $I'$ to $I$, then $I'$ is also a witness of the non-containment of $P_1$ in $P_2$.} is a key feature for the decidability of the containment of fragments of Datalog. {\bf Guarded Datalog} A Guarded Datalog program allows the use of intentional predicates with unrestricted arities; however, for each rule, the variables of the head must jointly appear in a single extensional atom of the body. It was introduced in \cite{BaranyCO12}, where the containment of Guarded Datalog programs is proven \complclass{2ExpTime}-complete. As noted in \cite{BaranyCO12}, a monadic datalog program can be rewritten into a guarded datalog program, as the only variable appearing in the head of a rule of a monadic datalog program can always be guarded. Guarded Datalog can be extended to Guarded Negation Datalog by also allowing guarded negated atoms, as explained previously. The containment of Guarded Negation Datalog programs remains \complclass{2ExpTime}-complete. These results are based on the decidability of the satisfiability of the guarded negation fixed point logic introduced and studied in \cite{BaranyCS11}.
{\bf Nested monadically defined queries} These languages were introduced in \cite{RK13:flagcheck}. Intuitively, a monadically defined query $Q$ is defined by a set of free variables $X$ and a monadic datalog program $P$ in which these variables appear. $Q$ is satisfied over an instance $I$ iff there exists a valuation $\nu$ of the free variables by values of $I$ such that the monadic datalog program $P_{\nu}$ is satisfied by $I$. Here, $P_{\nu}$ is obtained from $P$ by instantiating the free variables in the rules of $P$ with the mapped values, which are then treated as constants of the program $P_{\nu}$. Nested monadically defined queries are defined like monadically defined queries, but additionally allow the unrestricted use of relations defined by nested monadically defined programs. For both languages, the containment of programs is decidable \cite{RK13:flagcheck}; the proof is an application of Theorem 5.5 of \cite{Courcelle91}. \paragraph{Queries over graphs} Graphs are relational databases with relations of arity two. A query language based on regular languages, conjunctive two-way regular path queries (\qlang{C2RPQ}\xspace{}s), has been studied in depth for over a decade. Intuitively, a \qlang{C2RPQ}\xspace is a conjunction of atoms of the form $xLy$, where $L$ is a two-way regular language. A pair of nodes $(n_1,n_2)$ is a valuation of the pair $(x,y)$ iff there exists a path between $n_1$ and $n_2$ accepted by $L$. The containment of queries in this language is shown \complclass{ExpSpace}-complete in \cite{regularpathqueries1,CalvaneseGLV03,AbiteboulV99,Deutsch2001}. The containment of a datalog program in a \qlang{C2RPQ}\xspace is \complclass{2ExpTime}-complete \cite{CalvaneseGV05}. Following \cite{RK13:flagcheck}, \qlang{C2RPQ}\xspace{}s are expressible as monadically defined queries. Recently, a notion of nested regular path queries was introduced in \cite{BarceloLLW12}. This language and \qlang{C2RPQ}\xspace{}s are incomparable even though both are based on regular paths.
The containment of nested regular path queries is in \complclass{PSpace}~\cite{Reutter13}. \section{Tree Automata} We use standard definitions for two-way alternating tree automata as introduced in \cite{Cosmadakis88}. A regular (one-way, non-alternating) tree automaton is obtained by restricting this definition. Tree automata run over ranked, labelled trees of some maximal arity (out-degree) $f$. A ranked tree can be seen as a function $t$ mapping sequences of positive natural numbers (encoding nodes in the tree) to symbols from a fixed finite alphabet (the labels of each node). Each letter of the alphabet is ranked, i.e., associated with an arity that defines how many child nodes a node labeled with this symbol should have. The domain of $t$, denoted $\mathrm{Nodes}(t)$, satisfies the following closure property: if $\sq{i}\cdot j\in\mathrm{Nodes}(t)$, then $\sq{i}\in\mathrm{Nodes}(t)$ and $\sq{i}\cdot k\in\mathrm{Nodes}(t)$ for all $1\le k\le j$. Given a ranked tree $t$, we write $\sq{i}\in\mathrm{Nodes}(t)$ to denote an arbitrary node of $t$ and $t(\sq{i})$ to denote the label of $\sq{i}$ in $t$. We denote by $\mathrm{Trees}(\Sigma)$ the set of trees over the alphabet $\Sigma$. A two-way alternating tree automaton $\aautomaton$ is a tuple $\tuple{\Sigma,Q,Q_s,\delta,Q_e}$ where \begin{itemize} \item $\Sigma$ is a tree alphabet; \item $Q$ is a set of states; \item $Q_s\subseteq Q$ is the set of initial states; \item $Q_e\subseteq Q$ is the set of accepting states; \item $\delta$ is a transition function on $Q \times \Sigma$: let $q\in Q$ be a state and $\sigma\in\Sigma$ be a letter of arity $\ell$; then $\delta(q,\sigma)$ is a positive boolean combination of elements in $\{-1,0,1,\ldots,\ell\}\times Q$. \end{itemize} The numbers used in transitions encode directions, where $-1$ means moving up to the parent and $0$ means staying at the current node.
For example, $\delta(q,\sigma)=(\tuple{1,s_1} \wedge \tuple{1,s_2}) \vee (\tuple{-1,s_3} \wedge \tuple{2,s_4})$ is a possible transition for a state $q$ and a node labeled $\sigma$: a node labeled by $\sigma$ can be in the state $q$ iff its first child can be in the states $s_1$ and $s_2$, or its parent and its second child can be in the states $s_3$ and $s_4$, respectively. Let $t$ be a tree over $\Sigma$. A run $\tau$ of $\aautomaton$ over $t$ is a tree labeled by elements of $Q \times \{-1,0,1,\ldots,f\} \times (\mathrm{Nodes}(t)\cup \{-1\})$. $\tau$ satisfies the following properties: \begin{itemize} \item $\tau$ is finite. \item The root of $\tau$ is labelled by $(q_0,0,r)$, where $q_0$ is in $Q_s$ and $r$ is the root of $t$. \item If a node $v$ is labelled by $(q,i,n)$ and $n$ is not a node of $t$, then $v$ is a leaf of $\tau$. \item If a node $v$ is labelled by $(q,i,n)$, where $n$ is a node of $t$ labelled by a symbol of arity $\ell$, and a child $v'$ of $v$ is labelled by $(q_1,j,n')$, then \begin{itemize} \item if $j=-1$, then there exists $u$ such that $n = n'\cdot u$, i.e., $n'$ is the parent of $n$; \item if $j=0$, then $n' = n$; \item if $1 \leq j \leq \ell$, then $n'= n\cdot j$. \end{itemize} \item If a node $v$ is labelled by $(q,i,n)$, where $n\in\mathrm{Nodes}(t)$ is labelled by $\sigma$, and the children of $v$ are labelled by $(q_1,j_1,n_1), \ldots, (q_k,j_k,n_k)$, then $\delta(q,\sigma)$ is satisfied when interpreting the symbols $\{\tuple{j_1,q_1}, \ldots, \tuple{j_k,q_k}\}$ as \emph{true} and all other symbols as \emph{false}. \end{itemize} $\tau$ is \emph{valid} iff, for each leaf of $\tau$ labelled by $(q,i,n)$, $q$ is in $Q_e$. $\aautomaton$ \emph{accepts} a tree $t$ if there exists a valid run of $\aautomaton$ over $t$. We denote by $\mathrm{Trees}(\aautomaton)$ the set of trees accepted by $\aautomaton$.
A regular (one-way, non-alternating) tree automaton is a 2-way alternating tree automaton where all transitions for a symbol $\sigma$ of rank $\ell$ are boolean formulae of the form $(\tuple{1,q_{11}}\wedge\ldots\wedge\tuple{\ell,q_{\ell1}})\vee\ldots\vee(\tuple{1,q_{1n}}\wedge\ldots\wedge\tuple{\ell,q_{\ell{}n}})$ for some $n\geq 0$. In particular, directions $0$ and $-1$ do not occur. In this case, we can represent transitions as sets of lists of states $\{\tuple{q_{11},\ldots,q_{\ell1}},\ldots,\tuple{q_{1n},\ldots,q_{\ell{}n}}\}$. Finally, we recall two useful theorems from \cite{Cosmadakis88}. \begin{theorem}[Theorem A.1 of \cite{Cosmadakis88}] Let $\aautomaton$ be a two-way alternating automaton. Then there exists a tree automaton $\aautomaton'$ whose size is exponential in the size of $\aautomaton$ such that $\mathrm{Trees}(\aautomaton') = \mathrm{Trees}(\Sigma)\setminus\mathrm{Trees}(\aautomaton)$. \end{theorem} \begin{theorem}[Theorem A.2 of \cite{Cosmadakis88}] Let $\aautomaton$ be a two-way alternating automaton. Then there exists a tree automaton $\aautomaton'$ whose size is exponential in the size of $\aautomaton$ such that $\mathrm{Trees}(\aautomaton') = \mathrm{Trees}(\aautomaton)$. \end{theorem}
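Acceptance for such regular (one-way) tree automata can be decided by a direct top-down recursion. The following Python sketch is our own illustration: trees are nested tuples $(\sigma, c_1, \ldots, c_\ell)$, each transition maps $(q, \sigma, \ell)$ to its set of disjuncts (tuples of child states), and the empty tuple plays the role of the $Q_e$ condition at nullary symbols.

```python
def accepts(tree, delta, initial_states):
    """Top-down membership test for a regular (one-way) tree automaton.

    tree:  nested tuples (symbol, child_1, ..., child_l).
    delta: maps (state, symbol, arity) to a set of tuples of child
           states (the disjuncts of the transition formula); for
           arity 0, the empty tuple () signals acceptance at a leaf.
    """
    def run(state, node):
        symbol, children = node[0], node[1:]
        options = delta.get((state, symbol, len(children)), set())
        return any(all(run(s, c) for s, c in zip(states, children))
                   for states in options)
    return any(run(q0, tree) for q0 in initial_states)

# Toy automaton accepting binary f-trees whose leaves are all labeled a.
DELTA = {('qa', 'f', 2): {('qa', 'qa')},
         ('qa', 'a', 0): {()}}
```

For instance, the tree $f(a,a)$ is accepted by the toy automaton, while $f(a,b)$ is not.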
\section{Background and Related Work} \subsection{Secure Aggregation} \emph{Secure aggregation} protocols are secure multiparty computation (MPC)~\cite{evans2017pragmatic} protocols that allow a set of clients to work with a central server to aggregate their secret inputs, revealing only the final aggregated result. Secure aggregation protocols have been developed that are robust against both a corrupt central server and some fraction of corrupt clients, in both the semi-honest and malicious settings. The first scalable (1000 parties or more) secure aggregation protocol is due to Bonawitz et al.~\cite{bonawitz2017aggregation}. In the Bonawitz protocol, each party generates a \emph{mask} to obscure their input, and submits the masked input to the server. The clients then perform pairwise aggregation of their masks, and send the final aggregated masks to the server. Finally, the server uses the aggregated masks to reveal the sum of the inputs. The primary communication cost in this protocol comes from the pairwise aggregation of masks, which is linear in the number of participating clients. Bell et al.~\cite{bell_paper} improve the communication cost of the Bonawitz approach by layering an additional protocol on top of it. The Bell protocol prunes the communication graph of the Bonawitz protocol such that each of the $n$ clients communicates with $\log{n}$ other clients, and runs the Bonawitz protocol using this graph---reducing communication cost to be logarithmic in the number of clients. A complete comparison of asymptotic costs appears in Table~\ref{tab:complex}, for both existing protocols and our new approach. 
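The mask-cancellation idea at the core of these protocols can be sketched in a few lines of Python. This is a toy illustration of the arithmetic only (function names are ours); actual protocols derive the pairwise masks from agreed keys and secret-share them so that dropped-out clients can be handled:

```python
import random

def masked_inputs(inputs, modulus=2**32):
    """Toy illustration of pairwise mask cancellation: client i adds
    m_ij for each j > i and subtracts m_ji for each j < i, so every
    mask appears once with each sign and cancels in the sum."""
    n = len(inputs)
    masks = {(i, j): random.randrange(modulus)
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, x in enumerate(inputs):
        m = sum(masks[(i, j)] for j in range(i + 1, n)) \
            - sum(masks[(j, i)] for j in range(i))
        masked.append((x + m) % modulus)
    return masked
```

The server sees only the masked values, yet their sum modulo the ring size equals the true sum of the inputs.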
\paragraph{Our Contribution.} Our novel protocol improves on previous work in three primary ways: (1) we achieve similar asymptotic complexity to Bell et al.~\cite{bell_paper} for the client, and improved complexity for the server; (2) our approach has significantly better \emph{concrete} communication and computation costs compared to previous work; (3) our approach is \emph{orders-of-magnitude} faster than previous work at handling dropouts during aggregation. \subsection{Secret Sharing} Our approach makes extensive use of \emph{threshold secret sharing}. A $(t, n)$-secret sharing scheme splits a secret into $n$ \emph{shares} such that at least $t$ shares are required to reconstruct the secret. Our approach requires a threshold secret sharing scheme with the following properties: \begin{itemize}[leftmargin=12pt, itemsep=5pt] \item $\texttt{share}(t, n, s)$: breaks secret $s$ into $n$ secret shares that can reconstruct $s$ with any subset of at least $t$ shares. \item $\texttt{reconstruct}$: accepts a set of secret shares $[s]$ as input and attempts to reconstruct secret $s$. \item $\forall a, b: [a] + [b] = [a + b]$ (additive homomorphism). \end{itemize} We use Shamir's secret sharing scheme~\cite{shamir1979share}, which satisfies the above requirements. Our implementation uses \emph{packed} Shamir secret sharing~\cite{franklinyung}, also known as batched secret sharing~\cite{baron2015communication}, which speeds up the sharing of multiple values at a time. As we will prove in Section~\ref{sec:protocol-privacy}, the security of \ensuremath{\mathrm{SHARD}}\xspace is based on the security guarantee of our secret sharing scheme. If the secret sharing scheme is secure in the malicious setting, so is \ensuremath{\mathrm{SHARD}}\xspace. We use a reconstruction scheme similar to Benaloh's~\cite{benaloh86} to ensure security in the malicious model.
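To make the required interface concrete, the following Python sketch implements plain (unpacked) Shamir sharing over a toy prime field and exhibits the additive homomorphism $[a] + [b] = [a + b]$. It is a semi-honest illustration only, omitting the packed sharing and the Benaloh-style verified reconstruction used by \ensuremath{\mathrm{SHARD}}\xspace:

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime serving as a toy field modulus

def share(t, n, secret):
    """Shamir (t, n) sharing: a random polynomial of degree t - 1 with
    constant term `secret`, evaluated at x = 1..n."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any >= t shares."""
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % PRIME
                den = den * (x_i - x_j) % PRIME
        secret = (secret + y_i * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Adding shares pointwise yields shares of the sum: reconstructing from pointwise sums of shares of $10$ and $32$ returns $42$, which is the property used to aggregate secret inputs.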
\subsection{The Hypergeometric Distribution} The hypergeometric distribution models the process of sampling objects from a population without replacement. $HyperGeom(t, n, m, k)$ is the probability of drawing $t$ successes out of $k$ draws from a population of size $n$ which contains $m$ successes. We use the hypergeometric distribution to model the probability that a subset of our federation will or will not be secure and correct. \subsection{Applications of Secure Aggregation} The target application for the secure aggregation protocol of Bonawitz et al.~\cite{bonawitz2017aggregation} was \emph{federated learning}~\cite{kairouz2019advances}, a distributed approach to machine learning. Secure aggregation is particularly useful as a component in systems for \emph{privacy-preserving deep learning}, in which clients use their sensitive data to locally compute updates for a centralized model. A single client's update may reveal that client's sensitive data, but secure aggregation protocols can be used to aggregate the updates for learning without revealing any single client's information. In this context, secure aggregation protocols operate on gradients or model updates represented by large vectors (containing hundreds of thousands to hundreds of millions of elements). To prevent even the information leakage of aggregated updates, secure aggregation has been combined with \emph{differential privacy}~\cite{dwork2014algorithmic} to enable differentially private federated learning~\cite{kairouz2021distributed, truex2019hybrid}. Differential privacy requires the addition of random noise to ensure privacy; when the central server is trusted, it can be responsible for adding the noise. In our setting of a potentially untrusted server, each of the clients can add enough noise that the aggregated results satisfy differential privacy (as described by Kairouz et al.~\cite{kairouz2021distributed}).
The combination of scalable secure aggregation protocols with differential privacy allows for a stronger privacy guarantee than either technique by itself. Outside of federated learning, the values being aggregated are typically smaller. Differentially private analytics systems like Honeycrisp~\cite{roth2019honeycrisp}, Orchard~\cite{roth2020orchard}, and Crypt$\epsilon$~\cite{roy2020crypt} use specialized protocols for lower-dimensional data in order to scale to millions of participants, and generally require some trust in the server. Our \ensuremath{\mathrm{SHARD}}\xspace protocol has the potential to replace these specialized approaches and provide a stronger threat model, due to its ability to scale to hundreds of millions of clients. \subsection{MPC for Machine Learning} A plethora of MPC protocols have been proposed for efficient federated learning. Many of these protocols are designed in a different threat model than \ensuremath{\mathrm{SHARD}}\xspace. Several take advantage of a semi-honest server~\cite{truex2019hybrid}, or use two non-colluding servers~\cite{ryffel2020ariann, davidson2021star, jayaraman2021revisiting}. Secure aggregation protocols~\cite{bell_paper, bonawitz2017aggregation} also leverage MPC techniques, and can be applied to federated learning. Applications of MPC for federated learning tend to use smaller federations than those described in this work~\cite{byrd2020differentially, xu2019hybridalpha, li2021privacy}. \subsection{Generic MPC} MPC protocols can implement any function through arithmetic or boolean circuits~\cite{yao1986generate, bgw, gmw, bmr, spdz}. These generic MPC protocols work well in the two-party setting, in both semi-honest and malicious settings, and tend to be optimized for circuit depth. While some of these protocols can extend to handling hundreds of users, they require a fully connected communication graph and do not scale to the large federations studied in this work.
\section{Conclusion}\label{sec:con} We propose a new highly scalable secure aggregation protocol, \emph{\ensuremath{\mathrm{SHARD}}\xspace}, with much better performance compared to prior work \cite{bell_paper} in settings with small vectors or many dropped out parties.
\ensuremath{\mathrm{SHARD}}\xspace scales gracefully to accommodate hundreds of millions of parties while requiring only hundreds of connections per party in the vast majority of settings. Defense against malicious adversaries requires little modification of the protocol, and does not substantially affect communication or computation costs: we simply require one additional share per group, and perform the reconstruction twice. Our empirical results show that \ensuremath{\mathrm{SHARD}}\xspace can aggregate over very large federations with a small computational cost. Small vector secure aggregation protocols have applications in distributed data analytics as well as smaller machine learning models. Histograms, random forests, logistic regression, and small neural networks would all benefit from protocols enabling short vector aggregation~\cite{fedforests,fedlog}. Thus our technology has potentially broad applications. Our experiments suggest that $2$ shards per party is optimal for this protocol; however, tighter approximations of the probability of a security failure could suggest otherwise. Furthermore, more rounds of sharding open the possibility of packed secret sharing within the sharding round, and a protocol that better supports wider vectors. Investigating these threads is future work. \section{Evaluation} This section evaluates the concrete performance of \ensuremath{\mathrm{SHARD}}\xspace with respect to communication and computation. Through a series of experiments, we will answer the following research questions: \begin{enumerate} \item[\textbf{RQ1}] How does \ensuremath{\mathrm{SHARD}}\xspace scale to large federations? \item[\textbf{RQ2}] How does \ensuremath{\mathrm{SHARD}}\xspace handle vector length? \item[\textbf{RQ3}] In practice, what are the computational demands of \ensuremath{\mathrm{SHARD}}\xspace?
\end{enumerate} \paragraph{Implementation.} For our experiments, we implemented a simulation of \ensuremath{\mathrm{SHARD}}\xspace in Python, using numpy to perform field arithmetic. We implemented packed Shamir secret sharing based on~\cite{dahl_2017}. The code used in our experiments is available as open source on GitHub.\footnote{Redacted for review} \paragraph{Comparison to Previous Work.} Our comparisons to the protocols of Bonawitz et al.~\cite{bonawitz2017aggregation} and Bell et al.~\cite{bell_paper} are based on concrete results given in their papers, or calculated based on analytical bounds they give (e.g., the expansion factor and the number of required neighbors). \subsection{Communication Performance} To answer \textbf{RQ1}, we calculate the communication cost per client for various federation configurations and assumptions. These configurations align closely with those tested by~\cite{bell_paper} in order to provide a clear comparison between our approaches. Federation sizes range from $1000$ to $100,000,000$ parties in these experiments. Figures~\ref{fig:neighbors_dp},~\ref{fig:expansion_dp},~\ref{fig:server_time}, and~\ref{fig:client_time} reference semi-honest and malicious threat models. We note that the threat model of \ensuremath{\mathrm{SHARD}}\xspace is dictated by the security of the secret sharing primitive. The primary difference between our semi-honest and malicious secure secret sharing primitives is that the malicious-secure primitive uses a slower reconstruction technique and requires one more share per reconstruction. Each configuration is determined by the parameters described in Table~\ref{tab:vars}: $\sigma$, $\eta$, $\delta$, $\gamma$, $k$, and the federation size. We used a modified binary search to determine the group size and threshold that would appropriately satisfy the constraints formed by the fixed parameters applied to the probability formulas defined in Section~\ref{sec:prob}.
Because each party participates in two groups, one for each shard, the total number of neighbors is simply twice the group size. Figure~\ref{fig:neighbors_dp} displays these results for Protocol~\ref{prot:shard}. We see the expected $O(\log{n})$ trend with respect to the number of neighbors required. Both protocols require a comparable number of neighbors to Bell et al.~\cite{bell_paper}, and substantially fewer shares than the na\"ive approach. Notably, using the malicious protocol has very little effect on the communication complexity. To answer \textbf{RQ2}, we evaluate the expansion factor of Protocol~\ref{prot:shard} using packed secret sharing for group-level aggregations. A scalable protocol with respect to vector size will have small expansion factors. Expansion factor measures the amount of communication required for a protocol as a multiple of the required communication for the ideal functionality. In our case, the expansion factor is: \[EX = \left( \frac{\mathit{num\_neighbors}}{k} \right) \cdot \log(\mathit{field\_size})\] Figure~\ref{fig:expansion_dp} contains the results. The results show that the expansion factor depends on the level of robustness against dropouts and malicious clients, but is consistent across federation sizes. \paragraph{Comparison with Bell et al.~\cite{bell_paper}.} Figure~\ref{fig:expansion_bell} compares the expansion factor of \ensuremath{\mathrm{SHARD}}\xspace against the protocol of Bell et al.~\cite{bell_paper}. The amortized number of shares required to represent each value is relatively small considering that we secret share the entire vector. Packing is especially useful in cases where the expected number of dropouts is low. We calculate the expansion factor for Bell et al.'s protocol based on the formula in~\cite{bonawitz2017aggregation}, but replacing the federation size with the number of neighbors to reflect the optimized communication graph. With small vectors, our protocol provides a substantially smaller expansion factor.
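For example, a back-of-the-envelope computation of this quantity (all numbers here are hypothetical, and \texttt{field\_bits} stands for $\log(\mathit{field\_size})$):

```python
def expansion_factor(num_neighbors, k, field_bits):
    # Bits sent per private input element: one field element to each
    # neighbor, amortized over the k values packed into every sharing.
    return (num_neighbors / k) * field_bits

# e.g. 100 neighbors, 20 packed values per sharing, a 61-bit field
ex = expansion_factor(100, 20, 61)  # 305.0 bits per input element
```

Increasing the packing parameter $k$ directly divides the expansion factor, which is why packing matters most for short vectors.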
Our protocol's expansion factor remains constant or monotonically decreases as vector size increases. For very large vectors (100k+ elements), prior work~\cite{bell_paper, bonawitz2017aggregation} provides a smaller expansion factor. \subsection{Computation Performance} In this section we answer \textbf{RQ3} by simulating \ensuremath{\mathrm{SHARD}}\xspace on large federations and reporting client and server computation performance. We implement our protocol in Python and run simulations in a single thread on an AWS z1d.2xlarge instance with 64~GB of memory~\cite{aws}. Our timing experiments are designed to compare our concrete computation performance with that of prior work. Following the experimental designs of Bonawitz et al.~\cite{bonawitz2017aggregation}, we ignore communication latency and throughput in our experiments. Figures~\ref{fig:server_time} and~\ref{fig:client_time} respectively plot the server and client computation times for Protocol~\ref{prot:shard}. Even for a federation of $100,000,000$ parties, aggregating $100$ values per client requires less than a second of server computation and less than a tenth of a second of computation per client for all corruption and dropout assumptions we examined. Provided that dropouts do not increase beyond the assumption made when configuring parameters, the protocol will achieve the correct result with no additional computational cost. In order to demonstrate the impact of handling dropouts without additional computation, we partially simulate the use of our protocol to aggregate large vectors and compare with the concrete results presented in Bonawitz et al.~\cite{bonawitz2017aggregation}. To aggregate a vector with $100,000$ elements, we simply repeat the protocol with $k = 100$ for $1000$ iterations.
We simulate a subset of groups and use dummy values for the remainder of the group inputs to the server, in order to reduce the total simulation time and the effect of memory pressure on client-level simulations. \paragraph{Comparison with Bell et al.~\cite{bell_paper}.} The work of Bell et al.~\cite{bell_paper} substantially improves on the number of neighbors required of each client, so we run our protocol on substantially larger federations than the ones considered in~\cite{bonawitz2017aggregation} to create a fairer comparison to Bell et al. Our results are included in Table~\ref{tab:scale_time}. While the sharding approach does not perform as well in the case with no dropouts, adding a few dropouts drastically harms the server computation time of the masking-based approach. These dropout counts are substantial for the smaller federations used in~\cite{bonawitz2017aggregation}, but are far more realistic for the much larger federations we consider here, where they amount to well under $1\%$ of clients. For this comparison, our approach tolerates $5\%$ dropouts, so we could potentially further increase the number of dropouts at no cost to the sharding approach. \section{Introduction} Efficient \emph{secure aggregation} protocols allow distributed data owners (\emph{clients}) to aggregate secret inputs, revealing only the aggregated output to a (possibly untrusted) server. Secure aggregation protocols can be used to build privacy-preserving distributed systems, including systems for data analytics~\cite{roth2019honeycrisp} and federated machine learning~\cite{kairouz2019advances, kairouz2021distributed}. The state-of-the-art large vector aggregation protocol~\cite{bonawitz2017aggregation} leverages \emph{masks}---one-time pads created with shared random seeds---to encrypt and decrypt the vectors. This reduces communication among parties substantially.
Bell et al.~\cite{bell_paper} further reduce communication cost by circumventing the need for a complete communication graph. Rather than sharing a random seed with every other party, each party shares seeds with only $O(\log{n})$ neighbors. However, masking-based protocols incur significant communication overhead for short vectors. For a vector of size 100, the Bonawitz protocol results in an \emph{expansion factor} equal to the number of neighbors per party. Expansion factor measures the client communication cost relative to the size of their private inputs. Such a large expansion factor implies that masking protocols provide little to no benefit over the na\"ive solution for small vectors. In the case of dropouts, both protocols undergo a costly unmasking procedure that takes several minutes of server computation time. In this paper, we propose \ensuremath{\mathrm{SHARD}}\xspace, a highly scalable secure aggregation protocol with dropout robustness. \ensuremath{\mathrm{SHARD}}\xspace is the first protocol with sublinear communication complexity to handle dropouts without a recovery communication phase. Table~\ref{tab:complex} presents the computation and communication complexity of \ensuremath{\mathrm{SHARD}}\xspace along with those of the current state-of-the-art for large federation secure aggregation.
\begin{table*}[t] \begin{center} \begin{tabular}{|c||c|c|c|} \hline \textit{Cost} & \textit{Bonawitz et al.~\cite{bonawitz2017aggregation}} & \textit{Bell et al.~\cite{bell_paper}} & \textit{\ensuremath{\mathrm{SHARD}}\xspace (ours)} \\ \hline \hline Client Communication & $O(n + l)$ & $O(\log n + l)$ & $O(l\log n)$ \\ \hline Client Computation & $O(n^2 + nl)$ & $O(\log^2n + l\log n)$ & $O(l\log^2n)$ \\ \hline Server Communication & $O(n^2 + nl)$ & $O(n\log n + nl)$ & $O(ln)$ \\ \hline Server Computation & $O(ln^2)$ & $O(n\log^2n + nl\log n)$ & $O(ln)$ \\ \hline \end{tabular} \end{center} \caption{Communication and computation complexities of \ensuremath{\mathrm{SHARD}}\xspace compared with the state of the art, for $n$ parties aggregating vectors of size $l$.} \label{tab:complex} \end{table*} We start with a natural approach to reducing communication complexity: $n$ clients organize into groups of size $O(\log{n})$, aggregate within their groups, and reveal the group's sum to the server. Unfortunately, this approach reveals each group's sum to the server, and the sum of inputs within a small group reveals much more information than the total sum over all $n$ clients. Our approach addresses this problem via \emph{sharding}. Sharding is a technique borrowed from distributed databases~\cite{CorbettShard, GlendenningShard, MegastoreShard} and scalable blockchains~\cite{LuuShard} where information is fragmented into pieces (called \emph{shards}) to enhance a desired property (in our case, security). In our \ensuremath{\mathrm{SHARD}}\xspace protocol, each client splits their input into $m \geq 2$ shards, such that each shard in isolation reveals nothing about the input. For shard number $i$, the clients organize into groups of size $O(\log{n})$, sum their $i$th shards using a simple secure aggregation protocol, and reveal the group's $i$th shard sum to the server.
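The scheme just described can be sketched end-to-end in a few lines of Python; for brevity this toy uses additive two-out-of-two shards and reveals per-group shard sums directly, standing in for the Shamir-based in-group aggregation used in the real protocol:

```python
import random

random.seed(7)
FIELD = 2**31 - 1
n = 8
inputs = [random.randrange(100) for _ in range(n)]

# Each client splits its input into two shards that sum to the input.
shard1 = [random.randrange(FIELD) for _ in range(n)]
shard2 = [(x - r) % FIELD for x, r in zip(inputs, shard1)]

# Crucially, the two shards are aggregated under *different* groupings.
groups1 = [[0, 1, 2, 3], [4, 5, 6, 7]]  # groups for shard 1
groups2 = [[0, 2, 4, 6], [1, 3, 5, 7]]  # groups for shard 2

group_sums = [sum(shard1[i] for i in g) % FIELD for g in groups1] \
           + [sum(shard2[i] for i in g) % FIELD for g in groups2]

# The server sees only per-group shard sums, each of which is uniformly
# random in isolation, yet adding all of them yields the true total.
total = sum(group_sums) % FIELD
assert total == sum(inputs) % FIELD
```

Each per-group shard sum is uniformly distributed on its own; only the combination of all groups across both shard rounds determines the total, which is exactly what the differing group assignments are meant to enforce.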
The key insight of \ensuremath{\mathrm{SHARD}}\xspace is that the sum of a group's $i$th shard reveals nothing about the sum of the original inputs, as long as \emph{different groups are used for each shard}. For $m$ shards, \ensuremath{\mathrm{SHARD}}\xspace requires each client to participate in $m$ instances of a simple secure aggregation protocol with only $O(\log{n})$ other clients, matching the communication complexity of the state-of-the-art protocol~\cite{bell_paper}. In most cases, $m=2$ provides sufficient security. Because it is based on threshold secret sharing, \ensuremath{\mathrm{SHARD}}\xspace is robust to dropouts, up to the minimum threshold of shares required to reconstruct the output. In addition to complexity analysis, our formal results include malicious security of \ensuremath{\mathrm{SHARD}}\xspace in a real-ideal model. We have also implemented \ensuremath{\mathrm{SHARD}}\xspace and performed an empirical evaluation of its performance, demonstrating the concrete efficiency of our approach: the computation time for both client and server is less than 100ms, even for federations of size 100 million. \ensuremath{\mathrm{SHARD}}\xspace also provides a significant improvement in concrete communication cost compared to Bell et al.~\cite{bell_paper}, as measured by expansion factor---especially for small private inputs. Moreover, in the presence of dropouts, our approach provides orders-of-magnitude improvement in performance over previous work. \subsection{Contributions} \noindent In summary, we make the following contributions: \begin{enumerate}[leftmargin=20pt] \item We propose a novel scalable secure aggregation protocol, based on layered secret sharing, with improved concrete computation and communication costs compared to previous work (including an orders-of-magnitude improvement in the presence of dropouts).
\item We prove malicious security of $\ensuremath{\mathrm{SHARD}}\xspace$ in the real-ideal model with modifications both to reflect dropout resistance and to support messaging efficiency in large network settings. \item We implement our approach and conduct an experimental evaluation demonstrating its concrete efficiency. \end{enumerate} \section{Setting Parameters} \label{sec:prob} Our security proofs for \ensuremath{\mathrm{SHARD}}\xspace assume that the group size and reconstruction threshold are selected appropriately to guarantee security. This section reasons about the appropriate parameters given specifications about the aggregation environment. All parameters involved in determining security are listed in Table~\ref{tab:vars}. Above the double line are the parameters comprising our configuration; the protocol administrator selects these parameters to suit their needs. Below the double line are $g$, $t$, and $k$: the parameters we feed directly to the protocol. \begin{table} \begin{tabular}{|c|p{5cm}|} \hline Parameter & Description \\ \hline \hline $\sigma$ & $1 - 2^{-\sigma}$ is the probability of secure protocol execution. \\ \hline $\eta$ & $1 - 2^{-\eta}$ is the probability of correct protocol execution. \\ \hline $\gamma$ & the fraction of the federation that is corrupt. \\ \hline $\delta$ & the fraction of the federation that will drop out. \\ \hline $f$ & the number of clients in the federation. \\ \hline \hline $g$ & the number of clients in each group. \\ \hline $t$ & the reconstruction threshold in each group. \\ \hline $k$ & the number of values to be shared at once. \\ \hline \end{tabular} \caption{Independent and dependent variables for ensuring protocol security.} \label{tab:vars} \end{table} We would like to set $g$ and $t$ to ensure that two events do not happen. \begin{enumerate} \item A group is corrupted ($t$ or more corrupt clients). \item A group cannot reconstruct its sum (more than $g - t - k$ dropouts).
\end{enumerate} We can guarantee with absolute certainty that these two events do not happen if we trivially set $g = f$, $t > f\gamma$, and $g - t - k > f\delta$. However, this clearly leads to poor protocol performance. Instead, we use the same convention as Bell et al.~\cite{bell_paper} and select $g$, $t$, and $k$ to keep the probability of events (1) and (2) very low. This is reflected in the parameters $\sigma$ and $\eta$, defined by $P[(1)] < 2^{-\sigma}$ and $P[(2)] < 2^{-\eta}$. To determine how likely these events are over the entire federation, we start by determining their probability at the group level. Suppose we have a group of size $g$ and a federation of size $n$. The probability of an individual belonging to a group with $i$ corrupt clients is hypergeometric: \[ HyperGeom(i, n - 1, \gamma n, g) \] We are sampling clients to be corrupt without replacement from a population of size $n - 1$ containing $\gamma n$ corrupt clients, because we assume that one client in each group is honest. The incredibly unlikely event that a group is composed entirely of corrupt clients is inconsequential to the security of the protocol, because no honest client's inputs can be exposed by this event. Such a group could change the final output of the protocol, but an attack of this variety is no more powerful than modifying the adversarial clients' inputs to the protocol. We can use the CDF of the hypergeometric distribution to calculate the probability that one group is not corrupted: \[ p_{nc} = HyperGeomCDF(t - 1, n - 1, \gamma n, g) \] Similarly, the probability of a group reconstructing in spite of dropouts is also a hypergeometric CDF: \[ p_{nd} = HyperGeomCDF(g - t - k, n - 1, \delta n, g) \] Packed secret sharing requires $t + k - 1$ shares to reconstruct a secret. To use malicious reconstruction, we require $t + k$ shares, so we require that at most $g - t - k$ clients in each group drop out.
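These group-level probabilities are easy to compute directly; the following Python sketch does so with an explicit hypergeometric CDF (the parameter values are purely illustrative, not recommended settings):

```python
from math import comb

def hypergeom_cdf(x, n, m, g):
    """P[at most x successes when drawing g clients without replacement
    from a population of size n containing m 'successes']."""
    return sum(comb(m, i) * comb(n - m, g - i) / comb(n, g)
               for i in range(x + 1))

# Illustrative parameters: 10,000 clients, groups of 50, threshold 20,
# 5 packed values, 5% corruption and 5% dropout rates.
n, g, t, k = 10_000, 50, 20, 5
gamma, delta = 0.05, 0.05

# Probability a single group is not corrupted (fewer than t corrupt members).
p_nc = hypergeom_cdf(t - 1, n - 1, int(gamma * n), g)
# Probability a single group can still reconstruct (at most g-t-k dropouts).
p_nd = hypergeom_cdf(g - t - k, n - 1, int(delta * n), g)
```

In practice one searches over $g$ and $t$ until the federation-level failure probabilities derived from $p_{nc}$ and $p_{nd}$ meet the targets $\sigma$ and $\eta$.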
Finally, we need to consider the security and reliability of all groups. We do so by calculating the probability that all groups are secure and reconstruct properly, then taking the complement of these values: \[ p_{corrupt} = 1 - p_{nc}^{2n/g} \] \[ p_{dropout} = 1 - p_{nd}^{2n/g} \] The exponent $2n/g$ is the total number of groups over both sharding rounds. Finally, we take the negative log of these probabilities to compare them to the security parameters $\sigma$ and $\eta$: \[ \sigma \leq -\log_2(p_{corrupt}) \] \[ \eta \leq -\log_2(p_{dropout}) \] These formulas allow aggregators to specify security and correctness parameters $(\sigma, \eta)$ and assumed fractions of corrupt and dropped-out clients $(\gamma, \delta)$, and then calculate $g$, $t$, and $k$. We implement a search algorithm to determine the minimum number of neighbors each client requires for a given set of security parameters. \section{Protocol Definition} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/Sharding} \caption{Overview of \ensuremath{\mathrm{SHARD}}\xspace. Each client splits their input into shards, then aggregates each shard in a small group and reveals the result to the server. The server can reconstruct the total sum, but not the sum of any small group's inputs.} \label{fig:overview} \end{figure*} This section describes $\ensuremath{\mathrm{SHARD}}\xspace$, our novel secure aggregation protocol, which emulates Functionality~\ref{prot:ideal}. The ideal functionality sums together the vectors that the trusted third party receives from each client. The output is a single vector of the same shape as the vectors received from the clients. \subsection{Overview} We implement \ensuremath{\mathrm{SHARD}}\xspace by applying the intuition of sharding to a secret sharing context. Sharding, when used in distributed databases or blockchains, refers to breaking information into pieces (called shards) and distributing them among a federation for the sake of security or performance.
In our protocol, we utilize Shamir sharing to break a party's secret input into shards. Those shards are then further fragmented by another round of secret sharing. A visual overview of \ensuremath{\mathrm{SHARD}}\xspace appears in Figure~\ref{fig:overview}. The intuition is to secret share each share, and aggregate the secondary shares in small ($O(\log n)$-sized) groups. By doing so, we allow parties to aggregate their secrets among small subsets of the federation. Their secrets are protected by the redundancy of the multi-level Shamir sharing approach. If a small group happens to be controlled by the adversary, the adversary can learn a share of the secret of each honest party in that small group. Given the security properties of secret sharing, an individual shard is useless on its own, and the adversary needs to control several specific groups in order to find enough shards to reconstruct an honest party's secret. By choosing the number of members in each small group as well as the number of shards into which each secret is broken, we can effectively bound the probability of an adversary successfully attacking this protocol in the semi-honest and malicious settings. \paragraph{Protocol Overview.} Protocols~\ref{prot:gagg}, \ref{prot:sub_agg}, and \ref{prot:shard} describe our aggregation method in detail. The three sub-protocols function together as follows: Protocol~\ref{prot:gagg} describes a simple Shamir-sharing-based aggregation protocol. Each member of a group sends a share of their secrets to every other member of that group. The parties add their shares and reconstruct the sum of their secrets. This is a well-documented existing protocol that we use as a subroutine for sharding. Protocol~\ref{prot:sub_agg} refers to the process of secure aggregation with subsets of the federation.
Whereas parties in Protocol~\ref{prot:gagg} send secret shares to every other party in the federation, the federation in Protocol~\ref{prot:sub_agg} is broken into smaller groups, and each group performs an instance of Protocol~\ref{prot:gagg}. The returned sums from all instances are then added together to calculate the sum of all secret inputs. This protocol can aggregate among large federations without revealing private inputs, provided that the group size and threshold are selected properly. Our formula for calculating both of those parameters is included in Section~\ref{sec:prob}. \subsection{Threat Model} \label{sec:threat} We adopt the threat model of Bell et al.~\cite{bell_paper}, since it is well-suited to the setting of large federations. Our setting involves two classes of parties: (1) a single \emph{server}, and (2) $n$ \emph{clients}. We assume that the adversary may control \textbf{both} the server \textbf{and} a fraction ($\gamma$) of the clients. $\gamma = \frac{1}{2}$ corresponds to assuming an honest majority of clients; for very large federations, it may be reasonable to assume a smaller $\gamma$. Our use of $\gamma$ is similar to a $(t, n)$-Shamir sharing scheme's security against a coalition of up to $t$ of the $n$ clients. Our guarantees have several other parameters, described below (and summarized in Section~\ref{sec:prob}, Table~\ref{tab:vars}). \paragraph{Semi-honest security (confidentiality).} In the semi-honest setting, we assume that the server and all clients execute the protocol correctly, but that the adversary-controlled parties (including the server) will attempt to learn the inputs of individual honest clients by observing the protocol's execution. \ensuremath{\mathrm{SHARD}}\xspace guarantees that with probability $1-2^{-\sigma}-2^{-\eta}$, an adversary who controls fewer than $\gamma n$ clients does not learn the input of any honest client.
\paragraph{Malicious security (confidentiality).} In the malicious setting, we assume that adversary-controlled parties (including the server) may deviate arbitrarily from the protocol. Here, \ensuremath{\mathrm{SHARD}}\xspace guarantees that with probability $1-2^{-\sigma}-2^{-\eta}$, an adversary who controls fewer than $\gamma n$ clients does not learn the input of any honest client (i.e., the same confidentiality guarantee as in the semi-honest setting). We prove malicious security in Section~\ref{sec:protocol-privacy}. \paragraph{Dropouts, correctness, and availability.} \ensuremath{\mathrm{SHARD}}\xspace separately guarantees availability of the output against $\delta f$ clients dropping out. This guarantee is more important for very large federations because the probability of some dropouts increases as the federation size increases. \ensuremath{\mathrm{SHARD}}\xspace cannot guarantee correctness or availability of the output when the server is malicious. In the event that a malicious server forces parties to drop out, we cannot guarantee availability or correctness, but can guarantee confidentiality of honest inputs. Like Bonawitz et al.~\cite{bonawitz2017aggregation} and Bell et al.~\cite{bell_paper}, we assume that clients are authentic and not simulated for the sake of a Sybil attack. We assume the list of clients is public prior to commencing the protocol, and the existence of secure channels among the parties. As described in previous work~\cite{bonawitz2017aggregation, bell_paper}, this problem can be solved using a Public Key Infrastructure (PKI) or by assuming the server behaves honestly in the initialization round. \paragraph{Failure probability.} Traditional MPC security guarantees ensure that there is no chance of an adversary breaking the confidentiality or integrity of a protocol, provided that the adversary is not too strong.
In the context of secret sharing, these guarantees inherently limit communication efficiency. For a $(t, n)$-secret sharing scheme, guaranteeing that no adversary smaller than $t$ can compromise security requires that each party communicate with at least $t$ other parties. In order to improve communication efficiency, both Bell et al.~\cite{bell_paper} and \ensuremath{\mathrm{SHARD}}\xspace specify their security guarantees with small probabilities of failure, parameterized by $\sigma$ and $\eta$: $2^{-\sigma}$ is the probability that the security guarantee is not realized, and $2^{-\eta}$ is the probability that the availability guarantee is not realized. We set $\sigma$ and $\eta$ identically to Bell et al.~\cite{bell_paper} and choose $\sigma = 40$ and $\eta \geq 20$. This relaxation allows \ensuremath{\mathrm{SHARD}}\xspace to significantly reduce communication complexity in exchange for a one-in-a-trillion chance that an adversary can expose private inputs. \paragraph{Realism of the threat model.} In real-world deployments (e.g., federated learning or statistical analysis), the server operator generally has a strong incentive to produce correct outputs---obtaining this output is typically the purpose of deploying the system in the first place. Clients, on the other hand, typically care primarily about confidentiality---the final output is being computed for the benefit of the server operator, and its correctness does not benefit the client directly. Like previous secure aggregation protocols~\cite{bonawitz2017aggregation, bell_paper}, our threat model is designed to align with these incentives. Our primary goal is providing confidentiality for clients; \ensuremath{\mathrm{SHARD}}\xspace does not ensure correctness or availability of the final output when the server is malicious, but the server operator has no incentive to corrupt their own final result.
\paragraph{Comparison of the threat model with related work.} Compared to the closest related work---the protocol of Bell et al.~\cite{bell_paper}---our threat model is slightly stronger. Our threat model matches that of Bonawitz et al.~\cite{bonawitz2017aggregation} exactly. Bell et al.~\cite{bell_paper} use $\alpha \in (0,1]$ to describe the amount of information leaked by a given secure aggregation protocol. For an $n$-party federation, $\alpha$ implies that any party's information will be securely aggregated with at least $\alpha n$ participants. In the protocol of Bell et al., reducing $\alpha$ can improve performance. The ideal functionality has $\alpha = 1 - \delta - \gamma$. This implies that all honest parties will have their values aggregated together. This is the best we can hope for, because parties who drop out might not have input, and malicious parties can subtract their inputs from the ideal functionality's output to obtain the sum of just the honest parties' inputs. \ensuremath{\mathrm{SHARD}}\xspace always ensures the optimal value of $\alpha$. The earlier protocol of Bonawitz et al.~\cite{bonawitz2017aggregation} also ensures the optimal value of $\alpha$, via communication between all pairs of parties. \begin{algorithm} \newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}} \SetAlgorithmName{Functionality}{}{} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{A set of private vector inputs $s_0 \dots s_n$.} \Output{The sum of all values $s_0 \dots s_n$, which we denote as $s$.} {\nonl \textbf{Round 1:} Each party $j$:} \begin{enumerate} \item sends $s_j$ to the trusted third party \end{enumerate} {\nonl \textbf{Round 2:} Trusted third party} $$s \leftarrow \sum_{i=0}^{n} s_i$$
\caption{Ideal Functionality} \label{prot:ideal} \end{algorithm} \setcounter{algocf}{0} \begin{algorithm} \newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}} \SetAlgorithmName{Protocol}{}{} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{a group of $g$ participants, an input for each participant $s_i$, a threshold $t$} \Output{The sum of all values $s_1 \dots s_g$} {\nonl \textbf{Round 1:} Each party $j$:} \begin{enumerate} \item $sh_j^1 \dots sh_j^g \leftarrow \texttt{share}(t, g, s_j)$ \item sends $sh_j^i$ to party $i$, $\forall i \in [1, g]$. \end{enumerate} {\nonl \textbf{Round 2:} Each party $j$:} \begin{enumerate} \item receives $sh_1^j \dots sh_g^j$ \item $sum_j \leftarrow \sum_{i=1}^{g} sh_i^j$. \item broadcasts $sum_j$. \end{enumerate} {\nonl \textbf{Round 3:} Each party $j$:} \begin{enumerate} \item receives $sum_1 \dots sum_g$. \item $sum \leftarrow \texttt{reconstruct}(sum_1 \dots sum_g)$. \end{enumerate} \caption{\texttt{group\_agg}} \label{prot:gagg} \end{algorithm} \begin{algorithm} \newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}} \SetAlgorithmName{Protocol}{}{} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{an ordered list $P$ of $n$ participants, a group size $g$, a threshold $t < g$, each participant supplies their secret input $s_i$} \Output{The sum of all values $s_0 \dots s_n$, which we call $S$} {\nonl \textbf{Round 1:} Each party $j$:} \begin{enumerate} \item partitions $P$ into groups of size $g$. Groups are partitioned deterministically such that each party creates the same set of groups. Party $j$ is a member of one group: $G_j$. See Section~\ref{sec:group} for more information. \item $sum_j \leftarrow $\texttt{group\_agg}$(G_j, s_j)$ \item sends $sum_j$ to the server. \end{enumerate} {\nonl \textbf{Round 2:} The server:} \begin{enumerate} \item receives $sum_0 \dots sum_n$ \item verifies that $sum_i = sum_h$ whenever parties $i, h$ are in the same group. If this is not true for all groups, $ABORT$.
\item $S \leftarrow \sum_{i = 0}^{n} sum_i / g$ \end{enumerate} \caption{\texttt{sub\_agg}} \label{prot:sub_agg} \end{algorithm} \begin{algorithm} \newcommand{\nonl}{\renewcommand{\nl}{\let\nl\oldnl}} \SetAlgorithmName{Protocol}{}{} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Set of $n$ participants $P$ where each participant $i$ has a value $v_i$, a group size $g$ and a number of shards $m$ (almost always $2$). A threshold $t < g$. } \Output{The sum of all values $v_0 \dots v_n$, which we denote $V$.} {\nonl \textbf{Round 1:} Each party $j$:} \begin{enumerate} \item shards $sh_1^j \dots sh_m^j \leftarrow \texttt{share}(m, m, v_j)$ \end{enumerate} {\nonl \textbf{Round 2:} All parties} \\ \For {$ i \in \{1 \dots m\}$} { \begin{enumerate} \item parties agree on $Perm$, a permutation \\ of the participant list. \item $sum_i \leftarrow $\\ \texttt{sub\_agg}$(Perm,\ g,\ t,\ sh_i^0 \dots sh_i^n)$ \end{enumerate} } {\nonl \textbf{Round 3:} The server:} \begin{enumerate} \item $V \leftarrow \texttt{reconstruct}(sum_1 \dots sum_m)$. \end{enumerate} \caption{\ensuremath{\mathrm{SHARD}}\xspace} \label{prot:shard} \end{algorithm} \subsection{Example Protocol Trace} The following small example illustrates \ensuremath{\mathrm{SHARD}}\xspace in action and highlights its features. Suppose we have parties $A$, $B$, $C$, $D$ with secrets in $\mathbb{F}_2$. First, each party breaks its secret into shards as shown in the table below. For the sake of this example, parties use additive secret sharing for shard generation. $$\begin{tabular}{|c||l|l|} \hline Party & Secret & Shards \\ \hline \hline A & 1 & 1, 0 \\ \hline B & 1 & 0, 1 \\ \hline C & 0 & 0, 0 \\ \hline D & 0 & 1, 1 \\ \hline \end{tabular}$$ The parties will now perform the \texttt{sub\_agg} protocol on their two shards. This includes a partitioning of parties into subsets.
$$\begin{tabular}{|c||c|c|} \hline Round & subset 1 & subset 2 \\ \hline \hline 1 & \{A, B\} & \{C, D\} \\ \hline 2 & \{D, B\} & \{C, A\} \\ \hline \end{tabular}$$ We note that for groups of size 2, it is trivial for an adversarial party to determine its group mate's shard in both rounds. That said, the mechanism of sharding, together with partitioning, prevents the adversary from learning the other shard, thus maintaining the privacy of inputs. In this example, if $B$ is an adversary, it can learn $A$'s first shard and $D$'s second shard. However, it cannot determine $A$'s or $D$'s other shards, either directly (due to the choice of partitions) or indirectly (because it knows nothing about $C$'s shards). This outlines the importance of proper group selection for protocol security. If we used the same groups for rounds 1 and 2, then $B$ would learn $A$'s secret, and so on. Of course, if two parties $B$ and $C$ are corrupt, then they may collude to obtain the secrets of $A$ and $D$, but we assume an honest majority. Once the parties are broken into groups, they perform \texttt{group\_agg} to aggregate their shards and find the sums for each sharding round. For the sake of brevity we consider \texttt{group\_agg} a black box that returns the sum of shards. $$\begin{tabular}{|l||l|} \hline Protocol & Result \\ \hline \hline \texttt{group\_agg}(\{A, B\}) & 1 \\ \hline \texttt{group\_agg}(\{C, D\}) & 1 \\ \hline \texttt{sub\_agg}(Round 1) & 0 \\ \hline \hline \texttt{group\_agg}(\{D, B\}) & 0 \\ \hline \texttt{group\_agg}(\{A, C\}) & 0 \\ \hline \texttt{sub\_agg}(Round 2) & 0 \\ \hline \end{tabular}$$ In all cases \texttt{group\_agg} returns the sum of the shards supplied as input. In round 1, the groups $\{A, B\}$ and $\{C, D\}$ compute $1 + 0 = 1$ and $0 + 1 = 1$ respectively. These group-level sums are aggregated per the \texttt{sub\_agg} protocol to obtain the sum of $0$. An identical process is applied to the round 2 shards to calculate their sum, which is also $0$.
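The group-level and round-level sums above can be reproduced in a few lines (our own sketch, hard-coding the shards and partitions from the tables and using additive sharing over $\mathbb{F}_2$):

```python
# Shards from the table above: (round-1 shard, round-2 shard) per party.
shards = {"A": (1, 0), "B": (0, 1), "C": (0, 0), "D": (1, 1)}

partitions = [
    [("A", "B"), ("C", "D")],   # round 1 groups
    [("D", "B"), ("C", "A")],   # round 2 groups
]

all_group_sums, round_sums = [], []
for rnd, groups in enumerate(partitions):
    # group_agg: each group sums (XORs) its members' shards for this round.
    group_sums = [sum(shards[p][rnd] for p in grp) % 2 for grp in groups]
    all_group_sums.append(group_sums)
    # sub_agg: combine the group-level sums into the round-level sum.
    round_sums.append(sum(group_sums) % 2)

print(all_group_sums)   # [[1, 1], [0, 0]], matching the group_agg rows
print(round_sums)       # [0, 0], matching the sub_agg rows
```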
The final step of \ensuremath{\mathrm{SHARD}}\xspace is to reconstruct the output $V$ from the sharding round sums. Because we are using additive secret sharing in this example, this process is simply: $$V = \texttt{sub\_agg}(Round 1) + \texttt{sub\_agg}(Round 2) = 0 + 0 = 0$$ This is correct: the sum of all inputs is $1 + 1 + 0 + 0 = 0$ in $\mathbb{F}_2$. \subsection{Protocol Privacy} \label{sec:protocol-privacy} \subsubsection{Threat models} We consider a semi-honest and a malicious threat model, parameterized by $\gamma$ and $\delta$ as described in Section~\ref{sec:threat}. With respect to protocol execution, the semi-honest and malicious models are differentiated by the security of the secret sharing scheme: if \ensuremath{\mathrm{SHARD}}\xspace is implemented with semi-honest secure secret sharing, then \ensuremath{\mathrm{SHARD}}\xspace is secure in the semi-honest model, and if it is implemented with malicious secure secret sharing, then it is secure in the malicious model. Because the semi-honest threat model is a special case of the malicious threat model, we prove security in the malicious model. In the malicious model, both malicious clients and the server may deviate arbitrarily from the protocol, and we expect the server and malicious clients to collaborate. We do, however, assume that the server is not simulating parties as part of a Sybil attack. This behavior can be prevented with a public key infrastructure, and we consider protection against this type of attack out of scope for \ensuremath{\mathrm{SHARD}}\xspace. This is the only restriction we place on server behavior for the sake of input confidentiality. It is also worth noting that \ensuremath{\mathrm{SHARD}}\xspace ensures correctness and availability against a $\delta$ fraction of clients dropping out, but does not guarantee correctness or availability if the server drops out.
\subsubsection{Malicious Security} Let $F$ denote the ideal functionality of addition and let $A$ be an adversary. Let $x_i$ and $v_i$ be the input and view of client $i$ respectively. Let $V$ be the output of $\pi$. Let $U$ be the set of clients. Let $C \subset U \cup \{S\}$ be the set of corrupt parties, and $D \subset U$ be the set of dropped out parties. The set of honest parties is $H = U \setminus (C \cup D)$. In this proof, we consider the dropped out parties as part of the adversary without loss of generality. \begin{theorem} There exists a PPT simulator \texttt{SIM} such that for all $U$, $|C| \leq \gamma |U|$, and $|D| \leq \delta |U|$ \[ \texttt{REAL}_{\pi, A}(n; x_{H}) \equiv \texttt{IDEAL}_{F, \texttt{SIM}}(n, x_{H}) \] \end{theorem} The intuition behind this statement is that no adversary against our protocol can be more powerful than an adversary against the ideal functionality. \begin{proof} The proof proceeds by a hybrid argument. We assume that any honest party will $ABORT$ if they receive an ill-formed message, an untimely message, or an abort from any other party. Furthermore, we assume secure channels between each pair of parties. \begin{enumerate} \item This hybrid is a random variable distributed exactly like $\texttt{REAL}_{\pi, A}(n; x_{H})$. \item In this hybrid $\texttt{SIM}$ has access to all $\{x_i \mid i \in U\}$. $\texttt{SIM}$ runs the full protocol and outputs a view of the adversary from the previous hybrid. \item In this hybrid, $\texttt{SIM}$ generates the ideal inputs of the corrupt and dropout parties using a separate simulator $\texttt{SIM}_g$. These sets of inputs, $x_C$ and $x_D$, contain a field element or $\bot$ for each corrupt or dropout party respectively. Through this process, $\texttt{SIM}_g$ may force the output of $F$ to be any field element or $\bot$. Thus $\texttt{SIM}_g$ is able to produce the same protocol outputs that $A$ is able to in $\texttt{REAL}$, so this hybrid is indistinguishable from the previous hybrid.
\item In this hybrid, $\texttt{SIM}$ replaces $V$, the output of the protocol, with the known output of the ideal function plus the aggregation of all the ideal inputs of the corrupt parties. We exclude the inputs of the dropped out parties. This hybrid is indistinguishable from the previous hybrid except with probability $2^{-\eta}$ as defined in Section~\ref{sec:prob}, provided that group assignments satisfy Property~\ref{prop:con_sec}. \item In this hybrid $\texttt{SIM}$ replaces the shards of each honest party with secret sharings of random field elements chosen such that the field elements sum to the output of $\texttt{IDEAL}$. This hybrid is indistinguishable from the previous hybrid except with probability $2^{-\sigma}$ as defined in Section~\ref{sec:prob}, because the adversary does not have access to enough shares to reconstruct any individual party's secret. \end{enumerate} \end{proof} \subsection{Complexity Analysis} Suppose there are $n$ clients, each with $k$ values to send. \subsubsection{Client Computation} $O(k\log^2 n)$. The client needs to split each of its $k$ values into $O(\log n)$ shares; a Shamir sharing into $m$ shares takes $O(m^2)$ time. The share addition and the reconstruction take $O(\log n)$ and $O(k \log^2 n)$ time respectively. \subsubsection{Client Communication} $O(k\log n)$. The client needs to send $O(\log n)$ clients $O(k)$ values each. \subsubsection{Server Computation} $O(nk)$. The server needs to add all of the group sums together and reconstruct the shard-level Shamir shares. This includes processing the output of all parties. The shard-level Shamir reconstruction is treated as a constant cost because there are always two shard shares. \subsubsection{Server Communication} $O(nk)$. The server receives output from all parties.
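The \texttt{share} and \texttt{reconstruct} primitives charged at $O(m^2)$ above admit a textbook Shamir instantiation; the following sketch (our own, over an arbitrarily chosen prime field, not the implementation evaluated here) also shows the pointwise share addition that \texttt{group\_agg} exploits:

```python
import random

P = 2**61 - 1  # prime modulus (an arbitrary choice for this sketch)

def share(t, n, secret):
    """Split secret into n Shamir shares with reconstruction threshold t:
    evaluate a random degree-(t-1) polynomial with constant term secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the polynomial at 0 from t or more shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# Summing shares pointwise yields shares of the sum (used by group_agg).
a, b = share(3, 5, 42), share(3, 5, 100)
summed = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a, b)]
```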
\section{Group Assignments}\label{sec:group} Beyond assigning groups such that they are unlikely to be corrupted and unlikely to drop out, we would also like to assign groups over the two rounds such that the outputs of multiple groups cannot be combined to leak additional information. In particular, Protocol~\ref{prot:sub_agg} exposes the sums of each subgroup. Protocol~\ref{prot:shard} can also release sums of small sets of parties if groups are not chosen carefully. \paragraph{Information Leakage from Overlapping Groups.} For simplicity we set $m = 2$, which is also consistent with our evaluation. However, we conjecture that the results in this section are easily generalized to $m > 2$. Let $R_1$ and $R_2$ be the sets of groups used in the two respective invocations of \texttt{sub\_agg} within the for loop of Round 2 of Protocol~\ref{prot:shard}. Each party is a member of a group in $R_1$ and a member of a group in $R_2$. They aggregate their first shard with the group in $R_1$ and their second shard with the group in $R_2$. Consider the case where a single group $G$ is used in both rounds: $G \in R_1 \land G \in R_2$. An adversary can reconstruct the sum of inputs of parties in $G$ by using \texttt{reconstruct} on the outputs of $G$ in rounds 1 and 2. Requiring $R_1 \cap R_2 = \emptyset$ is not sufficient to prevent such an attack. Suppose there exist some groups $G_1, G_2, G_3, G_4$ such that $G_1, G_2 \in R_1$, $G_3, G_4 \in R_2$ and $G_1 \cup G_2 = G_3 \cup G_4$. An adversary can reconstruct the sum of parties in groups $G_1 \cup G_2$ by calling \texttt{reconstruct} on the sum of $G_1$ and $G_2$'s round 1 outputs and the sum of $G_3$ and $G_4$'s round 2 outputs. \paragraph{Graph background.} A graph $G = (V, E)$ consists of a set of nodes $V$ and a set of edges $E$, where $(i, j) \in E$ only if $i, j \in V$ and there is an edge between $i$ and $j$. Our graphs are undirected, so $(i, j)$ and $(j, i)$ are equivalent.
We consider a subgraph $SG = (V', E')$ where $V' \subseteq V$ and $E' \subseteq E \land (i , j) \in E' \implies (i \in V' \land j \in V')$. Finally, we call a subgraph $DG = (V'', E'')$ of $G$ a \emph{disconnected subgraph} if $\forall (i,j) \in E,\ i \in V'' \implies (i, j) \in E''$. In other words, all nodes in a disconnected subgraph of $G$ exclusively have edges to other nodes within the disconnected subgraph. The disconnected subgraph is \emph{disconnected} from the rest of $G$. \paragraph{Avoiding Information Leakage.} In order to ensure that no subset sum can be accessed besides the sum of all honest parties, we require that our honest party communication graph is connected. We define the party communication graph as \[HG = (V, E) \ s.t.\] \[V = \{honest\ parties\} \] \[E = \{ (i,j) \mid \exists\ G \in R_1 \cup R_2 \ s.t.\ i \in G \land j \in G\} \] The honest party communication graph draws connections between any two parties that are in a group together in either round. \begin{property}\label{prop:conn} Let $SG = (V', E')$ be a subgraph of $HG$. If $sum(V')$ is recoverable, then $SG$ is a disconnected subgraph. \end{property} \begin{proof} The proof is by contradiction. Suppose a subgraph $SG = (V', E')$ where the sum of $V'$ is accessible, but $SG$ is not a disconnected subgraph. Because $SG$ is not a disconnected subgraph, we know $$ \exists i, j \in V \mid i \in V' \land j \notin V' \land (i, j) \in E$$ That is, there is an edge between a node in $SG$ and a node outside of $SG$. From the existence of this edge we know that $i$ and $j$ were in a group together for one of the sharding rounds. This implies that one of $i$'s shards is aggregated with one of $j$'s shards, and a sum including either of these shards would have to include the other. We reach a contradiction here because $j$ is not in $SG$, so the sum of $SG$ is unavailable. \end{proof} From Property~\ref{prop:conn}, the requirement that $HG$ remain connected emerges.
\begin{property}\label{prop:con_sec} If $HG$ is connected, then no subset sum is leaked. \end{property} This property follows directly from Property~\ref{prop:conn}: because a connected graph has no proper disconnected subgraphs, no sums smaller than the one released by the ideal functionality are revealed. \paragraph{Generating Groups.} There are conceivably many different ways to generate groups for two rounds of sharding that keep $HG$ connected, and different instantiations of this protocol might want to use different group generation methods to adapt to network conditions like geo-location. In our implementation we determine group membership based on a single permutation of the network. Suppose $i$ is the index of some party in our permutation, and $g$ is the group size. Party $i$ is a member of group $i // g$ for the first round, and group $(i//g + i \bmod g) \bmod (n/g)$ for the second round. The expression for the second round moves the $j^{th}$ member of each group $j$ groups forward. This spreads parties around sufficiently for the second round to ensure that $HG$ is connected.
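The assignment rule and the connectivity requirement of Property~\ref{prop:con_sec} can be sketched as follows (our own Python rendering; the second-round expression reflects our reading of the scheme, namely that the $j^{th}$ member of each group moves $j$ groups forward):

```python
def assign_groups(n, g):
    """Two rounds of group assignment from one permutation of 0..n-1
    (our reading of the scheme): party i joins group i // g in round 1
    and group (i // g + i % g) % (n // g) in round 2."""
    num_groups = n // g
    r1 = [i // g for i in range(n)]
    r2 = [(i // g + i % g) % num_groups for i in range(n)]
    return r1, r2

def hg_is_connected(n, r1, r2):
    """Check that the party communication graph (edges between group
    mates in either round) is connected, via depth-first search."""
    adj = {i: set() for i in range(n)}
    for grp in (r1, r2):
        for i in range(n):
            for j in range(n):
                if i != j and grp[i] == grp[j]:
                    adj[i].add(j)
    seen, stack = {0}, [0]
    while stack:
        for j in adj[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n
```

Reusing the same partition in both rounds fails the check, illustrating the leakage discussed above, while the shifted second-round assignment passes it.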
\section{Introduction} Let $X\subset\P^3$ be a nonsingular projective cubic surface over a field~$k$ of characteristic zero. An \emph{elliptic fibration on~$X$}, sometimes called an \emph{elliptic fibration birational to~$X$}, is a dominant rational map $\phi\colon X\dashrightarrow B$ to a normal variety~$B$, where $\phi$ is defined over~$k$, it has connected fibres, and its general geometric fibre is birational to a curve of genus~1. We describe in Section~\ref{sec!hal} a class of elliptic fibrations called {\em Halphen fibrations}. Conversely, given an elliptic fibration on a \emph{minimal\/}~$X$ (see below) we relate it to an Halphen fibration as follows: \begin{thm} \label{thm!main} Let $X \subset \mathbb P^3$ be a nonsingular, minimal cubic surface over a field~$k$ of characteristic zero. If $\phi \colon X \dashrightarrow B$ is an elliptic fibration on~$X$ then $B\cong\mathbb P^1$ and there exists a composite \[ \xymatrix{ X \ar@{-->}[r]^{i_s} & X \ar@{-->}[r]^{i_{s-1}} & \quad \cdots \quad \ar@{-->}[r]^{i_2} & X \ar@{-->}[r]^{i_1} & X } \] of birational selfmaps of~$X$, each of which is a Geiser or Bertini involution, such that $\;\phi \circ i_1 \circ \cdots \circ i_s \colon X \dashrightarrow B\cong\mathbb P^1\;$ is an Halphen fibration. \end{thm} Geiser and Bertini involutions are birational selfmaps of~$X$ described in Section~\ref{sec!GB}. This result is proved in Cheltsov \cite{Ch} and independently in the unpublished~\cite{R00}. Our aims and methods are different from those of~\cite{Ch}, however: we seek to be as explicit as possible, and we have implemented algorithms in the computational algebra system {\sc Magma}~\cite{Ma} for Halphen fibrations and Geiser and Bertini involutions. Our code is available at~\cite{BR}. All varieties, subschemes, maps and linear systems are defined over the fixed field~$k$ of characteristic zero, except where a different field is mentioned explicitly. 
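Concretely, the key computation in a Geiser involution (Section~\ref{sec!GB}) is finding the third point in which a line meets the cubic surface. The following toy sketch (our own Python illustration on the Fermat cubic; the actual implementation at~\cite{BR} is in {\sc Magma}) computes that third intersection point with exact arithmetic:

```python
from fractions import Fraction

def fermat_cubic(p):
    """F(x, y, z, w) = x^3 + y^3 + z^3 + w^3, a nonsingular cubic surface."""
    x, y, z, w = p
    return x**3 + y**3 + z**3 + w**3

def third_intersection(F, P, Q):
    """Third point in which the line PQ meets X = (F = 0), given distinct
    points P, Q on X whose joining line does not lie in X.

    Restricting F to lam*P + mu*Q gives a binary cubic g(lam, mu) vanishing
    at (1:0) and (0:1), so g = lam*mu*(a*lam + b*mu); the third root
    (lam : mu) = (b : -a) corresponds to the point b*P - a*Q."""
    def g(lam, mu):
        return F(tuple(lam * p + mu * q for p, q in zip(P, Q)))
    a = Fraction(g(2, 1), 2) - g(1, 1)   # since g(2, 1) = 2*(2a + b)
    b = g(1, 1) - a                      # since g(1, 1) = a + b
    return tuple(b * p - a * q for p, q in zip(P, Q))

# Two points on the Fermat cubic whose joining line is not contained in it:
P, Q = (1, -1, 0, 0), (1, 0, -1, 0)
R = third_intersection(fermat_cubic, P, Q)
# R = (0, -3, 3, 0), i.e. (0 : -1 : 1 : 0), which again satisfies F = 0.
```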
\paragraph{Contents of the paper.} In the remainder of the introduction we discuss motivation and background for the problem. We build Halphen fibrations on~$X$ in Section~\ref{sec!con}. In Section~\ref{sec!proof} we discuss the Noether--Fano--Iskovskikh inequalities and then prove Theorem~\ref{thm!main}. Section~\ref{sec!alg} is devoted to algorithmic considerations and an outline of our implementation, while Section~\ref{sec!egs} contains worked computer examples. \paragraph{Cubic surfaces and minimality.} Throughout this paper, by {\em cubic surface\/} we mean a nonsingular surface $X\subset\mathbb P^3$ defined by a homogeneous polynomial of degree~3 with coefficients in~$k$. We denote $-K_X$ by~$A$. When $k$~is algebraically closed, it is well known that $X$ contains 27~straight lines and that these span the Picard group $\Pic(X)\cong\mathbb Z^7$. One quickly deduces that there is a birational map $X\dashrightarrow\P^2$; in other words, $X$ is~{\em rational}. On the other hand, if $k$ is not algebraically closed, some of these lines may fail to be defined over~$k$ and the Picard group may have smaller rank. Indeed, $\Pic(X)$ is the Galois-invariant part of~$\Pic(\overline{X})$. A cubic surface $X$ is {\em minimal\/} if the Picard number of~$X$, $\rho(X)=\rank\Pic(X)$, is~1. It is easy to see that if $X$ is minimal then $\Pic(X) = \mathbb Z({-K_X})$. Elliptic fibrations were defined above with apparently arbitrary base~$B$, but in fact it follows from Iitaka's bound on Kodaira dimension that $g(B)=0$ for any surface $X$ of Kodaira dimension~$-\infty$; see \cite{BHPV} Theorem~(18.4). In particular this applies to cubic surfaces and so we have: \begin{prop*} If $X\dashrightarrow B$ is an elliptic fibration on a cubic surface~$X$ then $g(B)=0$. \end{prop*} \noindent There remains the question of whether $B$ has a $k$-rational point, that is, whether $B\cong\mathbb P^1$ over~$k$; we return to this in Section~\ref{sec!B}. 
\paragraph{Geometric motivation.} Our main motivation for studying elliptic fibrations on cubic surfaces is geometric. This is best explained from a broader perspective. A {\em Fano $n$-fold\/} is a normal projective variety~$X$ of dimension~$n$, with at worst $\mathbb Q$-factorial terminal singularities and Picard number~1, such that ${-K_X}$ is ample. A fundamental question in Mori theory is whether a given Fano $n$-fold~$X$ admits birational maps to other Mori fibre spaces --- see~\cite{Co} for a discussion, noting that a key example of such {\em birational non-rigidity\/} is a rational map $\phi\colon X\dashrightarrow S$ whose generic fibre is a curve of genus~0 rather than~1. We regard the search for elliptic fibrations as a limiting case in Mori theory --- a point of view we learned from papers of Iskovskikh~\cite{Is} and Cheltsov~\cite{Ch}, and one that becomes clearer when we discuss the Noether--Fano--Iskovskikh inequalities in Section~\ref{sec!prelim}. For more on how our problem fits into modern birational geometry, see~\cite{Is} and the introduction to~\cite{CPR}. \paragraph{Arithmetic motivation.} Cases of the more general problem of classifying elliptic fibrations on Fano varieties also have arithmetic applications. From this point of view a cubic surface is a baby case; but scaled-up versions of our methods attack, for example, the same problem for some Fano 3-folds, see \cite{Ch} and~\cite{R06}. In arithmetic a basic question concerning Fano varieties is the existence, or at least potential density, of rational points. Elliptic fibrations offer one approach; see Bogomolov and Tschinkel~\cite{BT}, for instance. \paragraph{History.} In contrast to the modern motivation, some of the methods are ancient. In his paper~\cite{H} of~1882 Halphen considered the problem of finding a plane curve~$G$ of degree~6 with 9~prescribed double points $P_1,\ldots,P_9$. 
The question is: for which collections of points~$\{P_i\}$ is there a solution apart from $G = 2C$, where $C$~is the (in general unique) cubic containing all the~$P_i$? Halphen's answer is that $C$ must indeed be unique and --- in modern language and supposing for simplicity that $C$ is nonsingular, so elliptic --- $P_1\oplus\cdots\oplus P_9$ must be a nonzero 2-torsion point of~$C$, where any inflection point is chosen as the zero for the group law. He proceeds to consider higher torsion as well. Translated to a cubic surface, this is essentially Theorem~\ref{thm!constr}. A natural next step is the result analogous to Theorem~\ref{thm!main} for~$X=\mathbb P^2$, and this was proved by Dolgachev~\cite{D} in~1966. The approach of~\cite{Ch} to Theorem~\ref{thm!main} is considerably more highbrow than ours: he uses general properties of mobile log pairs and does not spell out the construction of elliptic fibrations in detail. The paper~\cite{R00}, on the other hand, was originally conceived as a test case for~\cite{R02} and~\cite{R06}, which concern similar problems for Fano 3-folds. \paragraph{Acknowledgments.} It is our pleasure to thank Professors Andrew Kresch and Miles Reid for their help with some finer points of arithmetic and Professor Josef Schicho for a preview of his new {\sc Magma}\ package to compute the Picard group of a cubic surface over a non-closed field. \section{Constructing elliptic fibrations} \label{sec!con} We fix a nonsingular, minimal cubic surface~$X$ defined over~$k$, with $A=-K_X$. Linear equivalence of divisors is denoted by~$\sim$ and $\mathbb Q$-linear equivalence by~$\sim_{\mathbb Q}$. \subsection{Halphen fibrations} \label{sec!hal} The simplest elliptic fibrations arise as the pencil of planes through a given line. That is, if $L=(f=g=0)$ is a line in~$\P^3$ defined by two independent linear forms~$f,g$ and not lying wholly in~$X$, then the map $\phi = (f,g)$ is an elliptic fibration. 
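For instance (an illustration of ours, not an example from the text), take the Fermat cubic $X = \{x^3 + y^3 + z^3 + w^3 = 0\}$ and the line $L = (x = y = 0)$, which meets $X$ in the three points given by $z^3 + w^3 = 0$ but does not lie in~$X$. Here $\phi = (x, y)$, and on the chart $x = ty$ the fibre over~$t$ is the plane cubic
\[
(t^3 + 1)\,y^3 + z^3 + w^3 = 0,
\]
a nonsingular curve of genus~1 whenever $t^3 \neq -1$.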
In this section we make a larger class of fibrations which includes these linear fibrations as a simple case. \begin{definition} \label{def!hal} A pair $(G,D)$ is called {\em Halphen data on~$X$} when $G\in|A|$ is (reduced and) irreducible over~$k$ and $D\in\Div(G)$ is an effective $k$-rational divisor of degree~3, supported in the nonsingular locus of~$G$, satisfying $\mathcal O_G(\mu D)\cong \mathcal O_G(\mu A)$ for some integer $\mu\ge 1$. The smallest such $\mu\ge1$ is called the \emph{index} of~$(G,D)$. \end{definition} Since $X$ is minimal, $G$ may be any irreducible plane cubic or the union of three conjugate lines (it is required to be irreducible over~$k$, not over~$\overline{k}$). Since $\Supp(D) \subset \Nonsing(G)$, the sheaf isomorphism condition says that $A_{|G}-D$ is a torsion class of order~$\mu$ in~$\Pic(G)$. \begin{definition} \label{def!res} Let $(G,D)$ be Halphen data on~$X$. The {\em resolution of~$(G,D)$\/} is the blowup $\pi\colon Y\rightarrow X$ of a set of up to three points~$P_i$ that lie on varieties dominating~$X$ and are determined as follows: \begin{itemize} \item[A1.] If $D$ is a sum of distinct $k$-rational points of~$G$ then let $\{P_1,P_2,P_3\}=\Supp(D)$ (as points of~$X$) and let $\pi$~be the blowup of these points. \item[A2.] If $D = p + 2q$, where $p \ne q$ are $k$-rational points of~$G$, then let $P_1 = p$ and $P_2 = q$ (as points of~$X$); also let $\xi \colon Y' \to X$ be the blowup of these points and let $E_2'$ be the exceptional curve lying over~$P_2$. Now define $P_3$ to be the point $G' \cap E_2'$ on~$Y'$, where $G'$ is the strict transform of~$G$; let $o \colon Y \to Y'$ be the blowup of~$P_3$ and set $\pi = \xi \circ o$. \item[A3.] If $D = 3p$ with $p$ a $k$-rational point of~$G$ then let $P_1 = p$ and let $\nu \colon Y' \to X$ be the blowup of~$P_1$. 
Next define $P_2 = E_1' \cap G'$ where $E_1',G' \subset Y'$ are respectively the exceptional curve of~$\nu$ and the strict transform of~$G$, and let $\xi \colon Y'' \to Y'$ be the blowup of~$P_2$. Now, similarly, define $P_3 = E_2'' \cap G''$ where $E_2'',G'' \subset Y''$ are respectively the exceptional curve of~$\xi$ and the strict transform of~$G'$. Finally let $o \colon Y \to Y''$ be the blowup of~$P_3$ and let $\pi = \nu \circ \xi \circ o \colon Y \to X$. \item[B.] If $D = p_1 + p_2$ with $p_1$ a $k$-rational point of~$G$ and $\deg(p_2) = 2$ then let $P_i = p_i$ for $i = 1,\,2$ and let $\pi \colon Y \to X$ be the blowup of~$P_1$ and~$P_2$. \item[C.] If $D = p$, a single $k$-closed point of~$G$ of degree~3, then let $P_1 = p$ and let $\pi\colon Y\rightarrow X$ be the blowup of~$P_1$. \end{itemize} In each case we fix the following notation: let $E_i \subset Y$ be the \emph{total\/} transform on~$Y$ of the exceptional curve over~$P_i$. So in case~A2, for example, $E_2 = o^*(E_2') = E_2'' + E_3$ has two irreducible components, $E_2'' = o^{-1}_*(E_2')$ and $E_3 = \Exc(o)$. Furthermore let $E = \sum_i E_i$, the relative canonical class of~$\pi$. \end{definition} It can easily be checked in the above definition that $E_i$ is the reduced preimage of~$P_i$ on~$Y$. Note, though, that this is a consequence of our positioning of each subsequently-defined $P_j$ on the strict transform of~$G$; the corresponding statement no longer holds, for example, in the closely related notation of Section~\ref{sec!prelim} below. \begin{definition} \label{def!H} Let $(G,D)$ be Halphen data on~$X$ of index~$\mu$, and let $\pi\colon Y\rightarrow X$ be the resolution introduced above with relative canonical class~$E$. We define $\mathcal H_Y$ to be the linear system $|\mu \pi^*(A) - \mu E|$ on~$Y$. The {\em Halphen system $\mathcal H$ associated to $(G,D)$} is the birational transform of~$\mathcal H_Y$ on~$X$. 
\end{definition} Notice that $\mathcal H$ is the set of divisors in~$|\mu A|$ that have multiplicity $\mu$ at every point~$P_i$. It would be natural to write $\mathcal H = |\mu A - \mu D|$, but we don't. \begin{thm} \label{thm!constr} Let $(G,D)$ be Halphen data on~$X$ of index~$\mu$, and let $\mathcal H$ be the linear system described in Definition~\ref{def!H}. Then $\mathcal H$ is a mobile pencil, and the rational map $\phi =\phi_\mathcal H$ is an elliptic fibration $\phi\colon X \dashrightarrow \mathbb P^1$ that has $\mu G$ as a fibre. The set-theoretic base locus of~$\phi$ is~$\Supp(D)$ and the resolution of~$(G,D)$ is its minimal resolution of indeterminacies. \end{thm} Following Cheltsov \cite{Ch}, fibrations $\phi_\mathcal H$ arising in this way are called {\em Halphen fibrations}. We give the proof of this theorem in Section~\ref{sec!pf}. \subsection{Twisting by Geiser and Bertini involutions} \label{sec!GB} Not all elliptic fibrations are Halphen: we can precompose, or {\em twist}, Halphen fibrations by elements of~$\Bir(X)$, and usually the result will have more than three basepoints (counted with degree). We describe two particular classes of birational selfmap of~$X$: Geiser and Bertini involutions, also described at greater length in \cite{CPR} Section~2. In fact, the group $\Bir(X)$ (in the case of minimal~$X$) is generated by Geiser and Bertini involutions together with all regular automorphisms, although we do not use this fact explicitly; see \cite{M}~Chapter~5. \paragraph{Geiser involutions.} Let $P\in X$ be a point of degree~$1$. We define a birational map $i_P\colon X\dashrightarrow X$ as follows. Let $Q$ be a general point of~$X$, and let $L\subset\P^3$ be the line joining $P$ to~$Q$. Then $L\cap X$ consists of three distinct points, $P,Q$ and a new point~$R$. Define $i_P(Q)=R$. In fact, $i_P$ is the map defined by the linear system~$|2A-3P|$. \paragraph{Bertini involutions.} Let $P\in X$ be a point of degree~$2$. 
Let $L\subset\P^3$ be the unique line that contains~$P$. Since $X$ is minimal, $L$ intersects $X$ in $P$ and exactly one other point $R$ of degree~$1$. We define a birational map $i_P\colon X\dashrightarrow X$ as follows. Let $Q$ be a general point of~$X$. If $\Pi\cong\P^2$ is the plane spanned by $P$ and~$Q$, then $C=\Pi\cap X$ is a nonsingular plane cubic curve containing~$R$. Then $i_P(Q) = {-Q}$, the inverse of~$Q$ in the group law on $C$ with origin~$R$. In fact, $i_P$ is the map defined by the linear system~$|5A-6P|$. \subsection{Proof of Theorem~\ref{thm!constr}} \label{sec!pf} \paragraph{Comments about $G$.} We are given Halphen data $(G,D)$ on~$X$. The curve $G$ is a Gorenstein scheme with $\omega_G\cong\mathcal O_G$ and~$\chi(\mathcal O_G)=0$. When $\mu>1$, $G$ cannot be a cuspidal cubic since in that case the Picard group $\Pic(G) \cong \mathbb{G}_a$ is torsion free; here we use~$\ch(k)=0$. This restriction on~$G$ also follows from Theorem~\ref{thm!constr}, given Kodaira's classification of multiple fibres of elliptic fibrations: multiple cusps do not occur. Our $G$ may be a nodal cubic (with Picard group~$\mathbb{G}_m$) or a triangle of conjugate lines (with Picard group an extension of~$\mathbb Z^3$ by~$\mathbb{G}_m$). If~$\mu = 1$ then $G$ can be cuspidal; but in this case we are free to re-choose~$G$ as we please from the pencil~$\mathcal H$ of Definition~\ref{def!H}, so without loss of generality $G$ is nonsingular. \begin{proof}[Proof of Theorem~\ref{thm!constr}] The case $\mu=1$ is trivial, so let $\mu \ge 2$. Let $\pi\colon Y\rightarrow X$ together with the points~$P_i$ be the resolution of~$(G,D)$ of Definition~\ref{def!res}. We have the Halphen system $\mathcal H$ on~$X$ of Definition~\ref{def!H} and, by construction,~$\mu G\in \mathcal H$. Suppose at first that we are in case A1, B or~C. Define $\mathcal F$ on~$X$ as the tensor product of all~$\mathcal I_{P_i}^{\mu}$. 
There is a map between exact sequences of sheaves of $\mathcal O_X$-modules: \begin{equation} \label{eq!Gdef} \xymatrix@C=0.8cm{ 0 \ar[r] & \mathcal F(\mu A) \ar[r] \ar[d] & \mathcal O_X(\mu A) \ar[r] \ar[d] & \mathcal G \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \mathcal O_G(\mu A-\mu D) \ar[r] & \mathcal O_G(\mu A) \ar[r] & \mathcal O_{\mu D}(\mu A) \ar[r] & 0 } \end{equation} where $\mathcal G=\left(\mathcal O_X/\mathcal F\right)\otimes\mathcal O_X(\mu A)$. (The lefthand vertical arrow is from the definition of~$\mathcal F$, the central one is clear, and the final one follows from the others.) By assumption, $\mathcal O_G(\mu A-\mu D)\cong\mathcal O_G$. Kodaira vanishing shows that $H^1(X,\mathcal O_X(\mu A))=0$. By Serre duality (since $G$ is Gorenstein) we have \[ H^1(G,\mathcal O_G(\mu A)) \cong \Hom(\mathcal O_G(\mu A),\mathcal O_G)^*, \] and this $\Hom$ is zero because $A$ is ample on every component of~$G$. So, taking cohomology, we have a map between exact sequences of $k$-vector spaces: \[ \xymatrix@C=0.4cm{ 0 \ar[r] & H^0(X,\mathcal F(\mu A)) \ar[r] \ar[d] & H^0(X,\mathcal O_X(\mu A)) \ar[r] \ar[d] & H^0(X,\mathcal G) \ar[r] \ar[d]^{\beta_1} & H^1(X,\mathcal F(\mu A)) \ar[r] \ar[d]^{\beta_2} & 0 \\ 0 \ar[r] & H^0(G,\mathcal O_G) \ar[r] & H^0(G,\mathcal O_G(\mu A)) \ar[r] & H^0(G,\mathcal O_{\mu D}(\mu A)) \ar[r]^(.55){\alpha} & H^1(G,\mathcal O_G) \ar[r] & 0 } \] Since both $\alpha$ and~$\beta_1$ are surjective, we have that $\beta_2$ is surjective. Now $\chi(\mathcal O_G)=0$ so $h^1(G,\mathcal O_G)=1$ and we conclude that $H^1(X,\mathcal F(\mu A))\not=0$. From a local calculation at the geometric points of~$D$ we have \[ h^0(X,\mathcal G) \le 3 \begin{pmatrix}\mu+1\\2\end{pmatrix} \] and by Riemann--Roch \[ h^0(X,\mathcal O_X(\mu A))= 3\mu(\mu+1)/2 + 1 \ge h^0(X,\mathcal G)+1. \] Thus $h^0(X,\mathcal F(\mu A))\ge 2$. 
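In detail, the count runs as follows. By Riemann--Roch on~$X$, using $K_X=-A$ and $A^2=3$,
\[
h^0(X,\mathcal O_X(\mu A)) \;=\; \chi(\mathcal O_X) + \tfrac{1}{2}\,\mu A(\mu A - K_X) \;=\; 1 + \tfrac{1}{2}\,\mu(\mu+1)A^2 \;=\; 1 + 3\mu(\mu+1)/2,
\]
since $h^1$ and~$h^2$ of~$\mathcal O_X(\mu A)$ vanish. The top row of the diagram in cohomology then gives
\[
h^0(X,\mathcal F(\mu A)) \;=\; h^0(X,\mathcal O_X(\mu A)) - h^0(X,\mathcal G) + h^1(X,\mathcal F(\mu A)) \;\ge\; 1 + 1 \;=\; 2.
\]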
The linear system $\mathcal H$ is the system associated to $H^0(X,\mathcal F(\mu A))$, and so it has positive dimension; $\mathcal H_Y$ has the same dimension. Since $\mu G\in\mathcal H$, the only possible fixed curve of~$\mathcal H$ is some multiple~$\mu' G$, but then $(\mu-\mu')G$ contradicts the minimality of~$\mu$; therefore $\mathcal H$ is mobile. Let $H_Y\in \mathcal H_Y$ be a general element. Since $H_Y\sim \mu\pi^*(A) - \mu E$, and~$E^2=-3$, we have $H_Y^2=0$. So the map $\phi_Y=\phi_{\mathcal H_Y}$ is a morphism to a curve. Furthermore, $H_Y\sim -K_Y$ so the general fibre is a nonsingular curve (over~$k$) with trivial canonical class. Since $\mu \pi_*^{-1}(G)$ is a fibre of~$\phi_Y$, the image curve~$B$ has a rational point~$Q\in B$. The minimality of~$\mu$ implies that $\mathcal H_Y$ is the pencil $\phi_Y^*|\mathcal O_{B}(Q)|$. In cases A2 and~A3, we make similar calculations on a blowup of~$X$. For example, in case~A2 let $\tau \colon X' \rightarrow X$ be the blowup of~$P_2$ with exceptional curve~$L$. Define $G'$ and~$\mathcal H'$ to be the birational transforms on~$X'$ of $G$ and~$\mathcal H$ respectively. The point $P_3$ lies on~$X'$, and we identify $P_1$ with its preimage under~$\tau$. Let $A' = \tau^*A - L$, and let $D' = P_1 + 2P_3$ as a divisor on~$G'$. Define $\mathcal F'$ as the sheaf $\mathcal I_{P_1}^{\mu}\otimes \mathcal I_{P_3}^{\mu}$ on~$X'$. There is a map between exact sequences of sheaves of $\mathcal O_{X'}$-modules analogous to~\eqref{eq!Gdef} above (involving $A'$, $G'$, etc.)\ with $\mathcal G'=\left(\mathcal O_{X'}/\mathcal F'\right)\otimes\mathcal O_{X'}(\mu A')$. Since $A'\sim -K_{X'}$, the argument works as before in cohomology, with the conclusion that $H^1(X',\mathcal F'(\mu A'))\not=0$. The dimension calculation differs slightly, giving instead that \[ h^0(X',\mathcal G') \le 2 \begin{pmatrix}\mu+1\\2\end{pmatrix} \] and $h^0(X',\mathcal O_{X'}(\mu A'))= 2\mu(\mu+1)/2 + 1 \ge h^0(X',\mathcal G')+1$. 
The conclusion is again that $h^0(X',\mathcal F'(\mu A'))\ge 2$, and the rest of the proof follows verbatim. In case~A3, the only change is again the dimension calculation. \end{proof} \section{Proof of the main theorem} \label{sec!proof} Let $\phi \colon X \dashrightarrow B$ be as in the statement of Theorem~\ref{thm!main}. \subsection{Rationality of the base} \label{sec!B} Let $H_B$ be a very ample divisor on~$B$. We may choose it to have minimal possible degree; since $B$ has genus~$0$, this is either $1$ or~$2$. We first show that in fact the minimal degree is always~1, so that~$B\cong\mathbb P^1$. Suppose $\deg H_B=2$; in particular, this means that $B$ has no rational points. We let $\mathcal H=\phi^*|H_B|$. A general element $H\in\mathcal H$ splits over~$\overline{k}$ as a sum $D_1+D_2$ of two conjugate curves each of genus~$1$. Over $\overline{k}$, $D_1\sim D_2$, so the class of~$D_1$ in $\Pic(\overline{X})$ is Galois invariant. In particular, $D_1$ defines a divisor class in~$\Pic(X)$ over~$k$. So $H$ is divisible by~$2$ in~$\Pic(X)$: say $H\sim 2F$ where $F$ is an effective divisor defined over~$k$. So $F\sim D_1$ over~$\overline{k}$ and therefore, over~$k$, $|F|$ determines a map $X\dashrightarrow\mathbb P^1$ which factorises~$\phi$. So $B$ has a rational point, contrary to our assumption. \subsection{More preliminaries} \label{sec!prelim} We know now that $B$ has a rational point, so we may assume $B=\mathbb P^1$. We denote by~$\mathcal H$ the mobile linear system $\phi^*|\mathcal O_{\mathbb P^1}(1)|$, a linear system that defines~$\phi$. Since $X$ is minimal, $\mathcal H \subset |\mu A|$ for some fixed~$\mu\in\mathbb N$. The anticanonical degree $\mu$ is also denoted~$\deg\mathcal H$. Let $P_1,\ldots,P_r$ be the distinct basepoints of~$\mathcal H$ and $m_1,\ldots,m_r \in \mathbb N$ their multiplicities: so a general $C \in \mathcal H$ has $\mult_{P_i}(C) = m_i$ for all~$i$. 
The list $P_1,\ldots,P_r$ may include infinitely near basepoints that lie on surfaces dominating~$X$; compare with Definition~\ref{def!res}. Note that any $P_i$ may have degree greater than~1. Let $f \colon W \to X$ be the blowup (in any appropriate order) of all the~$P_i$; $f$~is a minimal resolution of indeterminacy for~$\phi$. We denote by~$E_i$ the \emph{total transform\/} on~$W$ of the exceptional curve over~$P_i$: that is, if $L$ is the exceptional curve of the blowup of~$P_i$ then $E_i$ is the total transform of~$L$ on~$W$. (Note that $E_i$ may be reducible or even nonreduced.) Then denoting $\deg P_i$ by~$d_i$, we have \begin{equation} \label{eqns!EiEj} E_i^2 = -d_i \mbox{\quad and \quad} E_i E_j = 0 \mbox{\;\; for $i \ne j$.} \end{equation} With this notation, the adjunction formula for~$f$ reads \begin{equation} \label{eq!KW} K_W \sim f^*K_X + E_1 + \cdots + E_r \end{equation} and the birational transform $\mathcal H_W$ of~$\mathcal H$ on~$W$ satisfies \begin{equation} \label{eq!HW} \mathcal H_W \sim f^*\mathcal H - m_1E_1 - \cdots - m_rE_r. \end{equation} \begin{thm}[Noether--Fano--Iskovskikh inequalities] \label{thm!nfi} Under the hypotheses of Theorem~\ref{thm!main}, $\mathcal H$ has a basepoint of multiplicity at least~$\mu = \deg \mathcal H$: that is, $m_i \ge \mu$ for some~$i$. \end{thm} \begin{rk} \label{rk!Pi} We may assume the point~$P_i$ with $m_i \ge \mu$ is a point of~$X$, not an infinitely near point, because multiplicities of linear systems on nonsingular surfaces are nonincreasing under blowup. \end{rk} The theorem contrasts with the familiar case, explained in~\cite{CPR} and \cite{KSC} \S5.1, for instance, when $\mathcal H \subset |\mu A|$ induces a birational map from $X$ to a nonsingular surface~$Y$ that is minimal over~$k$: in this case the NFI inequalities tell us there is a basepoint of multiplicity strictly larger than~$\mu$. 
In Mori theory the latter statement is that $(X, \tefrac{1}{\mu} \cH)$ has a noncanonical singularity; the case we need, Theorem~\ref{thm!nfi}, says that $(X, \tefrac{1}{\mu} \cH)$ has a nonterminal singularity. For the modern viewpoint on NFI for elliptic and K3 fibrations birational to Fano varieties, see~\cite{R06}, whose approach follows Cheltsov~\cite{Ch} and is based on ideas of Shokurov~\cite{Sh}. \begin{proof}[Proof of Theorem~\ref{thm!nfi}] By equations \eqref{eq!KW} and \eqref{eq!HW} \[ 0 \:\:\sim_{\Q}\:\: f^*\big(K_X + \tefrac{1}{\mu} \cH \big) \:\:\sim_{\Q}\:\: K_W + \textstyle\frac{1}{\mu}\mathcal H_W - \textstyle\sum_{i=1}^r \big(1 - \textstyle\frac{m_i}{\mu}\big) E_i \] where $\sim_{\Q}$~denotes $\mathbb Q$-linear equivalence of $\mathbb Q$-divisors. Now the intersection number $\mathcal H_W^2$ is zero since the morphism $\phi \circ f$ is a fibration, which implies that \begin{equation} \textstyle\sum_{i=1}^r d_i m_i^2 = 3 \mu^2 \label{eqn!n1}. \end{equation} Also $K_W \mathcal H_W = 0$ by the adjunction formula, and expanding $\mathcal H_W(K_W+(1/\mu)\mathcal H_W) = 0$ gives \begin{equation} \textstyle\sum_{i=1}^r d_i m_i \big(1 - m_i/\mu \big) = 0. \label{eqn!n2} \end{equation} Now (\ref{eqn!n2}) implies the result, since if any of the coefficients $(1 - m_i/\mu)$ is nonzero then at least one must be negative. Note that by equation~(\ref{eqn!n1}) there is at least one basepoint, that is, $r \ge 1$; this equation will also be used later. \end{proof} \subsection{Proof of Theorem~\ref{thm!main}.} First we describe the logical structure of the argument. It falls into two parts according to equation \eqref{eqn!n2}: either $m_i>\mu$ for some~$i$, in which case we sketch a standard induction step; or $m_i=\mu$ for every~$i$, and we work this base case out in detail. \paragraph{Induction step.} This is essentially the proof of the birational rigidity of~$X$, as given in~\cite{CPR}, for example. 
We are given a point $P_i\in X$ (by Remark~\ref{rk!Pi}) with multiplicity $m_i>\mu$ --- by definition, $P_i$ is a {\em maximal centre} of~$\mathcal H$. So \[ 3\mu^2 \: = \: (\mu A)^2 \: = \: \mathcal H^2 \: \ge \: m_i^2 d_i \: > \: \mu^2 d_i, \] where $d_i = \deg P_i$, and the inequality $\mathcal H^2 \ge m_i^2 d_i$ is the global-to-local comparison of intersection numbers $\mathcal H^2 \ge (\mathcal H)^2_{P_i}$. It follows that $d_i=1$ or~$2$. We precompose~$\phi$ with the Geiser or Bertini involution~$i_{P_i}$. It can be shown --- Lemma~2.9.3 of \cite{CPR} --- that this \emph{untwists}~$\mathcal H$, in other words that $\deg (i_{P_i})^{-1}_*\mathcal H < \deg \mathcal H = \mu$, and we conclude by induction on the degree~$\mu$. (Note that if $\mu=1$ then all $m_i=1$ by~\eqref{eqn!n2}.) \paragraph{Base case.} Equation~(\ref{eqn!n1}) implies that $\sum d_i = 3$, i.e., if we count over an algebraic closure~$\overline{k}$ of~$k$ then there are 3~basepoints; we must show they arise from Halphen data~$(G,D)$. So let $\psi = \phi\circ f \colon W \to \mathbb P^1$ be the morphism obtained by blowing up the base locus $P_1,\ldots,P_r$ of~$\phi$. We work over~$\overline{k}$ for the remainder of this paragraph. Take a general fibre $F$ of~$\psi$; by Bertini's Theorem $F$~is a nonsingular curve of genus~1. Now \[ F \;\sim\; \mu f^*(A) - \mu \textstyle\sum_{i=1}^r E_i \;\sim\; {-\mu K_W}. \] By Kodaira's canonical bundle formula applied to~$\psi$, \[ K_W \;\sim\; \psi^*(K_{\mathbb P^1} + M) \,+\, \textstyle\sum_j (n_j-1) G_j \] where $M$ is a divisor of degree~$\chi(\mathcal O_W)$ on~$\mathbb P^1$ and the $n_j G_j \sim F$, with $n_j \ge 2$, are the multiple fibres of~$\psi$. Now $\chi(\mathcal O_W) = \chi(\mathcal O_X) = 1$ so $M$ is a point and we have \[ {-\textstyle\frac{1}{\mu} F} \;\sim_{\Q}\; {-F} + \textstyle\sum_j \big(1-\textstyle\frac{1}{n_j} \big) F. \] Therefore $1-\frac{1}{\mu} = \sum_j \big(1-\frac{1}{n_j} \big)$. 
So either $\mu = 1$ and there are no multiple fibres, or there is a single multiple fibre $n_1 G_1 = \mu G_1 \sim F$ of multiplicity~$\mu$. Since the subscheme of multiple fibres is Galois invariant, $G_1$ is in fact defined over~$k$. From here on, we work exclusively over~$k$. In the case $\mu=1$, $\mathcal H$ is a pencil contained in~$\left|A \right|$ so it gives a linear fibration and we are done. The main case is~$\mu>1$. Let $G_W = G_1$ and $G = f_*(G_W)$: then \[ G \;\sim_{\Q}\; f_* \big( \textstyle\frac{1}{\mu} F \big) \;\sim_{\Q}\; f_*({-K_W}) \;\sim\; {-K_X} \;=\; A \] so $G$ is a plane section of~$X$. By minimality of~$X$, $G$~is irreducible over~$k$; also $\mu G = f_*(\mu G_W) \in f_*(\mathcal H_W) = \mathcal H$, so $\mult_{P_i}(G) \ge 1$ for each basepoint~$P_i$. (We are abusing notation here: if $P_i$ is an infinitely near point, let $Z$ denote any surface between $X$ and~$W$ on which $P_i$~lies and define $\mult_{P_i}(G)$ to be $\mult_{P_i}(G_Z)$, where $G_Z$ is the pushforward of~$G_W$ to~$Z$.) We claim that in fact $\mult_{P_i}(G) = 1$ for each~$P_i$. Indeed, first note that $G_W$ is the strict transform of~$G$ on~$W$, since otherwise $G_W$ would contain some~$E_i$ with multiplicity at least~1; but then $E_i$ would be contained in a fibre of~$\psi$, contradicting \[ FE_i \;=\; {-\mu K_W E_i} \;=\; \mu d_i \;>\; 0. \] Therefore the claim $\mult_{P_i}(G) = 1$ for each~$P_i$ is equivalent to \[ G_W \;=\; f^*(G) - \textstyle\sum_i E_i; \] but the latter follows from the facts $\mu G \in \mathcal H$, $\mu G_W \in \mathcal H_W$ and $\mathcal H_W = f^*(\mathcal H) - \sum \mu E_i$. We now construct an effective $k$-rational divisor~$D$ of degree~3 on~$G$ by the inverse of the procedure in Definition~\ref{def!res}. We define $D$ to be $\sum \ell_i P_i$ as a divisor on~$G$, where the sum extends over basepoints~$P_i$ that lie on~$X$ (rather than on a surface dominating~$X$) and $\ell_i$ is some factor 1, 2 or~3 that we specify. 
If the $P_i$ are all points of~$X$ then we set all $\ell_i=1$, so $D = P_1+P_2+P_3$ (this is one of cases A1, B and~C). If $P_1, P_2 \in X$ and $P_3$ lies above~$P_2$, possibly after renumbering, then we set $\ell_1=1$ and~$\ell_2=2$, so~$D = P_1+ 2P_2$ (case~A2). Notice that in this case $P_3$ must be the unique intersection point of the exceptional curve above~$P_2$ and the birational transform of~$G$, so this procedure is indeed the inverse of the construction in Definition~\ref{def!res}. If $P_1 \in X$, $P_2$ lies over~$P_1$ and $P_3$ lies over~$P_2$, then we set $\ell_1=3$, so $D = 3P_1$ (case~A3); again the points $P_i$ lie on the strict transform of~$G$ at every stage. Next we check that $(G,D)$ is Halphen data: the outstanding point is that $\mathcal O_G(H) \cong \mathcal O_G(\mu D)$ for a general curve $H \in \mathcal H$, that is, that $H$ cuts out exactly $\mu D$ on~$G$. At a point~$P$, the divisor of~$H$ on~$G$ is $i_P(H,G)P$, where $i_P(H,G)$ denotes the local intersection number of $H$ and~$G$. So we must show that for basepoints $P_i$ that lie on~$X$, we have $i_{P_i}(H,G)=\ell_i\mult_{P_i}(H)$ for the $\ell_i$ defined above. In cases A1, B and~C, $H$ can be chosen so that at any basepoint~$P_i$ none of its branches is tangent to $G$ at~$P_i$ --- otherwise there would be an additional infinitely near basepoint above~$P_i$ --- so $i_{P_i}(H,G)=\mult_{P_i}(H)$ and all $\ell_i=1$ as required. In case A2, using the notation above with $P_3$ the infinitely near point, again $i_{P_1}(H,G)=\mult_{P_1}(H)$. So \[ i_{P_2}(G,H) \:\:=\:\: GH - i_{P_1}(G,H) \:\:=\:\: 3\mu - \mu \:\:=\:\: 2\mu \:\:=\:\: 2\mult_{P_2}(H) \] and $\ell_2=2$ as required. Case~A3 is similar. Finally, let $\mu'$ be the index of~$(G,D)$; $\mu'$ is a divisor of~$\mu$. The construction of Theorem~\ref{thm!constr} now applies to~$(G,D)$ to give a pencil $\mathcal P$ on~$X$ containing~$\mu' G$. 
On~$W$, the multiple $(\mu/\mu')\pi_*^{-1}\mathcal P$ is contained in~$\mathcal H_W$; since $\mathcal H_W$ is a pencil, we have $\mu'=\mu$ and~$\mathcal H=\mathcal P$. \section{Algorithms} \label{sec!alg} We describe algorithms to carry out our analysis of elliptic fibrations; we assume without comment standard routines of computer algebra such as Taylor series expansions, ideal quotients and primary decomposition. We also need the field~$k$ to be computable; that is, we must be able to make standard computations in linear algebra over~$k$ and work with polynomials, rational functions and power series over~$k$ and in small finite extensions of~$k$. The routines are expressed here in a modular way; we have implemented them in the computer algebra system {\sc Magma}\ \cite{Ma} closely following this recipe. Our descriptions below are self-contained and we include them to support the code. The initial setup of the cubic surface is this: $R=k[x,y,z,t]$ is the homogeneous coordinate ring of~$\P^3$ and $R(X) = R/F=\oplus_{n\in\mathbb N}H^0(X,\mathcal O(n))$ is the homogeneous coordinate ring of~$X$; here $F=F(x,y,z,t)$ is the defining equation of~$X$, a homogeneous polynomial of degree~$3$. \paragraph{Overview of the computer code.} The code can be used to build examples of Halphen fibrations, as in Section~\ref{sec!hal}, and Geiser and Bertini involutions in order to twist Halphen fibrations, as in Section~\ref{sec!GB}; using these in conjunction, one can realise Theorem~\ref{thm!main} for particular examples. The central point in all of these is to impose conditions on linear systems on~$X$. We describe an algorithm to do this in Section~\ref{sec!imp}; this follows our code very closely. Then we explain the applications in Section~\ref{sec!app}. Finally we give an implementation of Theorem~\ref{thm!main} in Section~\ref{sec!mainalg}. 
This requires two additional elements: we need to compute the multiplicity of a linear system (not just a single curve) at a point $P\in X$ and to analyse the base locus of a linear system on~$X$. \subsection{Imposing conditions on linear systems} \label{sec!imp} This is the central algorithm: given a (nonsingular, rational) point $P\in X$ and positive integers $d$ and~$m$, return the space of forms of degree $d$ on~$\P^3$ that vanish to order $m$ at~$P$ when regarded as functions on~$X$ in a neighbourhood of~$P$. \paragraph{Step 1: A good patch on the blowup of $X$ at $P$.} Change coordinates so that $P=(0:0:0:1)\in X\subset\P^3$ and so that the projective tangent space $T_PX$ to~$X$ at~$P$ is the hyperplane $y=0$. Then consider the blowup patch $(xz,yz,z)$ in local coordinates on~$\P^3$ at~$P$. Altogether, this determines a map $f\colon\mathbb A^3\rightarrow\P^3$ with exceptional divisor $E_{\mathrm{amb}}=(z=0)$. The birational transform $\widetilde{X}$ satisfies $f^*(X) = \widetilde{X} + E_{\mathrm{amb}}$ and the exceptional curve of $f_{|\widetilde{X}}\colon \widetilde{X}\rightarrow X$ is $E=E_{\mathrm{amb}}\cap\widetilde{X}$, which is the $x$-axis in~$E_{\mathrm{amb}}$. \paragraph{Step 2: Parametrise $\widetilde{X}$ near the generic point of $E$.} The local equation of~$\widetilde{X}$ is $g=f^*(F)/z$. The exceptional curve $E$ is the $x$-axis. Working over $K=k(x)$, $\widetilde{X}$ is the curve $g(y,z)=0$ in~$\mathbb A^2_K$, and this is nonsingular at the origin (the generic point of~$E$). Cast $g$ into the ring $k(x)[\![z]\!][y]$ and compute a root $Y$ of~$g$ as a polynomial in~$y$ --- this is the implicit function $Y=y(z)\in K[\![z]\!]$ implied by $g(y,z)=0$ (with coefficients in~$K$). \paragraph{Step 3: Pull a general form of degree $d$ back along the blowup.} Let $N=\binom{d+3}{3}$ and let $p = a_1x^d + a_2x^{d-1}y + \cdots + a_Nt^d$ be a form of degree~$d$ with indeterminate coefficients $a_1,\dots,a_N$. 
Compute $q(x,y,z) = f^*(p)$. \paragraph{Step 4: Impose vanishing conditions on $q$.} Evaluate $q$ at $y=Y$. The result is a power series in~$z$ with coefficients in~$k(x)$ and the indeterminates $a_1,\dots,a_N$. The condition that $p$ vanishes to order at least $m$ at~$P\in X$ is just that the coefficient of $z^i$ vanishes identically for $i=0,\dots,m-1$. Each such coefficient is of the form $p_i(x,a_1,\dots,a_N)/q_i(x)$, where $q_i(x)$ is a polynomial in~$x$ and $p_i$ is polynomial in~$x$ but linear in $a_1,\dots,a_N$. Writing $p_i=\sum_j \ell_{i,j}(a_1,\dots,a_N)x^j$, the coefficient of~$z^i$ is zero if and only if $ \ell_{i,j}(a_1,\dots,a_N)=0$ for each~$j$. This is finitely many $k$-linear conditions on the~$a_i$. \paragraph{Step 5: Interpret the linear algebra on $X$.} Choose a basis of the solution space~$U_0$ of the linear conditions on $a_1,\dots,a_N$. This is almost the solution; if $d\ge 3$, however, we must work modulo the equation $F$ of the surface~$X$. This is trivial linear algebra: compute the span $W_d=F\cdot H^0(\P^3,\mathcal O(d-3))$ of~$F$ in degree $d$, intersect with the given solutions $W=W_d\cap U_0$, and then compute a complement $U$ inside $U_0$ so that $U_0=W\oplus U$. A basis of~$U$ gives the coefficients (in the ordered basis of monomials of degree $d$) of a basis of the required linear subsystem of $|\mathcal O_{\P^3}(d)|$. \paragraph{Variation 1: working inside a given linear system.} Rather than working with all monomials of degree~$d$, we can start with a subspace $V \subset H^0(X,\mathcal O(d))$ and impose conditions on that. We simply work with a basis of~$V$ throughout the calculation in place of the basis of monomials used above. \paragraph{Variation 2: non-rational basepoints.} In our applications, the only nonrational basepoints~$P$ that we need to consider have degree 2 or~3. In the former case we can make a degree~2 extension $k\subset k_2$ so that $P$ is rational after base change to~$k_2$. 
Computing as before at one of the two geometric points of~$P$ gives $k_2$-linear conditions on the coefficients~$a_i$. Picking a basis for $k_2$ over~$k$, we can split these conditions into `real and imaginary' parts, and impose them all as linear conditions over~$k$. A similar trick works for points of degree~3. \subsection{Applications of the central algorithm} \label{sec!app} \paragraph{Building Halphen fibrations from Halphen data.} We are given Halphen data $(G,D)$ of index $\mu$ on~$X$, as in Definition~\ref{def!hal}, and we need to construct the associated Halphen system $\mathcal H\subset |\mu A|$ of Definition~\ref{def!H} by imposing conditions on~$|\mu A|$. Recall the points $P_i$ that are blown up in Definition~\ref{def!res} to make the resolution of~$(G,D)$. In cases A1, B and C, we simply impose the points $P_i\in X$ as multiplicity $\mu$ basepoints of~$\mathcal H$, using Variation~2 of the algorithm to handle nonrational basepoints. In case A2, we need to impose the conditions at~$P_1$ and $P_3$ only --- for the latter we must blow up $X$ at $P_2$ and compute on that new surface. Similarly in case A3 we make two blowups and impose conditions only at~$P_3$. \paragraph{Geiser and Bertini involutions.} As usual, let $A=\mathcal O_X(1)$. The Geiser involution at~$P$ is given by the linear system $\mathcal L=|2A-3P|$, and the Bertini involution at~$P$ is given by $\mathcal L=|5A-6P|$. Bases of these linear systems are computed by the algorithm of Section~\ref{sec!imp}; we start by computing any basis, which determines a map $j_P\colon X \dashrightarrow \P^3$. However, it is important to choose the right basis. There are two problems that may occur with our initial choice: the image of $j_P$ may not be~$X$; and, even if it is, $j_P$ could be the involution we want composed with a linear automorphism of~$X$. Our solution is to mimic the geometric definition of~$i_P$ in Section~\ref{sec!GB}. 
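In the Geiser case, this geometric recipe reduces to finding the third point in which a line meets the cubic. The following Python/sympy sketch illustrates that one computation (our implementation is in {\sc Magma}; the Fermat-type surface and the points used here are chosen purely for illustration and are not among the examples of this paper):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
lam, mu = sp.symbols('lam mu')

def third_point(F, P, Q):
    """Third intersection point of the line PQ with the cubic surface F = 0.

    Assumes P, Q are distinct points of the surface, the line PQ is not
    contained in the surface, and PQ is not tangent to it at P or Q.
    """
    coords = [lam*p + mu*q for p, q in zip(P, Q)]
    g = sp.expand(F.subs(dict(zip([x, y, z, t], coords))))
    # F(P) = F(Q) = 0, so lam*mu divides the binary cubic g; the residual
    # linear factor a*lam + b*mu cuts out the third point of the line.
    linear = sp.cancel(g / (lam * mu))
    a, b = linear.coeff(lam), linear.coeff(mu)
    return tuple(c.subs({lam: b, mu: -a}) for c in coords)

# Illustration on a Fermat cubic (not one of the paper's examples):
F = x**3 + y**3 + z**3 + t**3
P = (1, -1, 0, 0)
Q = (1, 0, -1, 0)
R = third_point(F, P, Q)   # a scalar multiple of (0, 1, -1, 0)
```

The binary cubic $F(\lambda P+\mu Q)$ has roots at $P$ (where $\mu=0$) and at $Q$ (where $\lambda=0$); dividing them out leaves the linear factor vanishing at the third point.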
For both Geiser and Bertini involutions we find five affinely independent points and compute their images under both $i_P$ and~$j_P$, and thus interpolate for the linear automorphism $\tau$ of~$\P^3$ such that $i_P = \tau \circ j_P$. In the Geiser case, if $L$ is a general line through $P$ then the two residual points of $X\cap L$ are swapped by the involution. Typically, residual points arising as $X\cap L$ become geometric only after a degree~2 base change, and different lines need different field extensions. This is a bit fiddly in computer code, but is only linear algebra. (There may be a better solution using the projection of~$X$ away from $P$ to $\P^2$ and working directly with the equation of~$X$ expressed as a quadratic over the generic point of~$\P^2$.) For the Bertini involution, in order to compute a single point and its image under $i_P$ we first find the unique line $L$ through~$P$ and the point $R\in X$ such that $L\cap X = \{P,R\}$. Let $\Pi\supset L$ be a general plane containing $L$; $E = X\cap \Pi$ is a nonsingular cubic curve. We make the Weierstrass model of~$(E,R)$ --- that is, we embed $E$ in a new plane $\P^2$ with $R$ as a point of inflexion. In that model, we take a general line through~$R$ and compute the two other (possibly equal) intersection points $(Q_1,Q_2)$ of that line with~$E$. Then $Q_2=-Q_1$ in the group law on~$E$ with $R$ as zero, and the Bertini involution maps $Q_1$ to~$Q_2$. Of course it may happen that the points $Q_i$ are not $k$-rational; but in that case, as for the Geiser involution, we simply make a degree~$2$ field extension to realise them and separate `real and imaginary' parts later. \paragraph{Calculating multiplicities of linear systems.} Suppose $\mathcal H$ is a linear system on~$X$ and $P\in X$ a point of degree~1. To compute the multiplicity of~$\mathcal H$ at~$P$ we run the first three steps of the algorithm of Section~\ref{sec!imp} and the first evaluation of Step~4. 
The result is a power series in the variable~$z$, and the multiplicity of~$\mathcal H$ at~$P$ is the order of that power series. Whether this works in practice depends on what implementation of power series is being used. If power series are expanded lazily with precision extended as required then it works as stated; if they are computed to a fixed precision then the algorithm is best applied to compute lower bounds on multiplicities. Fortunately we use it only to identify maximal centres, for which a lower bound is exactly the requirement. \subsection{The main theorem: untwisting elliptic fibrations} \label{sec!mainalg} We are given a cubic surface $X\subset\P^3$ together with a rational map $\phi\colon X \dashrightarrow \P^1$ defined by two homogeneous polynomials $f,g$ of common degree~$\mu$. Equivalently, we may regard $\phi$ as a linear system $\mathcal H=\left< f,g \right> \subset H^0(X,\mathcal O(\mu))$. In outline, the algorithm is simple; it terminates by the proof of Theorem~\ref{thm!main}, the main point being that Step~3 below cannot be repeated infinitely often. \paragraph{Step 0: Trivial termination.} If the degree~$\mu$ is equal to 1 then stop: the pencil must be a linear elliptic fibration. Return the pencil and its base locus (which is trivial to compute). \paragraph{Step 1: Basepoints.} Ideally we would compute precisely the base locus of~$\mathcal H$ as a subscheme of~$X$ and work directly with that. But to avoid computing in local rings, our algorithm in Section~\ref{sec!base} below computes a finite set of reduced zero-dimensional subschemes of~$X$ that supports the base locus. (In short, it solves $f=g=0$ on~$X$ and then strips off one-dimensional primary components.) We call these {\em potential basepoints} of~$\mathcal H$. As in Section~\ref{sec!proof}, the degree of a maximal centre is at most 2, so we discard any potential basepoints of higher degree. We refer to any of these as a {\em potential centre} of~$\phi$. 
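As an illustration of Step~1, the following Python/sympy sketch (sympy stands in for our {\sc Magma}\ code here) finds the basepoints of the pencil $(x^2+xz-z^2 : t^2)$ on the cubic $t^3-x^3+y^2z+2xz^2-z^3=0$ of Section~\ref{sec!egs}, working in the affine chart $z=1$; this chart misses the basepoint $(0:1:0:0)$, which would be picked up by the same computation in the chart $y=1$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Pencil (f1 : f2) on the cubic surface F = 0, in the affine chart z = 1.
F  = t**3 - x**3 + y**2 + 2*x - 1
f1 = x**2 + x - 1
f2 = t**2

# Common zeros of f1, f2 and F in this chart: the system is
# zero-dimensional, so sympy returns finitely many points.
sols = sp.solve([f1, f2, F], [x, y, t], dict=True)
# Two conjugate points (alpha, 0, 0) with alpha^2 + alpha - 1 = 0: together
# they form a single potential basepoint of degree 2 over the ground field.
```

In practice one-dimensional components of $f=g=0$ must first be stripped off, which is the business of the ideal-quotient computation in Section~\ref{sec!base}; in this example the base locus is already zero-dimensional.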
\paragraph{Step 1a: Check termination.} If there are no potential centres then stop: the linear system must be an Halphen system, and moreover we must be in case C of Definition~\ref{def!res} --- that is, there is a single basepoint of degree~$3$. Return the system and its base locus. \paragraph{Step 2: Multiplicities.} Compute the multiplicity of the linear system $\mathcal H$ at each potential centre $P$ in turn. (At points of degree~2 we make a quadratic field extension and calculate at one of the two resulting geometric points.) If $P$ has multiplicity $m>\mu$ then go to Step~3. It may happen that no such $P$ exists, in which case: \paragraph{Step 2a: Termination.} This is the base case of the proof of Theorem~\ref{thm!main}. The linear system gives an Halphen fibration and its base locus consists of all the potential centres of multiplicity~$m=\mu$. Return the linear system and its base locus. \paragraph{Step 3: Untwist.} If the maximal centre $P$ has degree~1 then compute the Geiser involution $i_P\colon X\dashrightarrow X$ at that point. If it has degree~2, compute the Bertini involution $i_P\colon X\dashrightarrow X$. In either case, replace~$\phi$ by $\phi\circ i_P$ and repeat from Step~0. \subsection{Analysing base loci on surfaces} \label{sec!base} It remains to provide an algorithm for Step~1 above. We work in slightly more generality with an arbitrary linear system $\mathcal L$ on~$X$ corresponding to a subspace $V\subset H^0(X,\mathcal O(d))$. The base locus $B=\Bs\mathcal L$ of~$\mathcal L$ is contained in the subscheme $B'\subset X$ defined by the ideal $I=\left< V \right>\subset R(X)$; the algorithm below returns the reduced set of associated primes of height~$\ge 2$ of~$B'$. \paragraph{Step 0: Setup.} $\mathcal L$ is defined by a basis of~$V$, a finite set of homogeneous polynomials $p_1,\dots,p_k$ of degree~$d$. Let $I=\left< p_1,\dots,p_k,F \right>\subset R$; this is the ideal of~$B'$ considered as a subscheme of~$\P^3$. 
\paragraph{Step 1: Identify and remove codimension 1 components.} Let $I_{\red}$ be the radical of~$I$ and let $P_1,\dots,P_N$ be the height~1 associated primes of~$I_{\red}$. Let $J_0=I$ and, for $i=1,\dots,N$, let $J_i=(J_{i-1} : P_i^{n_i})$ where $n_i\in\mathbb N$ is minimal such that $J_i$ is not contained in~$P_i$. This removes the codimension~1 base locus without removing any embedded primes there (at least set-theoretically): the radical of~$J_N$ is the ideal of the set of all isolated or embedded basepoints. \paragraph{Step 2: End.} Let $K=\rad(J_N)$, the ideal of a reduced zero-dimensional scheme. Let $R_1,\dots,R_M$ be the associated primes of~$K$. Return this set of primes. \section{Examples} \label{sec!egs} We have implemented computer code in the {\sc Magma}\ computational algebra system; together with instructions, it can be downloaded at~\cite{BR}. We present some examples below to illustrate our code. Here we work in~$\mathbb P^3$ defined over $k=\mathbb Q$, which we input as: {\small \begin{verbatim} > k := Rationals(); > P3<x,y,z,t> := ProjectiveSpace(k,3); \end{verbatim} }\noindent The symbol {\small \tt >} is the {\sc Magma}\ prompt. In some cases below the output has been edited mildly. \subsection{An Halphen fibration with $\mu=2$} We start with the surface $X\colon(t^3 - x^3 + y^2z + 2xz^2 - z^3=0)\subset\P^3$. {\small \begin{verbatim} > X := Scheme(P3,t^3 - x^3 + y^2*z + 2*x*z^2 - z^3); > IsNonsingular(X); true \end{verbatim} }\noindent The surface $X$ is not minimal --- for example, $z=x-t=0$ is a line --- but we can still construct interesting elliptic fibrations on it. The $t=0$ section of~$X$ is an elliptic curve~$G$ with origin $O=(0:1:0:0)$ and an obvious rational $2$-torsion point $R=(1:0:1:0)$. (Of course, to construct the example we started with this curve and extended to~$X$.) {\small \begin{verbatim} > O := X ! [0,1,0,0]; > R := X ! 
[1,0,1,0]; \end{verbatim} }\noindent To make Halphen data with~$\mu=2$, we need an effective, $k$-rational divisor $D$ on~$G$ of degree~3 for which $D-3O$ is $2$-torsion in~$\Pic(G)$. We construct such~$D$ as follows. Let $L\subset \mathbb P^3$ be the line $y=t=0$ and define a point of degree~$2$ on~$X$ by $L\cap X = \{R,P\}$: so $P$ is the union of the two points $(\alpha:0:1:0)$ with $\alpha^2 + \alpha -1=0$. Define $D=P+O$ as a divisor on~$G$. The pair $(G,D)$ is Halphen data of index~$\mu=2$. In fact the construction of the Halphen system is in terms of linear systems and points on~$X$, rather than on~$G$, so for the calculation it only remains to construct~$P$. {\small \begin{verbatim} > L := Scheme(P3,[y,t]); > PandR := Intersection(X,L); > P := [ Z : Z in IrreducibleComponents(PandR) | Degree(Z) eq 2 ][1]; P; Scheme over Rational Field defined by x^2 + x*z - z^2, y, t \end{verbatim} }\noindent We build the Halphen system by imposing $D$ as base locus of multiplicity~2 on the linear system $|2A|$, where $A$ is a hyperplane section of~$X$. {\small \begin{verbatim} > A2 := LinearSystem(P3,2); > H0 := ImposeBasepoint(X,A2,P,2); > H := ImposeBasepoint(X,H0,O,2); > H; Linear system on Projective Space of dimension 3 with 2 sections: x^2 + x*z - z^2, t^2 \end{verbatim} }\noindent The resulting fibration is $\phi=(x^2+xz-z^2 : t^2)\colon X\dashrightarrow\mathbb P^1$, and we see $\phi^{-1}(1:0)=2G$. We check that the fibre $C = \phi^{-1}(-1:1)$ is irreducible and has genus~1: {\small \begin{verbatim} > C := Curve(Intersection(X, Scheme(P3, t^2 + x^2 + x*z - z^2))); > assert IsIrreducible(C); > Genus(C); 1 \end{verbatim} } \subsection{Geiser and Bertini involutions} We construct a Geiser involution on the minimal surface $X\colon (x^3 + y^3 + z^3 + 3t^3=0)\subset\mathbb P^3$. {\small \begin{verbatim} > X := Scheme(P3,x^3 + y^3 + z^3 + 3*t^3); > P := X ! 
[1,1,1,-1]; > iP := GeiserInvolution(X,P); > DefiningEquations(iP); \end{verbatim} }\noindent returns the equations of the involution~$i_P$: \begin{eqnarray*} ( \; -xy + y^2 - xz + z^2 - 3xt - 3t^2 \;:\; x^2 - xy - yz + z^2 - 3yt - 3t^2 \;: \mbox{\hspace{20mm}} \\ x^2 + y^2 - xz - yz - 3zt - 3t^2 \;:\; -x^2 - y^2 - z^2 - xt - yt - zt\; ). \end{eqnarray*} Since $P\in X$ is not an Eckardt point --- we discuss that case below --- the Geiser involution contracts the tangent curve $C_P = T_P(X) \cap X$ to~$P$. {\small \begin{verbatim} > TP := TangentSpace(X,P); > CP := Curve(Intersection(X,TP)); > iP(CP); Scheme over Rational Field defined by z + t, y + t, x + t > Support(iP(CP)); { (-1 : -1 : -1 : 1) } \end{verbatim} }\noindent To make a Bertini involution, we find a point of degree~$2$. {\small \begin{verbatim} > L := Scheme(P3,[x-y,z+t]); > XL := Intersection(X,L); > Q := [ Z : Z in IrreducibleComponents(XL) | Degree(Z) eq 2 ][1]; > iQ := BertiniInvolution(X,Q); > DefiningEquations(iQ); \end{verbatim} }\noindent again returns the equations of~$i_Q$, although in this case they are too large to print reasonably: the first equation has 38 terms, beginning with \[ 6x^2y^3 - 5xy^4 + 5y^5 - x^2y^2z - xy^3z - 4x^2yz^2 - 4y^3z^2 + 6x^2z^3 - 4xyz^3 + 11y^2z^3 - \cdots. \] \subsection{Eckardt points} A $k$-rational point $P\in X$ is an {\em Eckardt point} if $T_P X \cap X$ splits as three lines through~$P$ over a closure~$\overline{k}\supset k$. For example, the surface \[ X\colon (x^3 + y^3 + z^3 + 2t^3 = 0) \subset \P^3 \] is minimal and $P=(1:-1:0:0)\in X$ is an Eckardt point: $T_P X \cap X = (x+y=z^3 + 2t^3 = 0)$. Geiser involutions in Eckardt points are in fact biregular, and we see this here: {\small \begin{verbatim} > X := Scheme(P3, x^3 + y^3 + z^3 + 2*t^3); > P := X ! 
[1,-1,0,0]; > iP := GeiserInvolution(X,P); \end{verbatim} }\noindent When {\sc Magma}\ computes a map to projective space, it does not automatically search for common factors between the defining equations and cancel them. To see the map more clearly, we do this by hand. {\small \begin{verbatim} > [ f div GCD(E) : f in E ] where E is DefiningEquations(iP); [ y, x, z, t ] \end{verbatim} }\noindent So the Geiser involution~$i_P$ switches $x$ and~$y$ in this case, and that is clearly a biregular automorphism of~$X$. \subsection{An example of untwisting} Working on the same surface $X\colon (x^3 + y^3 + z^3 + 2t^3 = 0)$ as above, consider the fibration $f =(f_1:f_2)\colon X \dashrightarrow \mathbb P^1$ defined by the two polynomials {\small \begin{center} $f_1 = 57645x^2y^3 + 47234xy^4 - 9963y^5 + 23490x^2y^2z + 97322xy^3z + 70056y^4z - 26730x^2yz^2 - 33603xy^2z^2 + 5751y^3z^2 + 47925x^2z^3 + 85664xyz^3 - 5373y^2z^3 + 41480xz^4 + 72990yz^4 + 4095z^5 + 8100x^2y^2t + 157516xy^3t + 148392y^4t - 200880x^2yzt - 25896xy^2zt + 182664y^3zt + 9720x^2z^2t - 10800xyz^2t - 42408y^2z^2t + 118912xz^3t + 194220yz^3t + 109800z^4t - 124740x^2yt^2 - 27990xy^2t^2 + 96462y^3t^2 - 42120x^2zt^2 - 112938xyzt^2 - 70722y^2zt^2 + 24042xz^2t^2 + 28314yz^2t^2 + 63558z^3t^2 + 118530x^2t^3 + 111736xyt^3 - 48186y^2t^3 + 157684xzt^3 + 176616yzt^3 + 14958z^2t^3 + 247316xt^4 + 338796yt^4 + 265536zt^4 + 123444t^5$ \end{center} }\noindent and {\small \begin{center} $f_2= 20232x^2y^3 + 27216xy^4 + 6600y^5 - 66429x^2y^2z - 29187xy^3z + 40250y^4z + 25596x^2yz^2 - 8532xy^2z^2 - 42800y^3z^2 + 24507x^2z^3 + 23436xyz^3 + 3585y^2z^3 - 4185xz^4 + 35420yz^4 - 38240z^5 - 48978x^2y^2t + 77706xy^3t + 128092y^4t - 84456x^2yzt - 85428xy^2zt - 11724y^3zt + 65322x^2z^2t + 26676xyz^2t - 8214y^2z^2t + 100710xz^3t + 125152yz^3t + 25500z^4t - 196596x^2yt^2 - 75438xy^2t^2 + 122086y^3t^2 - 106596x^2zt^2 - 104598xyzt^2 + 366y^2zt^2 + 4590xz^2t^2 - 6786yz^2t^2 + 144574z^3t^2 - 62424x^2t^3 - 63612xyt^3 - 16932y^2t^3 - 105030xzt^3 
+ 1972yzt^3 - 98056z^2t^3 + 117720xt^4 + 231884yt^4 + 36888zt^4 + 247412t^5$. \end{center} }\noindent Amazingly enough, this is an elliptic fibration --- although that is by no means obvious, and we gave up on computing the genus of a fibre with {\sc Magma}\ after 5~hours. To understand~$f$, we follow the proof of Theorem~\ref{thm!main}, implemented as the algorithm of Section~\ref{sec!mainalg}. First we look for a maximal centre. {\small \begin{verbatim} > P1 := ProjectiveSpace(k,1); > f := map< P3 -> P1 | [f1,f2] >; > time existence, Q := HasMaximalCentre(f,X); assert existence; Time: 64.240 \end{verbatim} }\noindent This function, which executes Steps~1 and~2 of Section~\ref{sec!mainalg}, returns either one or two values: first, either true or false according to whether $f$ has a maximal centre or not; and, second, a maximal centre if there is one. In this example there is a maximal centre of degree~$2$: {\small \begin{verbatim} > Q; Scheme over Rational Field defined by z^2 - 31/4*z*t - 5/4*t^2, x + 3/2*z + 3/2*t, y - 3/2*z - 1/2*t > Degree(Q); 2 \end{verbatim} }\noindent We don't need to know it, but in fact $Q$ is the following pair of conjugate points: {\small \begin{verbatim} > k2<w> := Degree2SplittingField(Q); > Support(Q,k2); { (w : -w - 1 : 1/3*(-2*w - 3) : 1), (1/8*(-8*w - 117) : 1/8*(8*w + 109) : 1/12*(8*w + 105) : 1) } \end{verbatim} }\noindent Here $k_2$ is the number field $\mathbb Q[w]/(8w^2 + 117w + 135)$. Following Step~3 of Section~\ref{sec!mainalg}, we untwist $f$ using the Bertini involution $i_Q$ centred at~$Q$. {\small \begin{verbatim} > iQ := BertiniInvolution(X,Q); > g := iQ * f; \end{verbatim} }\noindent As before, the defining equations of~$g$ have not been simplified by {\sc Magma}, and are of degree~$25$ with thousands of terms and no common factor. However, a simple interpolation shows that $g$ is the map $(x:y)$. We omit the demonstration of this here, but instead confirm it by cross multiplication. 
{\small \begin{verbatim} > Eg := DefiningEquations(g); > assert IsDivisibleBy(x*Eg[2] - y*Eg[1], DefiningEquation(X)); \end{verbatim} }\noindent \subsection{The problem of minimality} Geiser and Bertini involutions exist whether or not the surface~$X$ is minimal: the geometric descriptions given in Section~\ref{sec!GB} work regardless. In the nonminimal case, however, the linear systems that determine the involutions need not be $|2A-3P|$ and~$|5A-6P|$. Here we give an example where $|5A-6P|$ does not give a Bertini involution. Let $X=(xt^2 + x^2y+y^3-z^3=0)\subset\P^3$. The point $P = (0:0:0:1)$ is an Eckardt point with tangent curve splitting as a line $x=y-z=0$ and a conjugate pair of lines $x=y^2+yz+z^2=0$. The point $Q=(1:0:0:0)$ lies on three conics, each defined by $xy=t^2$ together with one of the linear factors of~$y^3-z^3$. Clearly each of the conics meets exactly one of the lines, and that intersection is tangential. The three intersection points are $(0:1:1:0)$, $(0:\omega:1:0)$ and $(0:\omega^2:1:0)$ where $\omega$~is some chosen primitive cube root of~$1$. Let $Z=(x=t=y^2+yz+z^2=0)\subset X$ be the conjugate pair of intersection points. Although $X$ is clearly not minimal, we can compute the linear system~$|5A-6Z|$. {\small \begin{verbatim} > X := Scheme(P3,x*t^2 + x^2*y + y^3 - z^3); > Z := Scheme(P3,[x,t,y^2+y*z+z^2]); > L1 := ImposeBasepoint(X, LinearSystem(P3,5), Z, 6); > L2 := Complement(L1,X); \end{verbatim} }\noindent Notice that since the linear system is computed on the ambient~$\mathbb P^3$, we must work modulo the equation of~$X$ by hand, taking a complement of the subspace of degree~$5$ polynomials that it divides --- in previous examples this was hidden inside the function for Bertini involutions. But this is the wrong linear system; it has (projective) dimension~4: {\small \begin{verbatim} > #Sections(L2); 5 \end{verbatim} }\noindent Our code cannot compute the Bertini involution in this case. 
Out of interest, we show instead how to make the map $f\colon X\dashrightarrow\mathbb P^4$ with these five sections and compute its image. {\small \begin{verbatim} > P4<[a]> := ProjectiveSpace(k,4); > f := map< P3 -> P4 | Sections(L2) >; > f(X); \end{verbatim} }\noindent returns a surface in~$\P^4$ defined by three equations, the $2\times 2$ minors of the $2\times 3$ matrix \[ \begin{pmatrix} \; -a_4 \; & \; a_1^2 + a_2^2 + a_2a_3 + a_3^2 \; & \; a_1 \; \\ \; a_5 \; & \; a_4^2 - a_1a_3 \; & \; a_2-a_3 \; \end{pmatrix}. \] The third minor is the equation of~$X$; the second is the cone on $\P^1\times\P^1$ in some coordinates. In fact, this image surface is singular: it has a single Du Val singularity of type~${\mathrm A}_2$. The map~$f$ blows up~$Z$ and then contracts the two conjugate lines that meet at~$P$, which form a chain of two $-2$-curves on the blowup.
\section{Introduction} Solving nonlinear Optimal Control Problems (OCPs) is critical to achieving high performance in many engineering applications. For example, model predictive control (MPC) requires an OCP to be solved in every control loop \cite{Bemporad2000}, while kinodynamic motion planners rely on solving OCPs between sampled states \cite{donald1993kinodynamic}. However, their inherent nonconvexity makes them difficult to solve to global optimality quickly and with high confidence. This has led to an intense interest in using learning to obtain approximations of optimal control policies, either using supervised learning~\cite{Jetchev:2013cr,Lampariello} or reinforcement learning \cite{lillicrap2015continuous}. \begin{figure}[tbp] \centering \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{blackPendulum} \caption[black pendulum]% {{\small Samples of data}} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{penSNNbadPred} \caption[]% {{\small SNN Prediction}} \end{subfigure} \vskip\baselineskip \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{colorPendulum} \caption[]% {{\small Samples of clustered data}} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{penMoMgoodPred} \caption[]% {{\small MoE Prediction}} \end{subfigure} \caption {\small Illustration of dataset and prediction of a selected state from SNN and MoE for the pendulum swingup task. (a) samples of optimal pendulum swingup trajectories from different initial states. The red circles are possible target states. (b) prediction of a selected state by an SNN trained using the data in (a). The solid and dashed lines denote the optimal and predicted trajectories, respectively. (c) samples of clustered optimal trajectories where each color denotes one cluster. Trajectories are clustered according to final state. 
(d) prediction by MoE for the same state as in (b). } \label{fig:SNNbadPred} \end{figure} In this paper, we highlight the problem that function approximators such as standard neural networks (SNN) perform poorly near the discontinuities that are prevalent in many nonlinear OCPs. Fig.~\ref{fig:SNNbadPred} shows the results of using a multilayer SNN to learn a pendulum swingup task from optimal trajectories. The optimal trajectories have three possible goal states, so the parameter-solution mapping is discontinuous. Although neural networks are quite useful for approximating nonlinear functions \cite{hornik1989}, near the region where the optimal goal state switches, they tend to predict a final state that interpolates between two goal states. This paper addresses this problem by modifying the Mixture of Experts (MoE) \cite{jacobs1991adaptive, jordan1994hierarchical, shazeer2017outrageously} model to learn the solutions to parametric OCPs. The model structure uses a classifier (gating network) to select a regressor (expert) which makes the final prediction (Fig.~\ref{fig:showMoE}). We intend to train a model such that each regressor works in a region of the parameter space where the parameter-solution mapping is continuous. This is reminiscent of a divide-and-conquer approach, which has already been widely used in the control community for controller design \cite{murray1997multiple}. Fig.~\ref{fig:SNNbadPred} illustrates that the pendulum swingup dataset can be divided into three regions, and by classifying them and approximating them separately, MoE makes better predictions than SNN, particularly near the discontinuities. \begin{figure}[htbp] \centering \includegraphics[width=0.5\columnwidth]{mom} \caption{\small Illustration of MoE. The classifier selects a model which makes the final prediction.} \label{fig:showMoE} \end{figure} Considerable care must be taken during MoE training. 
Although MoE is generally trained using backpropagation \cite{shazeer2017outrageously} or expectation maximization \cite{jordan1994hierarchical}, training can be unstable. We propose an approach specially designed for parametric OCPs. The training set consists of solutions to a sampling of parametric OCPs, and we first partition the data into several clusters. Then the classifier is trained to predict the identity of the partition, and a separate regressor is trained for each partition. Each component is trained individually using backpropagation. Interestingly, although joint training leads to a model with lower prediction error (loss), it tends to {\em worsen} the trajectory tracking success rate. Moreover, clustering the dataset appropriately is nontrivial, and it is fundamental to our approach. Rather than using general methods of input partitioning~\cite{tang2002input}, we propose clustering on certain features of optimal trajectories that tend to work well empirically. Experiments on toy underactuated control problems and agile vehicle control problems demonstrate that suitably trained MoE models can learn near-optimal trajectories suitable for trajectory tracking with remarkably high success rates (99.5+\%). \section{Related Work} Nonconvex OCPs are generally difficult to solve to global optimality, despite much work to enlarge the convergence domain, e.g., \cite{Jiang2012}. Moreover, numerical trajectory optimization~\cite{betts1998survey} techniques are, in general, too computationally expensive for highly reactive motions. As a result, machine learning approaches have been proposed to solve OCPs approximately but in real time. Reinforcement learning learns the optimal policy by interacting with the environment, and deep neural network policy approximators have been shown to solve complex control problems \cite{lillicrap2015continuous}. 
Another approach uses supervised learning to learn from precomputed optimal solutions to solve novel problems, and has seen successful application in trajectory optimization \cite{Jetchev:2013cr, tangdata, tomic2014} and global nonlinear optimization~\cite{Hauser2017}. In \cite{Jetchev:2013cr} precomputed optimal motions are used in a regression to predict trajectories for novel situations to speed up subsequent optimization. In \cite{tangdata} the nearest-neighbor optimal control (NNOC) method is proposed, along with a multiple-restart method to handle discontinuities. In both of these works, the techniques work faster than optimizing from scratch, but still require some amount of optimization for their predicted trajectories. This paper also learns optimal trajectories instead of optimal policies, which has the advantage that trajectories can be tracked using a stabilizing feedback controller to handle model uncertainties and disturbances. It should be noted that the predicted trajectory might not fully satisfy the system dynamics constraints. However, if learning is sufficiently accurate, then this should not be an issue because a feedback controller can correct for such violations. The discontinuity of the solutions to parametric OCPs as a function of problem parameters has long been known~\cite{fiacco1983}, a fact that has been underappreciated in the control learning community. Under certain assumptions, this function is piecewise continuous, and discontinuity-tolerant methods have been proposed for learning from optimal solutions \cite{Hauser2017,tangdata}. However, these approaches do not explicitly try to partition the space into regions. In contrast, the discontinuity-sensitive approach proposed here does indeed segment the dataset according to estimated discontinuities. The most related work is previous research on MoE \cite{jordan1994hierarchical, shazeer2017outrageously,tang2002input}. 
This paper proposes several modifications to MoE to make it suitable for learning optimal control. We use hard classification boundaries to avoid predicting an average of both sides, and we also modify the training approach. Traditionally MoE is trained using either backpropagation \cite{shazeer2017outrageously} or expectation maximization \cite{jordan1994hierarchical}, so that the gating function and experts are both updated. However, we train the classifier and regressors individually, and experiments suggest that this is fundamental to achieving high trajectory tracking accuracy. \section{Problem Formulation} In this section, the problem of learning from optimal control is formulated and the key components are analyzed. The proposed approach first formulates a parametric OCP and then performs the following procedure: \begin{enumerate} \item Input: collect a dataset of solutions to parametric OCPs on sampled parameters. \item Cluster: select a clustering approach to cluster the trajectories and partition the parameter space. \item Train: weights of the classifier and regressors are trained individually using backpropagation. \item Validate: predict optimal trajectories for novel states and validate the learned model by trajectory rollout. \end{enumerate} \subsection{Parametric Optimal Control} A system is governed by dynamical equations \begin{equation} \label{eq:contsysdyn} \dot{\bs{x}}=\bs{f}(t,\bs{x}, \bs{u}, \bs{p}) \end{equation} where $t$ is time; $\bs{x} \in \mathbb{R}^n$ is the state variable; $\bs{u} \in \mathbb{R}^m$ is the control variable; and $\bs{p}\in \mathbb{R}^l$ is the vector of problem parameters, capturing the variability of the studied problems. The vector $\bs{p}$ may specify the initial state, model parameters, and modifications to costs or constraints. We use subscripts 0 and $f$ to denote the variables at the initial and final time, respectively. 
The goal is to control the system from some state $\bm{x} _0$ to some state $\bm{x} _f$ while minimizing the cost function \begin{equation} \label{eq:contJ} J=\varphi(t_0, \bm{x} _0, t_f, \bm{x} _f, \bm{p}) + \int_{t_0}^{t_f}L(t, \bm{x}(t), \bm{u}(t), \bm{p})\,{\rm d}t \end{equation} where $\varphi$ only depends on the initial and final states; $L$ depends on the state and control variables within $[t_0, t_f]$. Practical OCPs may have state, control, and terminal set constraints that have to be satisfied, and we refer to \cite{betts1998survey} for details. A parametric OCP is generally difficult to solve analytically~\cite{Maurer:2001by}, but for any given parameter, numerical methods may be used to solve the resulting OCP~\cite{betts1998survey}. In this work we employ a direct transcription method, which transforms the OCP into a nonlinear optimization problem and solves it using SNOPT~\cite{gill2005snopt}. The solution trajectory is a sequence of state and control variables along a time grid, denoted as $\bm{z}\equiv\{t_i;\bm{x}_i;\bm{u}_i\}_{i=0}^{N}$ where $N$ is the grid size for discretization. Stacking the elements of $\bm{z}$ into a vector, our goal is to approximate the map from problem parameters to optimal trajectories $\bm{z}^\star(\bs{p})$. \begin{comment} We call a trajectory as a composition of state and control variables, both as a function of time, i.e. $\{\bm{x}(t); \bm{u}(t)\}$ for $t \in [t_0, t_f]$ The initial conditions at $t_0$ are a set of constraints at the initial time and defined by \begin{equation} \label{eq:con0} \bm{\psi} _{0l} \le \bm{\psi}(\bm{x} _0, \bm{u} _0, \bm{p}, t_0) \le \bm{\psi} _{0u} \end{equation} where subscript $l$ and $u$ denote the lower and upper bound and $\bm{\psi}(\bm{x} _0, \bm{u} _0, \bm{p}, t_0) \equiv \bm{\psi}_{0}$. 
Similarly, at final time $t_f$ the terminal conditions are \begin{equation} \label{eq:conf} \bm{\psi} _{fl} \le \bm{\psi}(\bm{x} _f, \bm{u} _f, \bm{p}, t_f) \le \bm{\psi} _{fu} \end{equation} where $\bm{\psi}(\bm{x} _f, \bm{u} _f, \bm{p}, t_f) \equiv \bm{\psi}_{f}$. Additionally, the solution must also satisfy path constraints of the form \begin{equation} \label{eq:conpath} \bm{c}_{l} \le \bm{c}(\bm{x}(t), \bm{u}(t), \bm{p}, t) \le \bm{c} _{u} \end{equation} where $\bm{c}$ collects all the constraints on both state and control that have to be satisfied throughout the trajectory. A few examples of parametric OCPs are: \begin{enumerate} \item A pendulum starts from different initial states and reaches the straight up state. \item A quadcopter starts from different initial positions and reaches the goal state with collision avoidance. The parameters of this problem include both the initial states of the quadcopter and the positions of the obstacles. \end{enumerate} \end{comment} \begin{comment} \subsection{Direct Transcription Method} \KHComment{You could take this section out entirely. It has nothing to do with the contribution. You should just say that you use direct transcription to generate optima and give a citation (e.g., the Betts survey). } The direct transcription method defines a time grid $t_0, t_1, \dots, t_N$ where $t_k=t_{k-1}+h$ with $h=t_f/N$ and $t_f=t_N$. The trajectory is parameterized by the state variables $\bm{x}_0, \bm{x}_1, \dots, \bm{x}_N$, control variables $\bm{u}_0, \bm{u}_1, \dots, \bm{u}_{N-1]}$, and final time $t_N$. We denote those variables as $\bs{z}$. Direct transcription method converts OCP into an optimization problem and tries to optimize $\bs{z}$ directly. 
The cost function given in Eq.~\eqref{eq:contJ} is approximated by \begin{equation} \label{eq:discJ} J = \varphi(t_0, \bm{x}_0, t_N, \bm{x}_N, \bm{p}) + \sum_{k=0}^{N-1}L(t_k, \bm{x}_k, \bm{u}_k, \bm{p})h \end{equation} The system dynamics constraints in Eq.~\eqref{eq:contsysdyn} are converted into \begin{equation} \label{eq:discsysdyn} \bm{x}_{k+1} = \bm{x}_k + \int_{t_k}^{t_{k+1}}\bs{f}(t, \bs{x}, \bs{u})\,{\rm d}t \end{equation} One typical approach is to use forward-Euler integration method to approximate the integral term. In this paper, we use fixed-step 4th order Runge-Kutta method to integrate it for its higher accuracy. Another advantage is the derivatives of the integration results with respect to $\bm{x}_k, \bm{u}_k, h$ are easily evaluated. We admit that integrator with adaptive step-size can yield a solution with higher accuracy at the cost of computational requirement. The boundary conditions defined in Eq.~\eqref{eq:con0} and \eqref{eq:conf} are directly imposed as constraints by substituting $\bm{x}_f, \bm{u}_f, t_f$ by $\bm{x}_N, \bm{u}_N, t_N$, respectively. Additionally, the path constraints in Eq.~\eqref{eq:conpath} are imposed at each time grid, i.e. \begin{equation} \label{eq:discpath} \bm{c}_{l} \le \bm{c}(\bm{x}_k, \bm{u}_k, \bm{p}, t_k) \le \bm{c} _{u} \end{equation} for $k=0, 1, \dots, N$. By collecting the constraints and cost function defined above, an optimization problem is formulated. Due to the nonlinearity in system dynamics and non-convexity in constraints, this optimization problem is generally non-convex. An SQP algorithm is used to solve this problem. 
\end{comment} \begin{comment} \subsection{Piecewise Continuity of Parameter-Solution Mapping} Direct transcription method converts parametric OCP into a parametric optimization problem in the general form of \begin{equation} \label{eq:optfun} \begin{aligned} & \underset{z}{\text{minimize}} & & f(z, p) \\ & \text{subject to} & & \bm{g}(z, p) \leq 0 \end{aligned} \end{equation} $f$ and $\bs{g}$ collects the cost function and constraints. With such a definition, it becomes apparant that the solution to Eq.~\eqref{eq:optfun}, denoted as $z^*$ is a function of problem parameter $p$, i.e. $z^*\equiv z^*(p)$. Our goal is to find the parameter-solution mapping for $p$ given in a set, denoted as $P \subset \mathbb{R}^l$. \subsubsection{Continuity of $z^*(p)$} In parametric nonlinear optimization field \cite{fiacco1983}, the continuity of $z^*(p)$ has been proven under certain assumptions. This property is exploited by previous research in learning from optimal control \cite{tangdata, tobiaparametric}. \end{comment} \begin{comment} We study the Karush-Kuhn-Tucker(KKT) conditions for the parametric optimization problem. For an optimum $x$, there exists KKT multipliers $\mu \in \mathbb{R}^m$ such that \begin{equation} \label{eq:KKT} \begin{aligned} &\pd{f}{x}(x, p)+\pd{g}{x}(x, p){^{\rm T}}\mu=0\\ &g(x, p) \leq 0\\ &\mu_i g_i(x, p) = 0 \quad \text{for }\,i=0, 1, \dots, m\\ &\mu \geq 0 \end{aligned} \end{equation} The set of indices where the multiplier $\mu_i$ is nonzero are denoted as active set. The active set collects the constraints in $g(x, p)$ that are zero at the optimum. It should be noted that this set includes both constraints that are indeed equality constraints and inequality constraints that are zero at the optimum point. We denote the set of all the constraints that are zero at the optimum as $g_A$ and the rest as $g_I$. Similarly, the corresponding multipliers are dnoted as $\mu_A$ and $\mu_I$, respectively. 
The KKT conditions are rewriteen as \begin{equation} \label{eq:reKKT} \begin{aligned} &\pd{f}{x}(x, p)+\pd{g_A}{x}(x, p){^{\rm T}}\mu_A=0\\ &g_A(x, p) = 0, \quad g_I(x, p) \leq 0\\ &\mu_A \geq 0, \quad \mu_I = 0 \end{aligned} \end{equation} We consider an infinitesimally change of problem parameter to $p'=p+\Delta p$. If we assume an optimum $x'$ exists for problem $p'$ and the active set does not change, the KKT conditions at $x'$ are \begin{equation} \label{eq:newKKT} \begin{aligned} &\pd{f}{x}(x', p')+\pd{g_A}{x}(x', p'){^{\rm T}}\mu_A'=0\\ &g_A(x', p') = 0\\ &\mu_A' \geq 0 \end{aligned} \end{equation} {\color{red} with some reference on LICQ, SCS and why this mapping is continuous.} With the above conditions, $x^*(p)$ is a continuous function of $p$. \end{comment} \begin{comment} \subsubsection{Discontinuity of $z^*(p)$} In practical parametric OCP, the discontinuity of $z^*(p)$ is prevalent. As shown in Fig.~\ref{fig:SNNbadPred} the optimal trajectories switch goal states since one goal outperforms others. It clearly shows that near certain states the optimal trajectories have discontinuity due to change of local optimum. \end{comment} \begin{comment} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{showDisc} \caption{\small Illustration of a shortest path problem. Corresponding to different rectangular center position, the optimal path has discontinuous change. The change is either caused by change of active sets or another local optimum becomes global optimum.} \label{fig:showDisc} \end{figure} We elaborate those two reasons using a shortest-path example shown in Fig.~\ref{fig:showDisc}. The goal is to find the shortest path from point $S$ to $T$ while avioding the rectangular obstacle. Additionally, the $y$-coordinate of the path is bounded. The grid is drawn and the goal is to find the optimal path parameterized by the $y$ locations on the grid, i.e. $\{y_i\}_{i=0}^{8}$. We assume the obstacle is of fixed size and the center can change. 
The problem is parameterized by the locations of the rectangular center, denoted as $(x_c, y_c)$. Without loss of generality, we assume the width of the rectangular is $4h$ where $h$ is the grid size. With changing $x_c$, the left and right sides of the rectangular are between different grids. The red lines correspond to when the left side of the obtacle is between grid line 2 and 3. The black lines are the shortest path when the left side moves to the space within line 1 and 2. When the left side of the obstacle continuously changes, the optimal $y_2$ has to change from $S_1$ to $S_2$. Simularly optimal $y_6$ has to change from $T_1$ to $T_2$. It should be noted that optimal $y_1$ and $y_7$ also suffers from discontinuous change. This change is caused by the change of constraint at $y_2$ from inactive to active ($y_6$ from active to inactive). Since $y$ is bounded, the path that goes above the obstacle becomes infeasible if the upper side of the rectangular is larger than the upper bound of y. Similarly, the path that goes below the obstacle might also become infeasible as $y$ changes. As problem parameter changes, the solution might not be continuous along all directions. The blue lines denote another local optimum which avoids the obstacle from below. With changing $y_c$, the length of path going above and below are continuously changing. Specifically, when $y_c$ is below the line connecting $S$ and $T$, the shortest path is the above one and vice versa. Obviously, the pathes going below and up are both local optimum and a function of the problem parameters. As the parameter changes, the global optimum can switch from one local optimum to another. \end{comment} \subsection{Optimal Trajectory Database Generation} To train and test models we generate a database of optimal trajectories $\bm{z}_1,\ldots,\bm{z}_M$ to sampled problems $\bs{p}_1,\ldots,\bs{p}_M \in \mathbb{R}^l$. Due to non-convexity, even finding a global optimum to a single problem can be difficult. 
One practical approach is to pick the best local optimum from a multi-start method. However, a local optimum can also be quite difficult to find if the initial guess is not close to it. We adopt a nearest-neighbor approach \cite{tangdata} to help generate large databases quickly. We first sample some number of problems (fewer than $M$ but much larger than the number of expected partitions) and use an exhaustive random restart approach to solve them. These solutions are used as the initial database. Then we sample more parameters, and for each new problem we attempt local optimization from each of its $k$-nearest neighbors to find $k$ local optima. The best solution is kept in the database. \begin{comment} It seems that finding the global optimum for a specific $p$ is difficult, let alone any $p$ in the set $P$. This problem is the difficulty in data collection, i.e. solving to global optimum for a sufficiently high number of $p$. While this is obviously true, we argue that this is why learning optimal control is important. Otherwise we have to use multi-start technique for every new problem. If we solve the problem for a set of $P$, the benefit is that for a novel problem we can leverage the solutions of similar problems, i.e. with similar $p$. This idea has demonstrated its success in nearest-neighbor optimal control \cite{tangdata}. In this paper, we use the same approach to build the database of solution. \end{comment} We note that this process is done completely offline and is parallelizable. \subsection{Mixture of Experts} The MoE model is composed of a classifier and $r$ regressors, as shown in Fig.~\ref{fig:showMoE}. In this paper both models are chosen as multilayer perceptrons (MLPs). The goal is to learn a function $z:\mathbb{R}^l \to \mathbb{R}^R$ that approximates $\bm{z}^\star(\bs{p})$, where $R$ is the length of the vector $\bm{z}$. 
Each regressor takes input $\bs{p} \in \mathbb{R}^l$ and makes a prediction $y_i(\bs{p}, w_i) \in \mathbb{R}^R, i=1,\dots,r$, where $w_i$ specifies the weights of each regressor. The classifier, with weights $w_c$, takes input $\bs{p}$ and predicts $r$ values $\{c_i\}_{i=1}^r$. The outputs of the classifier are combined with softmax to assign a probability to each model, i.e. \begin{equation} P_i=\frac{\exp(c_i)}{\Sigma_{j=1}^{r}\exp(c_j)} \end{equation} or with argmax to select one model only (in this case, $P_i=1$ for $i=\arg\max_j c_j$ and $P_i=0$ otherwise). The difference is that softmax tends to give a prediction that is a mixture of the predictions from all experts, whereas argmax selects one model and ignores the other models' predictions. In either case, the final prediction is the weighted combination of the regressors' outputs, i.e. \begin{equation} z(\bs{p})=\Sigma_{i=1}^r P_i(\bs{p},w_c) y_i(\bs{p}, w_i) \end{equation} The target is to find $w_c$ and $\{w_i\}_{i=1}^r$ in order to minimize \begin{equation} \label{eq:trainobj} L=\mathbb{E}_{\bs{p}\sim P_{\text{data}}}\text{loss}(z(\bs{p}), \bm{z}^\star(\bs{p})) \end{equation} where $P_{\text{data}}$ is a distribution over problems and $\text{loss}(\cdot, \cdot)$ is any regression loss function. The most straightforward way to train MoE is to treat it as an SNN, randomly initialize the weights, and minimize \eqref{eq:trainobj} using backpropagation. Although several heuristics have been proposed to train MoE using backpropagation, such as \cite{shazeer2017outrageously}, training may still be unstable. If softmax is used, all the data is used to train each regressor, with weights equal to the probabilities predicted by the classifier. In the case of argmax, each regressor is only trained using the data assigned to it by the classifier. If argmax is used, there is no gradient with which to update the classifier weights; softmax, on the other hand, still provides such a gradient. 
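As a concrete illustration of the two gating schemes, the following sketch combines expert predictions with either softmax or argmax weights. The classifier and regressors here are stand-in linear functions chosen for brevity (an assumption made only for this illustration; in our setting both are trained MLPs).

```python
import numpy as np

def moe_predict(p, classifier, regressors, mode="softmax"):
    """Combine the expert predictions y_i(p) with gating weights P_i(p).

    classifier: maps p -> vector of r raw scores c_i
    regressors: list of r maps, each p -> prediction vector y_i
    """
    c = classifier(p)
    if mode == "argmax":
        # hard gating: select a single expert and ignore the others
        P = np.zeros_like(c)
        P[np.argmax(c)] = 1.0
    else:
        # soft gating: P_i = exp(c_i) / sum_j exp(c_j); subtract max(c)
        # before exponentiating for numerical stability
        e = np.exp(c - np.max(c))
        P = e / e.sum()
    ys = np.stack([reg(p) for reg in regressors])  # shape (r, R)
    return P @ ys                                  # z(p) = sum_i P_i * y_i

# toy setup with r = 2 experts on a 1-dimensional parameter
# (illustrative weights only, not trained models)
classifier = lambda p: np.array([p[0], -p[0]])  # scores favor expert 0 for p > 0
regressors = [lambda p: p + 1.0, lambda p: p - 1.0]

p = np.array([2.0])
z_hard = moe_predict(p, classifier, regressors, mode="argmax")   # expert 0 only
z_soft = moe_predict(p, classifier, regressors, mode="softmax")  # blended output
```

With argmax the prediction equals the selected expert's output; with softmax it is pulled toward the other expert, which is exactly the averaging behavior we wish to avoid near discontinuities.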
To perform joint training, since argmax is the limit of softmax when $\{c_i\}_{i=1}^{r}$ is scaled by a large positive scalar, we introduce $\epsilon\in(0, \infty)$ by which the outputs of the classifier are divided before applying softmax, i.e. \begin{equation} P_i=\frac{\exp{(c_i/\epsilon)}}{\Sigma_{j=1}^{r}\exp{(c_j/\epsilon)}}. \end{equation} As $\epsilon\rightarrow 0$, the softmax weights approach the argmax function. Hence, $\epsilon$ can be gradually lowered to balance between updating the weights of the classifier and restricting the mixing of outputs from multiple regressors. As we shall show later, joint training of MoE may improve the loss function compared to decoupled training, but appears to be detrimental to trajectory tracking performance. \subsection{Parameter Space Partition} Clustering has been shown to be effective at avoiding some instability in MoE training~\cite{tang2002input} by training the classifier and regressors of MoE individually on subsets of the data. We adopt the same approach here, and study how to partition the parameter space such that in each region the parameter-solution mapping is continuous. The dataset $\{(\bs{p}_j, \bm{z}_j)\}_{j=1}^M$ is divided into $r$ groups $C_1,\ldots,C_r$, ideally so that $\bm{z}^\star(\bs{p})$ is a continuous function for all $\bs{p}$ in a given region. This problem can be formulated as a clustering problem in which each cluster denotes a region of the partitioned parameter space. The classifier is trained to predict $P_i(\bs{p}_j,w_c)=1$ for all $\bs{p}_j$ in $C_i$, and the $i$'th regressor is trained as usual, restricted to the examples in $C_i$. We call this process (decoupled) {\em pretraining}. Parametric OCPs have rich features that can be used to find appropriate clusters. We note that this partition cannot be done using the problem parameters alone, since the target is to find the discontinuities in the solutions. Discontinuity comes from switching from one family of local optima to another.
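Decoupled pretraining can be sketched on a toy one-dimensional problem (our own minimal illustration: exact linear least-squares fits stand in for the regressor MLPs, and a nearest-mean rule stands in for the classifier MLP):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discontinuous parameter-solution map with two families of optima:
p = rng.uniform(-1, 1, size=200)
z_star = np.where(p < 0.2, 2 * p, 2 * p + 3)      # jump at p = 0.2

# Partition the data by solution family (the gap in z makes this easy here).
labels = (z_star > 2).astype(int)                 # cluster 1 = upper branch

# Decoupled pretraining: fit one regressor per cluster ...
coefs = []
for c in (0, 1):
    mask = labels == c
    X = np.stack([p[mask], np.ones(mask.sum())], axis=1)
    coefs.append(np.linalg.lstsq(X, z_star[mask], rcond=None)[0])

# ... and a classifier that predicts the cluster from the parameter.
means = [p[labels == c].mean() for c in (0, 1)]

def predict(q):
    c = int(abs(q - means[1]) < abs(q - means[0]))  # crude nearest-mean gate
    a, b = coefs[c]
    return a * q + b
```

Each regressor fits its continuous branch exactly; near the true boundary ($p=0.2$) the crude gate misclassifies some points, which is precisely the failure mode that makes a well-trained classifier, and correct clusters, important.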
Hence, although the objective function values and the problem parameters at these discontinuities are similar, the trajectories may not be. For example, a car might reverse first or move forward first, and a quadcopter might avoid an obstacle from above or below. Hence, we experiment with using the distance between optimal trajectories to classify the families of solutions. The simplest approach is to apply standard clustering techniques, such as the k-Means algorithm, on the trajectory vector space. In order to do so, we first normalize the state and control variables to zero mean and unit variance. After choosing a number of clusters $k$, the k-Means algorithm is run from random initial centers. In our experiments, k-Means is successful at predicting discontinuities for some problems, but can also group trajectories poorly when $k$ is small. On the other hand, when $k$ is large, each cluster contains less training data, causing the regressors to overfit and making the job of the classifier harder. We also propose custom clustering criteria that are based on a system designer's intuition and inspection of the datasets. As an example, the periodicity of angles is a useful feature when an angle is in the state space and optimal trajectories have distinct final angles; in other words, trajectories lie in distinct homotopy classes. This is useful for the pendulum swingup problem as well as the ground vehicle control problem we consider later. Another applicable approach is to examine the Lagrange multipliers of the constraints at optimal solutions, since they provide rich information about how constraints influence the trajectory's shape. For example, in quadcopter obstacle avoidance the shortest path might go on either side of the obstacle; hence, the gradients of the active constraints will have different signs.
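The angle-periodicity criterion can be sketched as follows (our own toy snippet, assuming the pendulum convention that optimal trajectories end at $\theta_f = \pi + 2\pi k$; each branch index $k$ is a homotopy class and hence a cluster):

```python
import numpy as np

def cluster_by_final_angle(theta_f, target=np.pi):
    """Cluster trajectories by which 2*pi branch of the target they end at."""
    theta_f = np.asarray(theta_f)
    # branch index k such that theta_f is near target + 2*pi*k
    return np.rint((theta_f - target) / (2 * np.pi)).astype(int)

# four trajectories: swing up one way, the other way, with one extra
# revolution, and a small perturbation of the first
theta_f = np.array([np.pi, -np.pi, 3 * np.pi, np.pi + 0.05])
labels = cluster_by_final_angle(theta_f)
```

The same rule, with a different target angle, applies to the ground vehicle problem considered later.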
\subsection{Discussion and Preliminary Experimentation} The usual approach to MoE is to first perform pretraining before (coupled) {\em retraining} by minimizing~\eqref{eq:trainobj}. The rationale is that pretraining provides a good initialization; however, if the data is clustered badly, i.e. one cluster contains a discontinuity, the loss function may be large. Moreover, even if the clustering is perfect, a pretrained model does not necessarily minimize \eqref{eq:trainobj} due to misclassification. In this section we experimentally demonstrate and discuss why this may be a poor approach for parametric OCPs.
\label{sec:RolloutError} \begin{figure*}[tbp] \centering \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=\textwidth]{penPredictionThetaf} \caption {{\small Prediction error of $\theta_f$}} \end{subfigure} \hfill \begin{subfigure}[b]{\columnwidth} \centering \includegraphics[width=\textwidth]{penTrackError} \caption {{\small State error after trajectory tracking}} \end{subfigure} \caption{\small Comparing several models for learning the pendulum swing-up task. } \label{fig:penLargeError} \end{figure*} We study a toy pendulum swingup task, where the task is to reach the upright position. Details on the system and the neural network models are given in Sec.~\ref{sec:Pendulum}. We compare on two metrics: 1) test error (smoothed L1 loss) and 2) rollout success rate after trajectory tracking. In trajectory tracking, we simulate trajectory execution under an LQR controller, which compensates for errors and dynamic constraint violations. Around each state along the predicted trajectory, we compute an LQR solution for a linear dynamics model and a quadratic cost obtained by Taylor expansion. After trajectory tracking is complete, the simulation switches to a stabilizing controller about the origin. If after 5 seconds the norm of the state error is within a certain threshold (0.1), we denote the rollout as a success. (We note that for the car problem, only the first stage is implemented since the final state is not controllable.) The following variations are considered: \begin{enumerate} \item SNN vs MoE, \item MoE with random weights against $k$-means clustering on trajectories, and against custom clustering, and \item Retraining vs no retraining. \end{enumerate} The SNN is chosen as an MLP of size (2, 300, 75), where the first number denotes the size of the input layer, the last number denotes the size of the output layer, and intermediate numbers indicate the sizes of the hidden layers.
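The per-state LQR computation used in the rollout can be sketched as follows (a minimal sketch under our own assumptions: the pendulum linearized about the upright state, a forward-Euler discretization, and the gain obtained by Riccati value iteration; the tracking controller repeats this about each trajectory state):

```python
import numpy as np

def dlqr(A, B, Q, R, iters=1000):
    """Discrete-time LQR gain via Riccati value iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        P = 0.5 * (P + P.T)          # keep P symmetric numerically
    return K

# pendulum linearized about the upright state (theta = pi):
# d(dtheta)/dt = domega, d(domega)/dt = u + dtheta
dt = 0.05
A = np.eye(2) + dt * np.array([[0.0, 1.0], [1.0, 0.0]])
B = dt * np.array([[0.0], [1.0]])
K = dlqr(A, B, Q=np.eye(2), R=np.eye(1))

# the closed loop A - B K is stable (spectral radius < 1)
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The open loop is unstable (eigenvalues $1 \pm 0.05$), and the computed gain stabilizes it, which is what allows the rollout to compensate for prediction errors.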
We experimented with SNNs with more hidden layers or more neurons per hidden layer, but they resulted in similar or larger test error. Specifically, MLPs of size (2, 50, 20, 75), (2, 20, 50, 75), and (2, 30, 30, 30, 75) yield test errors of 0.258, 0.170, and 0.232, respectively. The size (2, 300, 75) network, on the other hand, has a test error of 0.058. For MoE, the classifier is of size (2, 50, $r$) and the $r$ regressors are all of size (2, 20, 75). Custom MoE and random-weight MoE use $3$ experts. The custom clustering divides the data into 3 clusters based on the final angle. We also use $k$-means with 3, 4, and 10 clusters solely on trajectories, with the same design of network size. Fig.~\ref{fig:penLargeError}.a plots the prediction error on $\theta_f$ and Fig.~\ref{fig:penLargeError}.b plots the state error after trajectory tracking. The validation error and rollout success for each model are also listed in Tab.~\ref{tab:cmpTraining}. Row 1 shows that SNN has difficulty making predictions in regions near the discontinuity, averaging between both sides. MoE also makes some inaccurate predictions, but these are caused by misclassification, and the prediction is a locally optimal trajectory belonging to another cluster. Hence, these predictions are suboptimal but still reach the vertical position as desired, since the difference in $\theta_f$ is $2\pi$. The suboptimality is not too great, because near the boundaries the two families of solutions have similar objective function values. MoE trained from random initialization does achieve lower prediction error than SNN, but is not very successful in rollout. This indicates that training by simply descending \eqref{eq:trainobj} is unable to guide the classifier to the appropriate clusters. Row 2 tests MoE with k-Means and various numbers of clusters, which are shown in Fig.~\ref{fig:penClusters}. $k=3$ has one cluster that contains data from both families of trajectories, so the prediction close to the discontinuity is worse.
$k=4$ and $k=10$ clusters find the discontinuity successfully, and the resulting MoE achieves a high success rate. Row 3 of Fig.~\ref{fig:penLargeError} shows various methods of retraining after pretraining MoE with custom clustering. In all cases this approach decreases the regression error but also the rollout success rate. In (vii) argmax is used following the output layer of the classifier. The classifier has no gradient to update itself, so only the regressors are updated. Due to classification errors, the regressors are trained with trajectories from other clusters. As a result, the predictions near the boundaries tend towards the average of the two clusters. In (viii) and (ix) we use softmax with different $\epsilon$. In these cases, the classifier is updated but the regressors still predict towards the average. As shown in Tab.~\ref{tab:cmpTraining}, retraining does decrease the prediction error at the cost of a lower rollout success rate. \begin{table*}[tbp] \caption{\small Comparison of prediction error and rollout success rate on the pendulum problem} \label{tab:cmpTraining} \begin{center} \begin{tabular}{@{}llllllllll@{}}\toprule Model & SNN & \multicolumn{8}{c}{MoE} \\ Clustering & --- & Custom & Rand. & k-means-3 & k-means-4 & k-means-10 & Custom & Custom & Custom \\ Retrain & --- & --- & --- & --- & --- & --- & argmax & softmax 1.0 & softmax 0.1 \\ \midrule Validation error & 0.046& 0.030& 0.035& 0.039& 0.029& 0.051& 0.027& 0.028& 0.026\\ Success (out of 1000) & 717& 998& 829& 970& 1000& 1000& 941& 896& 969\\ \bottomrule \end{tabular} \vspace{-0.5cm} \end{center} \end{table*} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{imclusterplot} \caption{\small Choices of clusters for the pendulum problem. Different colors mean different clusters.
Figures include: 1) custom 3 clusters, 2) k-Means with 3 clusters, 3) k-Means with 4 clusters, 4) k-Means with 10 clusters} \label{fig:penClusters} \vspace{-0.5cm} \end{figure} These experiments suggest that proper clustering is important for MoE training. Moreover, rollout success is a better metric to use in practice, while testing error can be misleading. Due to misclassifications, a lower testing error can be achieved by averaging at discontinuities, but this leads to severe failures. We also observe that coupled retraining is detrimental to performance. This is because the imperfect classification causes the individual regressors to be provided with discontinuous training data, again leading to averaging artifacts. \section{Numerical Examples} We run experiments on the pendulum task and three dynamic vehicle problems; the details are given below. Results are summarized in Tab.~\ref{tab:summary}. In each case, the training set contained 80\% of the examples specified in Dataset size, and the testing set had the remaining 20\%. Validation sets (of size Validation size) are generated separately. SNN test error indicates the testing error when training is terminated. SNN hyperparameters (SNN size) were tuned to achieve low test error. Validation error (SNN/MoE validation) indicates the loss on the validation set, while rollout performance (SNN/MoE rollout) indicates the success rate during trajectory tracking. Except for the car problem, this involves the stabilizing LQR approach described in Sec.~\ref{sec:RolloutError}. Details on the car rollout success criteria are specified below. Details on the MoE network design are listed in the rows giving the number of clusters, the resulting cluster sizes, and the network hyperparameters (Classifier/Regressor size). The Regressor test error row indicates how well the MoE regressors fit the clustered data, showing that each regressor has quite small error when fit on a continuous region. In all of these experiments, hidden layers use LeakyReLU with $\alpha=0.2$. The output layer of the regressors is a linear layer without a nonlinear activation function. The loss functions are the smooth L1 loss for the regressors and the cross-entropy loss for the classifier. \subsection{Pendulum Swing-up} \label{sec:Pendulum} \subsubsection{Problem Setup} The system dynamic equations are \begin{equation} \dot{\theta}=\omega, \dot{\omega}=u-\sin\theta \end{equation} where $\theta, \omega$ are the angle and angular velocity of the pendulum and $u\in[-1, 1]$ is the control torque. The problem parameters are the initial states. The target state is the straight-up state, i.e. $\omega_f=0, \mod(\theta_f, 2\pi)=\pi$.
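As a sanity check on these dynamics, a short integration sketch (our own snippet; the paper does not specify an integration scheme, so we assume RK4) verifies that the unforced pendulum conserves the energy $E=\tfrac{1}{2}\omega^2-\cos\theta$:

```python
import numpy as np

def pendulum_rhs(x, u):
    # dynamics: theta' = omega, omega' = u - sin(theta)
    theta, omega = x
    return np.array([omega, u - np.sin(theta)])

def rk4_step(x, u, dt):
    k1 = pendulum_rhs(x, u)
    k2 = pendulum_rhs(x + 0.5 * dt * k1, u)
    k3 = pendulum_rhs(x + 0.5 * dt * k2, u)
    k4 = pendulum_rhs(x + dt * k3, u)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# with u = 0 the energy E = omega^2/2 - cos(theta) is conserved
x = np.array([0.3, 0.0])
E0 = 0.5 * x[1] ** 2 - np.cos(x[0])
for _ in range(2000):
    x = rk4_step(x, 0.0, 0.01)
E1 = 0.5 * x[1] ** 2 - np.cos(x[0])
```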
The cost function is a weighted sum of time and control energy, i.e. $J=w(t_f-t_0)+r\int_{t_0}^{t_f}u^2 \,{\rm d}t$ with $w=1, r=1$. \subsubsection{Data Generation and Training} The parameter space is a subset of $\mathbb{R}^2$ and we directly sample parameters on a uniform grid. Specifically, we use a grid size of $61\times21$. The validation set is sampled at random. Samples of optimal trajectories are shown in Fig.~\ref{fig:SNNbadPred}. The custom clustering partitions the trajectories by $\theta_f$. \begin{table*}[htbp] \centering \begin{threeparttable} \caption{\small Summary of experimental results for SNN and MoE} \label{tab:summary} \begin{tabular}{@{}lllllll@{}}\toprule & pendulum & \multicolumn{2}{l}{ground vehicle} & quadcopter & \multicolumn{2}{l}{quadcopter-obstacle} \\ \midrule State dims & 2 & \mg{4} & 12 & \mg{12} \\ Control dims & 1 & \mg{2} & 4 & \mg{4} \\ Problem param. & $\bm{x}_0\in\mathbb{R}^2$ & \mg{$\bm{x}_0\in\mathbb{R}^4$} & initial position, $\mathbb{R}^3$ & \mg{\makecell[cl]{initial position and \\obstacle, $\mathbb{R}^7$}} \\ Param range & $[-\pi, \pi] \times [-2, 2]$ & \mg{$[-10, 10]^2\times[-\pi,\pi]\times[-3.1, 3.1]$} & $[-10,10]^3$ & \mg{$[-10, 10]^6\times[1, 5]$\tnote{a}}\\ Dataset size\tnote{\textdagger} & 1281 & \mg{120009} & 9000 & \mg{616758} \\ Validation size & 1000 & \mg{10000} & 1000 & \mg{10000}\\ SNN size & (2, 300, 75) & \mg{(4, 200, 200, 149)} & (3, 200, 317) & \mg{(7, 1000, 1000, 317)}\\ SNN test error & 0.058 & \mg{0.045} & \sn{8.6}{-5} & \mg{0.014} \\ {\bf SNN validation} & 0.046 & \mg{0.046} & \sn{4.7}{-5} & \mg{0.024} \\ {\bf SNN rollout} & 717/1000 & \mg{6729/10000} & 1000/1000 & \mg{-0.315 \tnote{b}}\\ \# clusters & 3 & \mg{6} & 4 & \mg{8}\\ cluster approach & custom & custom & k-means & k-means & custom & k-means\\ Cluster size range & [388,505] & [7266,45626] & [7228, 28913] & [2072,2356] & [70474,84280] & [64682, 101669] \\ Classifier size & (2,
50, 3) & (4, 200, 6) & (4, 200, 6) & (3, 50, 4) & (7, 200, 200, 8) & (7, 200, 200, 8)\\ Test accuracy & $97.2\%$ & $98.7\%$ & $97.7\%$ & $99.6\%$ & $88.9\%$ & $96.4\%$ \\ \hfour{Regressor size} & \hfour{(2, 20, 75)} & \makecell[cl]{(4, 200, 149) \\ for small clusters\\ (4, 500, 149) \\ for large} & \makecell[cl]{(4, 200, 149)\\ for small clusters\\(4, 300, 149)\\for large} & \hfour{(3, 50, 317)} & \hfour{(7, 200, 200, 317)} & \hfour{(7, 200, 200, 317)}\\ Regressor test error & 0.0032 $\pm$ 0.0031 & 0.0018 $\pm$ 0.0014 & 0.0088 $\pm$ 0.0082 & \sn{4.6}{-5} $\pm$ \sn{9}{-6} & 0.0022 $\pm$ 0.0003 & 0.0052 $\pm$ 0.0027 \\ {\bf MoE validation} & 0.030 & 0.019 & 0.031 & \sn{4.6}{-5} & 0.015 & 0.016\\ {\bf MoE rollout} & 998/1000 & 9975/10000 & 9413/10000 & 1000/1000 & -0.043\tnote{b} & -0.167\tnote{b}\\ \bottomrule \end{tabular} \begin{tablenotes} \item[a] The obstacle is sampled such that it always collides with the optimal obstacle-free trajectory \item[b] Average of the largest constraint violations based on trajectory rollout. All states can be controlled to the target. See the histogram in Fig.~\ref{fig:constrVio} for the distribution. \end{tablenotes} \end{threeparttable} \end{table*} \subsection{Ground Vehicle} \subsubsection{Problem Setup} We use a planar car with the dynamic equations \begin{equation} \dot{x}=v\sin\theta, \, \dot{y}=v\cos\theta, \, \dot{\theta}=u_\theta v, \, \dot{v}=u_v \label{eq:CarDyn} \end{equation} where the state $\bs{x}=[x,y,\theta,v]$ includes the planar coordinates, orientation, and velocity of the vehicle, and the control $\bs{u}=[u_\theta,u_v]$ includes the inputs that change the steering angle and velocity, respectively. The problem parameters are the initial states, as listed in Tab.~\ref{tab:summary}, and the goal is to control the system to the origin with zero velocity and $\mod(\theta_f, 2\pi)=0$. The cost function is a weighted sum of time and control energy, i.e.
$J=w(t_f-t_0)+\int_{t_0}^{t_f}r_1u_\theta^2+r_2 u_v^2 \,{\rm d}t$ with $w=10, r_1=r_2=1$. \begin{figure} \centering \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{carsampleTraj} \caption[MoE Prediction]{{\small Samples of optimal trajectories for the car problem}} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{carsnnbadmoegood} \caption[]{{\small Illustration of poor predictions from SNN}} \end{subfigure} \caption {\small Left: samples of optimal trajectories. Each color corresponds to one cluster of trajectories. The black circle is the target. Right: a selected state for which SNN makes a worse prediction than MoE. It also shows that states near this state might belong to three different trajectory clusters. SNN predicts a trajectory with an incorrect final angle. } \label{fig:carSNNbadPred} \end{figure} \subsubsection{Data Generation and Training} The data is generated by uniformly sampling the parameter space. Fig.~\ref{fig:carSNNbadPred} shows a few samples of the optimal trajectories. Similar to the pendulum swingup problem, the constraint on $\theta_f$ makes it possible to reach the goal with different $\theta_f$. The custom clustering is developed by inspection, whereby we first divide the dataset into three groups based on the final angle. We then find that for trajectories with the same $\theta_f$, the car can either go forward or backward to reach the origin, i.e. with positive or negative velocities. This is illustrated in Fig.~\ref{fig:carClusterTraj}. Hence, we divide the dataset into 6 clusters. We note that the cluster sizes are bimodal, and we use a larger regressor network for the larger clusters. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{carClusterTraj} \caption{\small Samples of trajectories in each cluster for the car problem. Column: different state variables for each cluster.
Row: state variable for different clusters.} \label{fig:carClusterTraj} \end{figure} \subsubsection{Trajectory Tracking} Because this problem is not controllable at the origin, a stabilizing LQR controller may not be used at the trajectory endpoint. Instead, we simply perform the LQR rollout on the predicted trajectory, and stop when the end time is reached. To determine success, we check whether the norm of the final state error is within 0.5. \subsubsection{Results and Discussion} The data in Tab.~\ref{tab:summary} show trends similar to the pendulum problem; in particular, MoE yields lower validation error and a higher rollout success rate than SNN. Moreover, the custom clustering outperforms k-Means, which in turn outperforms SNN. In Fig.~\ref{fig:carSNNbadPred} we show the predictions from SNN and MoE on a selected parameter, as well as the optimal trajectories of its neighbors. It clearly shows that SNN may fail to predict $\theta_f$ correctly. The histogram in Fig.~\ref{fig:carRolloutHist}.a shows the norm of the error in the predicted final state, indicating that SNN has higher prediction error. Fig.~\ref{fig:carRolloutHist}.b also shows that paths predicted by SNN violate the system dynamics more than those predicted by MoE. The reason the tracking error is much larger than predicted is that the predicted trajectory violates the system dynamics, so path tracking diverges.
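The open-loop rollout check can be sketched as follows (a toy snippet of ours with forward-Euler integration of the dynamics and hand-made controls; the actual evaluation tracks the predicted trajectory with LQR):

```python
import numpy as np

def car_rhs(x, u):
    # planar car: x = [px, py, theta, v], u = [u_theta, u_v]
    px, py, theta, v = x
    return np.array([v * np.sin(theta), v * np.cos(theta), u[0] * v, u[1]])

def rollout_success(x0, controls, dt, tol=0.5):
    """Integrate the controls and test the final-state error norm."""
    x = np.array(x0, dtype=float)
    for u in controls:
        x = x + dt * car_rhs(x, u)
    return np.linalg.norm(x) <= tol, x

# start one unit behind the origin at unit speed and brake to a stop
N, dt = 100, 0.02
x0 = [0.0, -1.0, 0.0, 1.0]
controls = [np.array([0.0, -0.5]) for _ in range(N)]
ok, xf = rollout_success(x0, controls, dt)
```

Braking at $u_v=-0.5$ for two seconds brings the car to rest at (approximately) the origin, so the check succeeds.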
\begin{figure} \centering \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{carmodelsolendValidate} \caption[MoE Prediction]{{\small Prediction error of $\bm{x}_f$}} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\columnwidth} \centering \includegraphics[width=\textwidth]{carmodelxfendValidate} \caption[]{{\small Tracking error of $\bm{x}_f$}} \end{subfigure} \caption {\small Histograms of prediction and tracking results for MoE and SNN on the car problem.} \label{fig:carRolloutHist} \end{figure} \subsection{Quadcopter with Collision Avoidance} \subsubsection{Problem Setup} The system has state $\bm{x}=(x,y,z,v_x, v_y, v_z, \phi, \theta, \psi, p, q, r)\in\mathbb{R}^{12}$ and control $\bm{u}\in\mathbb{R}^4$. We refer to \cite{Mellinger:2012ez} for the details. The goal is to control the quadcopter from any equilibrium state with position within $[-10, 10]^3$ and all other states zero to the goal state $\bs{0}$. The cost function is a weighted sum of time, control energy, and a penalty on the states, i.e. $J=w(t_f-t_0)+\int_{t_0}^{t_f}\bs{x}{^{\rm T}}\bs{Q}\bs{x}+\bm{u}{^{\rm T}}\bs{R}\bm{u}\,{\rm d}t$ with $w=10$, $\bs{Q}=\text{diag}(0, 0, 0, 1, 1, 1, 0.1, 0.1, 0.1, 1, 1, 1)$, $\bs{R}=\text{diag}(1, 1, 1, 1)$. The quadcopter-obstacle case imposes additional path constraints on the state variables. The obstacle is a sphere of varying position and radius; obstacles are randomly placed in space with radius within $[1, 5]$. We are interested in how the obstacles influence the trajectory. \subsubsection{Data Generation and Training} In the obstacle-free case, initial positions are sampled at random, and k-Means is used for clustering. The obstacle problem is more challenging because it has a higher-dimensional parameter space (7). The OCP is also more challenging to solve due to the non-convexity of the obstacle avoidance constraint.
We want to focus on problem instances with significant obstacles, so our dataset only includes examples where the optimal collision-free trajectory would collide with an obstacle. To generate this dataset, we collect obstacle-free trajectories and then sample obstacles that collide with each trajectory. We then re-optimize for the sampled obstacles. Samples of trajectories are shown in Fig.~\ref{fig:obsBunchTraj} and Fig.~\ref{fig:obsClusTraj}. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{obsBunchTraj} \caption{\small Trajectories of problems with parameters close to the problem with a sphere at (4, 4, 4), radius 3, and initial position (8, 8, 8). Each color corresponds to trajectories from one cluster. It shows that the trajectories can be quite different even for close problem parameters.} \label{fig:obsBunchTraj} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{obsClusTraj} \caption{\small Samples of optimal trajectories for the quadcopter problem. Each color corresponds to one trajectory cluster.} \label{fig:obsClusTraj} \end{figure} The discontinuity of the parameter-solution mapping in this problem arises when avoiding the obstacle from one direction outperforms the other directions, and vice versa. One feature that describes how the obstacle-free trajectory is affected by the obstacle is the gradient of the active constraints with respect to the state variables. Since the obstacles are spheres, the gradient is essentially the vector from the center of the sphere to the point on the surface where the constraint is active. Its direction clearly shows in which direction the trajectory has to change for collision avoidance. For trajectories that have more than one active constraint, we use the multipliers as weights and take the average of the gradients. In this way, a 3D vector is calculated for each trajectory and used as a feature to divide the problem space. We divide the dataset into 8 groups based on the sign of each element of the 3D vector.
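The multiplier-weighted gradient feature and the sign-based partition can be sketched as follows (our own minimal snippet; the variable names and sample values are illustrative):

```python
import numpy as np

def sphere_gradient_feature(points, center, multipliers):
    """Multiplier-weighted average of active-constraint gradients.

    For a sphere obstacle, the gradient of ||x - c|| - r at an active
    trajectory point x is the unit vector from the center c to x.
    """
    active = multipliers > 1e-8
    if not active.any():
        return np.zeros(3)
    g = points[active] - center
    g /= np.linalg.norm(g, axis=1, keepdims=True)   # unit gradients
    w = multipliers[active]
    return (w[:, None] * g).sum(axis=0) / w.sum()

def sign_cluster(feature):
    """Map the sign pattern of the 3D feature to one of 8 groups."""
    bits = (feature > 0).astype(int)
    return int(bits[0] * 4 + bits[1] * 2 + bits[2])

points = np.array([[5.0, 0.0, 1.0], [4.0, 1.0, 2.0], [3.0, 0.5, 1.5]])
mult = np.array([0.0, 2.0, 1.0])                 # first point not active
f = sphere_gradient_feature(points, np.array([4.0, 4.0, 4.0]), mult)
label = sign_cluster(f)                          # all components negative -> 0
```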
\subsubsection{Results and Discussion} The results show that, without obstacles, both SNN and MoE control the quadcopter to a stabilizable state in a highly reliable fashion. Hence, for validation we focus more on the amount of collision avoidance violation, i.e. $\min\{\|\bm{x}_i-\bm{c}_o\|-r_o\}_{i=0}^N$ where $r_o$ and $\bm{c}_o$ are respectively the radius and center of the obstacle. With obstacles, MoE with custom clustering also significantly outperforms the others. A histogram of the constraint violation is shown in Fig.~\ref{fig:constrVio}, indicating that MoE yields much lower constraint violation than SNN. Fig.~\ref{fig:obsSNNbadMoEgood} shows examples of optimal trajectories and the predictions from SNN and MoE. As the initial state moves along the $z$ direction, the optimal trajectories turn from going above to going below the obstacle. SNN is unable to handle such a discontinuity and predicts a trajectory that violates the constraints. MoE, however, is able to detect the discontinuity and predicts the corresponding trajectories. Note, though, that MoE still creates grazing collisions, so to successfully avoid an obstacle in practice, either a margin of error should be added to the modeled obstacle, or local collision avoidance should be added to the trajectory tracker. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{obsSNNbadMoMgood} \caption{\small Optimal trajectories and predictions from SNN and MoE for two selected close states. The green sphere is an obstacle centered at (0, 4, 4) with a radius of 3. The solid, dashed, and dotted lines are the optimal trajectories, the prediction of MoE, and the prediction of SNN, respectively.
It shows that SNN predicts a trajectory that violates the obstacle avoidance constraints.} \label{fig:obsSNNbadMoEgood} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{obsRolloutVioValidate} \caption {{\small Rollout constraint violation for quadcopter-obstacle.}} \label{fig:constrVio} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper we demonstrate that optimal trajectories can be learned with high accuracy if we take into account the special structure of optimal control problems. The mixture-of-experts model is designed such that each expert approximates a smooth region of the problem-optimum map, and the classifier handles discontinuities without averaging. It is important to train MoE with the correct clusters, and, curiously, coupled training of the regressors and the classifier tends to be detrimental to tracking performance. We also argue that test error is not a good metric for judging learned models; rather, the rollout success rate under trajectory tracking control is preferable. Future work includes developing more sophisticated clustering algorithms that automatically find the best partition strategy. For certain OCPs, differential flatness can be exploited so that the predicted trajectory satisfies the dynamical constraints. Further work also includes proving the stability of the predicted trajectories, and scaling up to handle larger problems, e.g., with sensor data or model uncertainties. \section*{Acknowledgments} This work is supported by NSF grant \#IIS-1816540. \bibliographystyle{plainnat}
\section{Introduction} \begin{tikzpicture}[overlay, remember picture] \node[red, yshift=-20mm, anchor=north, text width=115mm] at (current page.north) { \textbf{Preprint} version. Please consult the final version of record instead: \\ Lars Lenssen, Erich Schubert: Clustering by Direct Optimization of the Medoid Silhouette. Similarity Search and Applications (SISAP 2022)\\ \url{https://doi.org/10.1007/978-3-031-17849-8_15} }; \end{tikzpicture} In cluster analysis, the user is interested in discovering previously unknown structure in the data, as opposed to classification, where one predicts the known structure (labels) for new data points. Sometimes, clustering can also be interpreted as data quantization and approximation, for example $k$-means, which aims at minimizing the sum of squared errors when approximating the data with $k$ average vectors, spherical $k$-means, which aims to maximize the cosine similarities to the $k$ centers, and $k$-medoids, which minimizes the sum of distances when approximating the data by $k$ data points. Other clustering approaches such as DBSCAN \cite{DBLP:conf/kdd/EsterKSX96,DBLP:journals/tods/SchubertSEKX17} cannot easily be interpreted this way, but discover structure related to connected components and density-based minimal spanning trees \cite{DBLP:conf/lwa/SchubertHM18}. The evaluation of clusterings is a challenge, as there are no labels available. Many internal (``unsupervised'', not relying on external labels) evaluation measures have been proposed, such as the Silhouette~\cite{Rousseeuw/87a}, the Davies-Bouldin index, the Variance-Ratio criterion, the Dunn index, and many more; yet using these indexes for evaluation suffers from inherent problems. Bonner \cite{DBLP:journals/ibmrd/Bonner64} noted that ``none of the many specific definitions [...] seems best in any general sense'', and results are subjective ``in the eye of the beholder'' as noted by Estivill-Castro~\cite{DBLP:journals/sigkdd/Estivill-Castro02}.
While these claims refer to clustering methods, not evaluation methods, we argue that these do not differ substantially: each internal cluster evaluation method implies a clustering algorithm obtained by enumeration of all candidate clusterings, keeping the best. The main difference between clustering algorithms and internal evaluation then is whether or not we know an efficient optimization strategy. $K$-means is an optimization strategy for the sum of squares evaluation measure, while the $k$-medoids algorithms PAM and Alternating are different strategies for optimizing the sum of deviations from a set of $k$ representatives. In this article, we focus on the evaluation measure known as Silhouette~\cite{Rousseeuw/87a}, and discuss an efficient algorithm to optimize a variant of this measure, inspired by the well-known PAM algorithm \cite{Kaufman/Rousseeuw/87a, Kaufman/Rousseeuw/90c} and FasterPAM~\cite{DBLP:journals/is/SchubertR21,DBLP:conf/sisap/SchubertR19}. \section{Silhouette and Medoid Silhouette} The Silhouette~\cite{Rousseeuw/87a} is a popular measure to evaluate clustering validity, and performs very well in empirical studies \cite{DBLP:journals/pr/ArbelaitzGMPP13,DBLP:journals/pr/BrunSHLCSD07}. For the given samples $X {=} \{x_1,\ldots,x_n\}$, a dissimilarity measure $d:X{\times} X\to \mathbb{R}$, and the cluster labels $L{=}\{l_1,\ldots,l_n\}$, the Silhouette for a single element $i$ is calculated based on the average distance to its own cluster $a_i$ and the smallest average distance to another cluster $b_i$ as: \begin{align*} s_i(X, d, L) &= \tfrac{b_i-a_i}{\max(a_i,b_i)} \;\text{, where}\\ a_i &= \phantom{\min\nolimits_{k\neq l_i}\;} \operatorname{mean} \left\{d(x_i, x_j) \mid l_j = l_i\right\} \\ b_i &= \min\nolimits_{k\neq l_i}\;\operatorname{mean}\left\{d(x_i, x_j) \mid l_j = k\right\} \;.
\end{align*} The motivation is that ideally, each point is much closer to the cluster it is assigned to than to the ``second closest'' cluster. For $b_i{\gg} a_i$, the Silhouette approaches 1, while for points with $a_i{=}b_i$ we obtain a Silhouette of 0, and negative values can arise if there is another closer cluster and hence $b_i{<}a_i$. The Silhouette values $s_i$ can then be used to visualize the cluster quality by sorting objects by label $l_i$ first, and then by descending $s_i$, to obtain the Silhouette plot. However, visually inspecting the Silhouette plot is only feasible for small data sets, and hence it is also common to aggregate the values into a single statistic, often referred to as the Average Silhouette Width (ASW): \begin{align*} S(X, d, L) = \tfrac{1}{n} \textstyle\sum\nolimits_{i=1}^n s_i(X, d, L) \;. \end{align*} Hence, this is a function that maps a data set, dissimilarity, and cluster labeling to a real number, and this measure has been shown to satisfy desirable properties for clustering quality measures (CQM) by Ackerman and Ben-David \cite{DBLP:conf/nips/Ben-DavidA08}. A key limitation of the Silhouette is its computational cost. It is easy to see that it requires all pairwise dissimilarities, and hence takes $O(N^2)$ time to compute -- much more than popular clustering algorithms such as $k$-means. For algorithms such as $k$-means and $k$-medoids, a simple approximation to the Silhouette is possible by using the distance to the cluster centers (respectively medoids) $M=\{M_1,\ldots,M_k\}$ instead of the average distance. For this ``simplified Silhouette'' (which can be computed in $O(N k)$ time, and which Van der Laan et al.~\cite{VanderLaan/03a} called medoid-based Silhouette) we use \begin{align*} s_i'(X, d, M) &= \tfrac{b_i'-a_i'}{\max(a_i',b_i')} \;\text{, where}\\ a_i' &= \phantom{\min\nolimits_{k\neq l_i}\;{}} d(x_i, M_{l_i}) \\ b_i' &= \min\nolimits_{k\neq l_i}\; d(x_i, M_k) \;.
\end{align*} If each point is assigned to the closest cluster center (optimal for $k$-medoids and the Silhouette), we further know that $a_i'\leq b_i'$ and $s_i'\geq 0$, and hence this can further be simplified to the \emph{Medoid Silhouette} \begin{align*} \tilde{s}_i(X, d, M) &= \tfrac{d_2(i)-d_1(i)}{d_2(i)} = 1 - \tfrac{d_1(i)}{d_2(i)} \;, \end{align*} where $d_1$ is the distance to the closest and $d_2$ to the second closest center in~$M$. For $d_1(i)=d_2(i)=0$, we add a small $\varepsilon$ to $d_2(i)$ to get $\tilde{s}=1$. The Average Medoid Silhouette (AMS) then is defined as the average \begin{align*} \tilde{S}(X, d, M) = \tfrac{1}{n} \textstyle\sum\nolimits_{i=1}^n \tilde{s}_i(X, d, M) \;. \end{align*} It can easily be seen that the optimum clustering is the (assignment of points to the) set of medoids such that we minimize an ``average relative loss'': \begin{align*} \operatorname{arg\,max}_M \tilde{S}(X, d, M) = \operatorname{arg\,min}_M \operatorname{mean}_i \tfrac{d_1(i)}{d_2(i)} \;. \end{align*} For clustering around medoids, we impose the restriction $M\subseteq X$, which has the benefit of not restricting the input data to be numerical (e.g., $X\subset\mathbb{R}^d$, as in $k$-means), and allowing non-metric dissimilarity functions~$d$. \section{Related Work} The Silhouette~\cite{Rousseeuw/87a} was originally proposed along with Partitioning Around Medoids (PAM,~\cite{Kaufman/Rousseeuw/87a, Kaufman/Rousseeuw/90c}), and indeed $k$-medoids already does a decent job at finding a good solution, although it does optimize a different criterion (the sum of total deviations). Van der Laan et al.~\cite{VanderLaan/03a} proposed to optimize the Silhouette by substituting the Silhouette evaluation measure into the PAM SWAP procedure (calling this PAMSIL).
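To make the cost difference between the two measures concrete, both can be computed naively as follows. This is a pure-Python sketch with names of our own choosing; \texttt{dist} is an arbitrary dissimilarity function, and we assume every cluster has at least two members and $k\geq 2$ medoids:

```python
def average_silhouette(X, dist, labels):
    """Average Silhouette Width (ASW): needs all pairwise dissimilarities, O(N^2)."""
    n = len(X)
    total = 0.0
    for i in range(n):
        # mean distance from x_i to each cluster (own cluster without x_i itself)
        means = {}
        for c in set(labels):
            members = [j for j in range(n) if labels[j] == c and j != i]
            if members:
                means[c] = sum(dist(X[i], X[j]) for j in members) / len(members)
        a = means[labels[i]]                                  # own cluster
        b = min(v for c, v in means.items() if c != labels[i])  # best other cluster
        total += (b - a) / max(a, b)
    return total / n

def average_medoid_silhouette(X, dist, medoids):
    """Average Medoid Silhouette (AMS): only distances to the k medoids, O(Nk)."""
    total = 0.0
    for x in X:
        d1, d2 = sorted(dist(x, m) for m in medoids)[:2]
        total += 1.0 - d1 / (d2 + 1e-12)  # small epsilon handles d1 = d2 = 0
    return total / len(X)
```

The ASW needs all pairwise dissimilarities, while the AMS only needs the distances of each point to the $k$ medoids, which is what makes the run time differences discussed below possible.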
Because they recompute the loss function each time (as opposed to PAM, which computes the change), the complexity of PAMSIL is $O(k(N-k) N^2)$, since for each of $k\cdot (N-k)$ possible swaps, the Silhouette is computed in $O(N^2)$. Because this yields a very slow clustering method, they also considered the Medoid Silhouette instead (PAMMEDSIL), which only needs $O(k^2(N-k)N)$ time (but still considerably more than PAM). Schubert and Rousseeuw \cite{DBLP:journals/is/SchubertR21,DBLP:conf/sisap/SchubertR19} recently improved the PAM method, and their FastPAM approach reduces the cost of PAM by a factor of $O(k)$, making the method $O(N^2)$ by the use of a shared accumulator to avoid the innermost loop. In this work, we will combine ideas from this algorithm with the PAMMEDSIL approach above, to optimize the Medoid Silhouette with a swap-based local search, at a run time comparable to FastPAM. But we will first perform a theoretical analysis of the properties of the Medoid Silhouette, to show that it is worth exploring as an alternative to the original Silhouette. \section{Axiomatic Characterization of Medoid Clustering} We follow the axiomatic approach of Ackerman and Ben-David~\cite{DBLP:conf/nips/Ben-DavidA08}, to prove the value of using the Average Medoid Silhouette (AMS) as a clustering quality measure (CQM). Kleinberg~\cite{Kleinberg/Jon/02a} defined three axioms for clustering functions and argued that no clustering algorithm can satisfy these desirable properties at the same time, as they are contradictory. Because of this, Ackerman and Ben-David~\cite{DBLP:conf/nips/Ben-DavidA08} weaken the original Consistency Axiom and extract four axioms for clustering quality measures: \emph{Scale Invariance} and \emph{Richness} are defined analogously to the Kleinberg Axioms. We redefine the CQM axioms~\cite{DBLP:conf/nips/Ben-DavidA08} for medoid-based clustering.
\begin{definition} \label{d1} For given data points $X = \{x_1,\ldots,x_n\}$ with a set of $k$ medoids $M = \{m_1,\ldots,m_k\}$ and a dissimilarity $d$, we write $x_i \sim_M x_{i'}$ whenever $x_i$ and~$x_{i'}$ have the same nearest medoid ${n_1}(i) \in M$, otherwise $x_i \not\sim_M x_{i'}$. \end{definition} \begin{definition} \label{d2} Dissimilarity $d'$ is an M-consistent variant of $d$, if $d'(x_i, x_{i'}) \leq d(x_i, x_{i'})$ for $x_i \sim_M x_{i'}$, and $d'(x_i, x_{i'}) \geq d(x_i, x_{i'})$ for $x_i \not\sim_M x_{i'}$. \end{definition} \begin{definition} \label{d3} Two sets of medoids $M, M' \subseteq X$ with a distance function $d$ over~$X$ are isomorphic if there exists a distance-preserving isomorphism $\phi : X \to X$, such that for all $x_i, x_{i'} \in X$, $x_i \sim_M x_{i'}$ if and only if $\phi(x_i) \sim_{M'} \phi(x_{i'})$. \end{definition} \begin{axiom}[Scale Invariance] \label{a1} A medoid-based clustering quality measure~$f$ satisfies scale invariance if for every set of medoids $M \subseteq X$ for $d$, and every positive~$\lambda$, $f(X, d, M) = f(X, \lambda d, M)$. \end{axiom} \begin{axiom}[Consistency] \label{a2} A medoid-based clustering quality measure~$f$ satisfies consistency if for a set of medoids $M \subseteq X$ for $d$, whenever $d'$ is an \mbox{M-consistent} variant of~$d$, then $f(X, d', M) \geq f(X, d, M)$. \end{axiom} \begin{axiom}[Richness] \label{a3} A medoid-based clustering quality measure~$f$ satisfies richness if for each set of medoids $M \subseteq X$, there exists a distance function $d$ over~$X$ such that $M = \operatorname{arg\,max}_{M'\subseteq X}f(X, d, M')$. \end{axiom} \begin{axiom}[Isomorphism Invariance] \label{a4} A medoid-based clustering quality measure~$f$ is isomorphism-invariant if for all sets of medoids $M, M' \subseteq X$ with distance $d$ over~$X$ where $M$ and $M'$ are isomorphic, $f(X, d, M) = f(X, d, M')$.
\end{axiom} Batool and Hennig~\cite{Batool/Hennig/21a} prove that the ASW satisfies the original CQM axioms. We prove the first three adapted axioms for the Average Medoid Silhouette. The fourth, Isomorphism Invariance, is obviously fulfilled, since AMS is based only on dissimilarities, just as the ASW~\cite{Batool/Hennig/21a}. \begin{theorem} The AMS is a \emph{scale invariant} clustering quality measure. \end{theorem} \begin{proof} If we replace $d$ with $\lambda d$, both $d_1(i)$ and $d_2(i)$ are multiplied by $\lambda$, and the factor cancels out. Hence, $\tilde{s}_i$ does not change for any $i$: \begin{align*} \tilde{S}(X, \lambda d, M) &= \tfrac{1}{n} \textstyle\sum\nolimits_{i=1}^n \tilde{s}_i(X, \lambda d, M) = \tfrac{1}{n} \textstyle\sum\nolimits_{i=1}^n \tfrac{\lambda d_2(i)-\lambda d_1(i)}{\lambda d_2(i)} \\ &= \tfrac{1}{n} \textstyle\sum\nolimits_{i=1}^n \tfrac{ d_2(i)- d_1(i)}{ d_2(i)} = \tfrac{1}{n} \textstyle\sum\nolimits_{i=1}^n \tilde{s}_i(X, d, M) = \tilde{S}(X, d, M) \;. \end{align*} \end{proof} \begin{theorem} The AMS is a \emph{consistent} clustering quality measure. \end{theorem} \begin{proof} Let dissimilarity $d'$ be an M-consistent variant of $d$. By Definition~\ref{d2}: $d'(x_i, x_{i'} ) \leq d(x_i, x_{i'} )$ for all $x_i \sim_{M} x_{i'}$, and $\min_{x_i \not\sim_{M} x_{i'}}d'(x_i, x_{i'} ) \geq \min_{x_i \not\sim_{M} x_{i'}}d(x_i, x_{i'} )$. This implies for all $i \in \{1,\ldots,n\}$: $d'_1(i) \leq d_1(i), d'_2(i) \geq d_2(i)$ and it follows: \begin{align*} \tfrac{d_1(i)}{ d_2(i)} - \tfrac{ d'_1(i)}{ d'_2(i)} &\geq 0 \quad\Leftrightarrow\quad \tfrac{ d'_2(i)- d'_1(i)}{ d'_2(i)} - \tfrac{ d_2(i)- d_1(i)}{ d_2(i)} \geq 0 \end{align*} which is equivalent to $\forall_i \; \tilde{s}_i(X, d', M) \geq \tilde{s}_i(X, d, M)$, hence $\tilde{S}(X, d', M) \geq \tilde{S}(X, d, M)$, i.e., AMS is a consistent clustering quality measure. \end{proof} \begin{theorem} The AMS is a \emph{rich} clustering quality measure.
\end{theorem} \begin{proof} We can simply encode the desired set of medoids $M$ in our dissimilarity~$d$. We define $d(x_i,x_j)$ such that it is~0 if trivially $i=j$, or if $x_i$ or $x_j$ is the first medoid $m_1$ and the other is not a medoid itself. Otherwise, let the distance be~1. For $M$ we then obtain $\tilde{S}(X, d, M)=1$, because $d_1(i)=0$ for all objects, as either $x_i$ is a medoid itself, or can be assigned to the first medoid~$m_1$. This is the maximum possible Average Medoid Silhouette. Let $M'\neq M$ be any other set of medoids. Then there exists at least one missing $x_i\in M\setminus M'$. For this object $\tilde{s}_i(X, d, M')=0$ (as its distance to all other objects is 1, and it is not in~$M'$), and hence $\tilde{S}(X, d, M')<1=\tilde{S}(X, d, M)$. \end{proof} \section{Direct Optimization of Medoid Silhouette} PAMSIL~\cite{VanderLaan/03a} is a modification of PAM~\cite{Kaufman/Rousseeuw/87a, Kaufman/Rousseeuw/90c} to optimize the ASW. For PAMSIL, Van der Laan et al.~\cite{VanderLaan/03a} adjust the SWAP phase of PAM by always performing the SWAP that provides the best increase in the ASW. When no further improvement is found, a (local) maximum of the ASW has been achieved. However, where the original PAM efficiently computes only the change in its loss (in $O(N-k)$ time for each of $(N-k)k$ swap candidates), PAMSIL computes the entire ASW in $O(N^2)$ for every candidate, and hence the run time per iteration increases to $O(k(N-k)N^2)$. For a small $k$, this yields a run time that is cubic in the number of objects $N$, and the algorithm may need several iterations to converge. \subsection{Naive Medoid Silhouette Clustering} PAMMEDSIL~\cite{VanderLaan/03a} uses the Average Medoid Silhouette (AMS) instead, which can be evaluated in only $O(Nk)$ time. This yields a SWAP run time of $O(k^2(N-k)N)$ (for small $k\ll N$ only quadratic in~$N$). \begin{algorithm2e}[tb!]
\caption{PAMMEDSIL SWAP: Iterative improvement} \label{alg1} \SetKwBlock{Repeat}{repeat}{} \SetKw{Break}{break} $S' \gets $ Simplified Silhouette sum of the initial solution $M$\; \Repeat{ $(S'_*, M_*)\gets(0,$null$)$\; \ForEach(\tcp*[f]{each medoid}\label{alg1-loop1}){$m_i\in M=\{m_1,\ldots,m_k\}$} { \ForEach(\tcp*[f]{each non-medoid}\label{alg1-loop2}){$x_j\notin\{m_1,\ldots,m_k\}$} { $(S'',M') \gets (0, M \setminus \{m_i\} \cup \{x_j\})$\; \ForEach(\label{alg1-loop3}){$x_o\in X=\{x_1,\ldots,x_n\}$} { $S'' \gets S'' + s_o'(X,d,M')$\tcp*{ Simplified Silhouette} } \lIf(\tcp*[f]{keep best swap found}\label{alg1-if1}) {$S'' > S'_*$} { $(S'_*, M_*)\gets( S'',M')$% } } } \lIf(\tcp*[f]{stop if no improvement}){$S'_*\leq S'$}{\Break} $(S',M) \gets (S'_*,M_*)$\tcp*[r]{perform swap} } \Return {$(S' / N,M)$}\; \end{algorithm2e} As Schubert and Rousseeuw \cite{DBLP:journals/is/SchubertR21,DBLP:conf/sisap/SchubertR19} were able to reduce the run time of PAM to $O(N^2)$ per iteration, we modify the PAMMEDSIL approach accordingly to obtain a similar speedup. The SWAP algorithm of PAMMEDSIL is shown in Algorithm~\ref{alg1}. \subsection{Finding the Best Swap} \label{sec52} We first bring PAMMEDSIL up to par with regular PAM. The trick introduced with PAM is to compute the change in loss instead of recomputing the loss, which can be done in $O(N-k)$ instead of $O(k(N-k))$ time if we store the distance to the nearest and second nearest centers, as the latter allows us to efficiently compute the change when the current nearest center is removed. In the following, we omit the constant parameters $X$ and $d$ for brevity. We denote the previously nearest medoid of $x_o$ as ${n_1}(o)$, and $d_1(o)$ is the (cached) distance to it. We similarly define ${n_2}(o)$, $d_2(o)$, and $d_3(o)$ with respect to the second and third nearest medoid. We briefly use $d_1'$ and $d_2'$ to denote the new distances for a candidate swap.
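Maintaining these cached quantities is straightforward; the following pure-Python sketch (function name ours) computes them for every object with one ranking of the medoids per point, assuming $k\geq 3$:

```python
def build_cache(points, medoids, dist):
    """For every point, cache the indices of its two nearest medoids (n1, n2)
    and the distances to its three nearest medoids (d1 <= d2 <= d3).
    Requires k >= 3 distinct medoids."""
    cache = []
    for p in points:
        # rank medoid indices by distance to p
        ranked = sorted(range(len(medoids)), key=lambda m: dist(p, medoids[m]))
        n1, n2, n3 = ranked[0], ranked[1], ranked[2]
        cache.append((n1, n2,
                      dist(p, medoids[n1]),
                      dist(p, medoids[n2]),
                      dist(p, medoids[n3])))
    return cache
```

After a swap is performed, the cache is rebuilt (or updated) before searching for the next swap, so the change computations below never need to consult more than these five values per point.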
For the Medoid Silhouette, we can compute the change when swapping medoid $m_i\in\{m_1,\ldots,m_k\}$ with non-medoid $x_j\notin\{m_1,\ldots,m_k\}$: \begin{align*} \Delta\ensuremath{\mathrm{\tilde{S}}} &= \tfrac{1}{n} \textstyle\sum\nolimits_{o=1}^n \Delta\ensuremath{\mathrm{\tilde{s}}}_o(M,m_i,x_j) \\ \Delta\ensuremath{\mathrm{\tilde{s}}}_o(M,m_i,x_j) &= \ensuremath{\mathrm{\tilde{s}}}_o(M \setminus \{m_i\} \cup \{x_j\}) - \ensuremath{\mathrm{\tilde{s}}}_o(M) \\ &= \tfrac{d_2'(o) - d_1'(o)}{d_2'(o)} - \tfrac{d_2(o)-d_1(o)}{d_2(o)} = \tfrac{d_1(o)}{d_2(o)} - \tfrac{d_1'(o)}{d_2'(o)} \;. \end{align*} Clearly, we only need the distances to the closest and second closest center, before and after the swap. Instead of recomputing them, we exploit that only one medoid can change in a swap. By determining the new values of $d_1'$ and $d_2'$ using cached values only, we can save a factor of $O(k)$ on the run time. In the PAM algorithm (where the change would simply be $d_1'-d_1$), the distance to the \emph{second} nearest is cached in order to compute the loss change if the current medoid is removed, without having to consider all $k-1$ other medoids: the point is then either assigned to the new medoid, or to its former second closest. To efficiently compute the change in Medoid Silhouette, we have to take this one step further, and additionally need to cache the identity of the second closest center and the distance to the \emph{third} closest center (denoted~$d_3$). This is needed when, e.g., the nearest medoid is replaced: then we may have $d_1'=d_2$ and $d_2'=d_3$, provided that we can distinguish these cases. The change in Medoid Silhouette is then computed roughly as follows: (1) If the new medoid is the new closest, the second closest is either the former nearest, or the second nearest (if the first was replaced). (2) If the new medoid is the new second closest, the closest either remains the former nearest, or becomes the second nearest (if the first was replaced).
(3) If the new medoid is neither, we may still have replaced the closest or second closest; in which case the distance to the third nearest is necessary to compute the new Silhouette. Putting all the cases (and sub-cases) into one equation becomes a bit messy, and hence we opt to use pseudocode in Algorithm~\ref{alg:change} instead of an equivalent mathematical notation. \begin{algorithm2e}[tb] \caption{Change in Medoid Silhouette, $\Delta\ensuremath{\mathrm{\tilde{s}}}_o(M,m_i,x_j)$} \label{alg:change} \If(\tcp*[f]{nearest is replaced}){$m_i={n_1}(o)$}{ \lIf(\tcp*[f]{xj is new nearest}){$d(o,j)< d_2(o)$}{ \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d(o,j)}{d_2(o)}$ } \lIf(\tcp*[f]{xj is new second}){$d(o,j)< d_3(o)$}{ \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d_2(o)}{d(o,j)}$ } \lElse { \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d_2(o)}{d_3(o)}$ } } \ElseIf(\tcp*[f]{second nearest is replaced}){$m_i={n_2}(o)$}{ \lIf(\tcp*[f]{xj is new nearest}){$d(o,j)< d_1(o)$}{ \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d(o,j)}{d_1(o)}$ } \lIf(\tcp*[f]{xj is new second}){$d(o,j)< d_3(o)$}{ \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d_1(o)}{d(o,j)}$ } \lElse { \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d_1(o)}{d_3(o)}$ } } \Else{ \lIf(\tcp*[f]{xj is new nearest}){$d(o,j)< d_1(o)$}{ \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d(o,j)}{d_1(o)}$ } \lIf(\tcp*[f]{xj is new second}){$d(o,j)< d_2(o)$}{ \Return \tabto*{48mm} $\frac{d_1(o)}{d_2(o)}-\frac{d_1(o)}{d(o,j)}$ } \lElse { \Return \tabto*{48mm} 0 } } \end{algorithm2e} Note that the first term is always the same (the previous loss), except for the last case, where it cancels out via $0=\frac{d_1(o)}{d_2(o)}-\frac{d_1(o)}{d_2(o)}$. As this is a frequent case, it is beneficial to not have further computations here (and hence, to compute the change instead of computing the loss).
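The same case analysis can be written down as an executable sketch (pure Python; argument names are ours, mirroring the cached quantities, with the booleans indicating whether $m_i$ is the nearest or second nearest medoid of the point):

```python
def delta_ms(mi_is_n1, mi_is_n2, d1, d2, d3, doj):
    """Change in the Medoid Silhouette of one point when medoid m_i is swapped
    for candidate x_j, where doj = d(o, j) and d1 <= d2 <= d3 are the cached
    distances to the three nearest medoids."""
    prev = d1 / d2                      # previous loss term d1(o)/d2(o)
    if mi_is_n1:                        # nearest medoid is replaced
        if doj < d2:                    # x_j becomes the new nearest
            return prev - doj / d2
        if doj < d3:                    # old second moves up, x_j is new second
            return prev - d2 / doj
        return prev - d2 / d3           # old second and third move up
    if mi_is_n2:                        # second nearest medoid is replaced
        if doj < d1:                    # x_j is new nearest, old nearest is second
            return prev - doj / d1
        if doj < d3:                    # x_j is new second
            return prev - d1 / doj
        return prev - d1 / d3           # old third moves up to second
    # some other medoid is replaced
    if doj < d1:                        # x_j is new nearest
        return prev - doj / d1
    if doj < d2:                        # x_j is new second
        return prev - d1 / doj
    return 0.0                          # nothing changes for this point
```

The results agree with recomputing $\tilde s_o$ from scratch before and after the swap, which is an easy way to unit-test such an implementation.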
Clearly, this algorithm runs in $O(1)$ if $n_1(o)$, $n_2(o)$, $d_1(o)$, $d_2(o)$, and $d_3(o)$ are known. We also only compute $d(o,j)$ once. Modifying PAMMEDSIL (Algorithm~\ref{alg1}) to use this computation yields a run time of $O(k(N-k)N)$ to find the best swap, i.e., already $O(k)$ times faster. But we can further improve this approach. \subsection{Fast Medoid Silhouette Clustering} We now integrate an acceleration added to the PAM algorithm by Schubert and Rousseeuw~\cite{DBLP:conf/sisap/SchubertR19,DBLP:journals/is/SchubertR21}, that exploits redundancy among the loop over the $k$ medoids to replace. For this, the loss change $\Delta\ensuremath{\mathrm{\tilde{S}}}(m_i,x_j)$ is split into multiple components: (1)~the change by removing medoid $m_i$ (without choosing a replacement), (2)~the change by adding $x_j$ as an additional medoid, and (3)~a correction term if both operations occur at the same time. The first component can be computed in $O(kN)$ for all medoids, the second in $O(N(N-k))$ for all candidates, and the correction term is~0 unless the removed medoid is one of the two closest, hence it can also be computed in $O(N^2)$ total. This then yields an algorithm that finds the best swap in $O(N^2)$, again $O(k)$ times faster. The first terms (the removal of each medoid $m_i\in M$) are computed as: \begin{align} \Delta\ensuremath{\mathrm{\tilde{S}}}^{-m_i} =& \sum\nolimits_{{n_1}(o)=i} \tfrac{d_1(o)}{d_2(o)}-\tfrac{d_2(o)}{d_3(o)} + \sum\nolimits_{{n_2}(o)=i} \tfrac{d_1(o)}{d_2(o)}-\tfrac{d_1(o)}{d_3(o)} \;, \label{eq:removal-mi} \shortintertext{ while for the second we compute the addition of a new medoid $x_j\not\in M$ } \Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j} =& \sum_{o=1}^n \begin{cases} \frac{d_1(o)}{d_2(o)}-\frac{d(o,j)}{d_1(o)} & \text{if }d(o,j)< d_1(o) \\ \frac{d_1(o)}{d_2(o)}-\frac{d_1(o)}{d(o,j)} & \text{else if }d(o,j)< d_2(o) \\ 0 & \text{otherwise} \;.
\end{cases} \notag \shortintertext{ Combining these yields the change: } \Delta\ensuremath{\mathrm{\tilde{S}}}(m_i,x_j) =& \Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j} + \Delta\ensuremath{\mathrm{\tilde{S}}}^{-m_i} \notag\\ &+ \sum_{\substack{o \text{ with}\\{n_1}(o)=i}} \begin{cases} \frac{d(o,j)}{d_1(o)}+\frac{d_2(o)}{d_3(o)}-\frac{d_1(o)+d(o,j)}{d_2(o)} & \text{if }d(o,j)< d_1(o)\\ \frac{d_1(o)}{d(o,j)}+\frac{d_2(o)}{d_3(o)}-\frac{d_1(o)+d(o,j)}{d_2(o)} & \text{else if }d(o,j)< d_2(o) \\ \frac{d_2(o)}{d_3(o)}-\frac{d_2(o)}{d(o,j)} & \text{else if }d(o,j)< d_3(o) \\ 0 & \text{otherwise} \end{cases} \notag\\ &+ \sum_{\substack{o \text{ with}\\{n_2}(o)=i}} \begin{cases} \frac{d_1(o)}{d_3(o)}-\frac{d_1(o)}{d_2(o)} & \text{if }d(o,j)< d_1(o) \\ \frac{d_1(o)}{d_3(o)}-\frac{d_1(o)}{d_2(o)} & \text{else if }d(o,j)< d_2(o) \\ \frac{d_1(o)}{d_3(o)}-\frac{d_1(o)}{d(o,j)} & \text{else if }d(o,j)< d_3(o) \\ 0 & \text{otherwise} \;. \end{cases} \notag \end{align} It is easy to see that the additional summands can be computed by iterating over all objects $x_o$, and adding their contributions to accumulators for $n_1(o)$ and $n_2(o)$. As each object $o$ contributes to exactly two cases, the run time is $O(N)$. \begin{algorithm2e}[tb!] 
\caption{FastMSC: Improved SWAP algorithm} \label{alg:fastpms} \SetKwBlock{Repeat}{repeat}{} \SetKw{BreakOuterLoopIf}{break outer loop if} \Repeat{ \lForEach(\label{alg:fastpms-loop1}){$x_o$}{compute ${n_1}(o), {n_2}(o), d_1(o), d_2(o), d_3(o)$} $\Delta\ensuremath{\mathrm{\tilde{S}}}^{-m_1},\ldots,\Delta\ensuremath{\mathrm{\tilde{S}}}^{-m_k} \gets$ compute loss change removing each $m_i$ using \eqref{eq:removal-mi}\; $(\Delta\ensuremath{\mathrm{\tilde{S}}}^*, m^*, x^*)\gets(0,$null$,$null$)$\; \ForEach(\tcp*[f]{each non-medoid}\label{alg:fastpms-loop2}){$x_j\notin\{m_1,\ldots,m_k\}$} { $\Delta\ensuremath{\mathrm{\tilde{S}}}_1,\ldots,\Delta\ensuremath{\mathrm{\tilde{S}}}_k\gets(\Delta\ensuremath{\mathrm{\tilde{S}}}^{-m_1},\ldots,\Delta\ensuremath{\mathrm{\tilde{S}}}^{-m_k})$\label{alg:fastpmsl6}\tcp*[r]{use removal loss} $\Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j}\gets0$\label{alg:fastpmsl7}\tcp*[r]{initialize shared accumulator} \ForEach(\label{alg:fastpms-loop3}){$x_o\in\{x_1,\ldots,x_n\}$} { $d_{oj}\gets d(x_o,x_j)$\tcp*[r]{distance to new medoid} \If(\tcp*[f]{new closest}\label{alg:fastpmsl10}) {$d_{oj} < d_1(o)$} { $\Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j}\gets\Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j}+d_1(o)/d_2(o)-d_{oj}/d_1(o)$\label{alg:fastpmsl11}\; $\Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_1}(o)}\gets \Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_1}(o)}+ d_{oj}/d_1(o) + d_2(o)/d_3(o) - \tfrac{d_1(o)+d_{oj}}{d_2(o)}$\label{alg:fastpmsl12}\; $\Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_2}(o)}\gets \Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_2}(o)}+d_1(o)/d_3(o) - d_1(o)/d_2(o)$\; } \ElseIf(\tcp*[f]{new first/second closest}\label{alg:fastpms-if2}) {$d_{oj} < d_2(o)$} { $\Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j}\gets\Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j}+d_1(o)/d_2(o)-d_1(o)/d_{oj}$\label{alg:fastpmsl15}\; $\Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_1}(o)}\gets \Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_1}(o)}+d_1(o)/d_{oj} + d_2(o)/d_3(o) - \tfrac{d_1(o)+d_{oj}}{d_2(o)}$\label{alg:fastpmsl16}\; $\Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_2}(o)}\gets \Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_2}(o)}+d_1(o)/d_3(o) - d_1(o)/d_2(o)$\; } \ElseIf(\tcp*[f]{new second/third closest}\label{alg:fastpms-if3}) {$d_{oj} < d_3(o)$} { $\Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_1}(o)}\gets \Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_1}(o)}+d_2(o)/d_3(o) - d_2(o)/d_{oj}$\; $\Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_2}(o)}\gets \Delta\ensuremath{\mathrm{\tilde{S}}}_{{n_2}(o)}+d_1(o)/d_3(o) - d_1(o)/d_{oj}$\label{alg:fastpmsl20}\; } } $i\gets \operatorname{arg\,max}_i\Delta\ensuremath{\mathrm{\tilde{S}}}_i$\; $\Delta\ensuremath{\mathrm{\tilde{S}}}_i\gets\Delta\ensuremath{\mathrm{\tilde{S}}}_i+\Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j}$\; \lIf(\label{alg:fastpms-if1}) {$\Delta\ensuremath{\mathrm{\tilde{S}}}_i > \Delta\ensuremath{\mathrm{\tilde{S}}}^*$} { $(\Delta\ensuremath{\mathrm{\tilde{S}}}^*, m^*, x^*)\gets(\Delta\ensuremath{\mathrm{\tilde{S}}}_i,m_i,x_j)$ } } \BreakOuterLoopIf{$\Delta\ensuremath{\mathrm{\tilde{S}}}^*\leq0$}\; swap roles of medoid $m^*$ and non-medoid $x^*$\tcp*[r]{perform swap} $\ensuremath{\mathrm{\tilde{S}}}\gets\ensuremath{\mathrm{\tilde{S}}}+\Delta\ensuremath{\mathrm{\tilde{S}}}^*$\; } \Return{$\ensuremath{\mathrm{\tilde{S}}},M$}\; \end{algorithm2e} This then gives Algorithm~\ref{alg:fastpms}, which computes $\Delta\ensuremath{\mathrm{\tilde{S}}}^{+x_j}$ along with the sum of $\Delta\ensuremath{\mathrm{\tilde{S}}}^{-m_i}$ and these correction terms in an accumulator array. The algorithm needs $O(k)$ memory for the accumulators in the loop, and $O(N)$ additional memory to store the cached $n_1$, $n_2$, $d_1$, $d_2$, and $d_3$ for each object. This algorithm gives the same result, but FastMSC (``Fast Medoid Silhouette Clustering'') is $O(k^2)$ faster than the naive PAMMEDSIL.
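As a sanity check on the decomposition, the search for the best swap can be sketched in pure Python (all names are ours; this is an illustrative reimplementation, not the Rust code used in the experiments). For small inputs, the gain it reports agrees exactly with recomputing the AMS sum before and after every candidate swap:

```python
def ams_sum(points, medoids, dist):
    """Sum of Medoid Silhouettes (without the 1/n factor), naive O(Nk).
    Assumes distinct medoids, so d2 > 0 for every point."""
    total = 0.0
    for p in points:
        d1, d2 = sorted(dist(p, m) for m in medoids)[:2]
        total += 1.0 - d1 / d2
    return total

def best_swap(points, medoids, dist):
    """Find the best swap (gain, medoid index, candidate) via the decomposition:
    precomputed removal losses per medoid, plus one shared accumulator and k
    correction accumulators filled in a single pass per candidate.  k >= 3."""
    k = len(medoids)
    cache = []  # (n1, n2, d1, d2, d3) per point
    for p in points:
        ranked = sorted(range(k), key=lambda m: dist(p, medoids[m]))
        ds = [dist(p, medoids[m]) for m in ranked[:3]]
        cache.append((ranked[0], ranked[1], ds[0], ds[1], ds[2]))
    rem = [0.0] * k  # removal losses Delta S^{-m_i}
    for n1, n2, d1, d2, d3 in cache:
        rem[n1] += d1 / d2 - d2 / d3
        rem[n2] += d1 / d2 - d1 / d3
    best = (0.0, None, None)
    med_set = set(medoids)
    for xj in points:
        if xj in med_set:
            continue
        acc = list(rem)  # removal loss plus correction terms, per medoid
        add = 0.0        # shared accumulator Delta S^{+x_j}
        for p, (n1, n2, d1, d2, d3) in zip(points, cache):
            doj = dist(p, xj)
            if doj < d1:          # x_j would be the new nearest medoid of p
                add += d1 / d2 - doj / d1
                acc[n1] += doj / d1 + d2 / d3 - (d1 + doj) / d2
                acc[n2] += d1 / d3 - d1 / d2
            elif doj < d2:        # x_j would be first or second nearest
                add += d1 / d2 - d1 / doj
                acc[n1] += d1 / doj + d2 / d3 - (d1 + doj) / d2
                acc[n2] += d1 / d3 - d1 / d2
            elif doj < d3:        # x_j would be second or third nearest
                acc[n1] += d2 / d3 - d2 / doj
                acc[n2] += d1 / d3 - d1 / doj
        i = max(range(k), key=lambda m: acc[m])  # best medoid to remove for this x_j
        if acc[i] + add > best[0]:
            best = (acc[i] + add, i, xj)
    return best
```

Note that, as in Algorithm~\ref{alg:fastpms}, the inner loop touches at most two accumulators per point, which is where the $O(k)$ speedup over the naive variant comes from.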
\subsection{Eager Swapping and Random Initialization} We can now integrate further improvements by Schubert and Rousseeuw~\cite{DBLP:journals/is/SchubertR21}. Because doing the best swap (steepest descent) does not appear to guarantee finding better solutions, but requires a pass over the entire data set for each step, we can converge to local optima much faster if we perform every swap that yields an improvement, even though this means we may repeatedly replace the same medoid. For PAM they called this eager swapping, and named the variant FasterPAM. This does not improve the theoretical run time (the last iteration will always require a pass over the entire data set to detect convergence), but empirically reduces the number of iterations substantially. It will no longer find the same results, but there is no evidence that a steepest descent is beneficial over choosing the first descent found. The main downside is that it increases the dependency on the data ordering, and hence is best used on shuffled data when run repeatedly. Similarly, we will study a variant that eagerly performs the first swap that improves the AMS, called FasterMSC (``Fast and Eager Medoid Silhouette Clustering''). Also, the classic initialization with PAM BUILD now becomes the performance bottleneck, and Schubert and Rousseeuw~\cite{DBLP:journals/is/SchubertR21} showed that random initialization in combination with eager swapping works very well. \section{Experiments} We next evaluate clustering quality, to show the benefits of optimizing AMS. We report both AMS and ASW, as well as the supervised measures Adjusted Rand Index (ARI) and Normalized Mutual Information (NMI). Afterward, we study the scalability, to verify the expected speedup for our algorithm FastMSC. \subsection{Data Sets} Since it became possible to map gene expression at the single-cell level by RNA sequencing, clustering such data has become a popular task, and the Silhouette is a popular evaluation measure there.
Single-cell RNA sequencing (scRNA-seq) provides high-dimensional data that requires appropriate preprocessing to extract information. After extraction of significant genes, these marker genes are validated by clustering of proper cells. \begin{figure}[tb!] \begin{subfigure}{0.45\textwidth}\centering \begin{tikzpicture}[font=\small] \begin{axis}[unit vector ratio*=1 1 1, width=1.1\textwidth, xmin = -100, xmax = 150, ymin = -100, ymax = 150, xlabel={PC1}, ylabel={PC2}, xticklabels={,,}, yticklabels={,,}] \addplot +[only marks, mark=text, text mark=1, mark options={scale=0.6}, Pa1] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/counttable_al_2i.csv};\label{plot1_l_1} \addplot +[only marks, mark=text, text mark=2, mark options={scale=0.6}, Pa2] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/counttable_al_a2i.csv};\label{plot1_l_2} \addplot +[only marks, mark=text, text mark=3, mark options={scale=0.6}, Pa3] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/counttable_al_lif.csv};\label{plot1_l_3} \addlegendimage{/pgfplots/refstyle=plot1_l_1}\addlegendentry{2i} \addlegendimage{/pgfplots/refstyle=plot1_l_2}\addlegendentry{a2i} \addlegendimage{/pgfplots/refstyle=plot1_l_3}\addlegendentry{lif} \end{axis} \end{tikzpicture} \caption{Kolodziejczyk et al.~\cite{Kolodziejczyk/15a}} \label{plot2_1} \end{subfigure} \begin{subfigure}{0.45\textwidth}\centering \begin{tikzpicture}[font=\small] \begin{axis}[unit vector ratio*=1 1 1, width=1.1\textwidth, xmin = -30, xmax = 70, ymin = -50, ymax = 50, xlabel={PC2}, ylabel={PC1}, xticklabels={,,}, yticklabels={,,}] \addplot +[only marks, mark=text, text mark=1, mark options={scale=0.6}, Pa1] table [x=c, y=b, col sep=comma] {results/klein/counttable_klein_lif.csv};\label{plot2_l_1} \addplot +[only marks, mark=text, text mark=2, mark options={scale=0.6}, Pa2] table [x=c, y=b, col sep=comma] {results/klein/counttable_klein_2d.csv};\label{plot2_l_2} \addplot +[only marks, mark=text, text mark=4, mark options={scale=0.6}, 
Pa3] table [x=c, y=b, col sep=comma] {results/klein/counttable_klein_4d.csv};\label{plot2_l_3} \addplot +[only marks, mark=text, text mark=7, mark options={scale=0.6}, Pa4] table [x=c, y=b, col sep=comma] {results/klein/counttable_klein_7d.csv};\label{plot2_l_4} \addlegendimage{/pgfplots/refstyle=plot2_l_1}\addlegendentry{lif} \addlegendimage{/pgfplots/refstyle=plot2_l_2}\addlegendentry{+2 days} \addlegendimage{/pgfplots/refstyle=plot2_l_3}\addlegendentry{+4 days} \addlegendimage{/pgfplots/refstyle=plot2_l_4}\addlegendentry{+7 days} \end{axis} \end{tikzpicture} \caption{Klein et al.~\cite{Klein/15a}} \label{plot2_2} \end{subfigure} \caption{Different kinds of mouse embryonic stem cells (mESCs). For both data sets we performed PCA and plot the first two principal components. (a) shows 704 mESCs grown in three different conditions and (b) 2717 mESCs at the moment of LIF withdrawal, 2 days after, 4 days after, and 7 days after.} \label{plot2} \end{figure} We explore two publicly available scRNA-seq data sets of mouse embryonic stem cells (mESCs) with a comparatively large sample size (by scRNA standards). Kolodziejczyk et al.~\cite{Kolodziejczyk/15a} studied 704 mESCs with 38561 genes grown in three different conditions (2i, a2i and serum). Klein et al.~\cite{Klein/15a} studied the influence of leukemia inhibitory factor (LIF) withdrawal on mESCs, with a total of 2717 mESCs and 24175 genes. The data includes 933 cells after LIF withdrawal, 303 cells 2 days after, 683 cells 4 days after, and 798 cells 7 days after. We normalize each cell by the total counts over all genes, so that every cell has a total count equal to the median of total counts of the cells before normalization; then we perform principal component analysis (PCA) and use the first three principal components for clustering. To test the scalability of our new variants, we need larger data sets.
We use the well-known MNIST data set, with 784 features and 60000 samples (PAMSIL will not be able to handle this size in a reasonable time). We implemented our algorithms in Rust, extending the \texttt{kmedoids} package~\cite{Schubert/22a}, wrapped with Python, and we make our source code available in this package. We perform all computations in the same package, to avoid side effects caused by comparing overly different implementations~\cite{Kriegel/Schubert/17a}. We run 10 restarts on an AMD EPYC 7302 processor using a single thread, and report average values. \subsection{Clustering Quality} We evaluate all methods with PAM BUILD initialization and with random initialization. To evaluate the relevancy of the Average Silhouette Width and the Average Medoid Silhouette, we compare the results with the true labels using the Adjusted Rand Index (ARI) and Normalized Mutual Information (NMI), two common measures in clustering. \begin{table}[tb]\centering \caption{Clustering results for the scRNA-seq data set of Kolodziejczyk et al.~\cite{Kolodziejczyk/15a} for PAM, PAMSIL, and all variants of PAMMEDSIL.
All methods are evaluated for BUILD and Random initialization, with the known true $k{=}3$.} \label{tab1} \setlength{\tabcolsep}{6pt} \begin{tabular}{ l|l|r|r|r|r|r } Algorithm & Initialization & AMS & ASW & ARI & NMI & run time (ms) \\ \hline PAM & BUILD & 0.66 & 0.64 & 0.69 & 0.65 & 18.26 \\ PAM & Random & 0.66 & 0.64 & 0.69 & 0.65 & 22.67\\ PAMMEDSIL & BUILD & \bf0.67 & 0.65 & \bf0.72 & 0.70 & 62.63 \\ PAMMEDSIL & Random & \bf0.67 & 0.65 & \bf0.72 & 0.70 & 61.91 \\ FastMSC & BUILD & \bf0.67 & 0.65 & \bf0.72 & 0.70 & 25.09 \\ FastMSC & Random & \bf0.67 & 0.65 & \bf0.72 & 0.70 & 24.67 \\ FasterMSC & BUILD & \bf0.67 & 0.65 & \bf0.72 & 0.70 & \bf9.95 \\ FasterMSC & Random & \bf0.67 & 0.65 & \bf0.72 & 0.70 & 10.95 \\ PAMSIL & BUILD & 0.61 & \bf0.66 & \bf0.72 & \bf0.71 & 12493.86 \\ PAMSIL & Random & 0.61 & \bf0.66 & \bf0.72 & \bf0.71 & 16045.47 \\ \end{tabular} \end{table} On the data set from Kolodziejczyk shown in Table~\ref{tab1}, the highest ARI is achieved by the methods that directly optimize AMS and ASW. The different initializations provide the same results for all methods. We get a much faster run time for the AMS variants compared to the ASW optimization. For FasterMSC, we obtain the same ARI as for PAMSIL with a 1255$\times$ faster run time and only a 0.01 lower NMI. As expected, AMS and ASW are best for the algorithms that directly optimize the respective measure, but because the measures are correlated, those that optimize AMS score only 0.01 worse on the ASW. Interestingly, the total deviation used by PAM appears to be slightly more correlated to AMS than to ASW in this experiment. Given the small difference, we argue that AMS is a suitable approximation for ASW, at a much reduced run time. Since there were no variations in the resulting medoids over the different restarts of the experiment, we can easily compare single results visually. Figure~\ref{fig52} compares the results of PAMMEDSIL and PAMSIL, showing which points are clustered differently than in the given labels.
Both clusterings are similar, with class 1 captured better in one result, and classes 2 and 3 better in the other. \begin{figure}[tb] \centering \begin{subfigure}{0.45\textwidth} \begin{tikzpicture}[font=\small] \begin{axis}[unit vector ratio*=1 1 1, width=1.1\textwidth, xmin = -100, xmax = 150, ymin = -100, ymax = 150, xlabel={PC1}, ylabel={PC2}, xticklabels={,,}, yticklabels={,,}] \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=1, fill=black, color = black] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpammedsil_buildlabels_false.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=1,Pa1] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpammedsil_buildlabels_true.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=2, fill=black, color = black] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpammedsil_buildlabels1_false.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=2, Pa2] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpammedsil_buildlabels1_true.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=3, fill=black, color = black] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpammedsil_buildlabels2_false.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=3, Pa3] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpammedsil_buildlabels2_true.csv}; \end{axis} \end{tikzpicture} \caption{Results for PAMMEDSIL (BUILD)} \label{fig51} \end{subfigure} \begin{subfigure}{0.45\textwidth} \begin{tikzpicture}[font=\small] \begin{axis}[unit vector ratio*=1 1 1, width=1.1\textwidth, xmin = -100, xmax = 150, ymin = -100, ymax = 150, xlabel={PC1}, ylabel={PC2}, xticklabels={,,}, yticklabels={,,}] \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=1, fill=black, color = black] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpamsil_buildlabels_false.csv}; \addplot +[mark
options={scale=0.6}, only marks, mark=text, text mark=1,Pa1] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpamsil_buildlabels_true.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=2, fill=black, color = black] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpamsil_buildlabels1_false.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=2,Pa2] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpamsil_buildlabels1_true.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=3, fill=black, color = black] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpamsil_buildlabels2_false.csv}; \addplot +[mark options={scale=0.6}, only marks, mark=text, text mark=3,Pa3] table [x=b, y=c, col sep=comma] {results/kolodziejczyk/zpamsil_buildlabels2_true.csv}; \end{axis} \end{tikzpicture} \caption{Results for PAMSIL (BUILD)} \label{fig52} \end{subfigure} \caption{Clustering results for the scRNA-seq data set of Kolodziejczyk et al.~\cite{Kolodziejczyk/15a} for PAMMEDSIL and PAMSIL. All correctly predicted labels are colored by the corresponding cluster and all errors are marked in black. } \end{figure} Table~\ref{tab2} shows the clustering results for the scRNA-seq data set of Klein et al.~\cite{Klein/15a}. In contrast to Kolodziejczyk's data set, we here obtain a higher ARI for PAMSIL than for the AMS optimization methods. The AMS optimization achieves only the same ARI and NMI as PAM, but a slightly higher ASW. However, FasterMSC is 16521$\times$ faster than PAMSIL. \begin{table}[tb]\centering \caption{Clustering results for the scRNA-seq data set of Klein et al.~\cite{Klein/15a} for PAM, PAMSIL, and all variants of PAMMEDSIL.
All methods are evaluated for BUILD and Random initialization, with the known true $k{=}4$.} \label{tab2} \setlength{\tabcolsep}{6pt} \begin{tabular}{ l|l|r|r|r|r|r } Algorithm & Initialization & AMS & ASW & ARI & NMI & run time (ms) \\ \hline PAM & BUILD & 0.75 & 0.82 & 0.84 & 0.87 & 355.55 \\ PAM & Random & 0.74 & 0.82 & 0.78 & 0.80 & 476.18\\ PAMMEDSIL & BUILD & \bf0.77 & 0.83 & 0.84 & 0.87 & 2076.15 \\ PAMMEDSIL & Random & \bf0.77 & 0.83 & 0.84 & 0.87 & 3088.77 \\ FastMSC & BUILD & \bf0.77 & 0.83 & 0.84 & 0.87 & 212.01 \\ FastMSC & Random & \bf0.77 & 0.83 & 0.84 & 0.87 & 305.00 \\ FasterMSC & BUILD & \bf0.77 & 0.83 & 0.84 & 0.87 & 163.74 \\ FasterMSC & Random & \bf0.77 & 0.83 & 0.84 & 0.87 & \bf122.63 \\ PAMSIL & BUILD & 0.67 & \bf0.84 & \bf0.95 & \bf0.92 & 2026025.10 \\ PAMSIL & Random & 0.67 & 0.84 & 0.93 & 0.91 & 1490354.10 \\ \end{tabular} \end{table} \subsection{Scalability} To evaluate the scalability of our methods, we use the well-known MNIST data, which has 784 variables ($28{\times}28$ pixels) and 60000 samples. We use the first $n{=}1000, \ldots, 30000$ samples and compare $k{=}10$ and $k{=}100$. Due to its high run time, PAMSIL is not able to handle this size in a reasonable time. In addition to the methods for direct AMS optimization, we evaluate the FastPAM1 and FasterPAM implementations. For all methods we use random initialization.
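A benchmark harness of this shape (increasing prefix sizes $n$, a fixed $k$, several restarts, averaged wall-clock times) can be sketched as follows; `dummy_cluster` is an illustrative stand-in, not one of the evaluated algorithms:

```python
import time
import numpy as np

def scaling_benchmark(cluster_fn, X, sizes, k, restarts=10):
    """Average wall-clock run time of `cluster_fn` on the first n samples, per n."""
    times = {}
    for n in sizes:
        # pairwise Euclidean distance matrix of the first n samples
        D = np.sqrt(((X[:n, None, :] - X[None, :n, :]) ** 2).sum(-1))
        start = time.perf_counter()
        for _ in range(restarts):
            cluster_fn(D, k)
        times[n] = (time.perf_counter() - start) / restarts
    return times

# stand-in "clustering": assign each point to its nearest of k fixed medoids
def dummy_cluster(D, k):
    return D[:, :k].argmin(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
print(scaling_benchmark(dummy_cluster, X, sizes=[100, 200, 400], k=10))
```

For a quadratic-time method, doubling $n$ should roughly quadruple the measured time, which is the pattern the plots below probe.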
\begin{figure}[tb!]\centering \begin{subfigure}{.49\textwidth} \begin{tikzpicture}[font=\tiny] \begin{axis}[ legend style={at={(.05,.95)},anchor=north west,fill=none,draw=none,inner sep=0,font=\tiny}, legend cell align={left},legend columns=2, height=23mm, width=\textwidth - 15mm, scale only axis, every axis label/.style={inner sep=0, outer sep=0}, xlabel = {number of samples}, xmin = 1000, xmax = 30000, ylabel = {run time (s)}, ymin = 0, ymax = 1000, yticklabel style={/pgf/number format/fixed}, yticklabel style={/pgf/number format/1000 sep=}, xtick={1000,5000,10000,15000,20000,25000,30000}, xticklabel style={/pgf/number format/fixed}, xticklabel style={/pgf/number format/1000 sep=}, scaled x ticks=false ] \addplot[Pa1, mark=triangle*]coordinates { (1000, 0.198) (5000, 7.075) (10000, 31.187) (15000, 62.150) (20000, 138.592) (25000, 172.462) (30000, 263.539) }; \label{plot_fpms} \addplot[Pa2, mark=triangle*]coordinates { (1000, 0.023) (5000, 0.531) (10000, 1.734) (15000, 5.238) (20000, 8.723) (25000, 10.502) (30000, 14.397) }; \label{plot_fepms} \addplot[Pa3, mark=diamond*]coordinates { (1000, 13.124) (5000, 212.825) (10000, 1657.221) (15000, 3298.933) (20000, 4847.659) (25000, 7343.586) (30000, 12568.364) }; \label{plot_pms} \addplot[Pa4, mark=oplus*]coordinates { (1000, 0.165) (5000, 1.056) (10000, 2.904) (15000, 3.473) (20000, 8.076) (25000, 16.969) (30000, 43.808) }; \label{plot_fp} \addplot[Pa5, mark=pentagon*]coordinates { (1000, 0.039) (5000, 2.213) (10000, 13.783) (15000, 30.483) (20000, 58.334) (25000, 137.264) (30000, 201.293) }; \label{plot_fp1} \addlegendimage{/pgfplots/refstyle=plot_fpms}\addlegendentry{FastMSC} \addlegendimage{/pgfplots/refstyle=plot_fepms}\addlegendentry{FasterMSC} \addlegendimage{/pgfplots/refstyle=plot_pms}\addlegendentry{PAMMEDSIL} \addlegendimage{/pgfplots/refstyle=plot_fp}\addlegendentry{FasterPAM} \addlegendimage{/pgfplots/refstyle=plot_fp1}\addlegendentry{FastPAM1} \end{axis} \end{tikzpicture} \caption{run time with $k{=}10$, 
linear scale} \end{subfigure} \hfill \begin{subfigure}{.49\textwidth} \begin{tikzpicture}[font=\tiny] \begin{axis}[ legend style={at={(0.70,0.35)},anchor=north,fill=none,draw=none,inner sep=0,font=\tiny}, legend cell align={left},legend columns=2, scale only axis, every axis label/.style={inner sep=0, outer sep=0}, height=23mm, width=\textwidth - 15mm, xlabel = {number of samples (log scale)}, xmin = 1000, xmax = 30000, ylabel = {run time (s, log scale)}, ymin = 0, ymax = 13000, ymode=log, xmode=log, yticklabel style={/pgf/number format/fixed}, yticklabel style={/pgf/number format/1000 sep=}, xticklabel style={/pgf/number format/fixed}, xticklabel style={/pgf/number format/1000 sep=}, scaled x ticks=false ] \addplot[Pa1, mark=triangle*]coordinates { (1000, 0.198) (5000, 7.075) (10000, 31.187) (15000, 62.150) (20000, 138.592) (25000, 172.462) (30000, 263.539) }; \label{plot_fpms} \addplot[Pa2, mark=triangle*]coordinates { (1000, 0.023) (5000, 0.531) (10000, 1.734) (15000, 5.238) (20000, 8.723) (25000, 10.502) (30000, 14.397) }; \label{plot_fepms} \addplot[Pa3, mark=diamond*]coordinates { (1000, 13.124) (5000, 212.825) (10000, 1657.221) (15000, 3298.933) (20000, 4847.659) (25000, 7343.586) (30000, 12568.364) }; \label{plot_pms} \addplot[Pa4, mark=oplus*]coordinates { (1000, 0.165) (5000, 1.056) (10000, 2.904) (15000, 3.473) (20000, 8.076) (25000, 16.969) (30000, 43.808) }; \label{plot_fp} \addplot[Pa5, mark=pentagon*]coordinates { (1000, 0.039) (5000, 2.213) (10000, 13.783) (15000, 30.483) (20000, 58.334) (25000, 137.264) (30000, 201.293) }; \label{plot_fp1} \end{axis} \end{tikzpicture} \caption{run time with $k{=}10$, log-log plot} \end{subfigure} \\ \begin{subfigure}{.49\textwidth} \begin{tikzpicture}[font=\tiny] \begin{axis}[ legend style={at={(0.70,0.35)},anchor=north,fill=none,draw=none,inner sep=0,font=\tiny}, legend cell align={left},legend columns=2, scale only axis, every axis label/.style={inner sep=0, outer sep=0}, height=23mm, width=\textwidth - 15mm, 
xlabel = {number of samples}, xmin = 1000, xmax = 30000, ylabel = {run time (s)}, ymin = 0, ymax = 2500, ytick={500,1000,1500,2000,2500}, yticklabel style={/pgf/number format/fixed}, yticklabel style={/pgf/number format/1000 sep=}, xtick={1000,5000,10000,15000,20000,25000,30000}, xticklabel style={/pgf/number format/fixed}, xticklabel style={/pgf/number format/1000 sep=}, scaled x ticks=false ] \addplot[Pa1, mark=triangle*]coordinates { (1000, 0.21) (5000, 5.745) (10000, 22.975) (15000, 52.596) (20000, 93.356) (25000, 150.420) (30000, 210.180) }; \label{plot_fpms} \addplot[Pa2, mark=triangle*]coordinates { (1000, 0.01879) (5000, 0.661) (10000, 2.265) (15000, 4.025) (20000, 8.600) (25000, 11.023) (30000, 17.910) }; \label{plot_fepms} \addplot[Pa3, mark=diamond*]coordinates { (1000, 2014.386) (5000, 63854.893) (10000, 245499.702) }; \label{plot_pms} \addplot[Pa4, mark=oplus*]coordinates { (1000, 0.313) (5000, 1.171) (10000, 3.444) (15000, 5.547) (20000, 15.912) (25000, 32.377) (30000, 41.221) }; \label{plot_fp} \addplot[Pa5, mark=pentagon*]coordinates { (1000, 0.149) (5000, 5.254) (10000, 13.345) (15000, 30.426) (20000, 53.143) (25000, 85.543) (30000, 135.455) }; \label{plot_fp1} \end{axis} \end{tikzpicture} \caption{run time with $k{=}100$, linear scale} \end{subfigure} \hfill \begin{subfigure}{.49\textwidth} \begin{tikzpicture}[font=\tiny] \begin{axis}[ legend style={at={(0.70,0.35)},anchor=north,fill=none,draw=none,inner sep=0,font=\tiny}, legend cell align={left},legend columns=2, scale only axis, every axis label/.style={inner sep=0, outer sep=0}, height=23mm, width=\textwidth - 15mm, xlabel = {number of samples (log scale)}, xmin = 1000, xmax = 30000, ylabel = {run time (s, log scale)}, ymin = 0, ymax = 300000, ymode=log, xmode=log, yticklabel style={/pgf/number format/fixed}, yticklabel style={/pgf/number format/1000 sep=}, xticklabel style={/pgf/number format/fixed}, xticklabel style={/pgf/number format/1000 sep=}, scaled x ticks=false ] \addplot[Pa1, 
mark=triangle*]coordinates { (1000, 0.21) (5000, 5.745) (10000, 22.975) (15000, 52.596) (20000, 93.356) (25000, 150.420) (30000, 210.180) }; \label{plot_fpms} \addplot[Pa2, mark=triangle*]coordinates { (1000, 0.01879) (5000, 0.661) (10000, 2.265) (15000, 4.025) (20000, 8.600) (25000, 11.023) (30000, 17.910) }; \label{plot_fepms} \addplot[Pa3, mark=diamond*]coordinates { (1000, 2014.386) (5000, 63854.893) (10000, 245499.702) }; \label{plot_pms} \addplot[Pa4, mark=oplus*]coordinates { (1000, 0.190) (5000, 1.776) (10000, 3.976) (15000, 5.571) (20000, 14.372) (25000, 36.438) (30000, 26.592) }; \label{plot_fp} \addplot[Pa5, mark=pentagon*]coordinates { (1000, 0.149) (5000, 5.254) (10000, 13.345) (15000, 30.426) (20000, 53.143) (25000, 85.543) (30000, 135.455) }; \label{plot_fp1} \end{axis} \end{tikzpicture} \caption{run time with $k{=}100$, log-log plot} \end{subfigure} \caption{Run time on MNIST data (time out 24 hours)} \label{fig5} \end{figure} As expected, all methods scale approximately quadratically in the sample size $n$. FastMSC is on average 50.66$\times$ faster than PAMMEDSIL for $k{=}10$ and 10464.23$\times$ faster for $k{=}100$, supporting the expected $O(k^2)$ improvement from removing the nested loop and caching the distances to the nearest centers. For FasterMSC, we achieve an even 639.34$\times$ faster run time than PAMMEDSIL for $k{=}10$ and a 78035.01$\times$ faster run time for $k{=}100$. We expect FastPAM1 and FastMSC (and likewise FasterPAM and FasterMSC) to have similar scalability; but since MSC needs additional bounds, it has to maintain more data and access more memory. We observe that FastPAM1 is 2.50$\times$ faster than FastMSC for $k{=}10$ and 1.57$\times$ faster for $k{=}100$, which is larger than expected and caused by the MSC methods needing more iterations to converge: FastPAM1 needs on average 14.86 iterations while FastMSC needs 33.48. In contrast, FasterMSC is even 1.65$\times$ faster than FasterPAM for $k{=}10$ and 1.96$\times$ faster for $k{=}100$.
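The cache behind the $O(k^2)$ improvement discussed above is, for every point, the distance to its nearest and second-nearest medoid, from which the Medoid Silhouette follows directly. A minimal numpy illustration of that cache (not the full FastMSC swap logic; the medoid choice here is arbitrary) looks as follows:

```python
import numpy as np

def nearest_cache(D, medoids):
    """Cache, per point, the nearest medoid and the distances d1 <= d2 to its
    nearest and second-nearest medoid (computed once, then usable in O(1))."""
    Dm = D[:, medoids]                 # n x k distances to the current medoids
    order = np.argsort(Dm, axis=1)
    idx = np.arange(len(D))
    nearest = order[:, 0]              # index (into `medoids`) of the nearest
    d1 = Dm[idx, order[:, 0]]          # distance to nearest medoid
    d2 = Dm[idx, order[:, 1]]          # distance to second-nearest medoid
    return nearest, d1, d2

def average_medoid_silhouette(d1, d2):
    """AMS follows directly from the cache: the mean of 1 - d1/d2."""
    return float(np.mean(1.0 - d1 / np.maximum(d2, np.finfo(float).tiny)))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
nearest, d1, d2 = nearest_cache(D, medoids=[0, 1, 2])
print(average_medoid_silhouette(d1, d2))
```

Evaluating a swap then only requires updating `d1`/`d2` where the removed or inserted medoid is involved, instead of re-scanning all $k$ medoids for every point.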
\section{Conclusions} We showed that the Average Medoid Silhouette satisfies desirable theoretical properties for clustering quality measures, and that, as an approximation of the Average Silhouette Width, it yields desirable results on real problems from gene expression analysis. We propose a new algorithm for optimizing the Average Medoid Silhouette, which provides a run time speedup of $O(k^2)$ compared to the earlier PAMMEDSIL algorithm by avoiding unnecessary distance computations: based on FasterPAM, it caches the distances to the nearest centers as well as partial results. This makes clustering by optimizing the Medoid Silhouette possible on much larger data sets than before. The ability to directly optimize a variant of the popular Silhouette measure also highlights the underlying problem that any internal cluster evaluation measure itself specifies a clustering. \vfill\pagebreak \bibliographystyle{splncs04}
\section{Introduction} The current state of Machine Learning research presents Neural Networks as black boxes due to the high dimensionality of their parameter space: understanding what is happening inside of a model in terms of domain expertise is highly nontrivial, when it is possible at all. However, the actual mechanics by which Neural Networks operate (the composition of multiple nonlinear transforms, with parameters optimized by a gradient method) were human-designed, and as such are well understood. In this paper, we will apply this understanding, via analogy to Chaos Theory, to the problem of explaining and measuring the susceptibility of Neural Networks to adversarial methods. It is well-known that Neural Networks can be adversarially attacked, producing obviously incorrect outputs as a result of making extremely small perturbations to the input \citep{goodfellow2014explaining, szegedy2013intriguing}. Prior work, like \cite{shao2021adversarial, pmlr-v80-wang18c} and \cite{carmon2019unlabeled}, discusses ``Adversarial Robustness'' in terms of metrics like accuracy after being attacked or the success rates of attacks, which can limit the discussion entirely to models with hard decision boundaries like classifiers, ignoring tasks like segmentation or generative modeling \citep{he2018decision}. Other work, like \cite{li2020sok} and \cite{weber2020rab}, develops ``certification radii,'' which can be used to guarantee that a given input cannot be misclassified by a model without an adversarial perturbation with a size exceeding that radius. However, calculating these radii is computationally onerous when it is even possible, and is again limited only to models with hard decision boundaries.
Regarding the existence of adversarial attacks in the first place, \cite{e22111201} and \cite{https://doi.org/10.48550/arxiv.1802.06927} have explained this behavior of Neural Networks on the basis that they are dynamical systems, and then use some results from that analysis to try to classify adversarial inputs based on their Lyapunov exponents. However, this classification methodology rests on shaky ground, as the Lyapunov exponents of a single input must be relative to those of similar inputs, and it is entirely reasonable to imagine a scenario in which an input does not become more susceptible to attack just because it is itself adversarial. In this work, we re-do these Chaos Theoretic analyses in order to understand, not particular inputs, but the Neural Networks themselves. We show that Neural Networks are dynamical systems, and then, continuing that analogy past where \cite{e22111201} and \cite{https://doi.org/10.48550/arxiv.1802.06927} left off, investigate what Neural-Networks-as-dynamical-systems means for their susceptibility to attack, through a combination of analysis and experimentation. We develop this into a theory of Adversarial Susceptibility and the ``Susceptibility Ratio'' as a measure of how effective attacks will be against a Neural Network, and show how to easily approximate this value numerically. Returning to the work in \cite{li2020sok} and \cite{weber2020rab}, we use the susceptibility ratio to quickly and accurately estimate the certification radii of very large Neural Networks, aligning this paper with prior work. \section{Neural Networks as Dynamical Systems} We'll start by re-writing the conventional feed-forward Neural Network in the language of dynamical systems, and then transfer the analysis of dynamical systems over to Neural Networks. But we should begin by explaining what a dynamical system is \citep{alligood1998chaos}.
\subsection{Dynamical Systems} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Trajectories.png} \caption{In a dynamical system, two trajectories with similar starting points may, over time, drift farther and farther away from one another, typically modeled as exponential growth in the distance between them. This growth characterizes a system as exhibiting ``sensitive dependence,'' known colloquially as the ``butterfly effect,'' where small changes in initial conditions eventually grow into very large changes in the eventual results.} \label{fig:traj} \end{figure} In Chaos Theory, a dynamical system is composed of three core ingredients. Ingredient one is $T$, representing ``time,'' or something like it. A change in time can be added to an initial time to get an end time, in an associative fashion. Ingredient two is $X$, the state space. Elements of $X$ could include the positions of a pendulum, the states of memory in a computer program, or all the possible arrangements of atoms in a room. And the third ingredient is $\Phi : T \times X \to X$, the ``evolution function'' of the system. When $\Phi$ is given a state $x_{i, t}$ and a change in time $\Delta t$, it returns $x_{i, t + \Delta t}$, which is the new state of the system after $\Delta t$ time has elapsed. We'll write this as $$x_{i, t + \Delta t} = \Phi(\Delta t, x_{i, t})$$ To stay well defined, $\Phi$ must be consistent with the end result regardless of how intermediate states were taken, like so: $$\Phi\big(\Delta t_a, \Phi(\Delta t_b, x_{i, t})\big) = \Phi(\Delta t_a + \Delta t_b, x_{i, t})$$ From this, we can take a ``trajectory'' of the initial state $x_{i, 0}$ over time, with points represented by $\big(t, \Phi(t, x_{i, 0})\big)$.
In order to simplify the notation, and following on from the notion that the evolution over time of a system can be thought of as the composition of multiple instances of the evolution function, we will write this trajectory as $$\Phi(t, x_{i,0}) = \Phi^t(x_i)$$ The final piece of the puzzle, the Chaos in Chaos Theory, concerns the relationship between trajectories with very similar initial conditions, say $x_i$, and $x_i + dx$, where $dx$ is some very small change, such as subtly reorienting the arms of a double pendulum before setting it into motion. We then need some notion of the distance between two elements of the state space, but we will assume that the space is some sort of vector space equipped with a notion of length written with $|\cdot|$, and proceed from there. For the initial condition, we know off the bat that $$|\Phi^0(x_i) - \Phi^0(x_i + dx)| = |dx|$$ However, the interesting analysis comes when we model this difference as time progresses. In some systems, very small differences in the initial condition end up being ignored, such as the position of an oscillator with a damping force; no matter what, you reach the resting state, and that's the end of the story. However, in some systems, very small differences in the initial condition end up compounding on themselves, like the flaps of a butterfly's wings eventually resulting in a hurricane. Both of these can be approximately modeled by an exponential function, like so $$|\Phi^t(x_i) - \Phi^t(x_i + dx)| \approx |dx|e^{\lambda t}$$ In each of these cases, the growing or shrinking differences between the trajectories are described by $\lambda$, also called the Lyapunov exponent. If $\lambda < 0$, these differences disappear over time, and the trajectories of two similar initial conditions will eventually align with one another. 
However, if $\lambda > 0$, these differences increase over time, and the trajectories of two similar initial conditions will grow farther and farther apart, until they might as well have started from entirely different regions of the state space. This is called ``sensitive dependence,'' and is the mark of a chaotic system. It must be noted, however, that the exponential nature of this growth is a shorthand model, with obvious limits, and is not fully descriptive of the underlying behavior. \subsection{Neural Networks}\label{sec:dynsysnn} Conventionally, a Neural Network is given a formulation along the following lines \citep{schmidhuber2015deep}. It is given by a function $h : \Theta \times X \to Y$, where $\Theta$ is the space of possible learned parameters with $W_l$ being multiplicative weight matrices and $b_l$ additive bias vectors, $X$ is the vector space of possible inputs, and $Y$ is the vector space of possible outputs. Each of the $L$ layers in the Neural Network is given by a matrix multiplication, an optional bias addition, and a nonlinear activation function, with hidden states $z_l$ representing the intermediate values taken during the inference operation, e.g. $$z_{i,0} \coloneqq x_i$$ $$z_{i, l + 1} = \sigma(W_l z_{i, l} + b_l), \quad W_l, b_l \in \theta$$ $$h(\theta; x_i) = \hat y_i \coloneqq z_{i, L}$$ Now, consider rewriting this by saying that a Neural Network is a dynamical system composed of three ingredients. Ingredient one is $[L] = \{0, 1, 2, \dots, L\}$, which here will be used to represent the current depth of the hidden state, from 0 for the initial condition up to $L$ for the eventual output. Ingredient two is $Z$, which is the vector space of all possible hidden states.
And ingredient three is $g : [L] \times Z \to Z$, which is written here as $$z_{i, l+1} = g(1, z_{i, l}) = \sigma(W_l z_{i, l} + b_l)$$ We will handwave the method by which $g$ ``knows'' which parameters $W_l$ and $b_l$ to use, perhaps using something along the lines of a dimension appended to $z_{i, l}$ that records the current value of $l$, and which $g$ iterates, rather than including it in the ordinary operations of the feed-forward layer. The generalization to $g(\Delta l, z_{i, l})$ then follows from the same rule of composition applied to dynamical systems, at least for integer values of $\Delta l$, under the condition that it never leaves $[L]$. We can also then re-write the notation along the lines of that for the dynamical systems, e.g. $$g(l, z_{i, 0}) = g^l(x_i)$$ noting, of course, that we have defined $z_{i, 0}$ as $x_i$. From here, we can start to discuss the trajectories of the hidden states of the Neural Network, and what happens when their inputs are changed slightly. For the first hidden state, defined as the input, we can immediately say that $$|g^0(x_i) - g^0(x_i + dx)| = |dx|$$ And then, by once again mapping to the dynamical systems perspective, we model the difference between the two trajectories at depth $l$ with $$|g^l(x_i) - g^l(x_i + dx)| \approx |dx|e^{\lambda l}$$ When the value of $\lambda$ is greater than 0, we call the Neural Network sensitive to the input, but when the value of $e^{\lambda L}$ (essentially, the ratio of the size of the change in the output to the size of the change in the input) is very large, we call $dx$ an Adversarial Perturbation. If this analogy holds, we should expect that when we adversarially attack a Neural Network, the difference between the two corresponding hidden states should grow as they progress through the model. Again, as per the dynamical system, this growth is not necessarily exponential, but using an exponential model is the most illustrative. This is our first experimental result.
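This layer-by-layer drift can be probed on a toy network. The sketch below builds a random (untrained, purely illustrative) ReLU network in numpy and tracks $|g^l(x_i) - g^l(x_i + dx)|$ at every depth, together with the implied per-layer exponent $\lambda$; the width, depth, and perturbation size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
L, width = 12, 256
# random He-scaled weight matrices W_l (untrained; biases b_l taken as zero)
weights = [rng.normal(0.0, np.sqrt(2.0 / width), (width, width)) for _ in range(L)]

def hidden_states(x):
    """Return the trajectory z_0, ..., z_L of the toy ReLU network."""
    states = [x]
    for W in weights:
        x = np.maximum(W @ x, 0.0)  # sigma(W_l z + b_l) with sigma = ReLU, b_l = 0
        states.append(x)
    return states

def mod_norm(v):
    return np.sqrt(np.mean(v ** 2))

x = rng.normal(size=width)
dx = rng.normal(size=width)
dx *= 1e-6 / mod_norm(dx)  # a tiny perturbation of fixed size
drift = [mod_norm(a - b) for a, b in zip(hidden_states(x), hidden_states(x + dx))]
lam = np.log(drift[-1] / drift[0]) / L  # per-layer exponent, as in the model above
print(drift[0], drift[-1], lam)
```

Whether `drift` grows or shrinks depends on the initialization; the point is only that the separation can be tracked depth by depth and summarized by a single exponent.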
\section{Experimental Design} For our experiments, we used two different model architectures: ResNets \citep{https://doi.org/10.48550/arxiv.1512.03385}, as per the default Torchvision implementation \citep{10.1145/1873951.1874254}, and a custom CNN architecture in order to have finer-grained control over the depth and number of channels in the model. The ResNets were modified, and the custom models built, so as to allow recording all of the hidden states during the inference operation. These models, unless specified as left untrained, were trained on the Food-101 dataset \citep{bossard14} for 50 epochs with a batch size of 64 and a learning rate of 0.0001 with the Adam optimizer against Cross Entropy Loss. The ResNet models used were ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152. In the Torchvision ResNet class, models consist of operations named \textit{conv1, bn1, relu, maxpool, layer1, layer2, layer3, layer4, avgpool,} and \textit{fc}, with the first four representing a downsampling intake, then four more ``blocks'' of ResNet layers, and then a final operation that converts the 3D spatial tensor into a 1D class weight tensor. Hidden states are recorded at the input, after \textit{conv1, layer1, layer2, layer3, layer4,} and at the output. The custom models, specified with $C$ and $D$, consist of $D$ tuples of convolutional layers, batch normalization operations, and ReLU nonlinearities, with the first tuple having a downsampling convolution and a maxpool operation after the ReLU. Each of these convolutions, besides the first, which takes in three channels, has $C$ channels. Finally, there is a $1 \times 1$ convolution, a channel-wise averaging, and then a single fully connected layer with 101 outputs, one for each class in the Food-101 dataset. Hidden states are recorded after every tuple, and also include the input and the output of the model. The first tuple approximates the downsampling intake of the ResNet models.
In order to better handle the high dimensionality and changes in scale of the inputs, outputs, and hidden states, rather than using the Euclidean $L_2$ norm as the distance metric, we used a modified Euclidean distance $$|\Vec{v}| \coloneqq \sqrt{\frac{1}{\textbf{dim}(\Vec{v})} \sum_i v_i^2}$$ This will be applied to every instance of length, distance, radius, and so on. Adversarial perturbations $dx_{adv}$ against a Neural Network $h(\theta; \cdot)$ of a given radius $r$ for a given input $x_i$ were generated by using five steps of gradient descent with an update size of 0.01, maximizing $$|h(\theta; x_i) - h(\theta; x_i + dx_{adv})|$$ and projecting back to the hypersphere of radius $r$ after every update. These attacks are along the lines of those in \cite{Zhang_2021, wu2021crop, wu2022copa, xie2021crfl} and \cite{shao2021adversarial}, with their use of attacks with $l_p$-norm decay metrics or boundaries. For comparison, random perturbations were also generated, by projecting randomly sampled Gaussian noise to the same hypersphere. In order to perform these experiments under optimal conditions, the inputs that were adversarially perturbed were selected only from the subset of the Food-101 testing set for which every trained model agreed on the top-1 output class and was correct. A Jupyter Notebook implementing these training regimes and attacks will be made available alongside this manuscript, pending review. \section{Hidden State Drift} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/ResNet18HiddenStateDrift.png} \caption{Example of hidden state drift while performing inference with the ResNet18 model. Note the logarithmic scaling on the $y$-axis.} \label{fig:rn18_hsd} \end{figure} An example of the approximately exponential growth in the distance between the hidden states of normal and adversarially perturbed inputs, hypothesized in \cref{sec:dynsysnn}, is shown for 32 inputs in Figure \ref{fig:rn18_hsd}.
Between the initial perturbations, generated with a radius of $0.0001$, and the outputs, the differences grew by a factor of $\sim 747\times$. Given that ResNet18 has 18 layers, using $747 \approx e^{18\lambda}$, we can calculate $\lambda \approx 0.368$, a measure of this drift per layer. However, the Lyapunov exponent of each individual layer is of less interest to an adversarial attacker or defender; the actual value of interest is a new metric, $\psi$, the adversarial susceptibility for a particular input and attack, given by \begin{equation} \psi(h, \theta, x_i, dx_{adv}) \coloneqq e^{\lambda L} = \frac{|h(\theta; x_i) - h(\theta; x_i + dx_{adv})|}{|dx_{adv}|} \label{eq:advsuscform} \end{equation} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/ResNetPsiStability.png} \caption{Despite a change in the radius of the adversarial perturbation by three orders of magnitude, the value of $\psi$ associated with those attacks remains relatively stable.} \label{fig:psistability} \end{figure} \begin{table}[t] \centering \begin{tabular}{c|ccccc} & ResNet18 & ResNet34 & ResNet50 & ResNet101 & ResNet152 \\ \hline $\hat \Psi(h, \theta)$ & 781.2 & 790.7 & 854.4 & 893.2 & 846.5 \end{tabular} \caption{Overall Adversarial Susceptibility of trained ResNet models.} \label{tab:resnetsusc} \end{table} Essentially, this measures the ``Susceptibility Ratio,'' the ratio of the damage done by a given adversarial perturbation to its original size. If this is a meaningful metric by which to judge a Neural Network architecture, it should remain relatively stable despite changes in the radius of the adversarial attack. This is our second experimental result, demonstrated in Figure \ref{fig:psistability}.
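Given recorded inputs and outputs, the susceptibility ratio is a one-line computation. A minimal sketch, assuming the dimension-normalized norm defined earlier (`h` stands for any model mapping input vectors to output vectors):

```python
import numpy as np

def mod_norm(v):
    """Dimension-normalized Euclidean norm: sqrt(mean(v_i^2))."""
    return np.sqrt(np.mean(np.asarray(v, dtype=float) ** 2))

def susceptibility(h, x, dx_adv):
    """psi = |h(x) - h(x + dx_adv)| / |dx_adv|."""
    return mod_norm(h(x) - h(x + dx_adv)) / mod_norm(dx_adv)

# Recovering the per-layer drift rate quoted above for ResNet18:
psi, depth = 747.0, 18
lam = np.log(psi) / depth   # since psi ~ exp(depth * lambda)
assert abs(lam - 0.368) < 1e-3
```

For a toy linear model that amplifies every input by a constant factor, `susceptibility` returns exactly that factor, which is the behavior the exponential-drift picture predicts.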
By sampling $\psi$ over a number of inputs $x_i$ and a variety of attack radii $|dx_{adv}|$ and taking the geometric mean\footnote{In order to increase the numerical stability of the geometric mean calculation, we use $\sqrt[n]{\prod_{i=1}^n a_i} = e^{\frac{1}{n}\sum_{i=1}^n\ln{a_i}}$}, we arrive at a single value, written as $$\Psi(h, \theta) = e^{\mathbb{E}[\ln(\psi(h, \theta, x_i, dx_{adv}))]}$$ and approximated with $\hat \Psi(h, \theta)$, giving a measure of the adversarial susceptibility for the model as a whole. These values have been calculated for the trained ResNet models, and are given in Table \ref{tab:resnetsusc}. These results run counter to the predictions we will make in the next section, where we begin using the custom model architectures to tease out the relationships between a Neural Network's architecture and its Adversarial Susceptibility. \section{Architectural Effects on Adversarial Susceptibility} \begin{table} \centering \begin{tabular}{cc|cccc|} \multicolumn{6}{c}{$\hat \Psi(h, \theta)$}\\ & & \multicolumn{4}{|c|}{Channels ($C$)}\\ & & 32 & 64 & 128 & 256 \\ \hline \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Layers ($D$)}}} & 2 & 0.749 & 0.523 & 0.651 & 0.560 \\ & 4 & 1.021 & 0.695 & 0.775 & 0.610 \\ & 8 & 2.788 & 2.134 & 1.505 & 1.276 \\ & 16 & 15.491 & 12.935 & 8.423 & 7.123 \\ & 32 & 109.340 & 135.472 & 98.834 & 92.404 \\ & 64 & 96.037 & 63.785 & 60.443 & 48.721 \\ \hline \end{tabular} \caption{Adversarial Susceptibility of randomly initialized convolutional models with custom architectures on inputs consisting of random noise.} \label{tab:advsuscrand} \end{table} Returning to the definition of $\psi$ given in equation \ref{eq:advsuscform}, we might model it as growing exponentially in $L$, the depth of the Neural Network. And yet, despite ResNet152 having more than eight times as many layers as ResNet18, its Adversarial Susceptibility is only marginally higher.
Thus, the usefulness of an exponential model, at least in explaining these experimental results, is limited. In order to explore this reasoning further in a more numerically ideal setting, we present our third experimental result, in Table \ref{tab:advsuscrand}, and replicated in Figure \ref{fig:advsuscrand}. Here, using randomly initialized, untrained models with the custom architectures described in the experimental methods section, giving them random inputs, and then attacking them on those inputs, we can tease apart the relationship between model architecture and Adversarial Susceptibility in the case where both parameters and inputs are normally distributed. We immediately find the approximately exponential relationship between Adversarial Susceptibility and model depth expected from equation \ref{eq:advsuscform}; however, the slight dip upon moving from 32 to 64 layers is unexpected, and while exploring its potential causes and implications is outside the scope of this paper, it may warrant further experimentation and analysis. \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{MASNC.png} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{MASMD.png} \end{subfigure}% \caption{Graphical replication of Table \ref{tab:advsuscrand}} \label{fig:advsuscrand} \end{figure} Also of interest is the effect, or lack thereof, of increasing the number of channels in the Neural Network. While a quadratic increase in the number of parameters in the model might be expected to increase its Adversarial Susceptibility, especially in the absence of batch normalization operations, no experiment that we performed yielded such a result.
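The width-scaling intuition behind that expectation rests on a fact about Gaussian matrix products: the entries of $C = AB$, for $A$ and $B$ filled with standard Gaussian noise, have a standard deviation that grows like the square root of the dimension shared by $A$ and $B$. That fact itself is easy to verify numerically (a standalone NumPy check, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 400, 256, 400            # k is the inner (shared) dimension
A = rng.standard_normal((n, k))    # analogous to a layer's weights
B = rng.standard_normal((k, m))    # analogous to a small hidden-state change
C = A @ B                          # the resulting change in the next state
# Each entry of C is a sum of k products of independent standard normals,
# so its standard deviation is sqrt(k).
assert abs(C.std() / np.sqrt(k) - 1.0) < 0.05
```

The analogy between this product and a layer acting on a perturbed hidden state is spelled out in the hypothesis discussed next; the point here is only that the underlying linear-algebra fact holds, making the absence of a width effect in the experiments the surprising part.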
Our initial hypothesis followed this line of reasoning: taking two matrices $A$ and $B$ composed of standard Gaussian noise and setting $C = AB$, the standard deviation over all of the entries in $C$ is proportional to the square root of the dimension shared by $A$ and $B$. Analogizing $A$ to the weights of a layer, $B$ to a small change in the hidden state, and $C$ to the resulting change in the next hidden state, we expected the Lyapunov exponent produced by a layer to be proportional to the logarithm of the square root of that layer's number of channels. But no experimental evidence for this could be found. We repeated the Adversarial Susceptibility testing on the same model architectures, this time with trained parameters, and with inputs from Food-101 on which every single model agreed and was correct. These results are in Table \ref{tab:advsusctrain}, and replicated in Figure \ref{fig:advsusctrain}. \begin{table} \centering \begin{tabular}{cc|cccc|} \multicolumn{6}{c}{$\hat \Psi(h, \theta)$}\\ & & \multicolumn{4}{|c|}{Channels ($C$)}\\ & & 32 & 64 & 128 & 256 \\ \hline \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Layers ($D$)}}} & 2 & 578.602 & 610.207 & 586.503 & 576.759 \\ & 4 & 1470.399 & 1658.209 & 1631.561 & 1695.993 \\ & 8 & 2144.418 & 2224.467 & 2536.745 & 2370.648 \\ & 16 & 2361.251 & 2401.381 & 2485.030 & 2846.418 \\ & 32 & 3162.568 & 3018.758 & 2987.640 & 3256.967 \\ & 64 & 2045.765 & 2213.575 & 3103.335 & 2471.823 \\ \hline \end{tabular} \caption{Adversarial Susceptibility of trained convolutional models with custom architectures on inputs consisting of Food-101 samples that every model agreed on and were correct.} \label{tab:advsusctrain} \end{table} \begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{TMASNC.png} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{TMASMD.png} \end{subfigure}% \caption{Graphical replication of Table
\ref{tab:advsusctrain}} \label{fig:advsusctrain} \end{figure} The largest difference here is that, for every model, the susceptibility has increased by an order of magnitude, if not several. Training, and switching to a domain that contains information relevant to the model, has made the models far more sensitive to attack. Following up on the earlier experiments, we again see that the number of channels doesn't affect the Adversarial Susceptibility of the model much, while the number of layers increases it significantly. This time, however, the relationship between the number of layers and the susceptibility has changed to be almost logarithmic, rather than approximately exponential, in nature, and somewhat replicates the relationship found between depth and susceptibility in the trained ResNet models. Interestingly, this is reasonably analogous to the testing accuracy of the models, where increases in depth yield diminishing returns, and it may be theorized that both of these effects are due to changes in the distributions of the weights. However, it must be noted that the increase in susceptibility is greater than the increase in accuracy: making models deeper makes them more vulnerable faster than it makes them better, with additional costs in memory, runtime, and energy consumption. \section{Relationships to Other Metrics} \subsection{Approximation of Certified Robustness Radii} The works of \cite{weber2020rab} and \cite{li2020sok} attempt to calculate what they refer to as ``Certified Robustness Radii.'' For a model with hard decision boundaries, e.g.\ a top-1 classification model, its Certified Robustness Radius is the largest value $\epsilon_h$ such that, for any input $x_i$ and any adversarial perturbation $dx_{adv}$ with radius smaller than $\epsilon_h$, the ultimate classification given by the model is unchanged: $\textrm{argmax}_c h(\theta; x_i) = \textrm{argmax}_c h(\theta; x_i+dx_{adv})$.
In their work, however, they state explicitly that these values are incredibly computationally demanding to calculate even for small models, and computationally infeasible for larger models. Using the Adversarial Susceptibility of a model, however, one can quickly approximate this certified robustness radius even for very large models: it is simply the distance to the nearest decision boundary, divided by the Adversarial Susceptibility. Consider the following example: a five-class model outputs the weights $\hat y = \{2.1, 0.6, 0.1, -0.5, -1.1\}$ for a given input. The nearest decision boundary occurs where the first and second classes become equal, at $\hat y' = \{1.35, 1.35, 0.1, -0.5, -1.1\}$. The modified Euclidean distance between these two is 0.4743. Suppose that this model has an Adversarial Susceptibility of $\hat \Psi = 25.0$. Its certified robustness radius would then be estimated as $\epsilon = \frac{0.4743}{25.0} = 0.01897$. One could then take the mean or minimum of these values over every input in a dataset, producing a number for the model as a whole. Finally, an overall criticism has to be made regarding the use of these certified robustness radii in general. Consider two models used for a binary classification problem, inferring on the same input, which has been perturbed by adversarial attacks of equal radii. The first model, moving from the vanilla to the adversarial input, changes its output from $\{0.9, 0.1\}$ to $\{0.6, 0.4\}$. The second model, under the same conditions, changes its output from $\{0.55, 0.45\}$ to $\{0.45, 0.55\}$. Using a certified robustness radius, one would say that the first model is the more robust, while a more direct reading of the change in probabilities would declare the second model to be more robust.
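The radius estimate in the five-class worked example above can be reproduced directly. A minimal sketch, assuming top-1 classification and the dimension-normalized norm; the helper name `approx_robustness_radius` is ours for illustration, not an API from \cite{weber2020rab} or \cite{li2020sok}:

```python
import numpy as np

def mod_norm(v):
    """Dimension-normalized Euclidean norm: sqrt(mean(v_i^2))."""
    return np.sqrt(np.mean(np.asarray(v, dtype=float) ** 2))

def approx_robustness_radius(y, psi_hat):
    """Estimate epsilon as the modified-norm distance to the nearest
    top-1 decision boundary, divided by the Adversarial Susceptibility."""
    y = np.asarray(y, dtype=float)
    top2 = np.argsort(y)[-2:]          # indices of the two largest weights
    boundary = y.copy()
    boundary[top2] = y[top2].mean()    # tie the top two classes
    return mod_norm(y - boundary) / psi_hat

eps = approx_robustness_radius([2.1, 0.6, 0.1, -0.5, -1.1], 25.0)
# Matches the worked example: eps ~ 0.01897.
assert abs(eps - 0.01897) < 1e-4
```

When the top two class weights are already tied, the estimated radius is zero, as expected for an input sitting on a decision boundary.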
These certified robustness radii wrap information about the model together with information about the inputs and the distributions they are drawn from, so it can be difficult to use them as a metric. Imagine if, in the previous example, the first model were only so confident because it was massively overfit, and the actual input were relatively non-separable. Although this improves its robustness radius, it makes the model more susceptible to attack in the field. \subsection{Post-Adversarial Accuracy} One of the more standard measures of adversarial robustness is to measure the accuracy of models on adversarially perturbed inputs. If our analysis and experimental results thus far are correct, we should see an inverse relationship between measured Adversarial Susceptibility and the post-adversarial accuracy for any given attack radius. This is our fourth experimental result, shown in Figure \ref{fig:paa}. In it, we see that, among ResNets, which all had very similar values of $\hat \Psi(h, \theta)$, post-attack accuracies are relatively similar between models, with an approximate but minor correspondence between higher susceptibilities and lower post-attack accuracy. We also see, among the custom architectures, represented in Figure \ref{fig:paa} by the subset of models with 32 channels and in their entirety in Figure \ref{fig:asvspaa}, a very close inverse relationship between higher susceptibility and lower post-attack accuracy, especially at the 0.01 attack radius. We also see that the custom architecture with $D = 2$, which experimentally had $\hat \Psi = 578.602$, has a post-attack accuracy curve that very closely resembles those of the ResNet models, each of which had a similar susceptibility.
\begin{figure} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{PAARN.png} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=\textwidth]{PAAEN.png} \end{subfigure}% \caption{Post-Attack Accuracies} \label{fig:paa} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{ASvsPAA.png} \caption{Relationship between Adversarial Susceptibility and Post-Attack Accuracy, with a radius of 0.01. Linear best fit shown, with a correlation coefficient of -0.911.} \label{fig:asvspaa} \end{figure} \section{Conclusions and Future Work} Our experiments have shown, with some variation due to the inscrutable black-box nature of Deep Learning, that there is an extremely strong, analytically valuable, and experimentally valid connection between Neural Networks and dynamical systems as they exist in Chaos Theory. We can use this connection to make accurate and meaningful predictions about different Neural Network architectures, as well as efficiently measure how susceptible they are to adversarial attacks. We have shown a correspondence, both experimental and analytical, between these new measurements and those developed in prior works. Thus, a new tool has been added to the toolbox of practitioners looking to make decisions about Neural Networks. Future work will include further exploration in this area, pulling in more advanced techniques and analysis from Chaos Theory, as well as the development of new, more precise metrics that tell us even more about how models are affected by adversarial attacks. Additionally, the relationship between Adversarial Susceptibility and adversarial robustness training regimes deserves study, as does its relationship with different attack methodologies.
\section{Introduction and Our Central Results} In recent works \cite{McLerran:2007qj,Hidaka:2008yy,Glozman:2007tv,% Fukushima:2008wg,Andronic:2009gj,Kojo:2011fh}, it has been argued that there is a new state of QCD, Quarkyonic matter, at high baryon density and low to intermediate temperatures. This novel state exists at densities large compared to the QCD scale, so that the Fermi sea is best thought of in terms of quark degrees of freedom; it is, nevertheless, confining. It may be thought of as a Fermi sea of approximately free quarks, but with thermal and Fermi surface excitations made of color-confined mesons and baryons. The name ``Quarkyonic'' expresses this dualism. While the arguments for the existence of Quarkyonic matter are rigorous only in the limit of large number of colors, for three colors this may not be such a bad approximation, at least for some range of density. The inter-quark potential inferred from the charmonium spectrum is linear out to distances of $\sim$ fm, indicating that the production of quark anti-quark pairs is not very efficient in tempering its growth. One way of understanding this is the large-$N_{\rm c}$ limit, where quark pairs are suppressed by $1/N_{\rm c}$ \cite{'tHooft:1973jz}. Similarly, in numerical studies of lattice QCD, the (pseudo-)critical temperature of the phase transition is a slowly varying function of baryon density, certainly for small density \cite{Kaczmarek:2011zz}. At high baryon density, one might expect that chiral symmetry is restored while quark confinement survives. In fact, for a spatially homogeneous chiral condensate, several computations have confirmed this expectation \cite{Glozman:2007tv,Fukushima:2008wg}. This conclusion was challenged by later analysis \cite{Schaefer:2007pw} and by simple phenomenological arguments which suggest that chiral symmetry is broken in a confining phase of QCD \cite{Casher:1979vw}. For a spatially homogeneous phase, the restoration of chiral symmetry is understood as follows.
In a homogeneous phase, the scalar mesons that condense to form the chiral condensate have zero net momentum. Usually a chiral condensate, composed of quarks and anti-quarks, is not energetically favored, since popping an anti-quark up from the Dirac sea, to above the Fermi sea, costs $\mu_{\rm q} \simeq p_{\rm F}$, where $\mu_{\rm q}$ is the quark chemical potential, and $p_{\rm F}$ the quark Fermi momentum. Another way of forming a homogeneous chiral condensate is to pair up quarks with quark-holes near the Fermi surface; see the left panel in Fig.~\ref{fig:ph}. In the presence of a Fermi sea, to make a scalar with zero net momentum one pairs a quark with momentum $\vec{p}_F$ with a quark hole with momentum $- \vec{p}_F$. The relative momentum of the quark and the quark-hole is large, so that in a confining theory the string tension of such a bound state implies that its excitation energy is of order $2\mu_{\rm q}$ relative to that of the scalar meson in vacuum\footnote{Presumably this is a sufficient condition not to have homogeneous particle-hole condensation. Actually, even without confinement, it is likely that the condensation of chiral density waves is favored. See discussions below Eq.~(\ref{eq4fermi+-1}).}. Since it is unlikely that such highly excited scalar mesons condense, chiral symmetry restoration occurs. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{0.6}[0.6] { \hspace{-0.6cm} \includegraphics[scale=.30]{excitonfermi.pdf} } \scalebox{0.6}[0.6] { \hspace{4.5cm} \includegraphics[scale=.30]{spiralsfermi.pdf} } \end{center} \begin{center} \scalebox{0.6}[0.6] { \hspace{-0.6cm} \includegraphics[scale=.30]{exciton.pdf} } \scalebox{0.6}[0.6] { \hspace{1.5cm} \includegraphics[scale=.30]{spirals.pdf} } \end{center} \vspace{0.0cm} \caption{ Particle-hole pairing with a confining interaction. (Left): A homogeneous chiral condensate, with total momentum $\vp_{\rm total} = \vec{0}$.
The relative momentum between a particle and a hole is large. (Right): An inhomogeneous chiral condensate with $\vp_{\rm total} \simeq \pm 2 p_{\rm F} \vec{n}_z$. The relative momentum is small. A superposition of pairs with momenta $\vp_{\rm total} \simeq 2 p_{\rm F} \vec{n}_z$ and $\vp_{\rm total} \simeq -2 p_{\rm F} \vec{n}_z$ creates chiral spirals. } \label{fig:ph} \end{figure} Another possibility is that charge density waves form through the condensation of particles and holes \cite{Deryagin:1992rw,Shuster:1999tn,Park:1999bz,Sadzikowski:2000ap,% Nickel:2009ke}, similar to $p$-wave pion condensation in nuclear matter \cite{Migdal:1978az}. Early studies using perturbative gluon propagators \cite{Shuster:1999tn,Park:1999bz} argued that the charge density waves are only realized if the number of colors is very large. These arguments, however, do not take confinement into account. More precisely, the attractive force in the infrared (IR) sector is not strong enough to overtake screening. In a recent paper \cite{Kojo:2009ha}, several of us have argued that in Quarkyonic matter, translationally non-invariant chiral condensates form as chiral spirals. The argument for a translationally non-invariant condensate follows again from a particle-hole pair near the Fermi surface; see the right panel in Fig.~\ref{fig:ph}. A difference from the homogeneous condensate is that quarks and quark-holes co-move in the same spatial direction, and thereby exchange only small momenta of the order of $\Lambda_{\rm QCD}$, where $\Lambda_{\rm QCD}$ denotes the typical scale of QCD. In contrast to the homogeneous case, the bound state which forms does not cost much energy, and condensation is possible. One finds that the optimal mode of condensation is a linear combination of the chiral condensate, $\langle \overline \psi \psi \rangle$, and an excitation which has spin-one, is an isosinglet, and has odd parity, $\langle \overline \psi \sigma^{0z} \psi \rangle$.
Here $z$ is the direction of motion of the wave \cite{Shuster:1999tn,Kojo:2009ha}, which we call the ``longitudinal'' direction; those directions orthogonal to $z$ are the ``transverse'' directions. The chiral spiral is characterized by a spatial oscillation between these two modes. This combination can be naturally interpreted as a superposition of particle-hole condensates with momenta $\sim \pm 2p_{\rm F} \vec{n}_z$. At high density, as with heavy-quark symmetry, there emerges an approximate symmetry of $SU(2N_{\rm f})_+ \times SU(2N_{\rm f})_-$ \cite{Shuster:1999tn}, where $\pm$ expresses the (1+1)-dimensional chirality that characterizes the direction of motion along the $z$ axis. After the formation of chiral spirals, there are $(2N_{\rm f})^2$ Nambu-Goldstone (NG) modes: $(2N_{\rm f})^2-1$ isospin-spin excitation modes and one phonon mode, associated with spontaneous breaking of spin-chiral and translational symmetry, respectively\footnote{The formation of a single chiral spiral breaks rotational symmetry in addition to translational symmetry. Since the translation and rotation are not independent, only one phonon mode appears as an NG mode \cite{Casalbuoni:2001gt}.}. These results were derived by the dimensional reduction from the (3+1)-dimensional self-consistent equations to those in the (1+1)-dimensional 't~Hooft model for degrees of freedom near the Fermi surface. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{1.0}[1.0] { \hspace{0.2cm} \includegraphics[scale=.35]{multi.pdf} } \end{center} \caption{The two-dimensional slice of the Fermi sea, suggested in Ref.~\cite{Kojo:2010fe}. Rotational symmetry is spontaneously broken from a continuous to a discrete one. The number of patches accompanying chiral spirals in different directions increases with increasing density.
} \label{fig:multi} \vspace{0.2cm} \end{figure} The stability analysis of Ref.~\cite{Kojo:2009ha} showed that Quarkyonic matter in the absence of a chiral condensate was unstable with respect to the formation of a (1+1)-dimensional chiral spiral. Furthermore, it was suggested that many chiral spirals of different spatial orientations interweave to form a more complicated condensate \cite{Kojo:2010fe}. This corresponds to a transition from a spherical Fermi surface into patches, inducing breaking of continuous rotational symmetry down to a discrete one (see Fig.~\ref{fig:multi}). The number of patches increases as the density grows, and such phase transitions continue to occur until the screening effect on gluons strongly reduces the IR attraction between a pair of a quark and a quark-hole. Such reduction happens around the density scale $\mu_q \sim N_{\rm c}^{1/2} \Lambda_{\rm QCD}$ \cite{McLerran:2007qj}. This behavior is sketched in the phase diagram in $\mu_q$-$T$-${\mathcal E}$ space, as shown in Fig.~\ref{fig:Phase} (${\mathcal E}$ is the energy density of the system)\footnote{Here we consider the chiral limit for the light flavors and ignore the electroweak interactions.}. Quarkyonic matter starts to appear just above $\mu_q \sim \Lambda_{\rm QCD}$, where a transition from nuclear to quark matter quickly occurs, and continues to exist up to $\mu_q \sim N_{\rm c}^{1/2} \Lambda_{\rm QCD}$. In the Quarkyonic region, the stair-like growth of the energy density along the $\mu_q$ axis reflects the discontinuous changes in the shape of the Fermi sea (Fig.~\ref{fig:multi}). At larger $\mu_q$, the screening of the IR attraction reduces the size of the chiral spiral condensate, and accordingly, the interval of the stairs along the $\mu_q$ axis and the jumps in ${\mathcal E}$ become smaller. The shape of the Fermi sea smoothly approaches a spherical one.
\begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{1.0}[1.0] { \hspace{0.0cm} \includegraphics[scale=.26]{Phase.pdf} } \end{center} \caption{A three-dimensional plot of the phase diagram in $\mu_q$-$T$-${\mathcal E}$ space (${\mathcal E}$ is the energy density of the system). Chiral symmetry in Quarkyonic matter is broken by the formation of the interweaving chiral spirals (ICS). The stair-like growth of the energy density along the $\mu_q$ axis expresses the discontinuous change in the shape of the Fermi sea (Fig.~\ref{fig:multi}). } \label{fig:Phase} \vspace{0.2cm} \end{figure} While Refs.~\cite{Kojo:2009ha,Kojo:2010fe} have argued for chiral spirals using the confining interactions, several aspects have not been explicitly shown due to technical difficulties and/or conceptual uncertainties in treating the deep IR structure of confining forces. On the other hand, while we postulate that the confining force could give a sufficient condition to drive chiral symmetry breaking, it is certainly not a necessary condition. The aforementioned phenomena may appear in a wider class of models that encompass chiral symmetry breaking even without confinement. Indeed, what is relevant for interweaving chiral spirals are Fermi surface effects and the IR enhancement of the interaction, not precise knowledge of the deep IR region. Taking this viewpoint, we will characterize chiral symmetry breaking at high density by a simple, tractable model from which we can extract analytic insights. We will use an effective field theory to describe QCD at large $N_{\rm c}$. This model is superficially similar to the Nambu--Jona-Lasinio (NJL) model \cite{Nambu:1961tp,Vogl:1991qt}, in which the dominant interaction process in the large-$N_{\rm c}$ limit is the scattering of particles off the condensate. The crucial difference between our model and the usual NJL model is that the interaction vertex has a form factor that mimics the IR enhancement of the non-perturbative gluon propagator.
We denote the form-factor scale of the model as $\Lambda_{\rm f}$ ($\sim \Lambda_{\rm QCD}$), beyond which interactions are negligible compared to the IR interaction. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{0.6}[0.6] { \includegraphics[scale=.40]{SDeq.pdf} } \scalebox{0.6}[0.6] { \includegraphics[scale=.40]{SDNJL.pdf} } \end{center} \vspace{0.0cm} \caption{ The leading self-energy diagram at zero density. At large $N_{\rm c}$, we have only to keep the rainbow ladder.\ \ (Left) The diagram in terms of QCD dynamics. Integrating the temporal component out, we can interpret the loop with momentum $\vec{k}$ as the condensate, $\langle \bar{\psi}(\vec{k}) \psi(\vec{k}) \rangle$, which is made of a particle-antiparticle pair with momentum $\vec{k}$.\ \ (Right) The corresponding diagram in our model. The soft-gluon exchange part in QCD is replaced with the form-factor function, whose strength damps as $\vec{k}$ and $\vec{p}$ go far apart. The diagrams for the $1/N_{\rm c}$ corrections will be given in Fig.~\ref{fig:1nc} in Sec.~\ref{secdiscussion}. } \label{fig:SDeq} \end{figure} Form-factor effects lead to the following consequences at large $N_{\rm c}$. The interaction between a quark and a condensate becomes strongly momentum dependent; see Fig.~\ref{fig:SDeq}. They decouple from one another\footnote{ This sort of picture has been discussed for the high-lying mesons and baryons \cite{Glozman:1999tk}. See also Ref.~\cite{Shifman:2007xn} for some caveats on this picture.} if the relative momentum between the quark and the condensate is much larger than $\Lambda_{\rm f}$, reflecting the composite nature of the quark condensate. As a consequence, the quark mass gap damps when the quark momentum is far away from the domain of condensation. In vacuum, the damping scale of the mass function, $\Lambda_{\rm c}$, may play a similar role to the ultraviolet (UV) cutoff, $\Lambda_{{\rm NJL}}$, in the usual NJL model (see the left panel in Fig.~\ref{fig:density}).
In this sense, the form factor can naturally remove the UV cutoff artifact of the NJL-type model. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{0.6}[0.6] { \includegraphics[scale=.30]{zerodensity.pdf} } \scalebox{0.6}[0.6] { \includegraphics[scale=.30]{finitedensity.pdf} } \end{center} \vspace{0.0cm} \caption{ Schematic figures for the mass gap function.\ \ (Left) The zero-density case.\ \ (Right) The finite-density case with chiral symmetry breaking near the Fermi surface. } \label{fig:density} \end{figure} All of these aspects are crucial when we consider dense quark matter. Since condensation phenomena should happen near the Fermi surface, the effective UV cutoff should be measured as a distance from the Fermi surface, not from the vacuum (see the right panel in Fig.~\ref{fig:density}). If we fixed the effective UV cutoff to be the same as the vacuum value by hand, arguments on chiral symmetry breaking would not make sense for $\mu_{\rm q} \ge \Lambda_{{\rm NJL}}$. Indeed, because of Pauli blocking, the phase space for quarks contributing to the condensate would disappear, leading to chiral symmetry restoration as a cutoff artifact. Therefore it is desirable to derive an effective UV cutoff dynamically at each density. We claim that the introduction of the form factor gives a natural extension of the treatment of the zero-density NJL model to that at finite density. We also remark that, in the standard BCS theory, the Debye cutoff frequency is introduced in just this way, as a cutoff measured from the Fermi surface. With this modified NJL model at hand, the purpose of this paper is to give detailed and analytic insights on the interweaving of chiral spirals and on the associated breakdown of the rotational invariance.
A key observation in our model treatment with a form factor will be that at large $N_{\rm c}$, the particle-condensate interaction (i.e.\ the interaction of particles scattering off the condensate) happens {\it locally in momentum space}\footnote{ The $1/N_{\rm c}$ corrections (as shown in Fig.~\ref{fig:1nc}) will violate this locality; see discussions in Sec.~\ref{secdiscussion}.}, which will allow the system to simultaneously embed many chiral spirals at sufficiently high density. For the sake of simplicity, we will work in (2+1) dimensions, where the original Fermi surface is a circle and retains a simple geometric structure even after the formation of many chiral spirals. An extension of this study to higher dimensions might be technically difficult but is conceptually straightforward. Strictly speaking, in three space-time dimensions there is no true chiral symmetry, since there is no $\gamma_5$ matrix for two-component spinors. Using four-component spinors, there is flavor symmetry breaking. This technicality does not change any of our main considerations. (Further discussion is given in Sec.~\ref{Spinors}.) We approximate the patched Fermi surface by a polygon of degree $N_{\rm p}$. In Fig.~\ref{fermisphere} we show the two-dimensional Fermi surface and some polygon approximations to it. We will look for an energy minimum at a non-zero value of $N_{\rm p}$, and we will find that there exists such a minimum, which depends upon density. We can think of each sub-sector that constitutes the polygon shape as a wedge. The wedge is characterized by an opening angle $2\Theta$ and a depth $Q$, as shown in Fig.~\ref{wedge}. The depth $Q$ will be of the order of the Fermi momentum $p_{\rm F}$, and the opening angle of the wedge is constrained\footnote{ Here a factor $2$ is included since we will take one patch as a set of one wedge and the other wedge on the opposite side of the Fermi sea. See Sec.~\ref{secdecomp} for details.} by $2\times 2N_{\rm p} \Theta = 2\pi$.
The surface thickness $\Lambda_{\rm Fermi}$ ($\sim \Lambda_{\rm f}$) characterizes the momentum scale for which non-perturbative Fermi surface effects are important. \begin{figure}[tb] \vspace{0.2cm} \begin{center} \scalebox{1.0}[1.0] { \includegraphics[scale=.80]{fermisphere.pdf} } \end{center} \vspace{0.2cm} \caption{The leftmost is the Fermi circle in spatial two dimensions. The middle figure is a square approximation to the circle. The wedges are shown corresponding to discrete sub-areas of the circle. The rightmost figure is a higher-order polygon approximation to the circle. } \label{fermisphere} \vspace{0.2cm} \end{figure} The use of the wedge shape is motivated as follows: In order to maximize the energy gain from condensation effects, each patch should contain only one chiral spiral by aligning total momenta of a bunch of particle-hole pairs. If we had a misalignment, interplay among chiral spirals would reduce the size of the gap, as exemplified in Sec.~\ref{1+1Dexample}. So we have to look for the most effective shape to achieve the alignment of total momenta. An obvious candidate is a flat Fermi surface, with which particles and holes participating in the condensate can stay close to the Fermi surface, saving virtual excitation energies. Other shapes force some of the particles and holes to have larger excitation energies, and so are less effective at creating a large condensate. \begin{figure}[tb] \vspace{0.2cm} \begin{center} \scalebox{1.0}[1.0] { \includegraphics[scale=.25]{wedge.pdf} } \end{center} \vspace{0.2cm} \caption{The basic wedge from which the polygon is constructed. The wedge has an opening angle $2\Theta$, a depth $Q$, and a surface thickness $\Lambda_{{\rm Fermi}}$. } \label{wedge} \vspace{0.2cm} \end{figure} What is the principle to determine the number of patches? It is essentially determined by the balance between the kinetic-energy cost and the condensation-energy gain. 
Without condensation effects, the Fermi surface would simply take a circle shape that minimizes the kinetic energy. Once condensation effects are turned on, the associated energy gain can overtake the kinetic-energy cost, changing the shape of the Fermi sea from a circle to a polygon with a discrete number of wedges. The condensation effects appear in two places. One is the energy reduction of single-particle contributions inside of one wedge, as is the case in the single chiral spiral problem. The other is the inter-patch interactions among different chiral spirals. The latter will provide an energy cost, that is, chiral spirals destroy one another if their wavevectors are different (as explained in Sec.~\ref{sec:interference}, and also some results from the existing literature will be addressed in Sec.~\ref{comparison}). This means that we should divide the Fermi sea into not too many wedges to get the largest energy gain. Therefore the size of one wedge tends to be as large as possible until the kinetic-energy cost becomes too big. Let us outline how to optimize the shape of the Fermi sea. In the following, we will consider the canonical ensemble, in which the particle number is fixed. Then the Fermi volume must be conserved, so it follows that \begin{equation} p_{\rm F}^2 \Theta = Q^2 \tan \Theta\,. \end{equation} Here the LHS is proportional to the fermion number in one wedge for the plain circular Fermi sea, while the RHS is that for the deformed Fermi sea. For small $\Theta$ we can Taylor expand the RHS in $\Theta$ and solve for $Q(\Theta)$ as \begin{equation} Q(\Theta) = p_{\rm F} \big( 1 - \frac{\Theta^2}{6} - \frac{\Theta^4}{40} + \cdots \big)\,. \end{equation} Thus, the size of one patch in the transverse direction is \begin{equation} \text{(one patch size)} = Q(\Theta) \tan\Theta = p_{\rm F} \Theta \: \big ( 1 + \frac{1}{6} \Theta^2+ \frac{19}{360}\Theta^4 + \cdots \big)\,. 
\end{equation} Now that $Q$ is solved as a function of $\Theta$, the multiple chiral spiral states are characterized by $p_{\rm F}$, $\Theta$, and the single-particle mass gap, $M$. We will perform the energy minimization by taking $\Theta$ and $M$ as variational parameters. Provided that the energy minimum exists for $\Theta \ll 1$, we can expand the energy density by powers of $\Theta$. It will turn out that the expression takes the following form, \begin{align} \mathcal{E}(M,\Theta) &= N_{\rm p} \cdot \mathcal{E}^{\text{1-patch}}(M,\Theta) \notag \\ &= \frac{\mathcal{E}_{-1}(M)}{\Theta} + \mathcal{E}_0(M) + \mathcal{E}_2(M) \Theta^2 + \mathcal{E}_4 (M) \Theta^4 + \cdots \,. \end{align} The $1/\Theta$ term can appear simply because $N_{\rm p} = \pi/(2\Theta)$, but it will disappear for vanishing $M$, that is, $\mathcal{E}_{-1}(M\to0)=0$. Expanding $\mathcal{E}_n(M)$ by powers of $M/p_{\rm F}$, we expect \begin{equation} \mathcal{E}_n(M) = c_n^{(0)} p_{\rm F}^3 + c_n^{(1)} p_{\rm F}^2 M + c_n^{(2)} p_{\rm F} M^2 + \cdots \,, \end{equation} where we did not write possible non-analytic terms explicitly. (In what follows we will in fact arrive at this form by solving the gap equation.) At sufficiently high density, one has only to keep the leading term in the $M/p_{\rm F}$-expansion for each $\mathcal{E}_n(M)$. Let us first discuss terms insensitive to details of condensation effects, by computing at $M=0$. The first non-vanishing contribution of $O(p_{\rm F}^3)$ with non-trivial $\Theta$ dependence should arise from the kinetic-energy cost for the deformation of the Fermi surface. 
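The volume-conservation relation and the two expansions above can be cross-checked numerically; the following is a small sketch of ours (not from the paper), in units $p_{\rm F}=1$:

```python
import math

def Q_of_theta(theta):
    # Volume conservation p_F^2 * Theta = Q^2 * tan(Theta), with p_F = 1:
    return math.sqrt(theta / math.tan(theta))

theta = 0.01
# Claimed expansion: Q = p_F (1 - Theta^2/6 - Theta^4/40 + ...)
assert abs(Q_of_theta(theta) - (1 - theta**2/6 - theta**4/40)) < theta**6
# Claimed transverse patch size: Q tan(Theta) = p_F Theta (1 + Theta^2/6 + 19 Theta^4/360 + ...)
patch = Q_of_theta(theta) * math.tan(theta)
assert abs(patch - theta * (1 + theta**2/6 + 19 * theta**4 / 360)) < theta**6
```

Both assertions pass at the $O(\Theta^6)$ level, confirming the quoted coefficients $1/6$, $1/40$, and $19/360$.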
The energy at $M=0$ is an increasing function of $\Theta$, \begin{align} \mathcal{E}(M=0, \Theta) &= N_{\rm p} \cdot 2N_{\rm c} \cdot 4 \int_0^{ Q } \frac{ \mathrm{d} p_{\parallel} }{2\pi} \int_0^{ p_\parallel \tan \Theta} \frac{\mathrm{d} p_\perp}{2\pi}\: \sqrt{ p_\parallel^2 + p_\perp^2 } \notag\\ &= N_{\rm c} \cdot \frac{p_{\rm F}^3}{3\pi} \: \bigg( 1 + \frac{1}{30} \Theta^4 + O(\Theta^6) \bigg)\,, \end{align} where the first term gives the trivial contribution which should be subtracted out\footnote{ In the first line of the equation, the factor $2N_{\rm c}$ is for degeneracy factors of colors and spins for four-component spinors. The second factor $4$ arises because one patch is made of two opposite wedges and the $p_\perp$ integral covers only half of the opening angle $2\Theta$.}. It is extremely important to notice that the non-trivial deformation energy does not appear until $O(\Theta^4)$. The volume conservation of the Fermi sea cancels the $\Theta^2$-term out from the average kinetic energy. Terms beyond $O(\Theta^4)$ are much smaller than $p_{\rm F}^3\Theta^4$ and irrelevant in our minimization procedure. Thus, the energy minimum will be found by balancing the $p_{\rm F}^3 \Theta^4$ term with condensation effects which yield terms with smaller powers of $\Theta$ than the $p_{\rm F}^3 \Theta^4$ term. In the following we concentrate on the estimation of such condensation terms. The condensation effects depend on interaction properties of the model. The point of our model is that at large $N_{\rm c}$ the single-particle dispersion of a fermion with momentum $\vec{p}$ is determined only by condensates within the domain of the size $\sim \Lambda_{\rm f}^2$ around $\vec{p}$. This means that the gap for single particles farther than $\Lambda_{\rm f}$ from the patch boundaries is affected only by the single chiral spiral of their own patch, not by adjacent chiral spirals. 
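The cancellation of the $\Theta^2$ term and the $1/30$ coefficient above can be checked against the exact wedge integral: the inner $p_\perp$ integral evaluates in closed form as $(p_\parallel^2/2)\,[\tan\Theta\sec\Theta + \ln(\tan\Theta+\sec\Theta)]$. A numerical sketch of ours using this closed form, with $p_{\rm F}=1$:

```python
import math

def kinetic_ratio(theta):
    # Exact kinetic energy of the polygonal Fermi sea at M = 0, normalized to
    # the circular value N_c p_F^3/(3 pi).  Q is fixed by volume conservation,
    # Q/p_F = sqrt(Theta/tan(Theta)), and the closed form of the wedge integral
    # gives ratio = (Q/p_F)^3 [tan(T)sec(T) + ln(tan(T)+sec(T))]/(2 T).
    Q = math.sqrt(theta / math.tan(theta))
    sec = 1.0 / math.cos(theta)
    bracket = math.tan(theta) * sec + math.log(math.tan(theta) + sec)
    return Q**3 * bracket / (2.0 * theta)

for theta in (0.05, 0.02, 0.01):
    # (ratio - 1)/Theta^4 -> 1/30: no Theta^2 piece survives volume conservation
    assert abs((kinetic_ratio(theta) - 1.0) / theta**4 - 1.0 / 30.0) < 1e-2
```

If the $\Theta^2$ term did not cancel, the quantity $(\mathrm{ratio}-1)/\Theta^4$ would diverge as $\Theta\to 0$ instead of approaching $1/30$.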
The above argument suggests that we have to evaluate the mass gap differently in different domains of $\Theta$. If $\Theta$ is so small that the transverse patch size is comparable to $\Lambda_{\rm f}$, i.e.\ $p_{\rm F}\Theta \lesssim \Lambda_{\rm f}$, then there are intersection points of more than one patch that interact with one another. Hence, to solve the gap equation, we have to take into account the influence of several chiral spirals simultaneously. This is a rather technically complicated problem. Fortunately, the energy minimum in our problem will be outside of this $\Theta$ domain. We can self-consistently show that the transverse size of one patch is much larger than $\Lambda_{\rm f}$, i.e. \begin{equation} p_{\rm F} \Theta \gg \Lambda_{\rm f}\,. \label{eq:pfgg} \end{equation} Once this condition is satisfied, the single-particle gap in one patch can be determined independently of $\Theta$, except in the region close to the patch boundaries. We denote such a solution as $M= M_0 \sim \Lambda_{\rm f}$. Then the energy gain from condensation effects should be \begin{equation} \hspace{-0.5cm} \text{(energy gain)} \sim N_{\rm p} \cdot N_{\rm c}\,(\Lambda_{\rm f}\, p_{\rm F} \tan\Theta)\, M_0 \sim N_{\rm c} M_0\, \Lambda_{\rm f}\, p_{\rm F} \,\bigg( 1 + \frac{\Theta^2}{3} + \cdots \bigg)\,, \label{con1} \end{equation} where $\Lambda_{\rm f}\, p_{\rm F} \tan\Theta$ is the one-patch phase space where the condensation occurs. One important observation here is that, while the gap is insensitive to $\Theta$, the phase space has $\Theta$ dependence, so that the leading term is $\Theta$ independent after multiplying by the patch number $N_{\rm p}$. Let us see contributions at the intersection point of two adjacent patches. The point is that a particle from one patch and condensates from other patches interact within a limited domain of $\sim \Lambda_{\rm f}^2$ near the intersection points. Its phase space is independent of $\Theta$. 
Therefore the contribution from the intersection points is \begin{equation} \text{(energy cost)} \sim N_{\rm p} \cdot N_{\rm c} \Lambda_{\rm f}^2 \, f(M_{\rm B}) \sim \frac{N_{\rm c}}{\Theta} \cdot \Lambda_{\rm f}^2 \, f(M_{\rm B})\,, \end{equation} where $f(M_{\rm B})$ is some function of order $M_{\rm B} \sim \Lambda_{\rm f}$, with $M_{\rm B}$ being the mass gap near the boundary, and it vanishes as $M_{\rm B}\rightarrow 0$. The contribution must be an energy cost. The reason is that in Eq.~(\ref{con1}) we overestimated the energy gain, which should be reduced around the patch boundaries. The misalignment of chiral spiral wavevectors makes the different chiral spirals tend to destroy one another, reducing the size of the gap at the intersection points. Diagrammatically, this contribution will appear as interactions among chiral spiral mean fields in different domains. The presence of this term becomes more important for smaller $\Theta$. Now we can express the energy density as a function of $M$ and $\Theta$. In the domain $\Lambda_{\rm f}/p_{\rm F} \ll \Theta \ll 1$, it reads \begin{align} &\delta \mathcal{E}(M,\Theta) \notag\\ &\;\;\sim N_{\rm c} \bigg( \frac{ \Lambda_{\rm f}^2 \, f(M_{\rm B}) }{\Theta} - c_0\, M_0 \Lambda_{\rm f}\, p_{\rm F} - c_2\, M_0 \Lambda_{\rm f}\, p_{\rm F} \Theta^2 + c_4\, p_{\rm F}^3 \Theta^4 + \cdots \bigg)\,, \end{align} where coefficients $c_0, c_2, \cdots$ are positive, and we have subtracted the free Fermi gas contribution. The energy balance is schematically illustrated in Fig.~\ref{fig:energylandscape}. Confirming this expression by microscopic calculations is our goal in later sections. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{1.0}[1.0] { \includegraphics[scale=.25]{conregion.pdf} } \end{center} \vspace{0.0cm} \caption{ The condensation region near the Fermi surface. The thickness of the region in the radial direction is $\sim \Lambda_{\rm f}$. 
In the boundary region with the transverse size of $\sim \Lambda_{\rm f}$, inter-patch interactions between nearest-neighbor chiral spirals destroy each other's condensates, reducing the energy gain from condensation effects. } \label{conregion} \vspace{0.2cm} \end{figure} When $\Lambda_{\rm f}/p_{\rm F} < (\Lambda_{\rm f}/p_{\rm F})^{1/2} < \Theta \ll 1$ is satisfied, the $p_{\rm F}^3 \Theta^4$ term dominates over other terms, and the deformation energy supersedes the condensation energy. Thus, there is an upper bound on $\Theta$, and the energy minimum should lie in the region $\Lambda_{\rm f}/p_{\rm F} \ll \Theta \ll (\Lambda_{\rm f}/p_{\rm F})^{1/2}$. On the other hand, the lower bound of $\Theta$ will be set by the patch-patch interactions proportional to $1/\Theta$. In the region of current concern, we have \begin{equation} \frac{\partial\, \delta \mathcal{E}(M,\Theta)}{\partial \Theta} \biggr|_{\Theta=\Theta_0} \!\sim\; N_{\rm c}\bigg( -\frac{\Lambda_{\rm f}^3}{\Theta_0^2} + 4 c_4\, p_{\rm F}^3\, \Theta_0^3 \bigg) \;\sim\; 0 \,, \end{equation} for the energy minimum neglecting other terms. Therefore we find \begin{equation} \Theta_0 \sim \bigg( \frac{\Lambda_{\rm f}}{p_{\rm F}} \bigg)^{\! 3/5} \;. \end{equation} As we promised, we can confirm in this way that $p_{\rm F}\Theta_0 \sim (p_{\rm F}/\Lambda_{\rm f})^{2/5} \Lambda_{\rm f} \gg \Lambda_{\rm f}$, which surely justifies Eq.~\eqref{eq:pfgg} {\it a posteriori}. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{1.0}[1.0] { \includegraphics[scale=.25]{energylandscape.pdf} } \end{center} \vspace{0.0cm} \caption{The schematic energy landscape as a function of $\Theta$. The region $\Theta \ll \Lambda_{\rm f}/p_{\rm F}$ is beyond the applicability of our analysis. } \label{fig:energylandscape} \vspace{0.2cm} \end{figure} The total energy is dominated by the condensation term $\sim - p_{\rm F} \Lambda_{\rm f}^2$, which is independent of $\Theta_0$. 
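The scaling $\Theta_0 \sim (\Lambda_{\rm f}/p_{\rm F})^{3/5}$ follows from balancing only the $1/\Theta$ and $\Theta^4$ terms. A numerical sketch of ours (all $O(1)$ coefficients set to unity, which is an assumption for illustration only):

```python
import math

def theta_min(Lf, pF):
    # Stationarity of dE ~ Lf^3/Theta + pF^3 Theta^4 (other terms neglected,
    # as in the text):  -Lf^3/Theta^2 + 4 pF^3 Theta^3 = 0,
    # solved here by bisection on the derivative.
    lo, hi = 1e-8, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        deriv = -Lf**3 / mid**2 + 4.0 * pF**3 * mid**3
        if deriv > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

vals = [(pF, theta_min(1.0, pF)) for pF in (10.0, 100.0, 1000.0)]  # Lambda_f = 1
for pF, th in vals:
    # Theta_0 = (1/4)^{1/5} (Lf/pF)^{3/5}, so Theta_0/(Lf/pF)^{3/5} is constant
    assert abs(th / (1.0 / pF) ** 0.6 - 0.25 ** 0.2) < 1e-6
# Theta_0 shrinks with density while pF*Theta_0/Lf grows like (pF/Lf)^{2/5},
# consistent with the self-consistency condition (eq:pfgg) at high density:
assert vals[0][1] > vals[1][1] > vals[2][1]
assert vals[0][0] * vals[0][1] < vals[1][0] * vals[1][1] < vals[2][0] * vals[2][1]
```

With physical coefficients restored, only the prefactor of $\Theta_0$ changes; the $3/5$ power is fixed by the competition of the two terms.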
The leading corrections come from the $1/\Theta_0$ term (patch-patch interactions) and the $\Theta_0^4$ term (the deformation energy), which are suppressed by $(\Lambda_{\rm f}/p_{\rm F})^{2/5}$ compared to the leading contribution. This implies that after solving the one-patch problem at the mean-field level, other patch-patch contributions can be treated within perturbation theory. We will formulate our perturbation theory within the domain of $(\Lambda_{\rm f}/p_{\rm F}) \ll \Theta \ll (\Lambda_{\rm f}/p_{\rm F})^{1/2}$. This paper is organized as follows. In Sec.~\ref{secmodel}, we define our model and describe consequences of form-factor effects in vacuum. In Sec.~\ref{secdecomp}, the Fermi sea is decomposed into several segments. We formally separate the Lagrangian into the one-patch and the patch-patch interactions. In Sec.~\ref{seconepatch}, the mean-field treatment for the one-patch problem is discussed. We first identify the dominant terms within one patch, and construct the mean field for chiral spirals as well as the mean-field propagators for quasi-particles. In Sec.~\ref{formal}, we give a formal expression of the perturbative expansion. In Sec.~\ref{secpert}, we treat corrections from subdominant terms in one patch. It will be shown that subdominant terms are suppressed by powers of $\Theta$ or $\Lambda_{\rm f}/Q$. In Sec.~\ref{sec:interference}, the inter-patch interactions at the patch boundaries are discussed. The size and sign of $1/\Theta$ terms are estimated in both perturbative and non-perturbative manners. In Sec.~\ref{secdiscussion}, we discuss possible impacts of several corrections ignored in this paper. We also review other works and attempt to place this work in perspective. A coordinate space structure of the interweaving chiral spirals is also discussed, leaving several interesting questions open. Section~\ref{secsummary} is devoted to a summary and possible future directions. 
\section{A Model: The Four-Fermi Interaction with Form-factor Effects} \label{secmodel} In this section, we introduce explicit form factors into the NJL model, so that the interaction is cut off when the momentum transfer becomes large. This makes the NJL model non-local. (For other attempts to introduce non-locality which emphasize different aspects from ours, see Ref.~\cite{Buballa:1992sz}.) We will see that effects of such a cutoff determine the momentum domains where condensation phenomena are relevant. This aspect is particularly important when we consider the very large Fermi sea in which condensation phenomena occur as Fermi surface effects, rather than the vacuum effects. \subsection{Form factor effects} Let us consider the scalar-scalar type of the four-Fermi interaction, \begin{equation} \int \mathrm{d}^3x \, \big( \bar{\psi} \psi(x) \big)^2 = \int \mathrm{d} x_0 \! \int_{q,p,k} \big( \bar{\psi}(\vec{p} + \vec{q} ) \psi(\vec{p}) \big) \big( \bar{\psi}(\vec{k} ) \psi(\vec{k}+ \vec{q} ) \big)\,, \label{form1} \end{equation} where we define a shorthand notation, \begin{equation} \int_{q,p,k} \equiv \int \frac{\mathrm{d}\vec{q}~ \mathrm{d}\vec{p}~ \mathrm{d}\vec{k}~} {(2\pi)^6}\,, \end{equation} and we did not explicitly write the coupling constant and $x_0$ dependence of fermion fields. Since the four-Fermi interaction is not renormalizable beyond (1+1) dimensions, we need to introduce some UV cutoff. We regularize the UV interaction by including form-factor effects, \begin{equation} \hspace{-0.5cm} \int \mathrm{d}^3x \, \big( \bar{\psi} \psi(x) \big)^2 \rightarrow \int \mathrm{d} x_0 \! \int_{q,p,k} \big( \bar{\psi}(\vec{p} + \vec{q} ) \psi(\vec{p}) \big) \big( \bar{\psi}(\vec{k} ) \psi(\vec{k}+ \vec{q} ) \big) \,\theta_{p,k} \;, \label{form2} \end{equation} where \begin{equation} \theta_{p,k} \equiv \theta\big( \Lambda_{\rm f}^2 - (\vec{p} - \vec{k})^2 \big)\,. 
\end{equation} This mimics the form-factor effects in large-$N_{\rm c}$ QCD\footnote{ Here we put the cutoff on the spatial-momentum $\vec{p}^2$, not on the Euclidean momentum $p_E^2$. So, results in this work are connected to those of large-$N_{\rm c}$ QCD in the Coulomb gauge, in which the dominant non-perturbative part is of the instantaneous type \cite{Gribov:1977wm}. An alternative choice would be to introduce a cutoff on $p_E^2$ keeping manifest Lorentz invariance. Such a treatment should mimic, for instance, Landau-gauge results in Euclidean space. However we do not know their Minkowskian behavior, so we have to compute quantities in Euclidean space. Then the price we have to pay is that a simple physical intuition does not necessarily work, especially at finite density. }, and removes the UV divergences associated with interacting processes. Large-$N_{\rm c}$ QCD is mimicked as follows. The one-gluon exchange including non-perturbative effects is shown in Fig.~\ref{figform}(a). Its strength damps as the momentum transfer becomes large. We roughly take into account this property by introducing a step function, $\theta\big( \Lambda_{\rm f}^2 - (\vec{p}-\vec{k})^2\big)$, keeping the interaction strength constant. In QCD, the cutoff scale $\Lambda_{\rm f}$ should be taken to be of the order of $\Lambda_{\rm QCD}$. In Fig.~\ref{figform}(b), we show the color line representation to illustrate how the one-gluon exchange interaction should be contracted into a four-Fermi type interaction. Taking into account features in Figs.~\ref{figform}(a) and \ref{figform}(b), we arrive at a simple model described in Eq.~(\ref{form2}) and Fig.~\ref{figform}(c). \begin{figure}[tb] \vspace{0.2cm} \begin{center} \scalebox{3.0}[1.0] { \includegraphics[scale=.12]{formfactors.pdf} } \end{center} \vspace{0.2cm} \caption{ (a) The non-perturbative gluon exchange which is supposed to damp quickly in the UV region. (b) The color line representation of the one-gluon exchange. 
(c) Our effective four-Fermi interaction including form factor effects. } \label{figform} \vspace{0.2cm} \end{figure} A relevant consequence of our choice of the form factor is that the coupling between fermion fields and the condensate becomes strongly momentum dependent. In particular, fermions decouple from the condensation effects if their momenta belong to domains separated by more than $\sim \Lambda_{\rm f}$ from those of the condensate. Let us first see this property in the vacuum case by investigating the Schwinger-Dyson equation for quasi-particles. After picking up the pole, we have\footnote{ We ignored the Lorentz vector self-energy for the sake of simplicity. This simplification should not alter our basic statements in this paper.} \begin{equation} \Sigma_m(\vec{p}) = \int \! \frac{ \mathrm{d} \vec{k} }{ (2\pi)^2 } ~ \frac{\Sigma_m( \vec{k} )} {2 \epsilon( \vec{k} ) } ~ \theta_{p,k}\,, \label{SDeq1} \end{equation} where $\Sigma_m(\vec{k}) $ is the self-energy and $\epsilon(\vec{k} ) = \sqrt{ \vec{k}^2 + \Sigma_m^2(\vec{k}) }$ is the quasi-particle energy. For its diagrammatic expression, see Fig.~\ref{fig:SDeq}\footnote{ The Fock-type gluon exchange, which is dominant in large-$N_{\rm c}$ QCD, corresponds to the Hartree term of the four-Fermi interaction model, so we can apply most of the techniques used in the NJL model calculations. }. When $|\vec{p}|$ is very large, $|\vec{k}|$ must be comparably large because of the form factor. In the asymptotic region $|\vec{p}| \gg \Sigma(\vec{p} )$, the Schwinger-Dyson equation looks like \begin{equation} \Sigma_m ( \vec{p}) \sim \frac{1}{2 \epsilon( \vec{p}) } \int \! \frac{ \mathrm{d} \vec{k} }{ (2\pi)^2 } ~ \Sigma_m ( \vec{k} ) ~ \theta_{p,k}\,, \end{equation} from which one can show that $\Sigma_m (\vec{p})$ damps faster than $1/\epsilon( \vec{p}) \sim 1/|\vec{p}|$ in the UV region, meaning that the scattering between quasi-particles and the mean-field condensates diminishes for large momentum. 
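The decoupling can be exhibited by iterating the Schwinger-Dyson equation directly. The following is a minimal fixed-point sketch of ours (not from the paper), assuming a rotationally symmetric self-energy $\Sigma(|\vec{p}|)$ in units $\Lambda_{\rm f}=1$; the overall coupling $g$ and the grid parameters are our own illustrative choices:

```python
import math

g, N, pmax, steps = 40.0, 161, 8.0, 100
ps = [pmax * i / (N - 1) for i in range(N)]
dk = ps[1] - ps[0]

def phi_max(p, k):
    # Half-opening angle of the arc |p - k| < Lambda_f on the circle |k| = k.
    if p * k == 0.0:
        return math.pi if (p - k) ** 2 < 1.0 else 0.0
    c = (p * p + k * k - 1.0) / (2.0 * p * k)
    return math.acos(max(-1.0, min(1.0, c)))

sigma = [1.0] * N
for _ in range(steps):
    new = []
    for p in ps:
        # Sigma(p) = g Int d^2k/(2 pi)^2  Sigma(k)/(2 eps(k))  theta(Lf^2-(p-k)^2),
        # with the angular integral done analytically via phi_max (trapezoid in |k|).
        f = [k * phi_max(p, k) * s / math.sqrt(k * k + s * s)
             for k, s in zip(ps, sigma)]
        new.append(g / (4.0 * math.pi ** 2) * dk * (sum(f) - 0.5 * (f[0] + f[-1])))
    sigma = new

assert sigma[0] > 0.3               # a non-trivial gap survives near p = 0 ...
assert sigma[-1] < 0.1 * sigma[0]   # ... but dies off far above Lambda_f
```

The converged $\Sigma(p)$ is of order one for $p \lesssim \Lambda_{\rm f}$ and collapses for $p \gg \Lambda_{\rm f}$, since a quasi-particle there only samples the (already tiny) condensate within $\Lambda_{\rm f}$ of its own momentum.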
This UV behavior tremendously simplifies considerations of the energy benefit from condensation effects. Due to decoupling from condensation effects in the UV region, the normalized energy density, in which the energy density without condensation is subtracted out, is dominated by the IR contributions. Also, the couplings between the IR and the UV regions are allowed only in the limited region constrained by the form factor effects. Thus, we can proceed with our calculations independently of details of the UV physics. At finite density, the same argument applies if we replace the Dirac sea with the Fermi sea, namely, the excitation energy $\epsilon(\vec{p})$ is measured from the Fermi surface instead of the vacuum. Then it follows that dominant contributions to the condensation come from fermions near the Fermi surface rather than those near vacuum. \subsection{Bosonization of the Model} Following usual treatments, let us introduce the auxiliary fields and formally rewrite the action. Although this formal treatment is not mandatory, it would be practically convenient to use several relations derived in this framework. Since our four-Fermi interaction is not of the separable type, we have to modify conventional treatments slightly as \begin{equation} -\int \mathrm{d} x_0 \! \int \! \frac{\mathrm{d} \vec{q}}{ (2\pi)^2 } ~ \Phi(\vec{q})\, \Phi(-\vec{q}) \rightarrow -\int \mathrm{d} x_0 \int_{q,p,k} \Phi(\vec{q};\vec{p}) ~ \theta_{p,k} ~ \Phi(-\vec{q};\vec{k})\,. \end{equation} Here we did not write $x_0$ dependence of the auxiliary boson field, $\Phi$, explicitly for notational simplicity. Next we shift the boson fields to generate a four-Fermi interaction, \begin{equation} \Phi(\vec{q};\vec{p}) \rightarrow \Phi(\vec{q};\vec{p}) + \bar{\psi}(\vec{p}+\vec{q}) \psi(\vec{p})\,, \end{equation} then, by adding these trivial terms in $\Phi$, we can eliminate the four-Fermi interactions from the original action to obtain a Yukawa-type model. 
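To see the cancellation explicitly, suppress the coupling constant and the momentum arguments as above; the shift then completes the square schematically (this rendering is ours) as
\begin{equation}
-\big( \Phi + \bar{\psi} \psi \big)\, \theta\, \big( \Phi + \bar{\psi} \psi \big) = -\, \Phi\, \theta\, \Phi \,-\, 2\, \Phi\, \theta\, \big( \bar{\psi} \psi \big) \,-\, \big( \bar{\psi} \psi \big)\, \theta\, \big( \bar{\psi} \psi \big)\,,
\end{equation}
so that the last term cancels the four-Fermi interaction of the original action, while the cross term becomes the Yukawa-type vertex (up to factors absorbed into the normalization of $\Phi$).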
The form of the vertex is \begin{equation} - \int \mathrm{d} x_0 \! \int_{q,p,k} \Phi(\vec{q}; \vec{p}) ~ \theta_{p,k}~ \bar{\psi}(\vec{k}) \psi(\vec{k} + \vec{q})\,. \label{massvertex} \end{equation} We can obtain the gap equation from $0=\int {\mathcal D} \Phi {\mathcal D} \Psi {\mathcal D} \bar{\Psi} ~ \frac{\delta}{\delta \Phi(\vec{q};\vec{p})} \mathrm{e}^{iS}$ and find, \begin{equation} \int \frac{\mathrm{d}\vec{k}}{(2\pi)^2}~~ \theta_{p,k} \; \big\{ \langle \Phi(-\vec{q}; \vec{k}) \rangle + \langle \bar{\psi}(\vec{k}) \psi(\vec{k} + \vec{q}) \rangle \big\} =0\,. \end{equation} Since the equation should hold at arbitrary $\vec{p}$, we conclude that $\langle \Phi(-\vec{q}; \vec{k}) \rangle = - \langle \bar{\psi}(\vec{k}) \psi(\vec{k} + \vec{q}) \rangle ~(\ge0)$, which is independent of $x_0$. The relation between the mass self-energy and $\Phi$ can be read off from the coefficient of $\bar{\psi} \psi$ in Eq.~(\ref{massvertex}), i.e. \begin{equation} \Sigma_m(\vec{q};\vec{k}) = \int \frac{\mathrm{d}\vec{p}}{(2\pi)^2} ~ \Phi(\vec{q}; \vec{p}) ~ \theta_{p,k} \,. \end{equation} In the stationary phase approximation of the $\Phi$-integral (which becomes exact in the large-$N_{\rm c}$ limit), we can replace $\Phi$ with $\langle \Phi \rangle$. One can check that substituting the gap equation for $\Phi$ and taking $\vec{q} =\vec{0}$ (for uniform condensates) reproduces the Schwinger-Dyson equation in Eq.~(\ref{SDeq1}). Instead of computing self-consistent solutions precisely, let us consider what kind of Ansatz would capture basic properties of stationary solutions of $\Phi(\vec{q};\vec{p})$. We will apply such arguments to the more complicated case at finite density where condensation effects are important near the Fermi surface. In the vacuum case, we already know qualitative behaviors of $\Sigma_m(\vec{q}; \vec{p})$ at high momenta from the previous arguments. 
Its value damps as $|\vec{p}| \rightarrow \infty$, leading to damping of $\Phi(\vec{q};\vec{p})$, as seen in the gap equation. If we denote the characteristic scale of damping as $\Lambda_{\rm c}$, presumably the simplest Ansatz would be \begin{equation} \Phi(\vec{q};\vec{p}) = \Delta \cdot \theta ( \Lambda_{\rm c}^2 - \vec{p}^{~2} )\; \delta(\vec{q})\,. \end{equation} See also Fig.~\ref{fig:density}. In principle, not only $\Delta$, but also $\Lambda_{\rm c}$ should be treated as variational parameters as functions of $\Lambda_{\rm f}$, which is the intrinsic scale of our model. It might be helpful to mention the relationship between this Ansatz and the usual spatial-momentum cutoff scheme. In our model, the conventional treatment of the NJL model is recovered by taking $\Lambda_{\rm f} \rightarrow \infty$ with $\Lambda_{\rm c}$ kept fixed to be the NJL-model cutoff $\Lambda_{{\rm NJL}} \sim 600\:\text{MeV}$, which semi-quantitatively describes low-energy properties of the hadron phenomenology. Such successes of the NJL model imply that once we correctly cut off {\it the domain of condensation}, we should be able to express the low-energy dynamics quantitatively. In our model, this cutoff scale should be dynamically determined by adjusting the value of $\Lambda_{\rm f}$. Adopting this interpretation, the effective cutoff of the NJL model should be measured from the Fermi surface, not from zero momentum, because condensation effects mainly appear near the Fermi surface. This leads to a picture in which chiral symmetry is restored deep inside the Fermi sea but is broken near the Fermi surface. \subsection{Spinor Representations in (2+1) Dimensions} \label{Spinors} In this work we discuss a (2+1)-dimensional theory instead of a (3+1)-dimensional one because considerations of the shape of the Fermi surface are much simpler. On the other hand, in (2+1) dimensions, there is no chirality in a strict sense. 
Although this fact does not modify our main considerations, we shall give a brief remark on these special properties in (2+1) dimensions. A spinorial representation of the Lorentz group $SO(2,1)$ is provided by two-component spinors, with a $2\times 2$ representation of the Dirac algebra given in terms of the Pauli matrices \begin{equation} \gamma_0 = \sigma_2~,~~ \gamma_1 = i \sigma_3~,~~ \gamma_2 = i \sigma_1~. \end{equation} There is no other $2\times 2$ matrix anticommuting with these $\gamma_\mu$, so that $\gamma_5$ cannot be defined. Therefore we use a four-component spinor for which $\gamma_5$ can be defined. To get some feeling, one can imagine that fermions in (3+1) dimensions are restricted to a (2+1)-dimensional space by imposing some external condition, as done for the Kaluza-Klein reduction. Then the $\gamma$ matrices for four-component spinors can be taken in the same way as (3+1)-dimensional $\gamma$ matrices. We expect that this prescription is the easiest way to directly convert our (2+1)-dimensional manipulations into higher dimensional ones. For further discussion of (2+1)-dimensional chirality, see e.g.\ Ref.~\cite{Appelquist:1986fd}. \section{Decomposition of the Lagrangian into Multiple Patch Domains} \label{secdecomp} In this section, we decompose the NJL Lagrangian into different segments. For the sake of simplicity, we begin with the Lagrangian with one flavor, \begin{equation} {\mathcal L} = \bar{\psi}\, \mathrm{i}\, \Slash{\partial} \psi + \frac{G}{N_{\rm c}} \big( (\bar{\psi} \psi)^2 + (\bar{\psi}\, \mathrm{i} \gamma_5 \psi)^2 \big)\,, \end{equation} where we explicitly factor out the $N_{\rm c}$ dependence of the interaction, so that $G =O(N_{\rm c}^0)$. In the (2+1)-dimensional system, $G$ is dimensionful, and we take it to be $\sim \Lambda_{\rm f}^{-1}$. Our Lagrangian has continuous chiral symmetry, $U(1)_{\rm L} \times U(1)_{\rm R}$. 
In this work we ignore the $U(1)_{\rm A}$ problem, which is of $O(1/N_{\rm c})$ and thus negligible here. For the moment, we will not write the form factor explicitly for notational simplicity. We introduce the unit vectors $\vec{n}_{i}$ which point to the center of the $i$-th patch, and the unit vector $\vec{n}_{i\perp}$ which is orthogonal to $\vec{n}_{i}$. We project out the spatial components of vectors generically as \begin{equation} p_{i \parallel} = \vec{n}_i \cdot \vec{p}\,, \qquad p_{i \perp} = \vec{n}_{i\perp} \cdot \vec{p}\,. \end{equation} With this definition, fermion fields can be decomposed into $N_{\rm p}$ momentum domains, \begin{align} \hspace{-0.5cm} \psi(x) &= \int \!\frac{\mathrm{d}^3 p}{(2\pi)^3}~ \tilde{\psi}(p)\, \mathrm{e}^{-\mathrm{i} p\cdot x} \notag \\ &= \frac{1}{(2\pi)^3} \sum_{i=1}^{N_{\rm p}} \int^{\infty}_{-\infty} \mathrm{d} p_0 \int_{-\infty}^{\infty} \mathrm{d} p_{i \parallel} \int_{-p_{i\parallel} \tan \Theta }^{p_{i\parallel} \tan \Theta} \mathrm{d} p_{i\perp}\; \tilde{\psi}(p) \, \mathrm{e}^{-\mathrm{i} p\cdot x} \notag\\ &\equiv \sum_{i=1}^{N_{\rm p}} \psi_i(x) \,. \end{align} It is worth mentioning here that our formulation has potential relevance to analytic approaches based on the high-density effective theory \cite{Hong:1998tn}. A single patch includes a set of two domains with $p_{i \parallel} >0$ and $p_{i \parallel} <0$ for the positive energy states, in a similar way to the (1+1)-dimensional problem. Thus, the angle for one patch is $2\times 2\Theta$, and $\Theta$ should satisfy \begin{equation} 4\Theta \, N_{\rm p} = 2 \pi\,. \end{equation} Let us first decompose the free part of the Lagrangian. Since it is diagonal in momentum space, we have \begin{equation} {\mathcal L}^{\rm kin} = \sum_i \bar{\psi}_i \, i \, \Slash{\partial} \psi_i \equiv \sum_i {\mathcal L}^{\rm kin}_i\,. 
\end{equation} The decomposition of the four-Fermi interactions ${\mathcal L}^{\rm int}$ is much more cumbersome, since the interaction terms combine different domains. Written explicitly, it reads \begin{align} {\mathcal L}^{\rm int}&= \frac{G}{N_{\rm c}} \big( (\bar{\psi} \psi)^2 + (\bar{\psi}\, \mathrm{i} \, \gamma_5 \psi)^2 \big) \notag\\ &= \frac{G}{N_{\rm c}} \sum_{i,j,k,l} \big( (\bar{\psi}_i \psi_j)(\bar{\psi}_k \psi_l) + (\bar{\psi}_i \,\mathrm{i} \, \gamma_5 \psi_j) (\bar{\psi}_k \,\mathrm{i} \, \gamma_5 \psi_l) \big)\,. \end{align} It can be decomposed into four types of interaction terms, \begin{align} \sum_i {\mathcal L}_i^{\rm int} &= \frac{G}{N_{\rm c}} \sum_i \big( (\bar{\psi}_i \psi_i)^2 + (\bar{\psi}_i \,\mathrm{i} \gamma_5 \psi_i)^2 \big) \,, \notag \\ \sum_{i\neq j} {\mathcal L}_{i,j}^{\rm int} &= \frac{G}{N_{\rm c}} \sum_{i\neq j} \big( (\bar{\psi}_i \psi_i)(\bar{\psi}_j \psi_j) + (\bar{\psi}_i \, \mathrm{i} \gamma_5 \psi_i) (\bar{\psi}_j \, \mathrm{i} \gamma_5 \psi_j) \big) \,, \notag \\ \sum_{i, j\neq k} {\mathcal L}_{i,jk}^{\rm int} &= \frac{2G}{N_{\rm c}} \sum_{i, j\neq k} \big( (\bar{\psi}_i \psi_i)(\bar{\psi}_j \psi_k) + (\bar{\psi}_i \,\mathrm{i} \gamma_5 \psi_i) (\bar{\psi}_j \,\mathrm{i} \gamma_5 \psi_k) \big) \,, \notag \\ \sum_{i\neq j, k\neq l} {\mathcal L}_{ij,kl}^{\rm int} &= \frac{G}{N_{\rm c}} \sum_{i\neq j, k\neq l} \big( (\bar{\psi}_i \psi_j)(\bar{\psi}_k \psi_l) + (\bar{\psi}_i \,\mathrm{i} \gamma_5 \psi_j) (\bar{\psi}_k \,\mathrm{i} \gamma_5\psi_l) \big) \,. \end{align} Except for the first line, the interaction terms involve fermions belonging to different patches. 
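The bookkeeping of the patch decomposition can be illustrated with a small script (ours; the function name and conventions are not from the paper): every momentum direction belongs to exactly one patch, a patch is the union of two opposite wedges, and $4\Theta N_{\rm p} = 2\pi$.

```python
import math

def patch_index(px, py, Np):
    # Half-opening angle Theta from 4 * Theta * Np = 2 pi; each wedge spans
    # a polar-angle interval of width 2 * Theta, and opposite wedges are
    # identified into one patch by reducing the angle modulo pi.
    theta = math.pi / (2.0 * Np)
    ang = math.atan2(py, px) % math.pi
    return int(ang // (2.0 * theta)) % Np

Np = 8
for n in range(1000):
    a = 1e-3 + 2.0 * math.pi * n / 1000.0   # small offset keeps test angles off wedge boundaries
    i = patch_index(math.cos(a), math.sin(a), Np)
    assert 0 <= i < Np                      # the wedges tile all directions
    assert i == patch_index(-math.cos(a), -math.sin(a), Np)  # opposite wedge, same patch
```

The same index function would be used to sort the momentum modes $\tilde{\psi}(p)$ into the $\psi_i$ above before assembling the one-patch and patch-patch interaction terms.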
Now we have the Lagrangian separated into one-patch and patch-patch interactions, ${\mathcal L} = \sum_i {\mathcal L}^{\text{1-patch}}_i + \Delta {\mathcal L}^{\rm int}$, where \begin{equation} {\mathcal L}^{\text{1-patch}}_i = {\mathcal L}^{\rm kin}_i + {\mathcal L}^{\rm int}_i \,, \quad \Delta {\mathcal L}^{\rm int} = \sum_{i\neq j} {\mathcal L}_{i,j}^{\rm int} + \sum_{i, j\neq k} {\mathcal L}_{i,jk}^{\rm int} + \sum_{i\neq j, k\neq l} {\mathcal L}_{ij,kl}^{\rm int} \,. \end{equation} The one-patch Lagrangian will play the dominant role for condensation effects. We will first solve the one-patch problem, and then include contributions from other patches perturbatively. \section{The One-Patch Problem at the Mean Field Level} \label{seconepatch} In this section, we provide a formal treatment of the one-patch Lagrangian. We consider sufficiently high densities, for which the interaction scale $\sim \Lambda_{\rm f}$ is much larger than the transverse kinetic energy near the Fermi surface, which is suppressed as $\sim \vec{p}_\perp^{~2}/Q$ with $Q \sim p_{\rm F}$. Then the fermion dispersion relation is robust and does not depend on the transverse momentum; corrections from spatial dimensions beyond the (1+1)-dimensional problem are generally suppressed by extra powers of $1/Q$. Therefore, the gap equation for the mean field effectively becomes (1+1) dimensional. Here we analyze this quasi-(1+1)-dimensional problem in detail. We first summarize some convenient notations used in (1+1) dimensions. They are useful for identifying the dominant and subdominant terms for the chiral spiral formation. Then we bosonize the dominant part of the four-Fermi interactions, and construct the mean-field Lagrangian for the chiral spirals. We prepare the mean-field propagator and write down the gap equation. The results in this section are the basis for the perturbation theory, which treats the subdominant terms ignored at the mean-field level.
\subsection{Preliminaries} In (1+1)-dimensional models, the chirality (i.e.\ the eigenvalue of $\gamma_5$) characterizes the moving directions of particles. In higher dimensions, the corresponding $\gamma$-matrix is not $\gamma_5$, but $\Gamma_{i5} \equiv \gamma_0 \gamma_{i\parallel}$ for particles moving in the $x_{i\parallel}$-direction. We define \begin{equation} \psi_{i \pm} \equiv \frac{1 \pm \Gamma_{i5} }{2} \psi_i \,. \end{equation} This $\Gamma_{i5}$ satisfies the following algebraic relations: \begin{equation} (\Gamma_{i5})^2=1\,,\quad \{\Gamma_{i5}, \gamma_{i 0} \}=0\,,\quad \{\Gamma_{i5}, \gamma_{i \parallel} \}=0\,,\quad [\Gamma_{i5}, \gamma_{i \perp} ]=0\,. \end{equation} The free Lagrangian can be decomposed into two pieces. The longitudinal part is defined with $(+,+)$ or $(-,-)$ combinations\footnote{Our definition is $\bar{\psi} = \psi^\dag \gamma^0$. Our metric is $g_{\mu \nu} = g^{\mu \nu} = {\rm diag}(1,-1,-1)$.}; \begin{equation} {\mathcal L}^{\rm kin}_{i\parallel} = \psi^\dag_{i+} \,\mathrm{i}(\partial_0 - \partial_{i\parallel}) \psi_{i+} + \psi^\dag_{i-} \,\mathrm{i}(\partial_0 + \partial_{i\parallel}) \psi_{i-} \,, \end{equation} and the part made of $(+,-)$ combinations is \begin{equation} {\mathcal L}^{\rm kin}_{i\perp} = \bar{\psi}_{i+} \,\mathrm{i}\,\Slash{\partial}_\perp \psi_{i-} + \bar{\psi}_{i-} \,\mathrm{i}\,\Slash{\partial}_\perp \psi_{i+} \,. \end{equation} Below, we will drop the index $i$ as long as no confusion arises. At finite density with the Fermi momentum $Q$, it is natural to measure momenta of fermions from the Fermi surface. Accordingly, we take fields with shifted momenta\footnote{We will use lower-index expressions for momenta, and $Q$ should be interpreted as a lower-index component. When we write a vector, it denotes the lower-index components.
For instance, $\vec{q} = (q_1, q_2)$ and $\vec{x} = (x_1, x_2)$.}, \begin{equation} \psi_\pm (x) = \mathrm{e}^{\mathrm{i}\, Q x_\parallel \Gamma_5} \psi'_\pm (x) = \mathrm{e}^{\pm \mathrm{i}\, Q x_\parallel} \psi'_\pm (x) \end{equation} or \begin{equation} \psi'_\pm ( \delta \vec{p}) = \psi_\pm (\delta p_\parallel \pm Q, p_\perp) \,, \qquad \big(\delta \vec{p} = (\delta p_\parallel, p_\perp) \big)\,, \end{equation} in momentum space. We use the notation $\delta \vec{p}$ to emphasize that momenta of $\psi'$ field are measured from the Fermi surface. Using the $\psi'$ field, one can easily identify dominant and subdominant terms at large density. In the $\psi'$-representation, the longitudinal part becomes\footnote{ In the grand canonical ensemble the basis $\psi'$ with $Q=\mu_q$ eliminates the chemical potential term reflecting that in the $\psi'$-representation we can deal with dynamics near the Fermi surface as in vacuum. This simple logic is not directly applicable in the canonical ensemble since the density constraint does not explicitly appear at the Lagrangian level. } \begin{equation} {\mathcal L}^{\rm kin}_{\parallel} \rightarrow \psi'^\dag_{+} \big[ \mathrm{i}(\partial_0 - \partial_{\parallel}) - Q \big] \psi'_{+} + \psi'^\dag_{-} \big[ \mathrm{i}(\partial_0 + \partial_{\parallel}) - Q \big] \psi'_{-}\,, \end{equation} and the transverse kinetic term and the mass term acquire the oscillating factors, \begin{equation} {\mathcal L}^{\rm kin}_{\perp} \rightarrow \bar{\psi}'_{+} \,\mathrm{i}\,\Slash{\partial}_\perp \psi'_{-}\ \mathrm{e}^{-2\,\mathrm{i} Q x_\parallel} + \bar{\psi}'_{-} \,\mathrm{i}\,\Slash{\partial}_\perp \psi'_{+}\ \mathrm{e}^{2\,\mathrm{i} Q x_\parallel}\,. \end{equation} Such oscillatory terms are suppressed near the Fermi surface by powers of $1/Q$. 
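The $1/Q$ suppression of such oscillatory terms can be illustrated with a small numerical sketch: integrating a smooth envelope against $\mathrm{e}^{2\mathrm{i} Q x_\parallel}$ yields a result that dies off rapidly with growing $Q$. The Gaussian envelope here is our own stand-in for slowly varying fields near the Fermi surface; it is not part of the model.

```python
import numpy as np

def oscillating_integral(Q, width=1.0, x_max=20.0, n=200001):
    # |int dx f(x) exp(2 i Q x)| for a smooth envelope f(x) = exp(-(x/width)^2),
    # a hypothetical stand-in for slowly varying fields near the Fermi surface.
    x = np.linspace(-x_max, x_max, n)
    f = np.exp(-(x / width) ** 2)
    integrand = f * np.exp(2j * Q * x)
    return abs(np.sum(integrand) * (x[1] - x[0]))
```

Increasing $Q$ by a modest factor already suppresses the integral by orders of magnitude, which is the mechanism that renders the oscillating terms subdominant.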
In the free theory, in fact, the excitation energy at $|\delta p_\parallel|\ll Q$ is \begin{equation} \epsilon^{{\rm free}}( \delta \vec{p} ) = \sqrt{ (Q + \delta p_\parallel)^2 + p_\perp^2} - Q = |\delta p_\parallel| + \frac{ \delta p_\parallel^2 + p_\perp^2 }{2 Q} +\cdots \,. \end{equation} Terms with oscillating factors define what we call ``subdominant'' terms. The four-Fermi interactions can also be separated into dominant and subdominant terms, \begin{align} (\bar{\psi} \psi)^2 &= \frac{1}{2} \big( (\bar{\psi} \psi)^2 + (\bar{\psi}\,\mathrm{i}\, \Gamma_5 \psi)^2 \big) + \frac{1}{2} \big( (\bar{\psi} \psi)^2 - (\bar{\psi}\,\mathrm{i}\, \Gamma_5 \psi)^2 \big) \notag \\ &= 2\,(\bar{\psi}_+ \psi_- ) (\bar{\psi}_- \psi_+ ) + \big( (\bar{\psi}_- \psi_+ )^2 + (\bar{\psi}_+ \psi_- )^2 \big) \notag \\ &\longrightarrow~ 2\,(\bar{\psi}'_+ \psi'_- ) (\bar{\psi}'_- \psi'_+ ) + (\bar{\psi}'_- \psi'_+ )^2 \mathrm{e}^{4\,\mathrm{i} Q x_\parallel} + (\bar{\psi}'_+ \psi'_- )^2 \mathrm{e}^{-4\,\mathrm{i} Q x_\parallel} \,. \label{eq4fermi'1} \end{align} The first term corresponds to the continuous symmetric part, which becomes IR dominant at high density. We will apply the mean-field Ansatz to the dominant terms, while the subdominant terms are treated as a perturbation. In this treatment, a gap will be found only near the Fermi surface. The gaps will not open periodically in momentum space because of the absence of different harmonics\footnote{ The current problem is different from the problem of the Peierls instability with an external periodic potential. In our case the coupling between the mean field and particles depends on the $\pm$ combination. For instance, the mean field $\langle \bar{\psi}_+ \psi_- \rangle = \Delta \mathrm{e}^{-2\,\mathrm{i} Qx_\parallel}$ can scatter the particles from the $+$ region to the $-$ region, but cannot scatter from the $-$ region to the $+$ region.
In this way particles and holes are kept around the Fermi surface, without going to the higher harmonic regions.}. Finally, for later convenience, we write Eq.~(\ref{eq4fermi'1}) in momentum space including the form factor explicitly. For $(+-)(-+)$ combinations of the dominant part, we have \begin{align} & \int_{q,p,k} \big( \bar{\psi}_+( \vec{p}+\vec{q} ) \psi_-( \vec{p} ) \big)~ \big( \bar{\psi}_-( \vec{k}) \psi_+( \vec{k}+\vec{q}) \big)~ \theta_{p,k} \notag \\ = & \int_{q, \delta p, \delta k} \big( \bar{\psi}'_+( \delta \vec{p}+\vec{q} -2Q \vec{n} ) \, \psi'_-( \delta \vec{p} ) \big)\, \big( \bar{\psi}'_-( \delta \vec{k}) \, \psi'_+( \delta \vec{k}+\vec{q} - 2Q \vec{n} ) \big)\, \theta_{\delta p, \delta k}\,. \label{eq4fermi+-1} \end{align} Note that if $\vec{q} \simeq 2Q \vec{n}$, all fields can be close to the Fermi surface simultaneously when $\delta \vec{p} \sim \delta \vec{k} \sim \vec{0}$. This is the reason why the configuration with $\vec{q} \sim 2Q\vec{n}$ becomes dominant in the path integral. For this reason, we should choose the wave vector of the chiral spirals to be $\vec{q} = 2Q\vec{n}$ in the high-density limit. This expression also indicates that the exciton-type condensation (i.e.\ homogeneous chiral condensation) with $\vec{q} = \vec{0}$ is not favored energetically. On the other hand, for $(+-)(+-)$ combinations of the subdominant part, we have \begin{align} & \int_{q,p,k} \big( \bar{\psi}_+( \vec{p}+\vec{q} ) \psi_-( \vec{p} ) \big)~ \big( \bar{\psi}_+( \vec{k}) \psi_- ( \vec{k}+\vec{q}) \big)~ \theta_{p,k} \notag \\ = & \int_{q, \delta p, \delta k} \big( \bar{\psi}'_+( \delta \vec{p}+\vec{q} -2Q \vec{n} ) \psi'_-( \delta \vec{p} ) \big)~ \big( \bar{\psi}'_+ ( \delta \vec{k}) \psi'_- ( \delta \vec{k}+\vec{q} + 2Q \vec{n} ) \big)~ \notag \\ & \qquad ~\times ~\theta \big( \Lambda_{\rm f}^2 - ( \delta \vec{p} - \delta \vec{k} - 2Q \vec{n} )^2 \big)\,. 
\label{eq4fermi+-2} \end{align} In contrast to the dominant terms, the subdominant interactions require that at least one fermion go far away from the Fermi surface, regardless of the choice of $\vec{q}$. The propagation of such a fermion provides the $1/Q$ suppression of the quantum corrections. \subsection{Formal Treatment: Bosonization} Now let us introduce the mean-field Ansatz for one patch. The auxiliary-field method may be applied as before. There are two slight modifications to the standard approach. One is that our boson field is introduced as a complex field, since we need to eliminate the quark bilinears $\bar{\psi}_+ \psi_-$ and $\bar{\psi}_- \psi_+$, which carry complex phase factors opposite to each other. The other is that the boson fields are introduced to replace only the dominant four-Fermi interactions. Inserting an identity into the original partition function, we add the bosonic terms, \begin{equation} {\mathcal S}_\Phi = - \frac{N_{\rm c}}{2G}~ \int \mathrm{d} x_0 \int_{p,q,k} \Phi^\dag( \vec{q}; \vec{p}, x_0) ~\theta_{p,k}~ \Phi (\vec{q}; \vec{k}, x_0)\,, \label{alboson} \end{equation} to the original action. For the moment, we will explicitly write the $x_0$ coordinate.
We replace the dominant four-Fermi interaction with a Yukawa-type vertex by shifting the boson fields, \begin{equation} \begin{split} \Phi^\dag ( \vec{q}; \vec{p}, x_0) ~&\longrightarrow ~ \Phi^\dag ( \vec{q}; \vec{p}, x_0) + \frac{2G}{N_{\rm c}} ~\bar{\psi}_+ ( \vec{p} + \vec{q}, x_0) ~ \psi_- (\vec{p}, x_0)\,, \\ \Phi (\vec{q}; \vec{k}, x_0) ~&\longrightarrow ~ \Phi ( \vec{q}; \vec{k}, x_0 ) + \frac{2G}{N_{\rm c}} ~\bar{\psi}_- ( \vec{k}, x_0 ) ~ \psi_+ ( \vec{k} + \vec{q}, x_0 )\,, \end{split} \end{equation} after which the Yukawa vertex reads \begin{align} \hspace{-1em} {\mathcal S}_{\Phi,\psi} = & - \int \mathrm{d} x_0 \int_{p,q} \Big[~\int_k \theta_{p,k}~ \Phi (\vec{q}; \vec{k}, x_0) ~\Big] ~\bar{\psi}_+ (\vec{p}+\vec{q}, x_0 ) ~ \psi_- (\vec{p}, x_0 ) \notag \\ & \quad - \int \mathrm{d} x_0 \int_{p,q} \Big[~ \int_k \theta_{p,k}~ \Phi^\dag (\vec{q}; \vec{k}, x_0) ~\Big] ~\bar{\psi}_- (\vec{p}, x_0 ) ~ \psi_+ (\vec{p} + \vec{q}, x_0 )\,, \end{align} where the terms inside $[\cdots]$ in the first and second lines are the self-energies $\Sigma_m^\dag (-\vec{q}; \vec{p})$ and $\Sigma_m(\vec{q}; \vec{p})$, respectively. The equations of motion are \begin{align} &\langle \Phi (\vec{q};\vec{k}) \rangle = - \frac{2G}{N_{\rm c}} ~\langle \bar{\psi}_- (\vec{k}, x_0 ) ~ \psi_+ (\vec{k} + \vec{q}, x_0 ) \rangle\,, \notag \\ &\langle \Phi^\dag (\vec{q};\vec{k}) \rangle = - \frac{2G}{N_{\rm c}} ~\langle \bar{\psi}_+ (\vec{k}+\vec{q}, x_0 ) ~ \psi_- (\vec{k}, x_0 ) \rangle\,. \end{align} The RHS will turn out to be $x_0$-independent, so we will no longer write the $x_0$ coordinate in $\langle \Phi (\vec{q};\vec{k}) \rangle$ and $\langle \Phi^\dag (\vec{q};\vec{k}) \rangle$. Below we will not explicitly write the $x_0$ dependence of the fermion fields, for notational simplicity.
\subsection{Mean Field for Chiral Spirals and the Quasi-Particle Spectrum} According to the arguments around Eq.~(\ref{eq4fermi+-1}), in the high-density limit the Ansatz for chiral spirals in the $i$-th patch is\footnote{ With our definition in Eq.~(\ref{alboson}), $\Phi^\dag(\vec{q};\vec{k})$ actually carries momentum $-\vec{q}$.} \begin{equation} \Phi_{0} (\vec{q};\vec{k}) = (2\pi)^2 \delta ( \vec{q} - 2Q \vec{n}_i ) \, \Delta (\vec{k})\,, \qquad \Phi_{0}^\dag (\vec{q};\vec{k}) = (2\pi)^2 \delta ( \vec{q} - 2Q \vec{n}_i ) \, \Delta (\vec{k})\,, \end{equation} where $\Delta$ is a real field which characterizes the magnitude of the condensate. We discuss in Appendix~\ref{vqneq2Q} what would happen if we chose a different $\vec{q}$. The Fourier transformation with respect to the total momentum of the boson fields gives the expression, \begin{equation} \Phi_{0} (\vec{x};\vec{k}) = \Delta(\vec{k}) ~\mathrm{e}^{2\,\mathrm{i} Q x_\parallel}\,, \qquad \Phi^\dag_{0} (\vec{x};\vec{k}) = \Delta(\vec{k}) ~\mathrm{e}^{-2\,\mathrm{i} Q x_\parallel}\,, \end{equation} from which we can see that the complex nature of the boson fields is taken into account in the phase factor. For later convenience, let us define the mass gap function, \begin{equation} M (\vec{p}) \equiv \int_k~ \theta_{p,k}~ \Delta (\vec{k}) \,, \end{equation} which will be determined self-consistently after constructing the mean-field propagator. Using this shorthand notation and shifting the momentum $\vec{p} \rightarrow \vec{p} + Q \vec{n}_i$, the mass vertex becomes \begin{align} & {\mathcal S}_{\Phi,\psi} \rightarrow {\mathcal S}_M = - \int \mathrm{d} x_0 \int_p \big(\, M(\vec{p}-Q\vec{n}_i )~ \bar{\psi}_+ (\vec{p} + Q \vec{n}_i ) \psi_- (\vec{p} - Q \vec{n}_i) \notag \\ & \qquad\qquad\qquad + M (\vec{p} - Q \vec{n}_i ) ~ \bar{\psi}_- (\vec{p} - Q \vec{n}_i ) \psi_+ (\vec{p} + Q \vec{n}_i ) ~\big) \notag \\ &\; = - \int \mathrm{d} x_0 \!
\int_{\delta p} \big(\, M' (\delta\vec{p} )~ \bar{\psi}'_+ ( \delta \vec{p} ) \psi'_- ( \delta \vec{p} ) + M' (\delta\vec{p} )~ \bar{\psi}'_- ( \delta \vec{p} ) \psi'_+ ( \delta \vec{p} ) ~\big) \,. \end{align} In the last line of the above equation, we have replaced the loop momentum $\vec{p}$ with $\delta \vec{p}$ and have defined $M'(\delta \vec{p}) \equiv M(\delta \vec{p} - Q \vec{n}_i)$. Here it is very important to notice that although our inhomogeneous condensate breaks the translational invariance, the momentum measured from the Fermi surface, $\delta \vec{p}$, is a conserved quantity. This is true as long as we include only the dominant terms. The conservation of $\delta \vec{p}$ is violated by corrections such as the transverse kinetic terms or the subdominant parts of the four-Fermi interactions, but these are subleading effects suppressed by powers of $1/Q$. At leading order, thanks to the conservation of $\delta \vec{p}$, the eigenvalue problem is diagonal in momentum space. Then we can formally derive the mean-field spectrum of quasi-particles using the mass function $M'(\delta\vec{p})$. The eigenvalue problem for the longitudinal plus mass terms is\footnote{ When we write $2\times 2$ matrix expressions, each element is understood to be proportional to the $2\times 2$ identity matrix, ${\bf 1}_{2\times 2}$.} \begin{equation} \Psi'^\dag \begin{pmatrix} \delta p_\parallel + Q ~&~ M'(\delta \vec{p}) \\ M' (\delta \vec{p}) ~&~ - \delta p_\parallel + Q \end{pmatrix} \Psi'(\delta \vec{p} ) = E_{{\rm MF}} (\delta \vec{p} )~ \big( \Psi'^\dag \Psi'( \delta \vec{p} ) \big) \,, \end{equation} where the notation $\Psi' (\delta \vec{p}) = \big(\psi'_{+}(\delta \vec{p}), \psi'_{-}(\delta \vec{p} ) \big)^T$ is introduced.
The eigenvalue has upper and lower branches, \begin{equation} E_{ {\rm MF} } (\delta \vec{p}) = Q \pm \omega(\delta \vec{p} )\,, \qquad \big(~ \omega(\delta \vec{p} ) = \sqrt{ \delta p_\parallel^2 + M' (\delta \vec{p} )^2 } ~\big) \,, \end{equation} where the gap opens at $|p_\parallel|=Q$, and its influence survives up to distance $\sim \Lambda_{\rm c}$ from the Fermi surface. (For the case that the chiral spiral has a wave vector $Q'\neq Q$, see Appendix~\ref{vqneq2Q}.) Since the energy level for particles with a mass gap is pushed down as compared to free particles, the energy gain inside of one patch is $\sim M \times \text{(phase space)} = M \times \Lambda_{\rm c} Q \tan \Theta$. Also, note that the phase space does not change before and after the formation of the mass gap, so that the Fermi volume conservation is automatically satisfied. It is important to specify the relation between these upper and lower branches and the respective momentum regions. They can be summarized as \begin{equation} E_{\nearrow} = Q + \omega(\delta p_\parallel)\quad (p_\parallel >Q)\,, \qquad E_{\swarrow} = Q - \omega(\delta p_\parallel)\quad (p_\parallel <Q)\,, \end{equation} for states moving to the $+$ direction, and \begin{equation} E_{\nwarrow} = Q + \omega(\delta p_\parallel)\quad (p_\parallel < -Q) \,, \qquad E_{\searrow} = Q - \omega(\delta p_\parallel)\quad (p_\parallel > -Q) \,, \end{equation} for states moving to the $-$ direction. See Fig.~\ref{fig:singleparticleE} for a graphical summary. In Appendix~\ref{vqneq2Q}, we repeat calculations for $\vec{q} \neq 2Q\vec{n}$, and explain that at sufficiently high density, the choice $\vec{q} = 2Q\vec{n}$ is the best way to minimize the single-particle contributions with the fixed particle number constraint. 
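As a quick cross-check of this branch structure, one can diagonalize the $2\times 2$ longitudinal-plus-mass block numerically; with a constant gap $M$ inserted for illustration (our own simplification of $M'(\delta \vec{p})$), the eigenvalues reproduce $E_{\rm MF} = Q \pm \omega$.

```python
import numpy as np

def mf_branches(dp_par, M, Q):
    # 2x2 block in the (psi'_+, psi'_-) basis: [[dp + Q, M], [M, -dp + Q]].
    # Its eigenvalues should be the two branches Q - omega and Q + omega,
    # with omega = sqrt(dp^2 + M^2).
    H = np.array([[dp_par + Q, M], [M, -dp_par + Q]])
    return np.sort(np.linalg.eigvalsh(H))
```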
\begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{0.6}[0.6] { \includegraphics[scale=.30]{singleparticleE.pdf} } \hspace{0.3cm} \scalebox{0.6}[0.6] { \includegraphics[scale=.30]{singleparticleEprime.pdf} } \end{center} \vspace{0.2cm} \caption{ The mean-field single-particle dispersion in the presence of the chiral spirals. The particle orbit is occupied up to $|p_\parallel| \le Q$. The Fermi volume conservation is satisfied before and after the formation of chiral spirals.\ \ (Left) The energy $E_{ {\rm MF} }$ in the $\psi$-representation. The gap opens at the edge of the Fermi surface.\ \ (Right) The energy $\delta E = E_{{\rm MF}} - Q$ in the $\psi'$-representation. } \label{fig:singleparticleE} \vspace{0.2cm} \end{figure} \subsection{The Mean-Field Propagator in the Canonical Ensemble} Let us construct the mean-field propagator for the quasi-particle. We are working in the canonical ensemble with a fixed shape of the Fermi sea, i.e.\ one wedge, filled with quasi-particles up to $|p_\parallel| \le Q$. This can be restated as the condition that all occupied states must take the lower energy branch of the mean-field spectra, $E=Q-\omega$. To take this information into account in the propagator, we only have to adopt a proper $\mathrm{i} \epsilon$ prescription, as we usually do in the vacuum case. The dominant fermion bilinear part of the action (longitudinal plus mass terms) is \begin{equation} \hspace{-1em} {\mathcal S}_{\psi'}^{\parallel+M} \!=\! \int \!\mathrm{d} p_0 \int_{\delta p} \bar{\Psi}' (p_0, \delta \vec{p}) ~\big(\, (p_0-Q) \gamma^0 + \delta p_\parallel \gamma^\parallel - M' (\delta \vec{p}) \,\big) ~\Psi'(p_0, \delta \vec{p}) \,. \end{equation} Note that $Q$ couples to $\gamma^0$, not to $\gamma^\parallel$, as a consequence of the fact that we measure the momenta of $\psi_\pm$ in opposite directions.
Defining $\delta p_0 = p_0 -Q$, our mean-field propagator in the region of condensation is \begin{equation} {\mathcal S}_{{\rm MF}}(\delta p) = \mathrm{i} ~\frac{ \delta p_0 \gamma^0 +\delta p_\parallel \gamma^\parallel + M'( \delta \vec{p}) } {~(\delta p_0)^2 - (\delta p_\parallel)^2 - M'^2 (\delta \vec{p} ) + \mathrm{i} \epsilon~} \cdot \theta_\perp (\delta \vec{p} ) \,, \end{equation} while outside the condensation region it is just the free quark propagator. Here $\mathrm{i}\epsilon$ is introduced as in the vacuum case, in order to distinguish the propagation of particles in the upper and lower energy branches. The last function $\theta_\perp$ takes into account the phase space restriction of one wedge, \begin{align} \theta_\perp(\delta \vec{p} ) ~&\equiv~ \theta \big( |Q + \delta p_\parallel| \tan\Theta - |p_\perp| \big) ~ \theta \big( |Q - \delta p_\parallel| \tan\Theta - | p_\perp | \big) \notag \\ ~&\simeq~ \theta \big( Q \tan \Theta - |p_\perp| \big) \qquad (~ {\rm for}~ Q\gg |\delta p_\parallel| ~) \,. \end{align} This means that both quasi-particles and holes must be within one wedge to create chiral spirals. For the most part, we will use the approximate expression in the second line, provided that $Q$ is sufficiently large. In the following, we will frequently use the component expressions, defined as \begin{equation} {\mathcal S}_{{\rm MF}} (\delta p) = \begin{pmatrix} ~S_{++} ~ &~ S_{+-} ~ \\ ~S_{-+} ~ & ~ S_{--} ~ \end{pmatrix} = \begin{pmatrix} \displaystyle \frac{1+\Gamma_5}{2}~{\mathcal S}_{{\rm MF}} ~\frac{1-\Gamma_5}{2} ~ &~~ \displaystyle \frac{1+\Gamma_5}{2}~{\mathcal S}_{{\rm MF}}~ \frac{1+\Gamma_5}{2} ~ \\[1.6ex] \displaystyle \frac{1-\Gamma_5}{2}~{\mathcal S}_{{\rm MF}} ~\frac{1-\Gamma_5}{2} ~ &~~ \displaystyle \frac{1-\Gamma_5}{2}~{\mathcal S}_{{\rm MF}} ~\frac{1+\Gamma_5}{2} ~ \end{pmatrix}\,. \end{equation} Some useful component expressions are also given in Appendix~\ref{apppro}.
\subsection{The Gap Equation and Its (1+1) Dimensional Character} Since we already have the formal expression of the propagator, we can now write the gap equation explicitly. The equation is (with the trace over the Dirac indices\footnote{ We remind readers that we are using four-component spinors, so the trace gives a factor $4$.}) \begin{align} \hspace{-2ex} M'(\delta \vec{p}) &= 2 G \int \frac{\mathrm{d} \delta k_0}{2\pi} \int_{\delta k} ~\theta_{\delta p,\delta k}~ {\rm tr} \bigg[ \frac{ \mathrm{i} M'( \delta \vec{k}) } {~(\delta k_0)^2 - (\delta k_\parallel)^2 - M'^2 (\delta \vec{k}) + \mathrm{i}\epsilon~} \bigg] \, \theta_\perp (\delta \vec{k} ) \notag \\ &= 4 G \int \frac{\mathrm{d} \delta k_\parallel\, \mathrm{d} \delta k_\perp}{(2\pi)^2} \frac{M'(\delta \vec{k}) } {~\sqrt{ \delta k_\parallel^2 + M'^2(\delta \vec{k})~}~} ~\theta_{\delta p,\delta k} ~\theta_\perp(\delta \vec{k}) \,. \end{align} When we treat this self-consistent equation, we have to distinguish two situations depending on the fermion momenta. One case is that $p_\perp$ resides sufficiently far from the patch boundary. The other case is that $p_\perp$ is so close to the boundary that its mass gap is affected by other patches. In this section, we focus only on the former case without the boundary effects and postpone the discussion of the latter case to later sections, as it goes beyond the one-patch problem. For $\vec{p}_\perp$ sufficiently far from the patch boundary, the restriction $\theta_\perp (\delta \vec{k} )$ is automatically satisfied by the other restriction $\theta_{\delta p, \delta k}$, so the former does not play any essential role. Then the gap equation has (1+1)-dimensional solutions. Indeed, we can find a mass gap function independent of $p_\perp$, \begin{equation} M'_0(\delta \vec{p}) = M'_0 (\delta p_\parallel) \,.
\end{equation} When we look for such a solution, we can factorize the integral over $k_\perp$, which gives the (1+1)-dimensional gap equation, \begin{align} \hspace{-0.5cm} M'_0 (\delta p_\parallel) &= 4 G \int \frac{\mathrm{d} \delta k_\parallel}{2\pi} \frac{M'_0 (\delta k_\parallel ) } {~\sqrt{ \delta k_\parallel^2 + M_0^{\prime~ 2} (\delta k_\parallel ) }~} \int \frac{\mathrm{d} \delta k_\perp}{2\pi} ~\theta_{\delta p,\delta k} \notag \\ &= \frac{4 G}{\pi} \int_{\delta p_\parallel - \Lambda_f}^{ \delta p_\parallel + \Lambda_f} \frac{\mathrm{d} \delta k_\parallel}{2\pi} \frac{M'_0 (\delta k_\parallel ) } {\sqrt{ \delta k_\parallel^2 + M_0^{\prime ~2} (\delta k_\parallel ) }} \cdot \sqrt{ \Lambda_{\rm f}^2 - (\delta p_\parallel - \delta k_\parallel)^2 }\,, \end{align} where the $p_\perp$ dependence disappears from the RHS, as it should. Let us note that this factorization would not yield $p_\perp$-independent solutions if we did not ignore the corrections from the transverse kinetic terms $\sim p_\perp^2/Q$ or from $\theta_\perp(\delta \vec{k})$. If any of these terms becomes relevant, then the $p_\perp$ dependence of the RHS does not disappear in the above manipulations. This means that the (1+1)-dimensional mass function can be obtained only if $Q$ is sufficiently large and $p_\perp$ is not too close to the one-patch boundary. Now let us show that the non-trivial gap arises from the IR effects near the Fermi surface. We shall consider the case with $\delta p_\parallel =0$, and separate the integration region of the RHS into regions below and above a scale $c \Lambda_{\rm f}$, chosen such that the momentum dependence of $M_0'$ can be ignored below $c \Lambda_{\rm f}$.
Assuming that $c \Lambda_{\rm f} < \Lambda_{\rm f}$, we have \begin{align} M'_0 (0) &\simeq \frac{8 G \Lambda_{\rm f}}{\pi} \int_{ 0 }^{ c \Lambda_{\rm f} } \frac{\mathrm{d} \delta k_\parallel}{2\pi} \frac{M'_0 (0) } {~\sqrt{ \delta k_\parallel^2 + M_0^{\prime~ 2}(0) }~} + \text{(finite positive terms)} \notag \\ &= M'_0(0) \cdot \frac{4 G \Lambda_{\rm f}}{\pi^2} \ln \bigg( \frac{ c \Lambda_{\rm f} }{M'_0(0)} \bigg) + \text{(finite positive terms)} \,, \end{align} where the logarithmic term comes from the (1+1)-dimensional character of the equation, and expresses the IR effects. In the same way as for the Cooper instability in superconductivity, we can find the solution of the gap equation from the IR structure of the equation. Indeed, if we take $M_0'(0)$ too small, the logarithmic part becomes arbitrarily large and the RHS greatly exceeds the LHS. Thus $M'_0(0)$ must be taken substantially large, until the IR contributions are tempered to the same order as the LHS. We point out that one oversimplification in the above expression is related to our approximation of ignoring the transverse kinetic terms. If we restore those kinetic terms, the logarithmic part must be modified effectively as \begin{equation} \ln \bigg( \frac{ c \Lambda_{\rm f} }{M'_0(0)} \bigg) ~ \rightarrow ~\ln \bigg( \frac{ c \Lambda_{\rm f} }{M'_0(0) + p_\perp^2/Q} \bigg) \,, \end{equation} which tempers the growth in the IR region. Thus, the gap solution might not be found unless the density is sufficiently large. Finally, as a solution of the gap equation, $M_0'$ is parametrically given as \begin{equation} M_0' \sim \Lambda_{\rm f} ~ \mathrm{e}^{ -C/G\Lambda_{\rm f}} \,, \end{equation} where $C$ is a numerical constant. Within our approximation, the size of the gap should be larger than that in vacuum, because of the larger phase space for low-energy excitations which contribute to the formation of the gap.
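A minimal numerical sketch of the (1+1)-dimensional gap equation illustrates this behavior. We work in schematic units $\Lambda_{\rm f} = 1$; the grid, mixing parameter, and starting point below are our own choices, and the sketch is meant only to show that a non-trivial gap appears and grows with the coupling $G$, consistent with $M_0' \sim \Lambda_{\rm f}\,\mathrm{e}^{-C/G\Lambda_{\rm f}}$.

```python
import numpy as np

def solve_gap(G, Lam=1.0, kmax=3.0, n=601, iters=200, mix=0.5):
    # Fixed-point iteration of
    #   M(p) = (4G/pi) * int dk/(2pi) [M(k)/sqrt(k^2+M(k)^2)] * sqrt(Lam^2-(p-k)^2),
    # with the kernel supported on |p - k| < Lam (schematic units Lam_f = 1).
    k = np.linspace(-kmax, kmax, n)
    dk = k[1] - k[0]
    d = k[:, None] - k[None, :]
    kernel = np.where(np.abs(d) < Lam,
                      np.sqrt(np.clip(Lam**2 - d**2, 0.0, None)), 0.0)
    M = np.full(n, 0.1)           # small seed; M = 0 is an unstable fixed point
    for _ in range(iters):
        rhs = (4 * G / np.pi) * (dk / (2 * np.pi)) * (kernel @ (M / np.sqrt(k**2 + M**2)))
        M = mix * rhs + (1 - mix) * M
    return float(M[n // 2])      # gap at the Fermi surface, delta p_parallel = 0
```

The IR logarithm makes the trivial solution unstable, so the iteration flows to a finite gap for any positive coupling, and a stronger $G$ yields a larger gap.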
\section{Perturbation Theory with Chiral Spiral Mean Fields} \label{formal} In this section we develop a systematic computation of the corrections from the subdominant terms, up to the leading order of the $1/N_{\rm c}$ expansion. Using the stationary phase approximation at large $N_{\rm c}$, the fermionic partition function under chiral spiral backgrounds for fixed $\Theta$ is \begin{align} Z_\psi [\langle \Phi \rangle,\Theta] &= \int {\mathcal D}\psi' {\mathcal D}\bar{\psi}' ~\mathrm{e}^{\mathrm{i} ({\mathcal S}_{{\rm MF}} + \Delta {\mathcal S} )} \notag\\ &= Z_{{\rm MF}} ~ \big\langle 1 + \mathrm{i}\, \Delta {\mathcal S} + \frac{\mathrm{i}^2}{2!} (\Delta {\mathcal S})^2 + \cdots \big\rangle_{{\rm MF}}\,, \end{align} where $\langle \cdots \rangle_{{\rm MF}}$ is the expectation value with the mean-field weight, $\mathrm{e}^{\mathrm{i} {\mathcal S}_{{\rm MF} } }$, in the path integral. The action consists of \begin{align} {\mathcal S}_{{\rm MF}} &= \sum_i \big( {\mathcal S}_i^{\parallel + M} + {\mathcal S}_i^{\Phi} \big), \notag \\ \Delta {\mathcal S} &= \sum_i \big( {\mathcal S}_i^{\perp} + {\mathcal S}_i^{\rm sub. int}\big) + \sum_{i \neq j} {\mathcal S}_{i,j} + \sum_{i,j \neq k} {\mathcal S}_{i,jk} + \sum_{i \neq j,k \neq l} {\mathcal S}_{ij,kl} \,, \end{align} where ${\mathcal S}_{{\rm MF}}$ is the mean-field action, which is the (uncorrelated) sum of the one-patch actions. The ${\mathcal S}_i^\perp$ and ${\mathcal S}_i^{\rm sub.int}$ are the subdominant terms inside of the $i$-th patch, which were not treated in the last section. Finally ${\mathcal S}_{i,j},~\cdots$ describe the interactions among different patches.
The energy density functional is given by ${\mathcal E} [\langle \Phi \rangle, \Theta] = -\mathrm{i} \ln Z / {\mathcal V}_3 = {\mathcal E}_{{\rm MF}} + \Delta {\mathcal E}$ (with ${\mathcal V}_3$ being the (2+1)-dimensional space-time volume), where \begin{equation} {\mathcal E}_{{\rm MF}} = \frac{-\mathrm{i}}{ {\mathcal V}_3 } \ln Z_{{\rm MF}}\,, \qquad \Delta {\mathcal E} = \frac{-\mathrm{i}}{ {\mathcal V}_3 } \cdot \big\langle \mathrm{i}\,\Delta {\mathcal S} + \frac{\mathrm{i}^2}{2!} (\Delta {\mathcal S})^2 + \cdots \big\rangle_{{\rm MF}}^{{\rm conn.}} \,. \end{equation} We wish to measure the energy benefit from the chiral spiral formation. To do so, we need to subtract the energy of the trivial configuration, and so we should compute \begin{equation} \delta {\mathcal E} [\langle \Phi \rangle, \Theta] = {\mathcal E} [\langle \Phi \rangle, \Theta] - {\mathcal E}[0, 0] \,. \end{equation} When we apply the perturbative expansion below, it is convenient to reorganize the above expression into \begin{equation} \delta {\mathcal E} [\langle \Phi \rangle, \Theta] = \big( {\mathcal E} [\langle \Phi \rangle, \Theta] - {\mathcal E}[0, \Theta] \big) + \big( {\mathcal E} [0, \Theta] - {\mathcal E}[0, 0] \big) \,. \end{equation} The first term expresses the genuine condensation effects, while the second term comes from the deformation energy, which was already computed in the introduction and turned out to be $\sim p_{\rm F}^3 \Theta^4$. Our task below is the computation of the quantity inside the first parentheses. In the following, we will use the $\psi'$-representation. The advantage of doing this is that the momentum $\delta p$ in the propagator is conserved, and we can express the propagator as a function of the relative distance in space-time, $x-y$. Noting this fact, many diagrams can easily be discarded. Let us recall that the subdominant terms have oscillating factors multiplying the fermion fields.
Non-zero contributions remain only if the combination of vertices has the oscillating factors in the form $\mathrm{e}^{\pm \mathrm{i} Q(x-y)_\parallel},\,\mathrm{e}^{\pm 2\,\mathrm{i} Q(x-y)_\parallel},\,\cdots$, because propagators are functions of $x-y$. In other words, this is simply a consequence of momentum conservation in our shifted variables. For example, the first-order expansion gives a tadpole contribution such as \begin{equation} \int \mathrm{d}^3 x \Big( \big\langle \bar{\psi}'_{+} \mathrm{i} \Slash{\partial}_\perp \psi'_{-}\big\rangle_{{\rm MF}}~ \mathrm{e}^{-2\mathrm{i} Q x_\parallel} + \big\langle \bar{\psi}'_{-} \mathrm{i} \Slash{\partial}_\perp \psi'_{+}\big\rangle_{{\rm MF}}~ \mathrm{e}^{2\mathrm{i} Q x_\parallel} \Big) \,. \end{equation} This is, however, a space-independent quantity times an oscillation factor, and so its spatial integral vanishes for $Q\neq 0$. This is an example of momentum conservation. Non-zero correction terms start to arise at second order in the perturbative expansion. In the following, we will first compute the corrections from one patch, which include the transverse kinetic terms and the subdominant four-Fermi interactions. Second, we will compute interaction terms including patch-patch interactions. \section{Corrections from One Patch} \label{secpert} We consider the perturbative corrections inside of the $i$-th patch. For the moment we will omit the subscript $i$. The sources of second-order corrections are enumerated as follows: (1) (transverse terms)$^2$, (2) (four-Fermi interaction terms)$^2$, (3) (transverse terms) $\times$ (four-Fermi interaction terms). The last one, (3), cannot cancel the oscillating factors, so only (1) and (2) contribute to the free energy.
\subsection{Product of Transverse Kinetic Terms} \begin{figure}[tb] \begin{center} \scalebox{1.0}[1.0] { \includegraphics[scale=.20]{transpert.pdf} } \end{center} \caption{ A diagram for the transverse kinetic terms in the $i$-th patch. Here we used the $\psi'$-representation in which $\psi'$ fields effectively feel incoming and outgoing momenta, $2Q\vec{n}_i$. This makes at least one fermion go far away from the Fermi surface. } \label{fig:transpert} \vspace{0.3cm} \end{figure} The non-zero contributions from the product of transverse terms arise from $(+-)(-+)$ or $(-+)(+-)$ combinations, i.e. \begin{align} \hspace{-0.8cm}\Delta {\mathcal E}_{\rm trans} &= \frac{\mathrm{i}}{ { \mathcal V}_3 } \int \mathrm{d}^3 x\,\mathrm{d}^3 y~ \big\langle \big[ \bar{\psi}'_{+} \mathrm{i} \Slash{\partial}_\perp \psi'_{-}(x) \big] \big[ \bar{\psi}'_{-} \mathrm{i} \Slash{\partial}_\perp \psi'_{+}(y) \big] \big\rangle_{{\rm MF}}~ \mathrm{e}^{-2\mathrm{i} Q (x - y)_\parallel } \notag \\ &= N_{\rm c} \int \frac{\mathrm{d}^2 \delta p}{(2\pi)^2} ~ p_\perp^2 \, \int \frac{\mathrm{d}\delta p_0}{2\pi} \notag\\ &\qquad\times (-\mathrm{i})\, {\rm tr} \big[ ~S_{--}(\delta p_0,\delta p_\parallel ,p_\perp) ~S_{++}(\delta p_0,\delta p_\parallel +2 Q,p_\perp) ~ \big] \,. 
\label{eqtrans1} \end{align} After the $p_0$ integration, we have \begin{equation} \Delta {\mathcal E}_{\rm trans} = N_{\rm c} \int \frac{\mathrm{d} \delta p_\parallel}{2\pi} \int_{\theta_1(Q)} \frac{\mathrm{d} p_\perp}{2\pi} ~ p_\perp^2 \cdot \frac{ F( \delta p_\parallel,2Q) } {~\omega( \delta p_\parallel ) + \omega( \delta p_\parallel + 2Q) ~} \,, \label{eqtrans2} \end{equation} where the integral of the transverse momentum is restricted within one patch, \begin{equation} \int_{\theta_1(Q)} \frac{\mathrm{d} p_\perp}{2\pi} \equiv ~ \int \frac{\mathrm{d} p_\perp}{2\pi} ~\theta_\perp( \delta p_\parallel , p_\perp) ~ \theta_\perp( \delta p_\parallel+2Q, p_\perp) \,, \end{equation} and we defined the function $F(\delta p_\parallel, 2Q)$ as \begin{equation} F(\delta p_\parallel, 2Q) = 1 + \frac{ \delta p_\parallel }{~ \omega( \delta p_\parallel)~} \frac{ \delta p_\parallel +2Q }{~ \omega( \delta p_\parallel +2Q) ~ } \qquad ( 0 \le F \le 2) \,. \label{F} \end{equation} Now let us analyze Eq.~(\ref{eqtrans2}). The key point is that at least one of the momenta, $\delta p_\parallel$ or $\delta p_\parallel +2Q$, must be much larger than $\Lambda_{\rm c}$. If both momenta are much larger than $\Lambda_{\rm c}$, the mass gap is absent, and the result simply reduces to that of the free theory. The non-trivial case is thus the one in which one of the momenta is close to the Fermi surface; we consider this situation in the following. Supposing that $|\delta p_\parallel | < \Lambda_{\rm c}$, the other momentum $\delta p_\parallel + 2Q$ satisfies $2Q-\Lambda_{\rm c} < \delta p_\parallel +2Q < 2Q +\Lambda_{\rm c}$. Therefore we can use the free fermion dispersion $\omega(\delta p_\parallel + 2Q) = \delta p_\parallel + 2Q$, and can apply an approximate upper bound, $Q \tan\Theta$, to the $p_\perp$-integral.
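Anticipating the subtraction carried out below, it is useful to record the elementary integrals that control the size of the correction; here we treat $M'$ as a constant of order $\Lambda_{\rm f}$, which suffices for a parametric estimate:
\begin{equation}
\int_0^{\Lambda_{\rm c}} \mathrm{d}\delta p_\parallel \bigg( \frac{\delta p_\parallel}{\sqrt{\delta p_\parallel^{\,2} + M'^2}} - 1 \bigg) = \sqrt{\Lambda_{\rm c}^2 + M'^2} - M' - \Lambda_{\rm c} ~\sim~ -M' \,, \qquad
\int_0^{Q\tan\Theta} \mathrm{d} p_\perp\, \frac{p_\perp^2}{2Q} = \frac{Q^2 \tan^3\Theta}{6} \,,
\end{equation}
whose product reproduces the parametric size $\sim - M' Q^2 \tan^3\Theta$ of the correction obtained below.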
To make physical interpretations, we should notice that generically the perturbative expansion contains a trivial contribution that is independent of the condensate, and this must be subtracted. Here we subtract such trivial contributions with the same phase space as the non-trivial one. Outside of the condensation domain, the two contributions cancel out. After subtracting the terms at $M'=0$, the non-trivial contribution of Eq.~(\ref{eqtrans2}) is given by \begin{align} \Delta {\mathcal E}_{\rm trans} &\simeq~ \frac{N_{\rm c}}{\pi^2} \int_0^{\Lambda_{\rm c}} \mathrm{d} \delta p_\parallel \int_0^{Q\tan \Theta} \! \mathrm{d} p_\perp\, \frac{p_\perp^2}{~2Q ~} \bigg[ \bigg(1 + \frac{\delta p_\parallel} { \sqrt{ \delta p_\parallel^2 + M'^2(\delta p_\parallel) } } \bigg) - 2\bigg] \notag \\ & \sim ~ -N_{\rm c} \cdot M' Q^2 \tan^3 \Theta \qquad(M'\sim \Lambda_f)\,, \label{eqtrans3} \end{align} where the $-2$ in the first line represents the subtraction of the trivial contribution. Note that the energy correction before performing the integration is $\sim p_\perp^2/Q$, so at the level of the computation of the fermion dispersion relation these terms were negligible. The sign of Eq.~(\ref{eqtrans3}) is negative, which means that this term corresponds to an energy gain. This term can be regarded as a controllable correction only if it is sufficiently smaller than the energy gain from the one-patch mean field, $N_{\rm c} \cdot \Lambda_{\rm f}^2 Q \tan\Theta $. This requires the condition $\Theta \ll (\Lambda_{\rm f}/Q)^{1/2}$, as already stated in the introduction. Finally, let us mention how the perturbative analysis of the correction from the transverse kinetic energy can be organized in a systematic way. If we go to higher orders in the transverse kinetic terms, we have terms of higher powers of $p_\perp^2/Q$ in the integrand.
Computing the $p_\perp$-integration over the integral region $\sim Q\tan \Theta$ results in terms of higher powers of $\Theta \ll 1$, which thus appears as the expansion parameter of the systematic computation. \subsection{Product of Four-Fermi Interaction Terms} \begin{figure}[tb] \begin{center} \scalebox{0.5}[0.5] { \includegraphics[scale=.35]{irrelevant1.pdf} } \scalebox{0.5}[0.5] { \includegraphics[scale=.35]{irrelevant2.pdf} } \end{center} \caption{ Contributions from the subdominant vertices. The $\psi'$-representation is used and all fields belong to the $i$-th patch.\ \ (Left) The $O(N_{\rm c}^0)$ contribution, which is ignored in this work.\ \ (Right) The $O(N_{\rm c})$ contributions. The diagram can be interpreted as a condensate-condensate interaction mediated by particle-hole excitations. One of the fermions in the internal loop must go far from the Fermi surface. } \label{fermisphere2} \vspace{0.2cm} \end{figure} We will compute the second-order perturbation of the subdominant four-Fermi interaction terms, which reads \begin{equation} \frac{\mathrm{i}}{ {\mathcal V}_3 } \cdot \frac{G^2}{N_{\rm c}^2} \int \mathrm{d}^3 x\, \mathrm{d}^3 y~ \big\langle \big(\bar{\psi}'_{+} \psi'_{-}(x) \big)^2 \big( \bar{\psi}'_{-} \psi'_{+}(y) \big)^2 \big\rangle_{ {\rm MF} }~ \mathrm{e}^{-4\mathrm{i} Q (x - y)_\parallel}\,. \label{512eq4fermi1} \end{equation} Equation~(\ref{512eq4fermi1}) yields several distinct terms depending on how the fermion lines are contracted. Here we organize the different contractions into a hierarchy by applying the $1/N_{\rm c}$ expansion. At large $N_{\rm c}$, the dominant contributions come from condensate-condensate interactions mediated by virtual quark-hole exchange.
This situation is schematically described as \begin{equation} \hspace{-0.6cm} \sim ~G^2~ \frac{\langle \bar{\psi}'_+ \psi'_- \rangle_{ {\rm MF} } }{N_{\rm c}} ~ \bigg( (-\mathrm{i}) N_{\rm c} \!\int \mathrm{d}^3 z ~ {\rm tr} \big[ S_{++}(z) S_{--}(-z) \big] ~ \mathrm{e}^{-4\mathrm{i} Q z_\parallel} \bigg) ~ \frac{\langle \bar{\psi}'_- \psi'_+ \rangle_{ {\rm MF} } }{N_{\rm c}} \label{512eq4fermi2} \end{equation} with $z=x-y$ (see the right panel in Fig.~\ref{fermisphere2}). Here the form factor is not explicitly written yet. This contribution is $O(N_{\rm c})$ and positive, as follows from the earlier treatment of the integral in the bracket. One fermion pair at each of $x$ and $y$ is contracted at a single space-time point and becomes a condensate. The remaining part describes the particle-hole propagation between $x$ and $y$. Loosely speaking, this part can be interpreted as the propagation of meson-like objects with total momentum $4Q$ (although a ladder-type resummation is in fact necessary to construct the meson propagator). Let us note that the size of the condensate, $O(N_{\rm c})$, compensates for the suppression factor of $O(1/N_{\rm c})$ in the intrinsic interaction vertex; the overall counting is $N_{\rm c}^{-2} \times N_{\rm c} \times N_{\rm c} \times N_{\rm c} = O(N_{\rm c})$, with one factor of $N_{\rm c}$ from each condensate and one from the fermion loop. Other contractions without the condensate cannot be accompanied by a fermion loop of $O(N_{\rm c})$ and are suppressed by $1/N_{\rm c}$. We should not take Eq.~(\ref{512eq4fermi2}) literally, however, since the loop integral is UV divergent and must be regulated with the form factor. It should be mentioned that this apparent UV divergence couples to the condensate, so the subtraction of the trivial configuration without the condensate cannot regulate the UV behavior.
Including the form factor explicitly and carrying out the integral in Eq.~(\ref{512eq4fermi1}), we have \begin{align} & \hspace{-4ex} \sim N_{\rm c} G^2 \int_{\delta p, \delta k, \delta l} \theta \big( \Lambda_{\rm f}^2 - ( \delta \vec{p} - \delta \vec{k} + 2Q \vec{n} )^2 \big) ~ \theta \big( \Lambda_{\rm f}^2 - ( \delta \vec{p} - \delta \vec{l} + 2Q \vec{n} )^2 \big)~ \notag \\ & \hspace{-3ex} \times \bigg( \int \!\mathrm{d} \delta k_0 \frac{ \langle \bar{\psi}'_+ \psi'_-( \delta k) \rangle }{N_{\rm c}} \bigg) \frac{ F( \delta p_\parallel, 4Q) } {~\omega( \delta p_\parallel ) + \omega( \delta p_\parallel + 4Q) ~} \bigg( \int \! \mathrm{d}\delta l_0 \frac{ \langle \bar{\psi}'_- \psi'_+ ( \delta l) \rangle }{ N_{\rm c}} \bigg) \,, \label{eqrestriction} \end{align} or equivalently, \begin{equation} \sim N_{\rm c} \int_{\delta p} M' \big( \delta \vec{p} + 2Q \vec{n} \big) ~ \frac{ F( \delta p_\parallel, 4Q) } {~\omega( \delta p_\parallel ) + \omega( \delta p_\parallel + 4Q) ~} ~ M' \big( \delta \vec{p} + 2Q \vec{n} \big)\,. \label{eqrestriction2} \end{equation} Here the phase-space restriction for the transverse momentum is not explicitly written. The condensate takes a finite value only if $|\delta k_\parallel|,~|\delta l_\parallel| < \Lambda_{\rm c}$. Once they are restricted within such a domain, $\delta \vec{p}~$ is also restricted around $-2Q\vec{n}$, and so the integral over the phase space has a UV cutoff. The integrand itself is suppressed by $1/Q$. Finally the energy cost from this contribution can be estimated as \begin{equation} \hspace{-0.3cm} \text{(one-patch energy cost)} \sim~ N_{\rm c} \, \frac{\Lambda_f^2}{Q} \cdot \Lambda_{\rm f} Q \tan \Theta ~\sim~ N_{\rm c} \Lambda_{\rm f}^3 \tan \Theta \,. \end{equation} Here, a phase-space factor $\sim \Lambda_{\rm f} Q\tan \Theta$ arises because one spatial momentum is not restricted by the $\theta$ function constraint in Eq.~(\ref{eqrestriction}). 
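As a sketch (consistent with the expressions above, with all $O(1)$ factors dropped), the counting behind this estimate can be displayed as
\begin{equation}
N_{\rm c} \underbrace{\int_{\delta p}}_{\sim\, \Lambda_{\rm f} \cdot\, Q\tan\Theta}
\underbrace{M' \, \frac{F}{\omega + \omega}\, M'}_{\sim\, \Lambda_{\rm f}^2 / Q}
~\sim~ N_{\rm c}\, \Lambda_{\rm f}^3 \tan\Theta \,,
\end{equation}
where the energy denominator is dominated by $\omega(\delta p_\parallel + 4Q) \sim Q$, the parallel momentum is restricted to $\sim \Lambda_{\rm f}$ by the condensates, and the transverse momentum runs over the patch width $\sim Q\tan\Theta$.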
As expected, this contribution is much smaller than the one-patch mean-field contribution $\sim N_{\rm c} \cdot \Lambda_{\rm f}^2 Q\tan \Theta$, and can be safely ignored at large density. \section{Contributions from Patch Boundaries: Inter-Patch Effects} \label{sec:interference} \begin{figure}[tb] \vspace{0.2cm} \begin{center} \scalebox{1.0}[1.0] { \includegraphics[scale=.20]{patchint1.pdf} } \end{center} \vspace{0.2cm} \caption{ The domain where the chiral spiral mean field in one patch couples to particles (holes) in its nearest-neighbor patches. The size of such a domain is $\sim \Lambda_f^2$ due to form-factor effects. } \label{fermisphere3} \vspace{0.2cm} \end{figure} So far, we have ignored the interactions among fermions belonging to different patches. Due to the form-factor effects, such interactions occur only near the boundaries of patches. We will discuss the impact of these effects. The outline of our computational procedure is the following. The condensation effects generate three types of energy contributions, \begin{equation} \hspace{-0.5cm} {\mathcal E}_{{\rm cond.}} \sim ~ N_p \, \big( {\mathcal E}^{\text{1-patch}}_{\rm inside}(M_0, \Theta) + {\mathcal E}^{\text{1-patch}}_{\rm B} (M_{\rm B},\Theta) + {\mathcal E}_{\rm int}^{\text{patches}} (M_{\rm B},\Theta) \big) \,, \end{equation} where ${\mathcal E}^{\text{1-patch}}_{\rm inside}$ and ${\mathcal E}^{\text{1-patch}}_{\rm B}$ are the one-patch condensation energies in the regions far from and close to the boundaries, respectively, and ${\mathcal E}_{\rm int}^{\text{patches}}$ represents the patch-patch interaction at the boundaries. The subscript B denotes the boundary, and $M_0$ and $M_{\rm B}$ can be considerably different.
The energy density schematically behaves as \begin{align} {\mathcal E}_{{\rm cond.}}/N_{\rm c} ~&\sim~ \frac{1}{\Theta} ~ \big( -M_0 \Lambda_{\rm f} (Q\tan \Theta - \Lambda_{\rm f}) - M_{\rm B} \Lambda_{\rm f}^2 + {\mathcal E}_{\rm int}^{\text{patches}} (M_{\rm B},\Theta) \big) \notag \\ &\sim~ - M_0 \Lambda_{\rm f} Q ~+~ \frac{1}{\Theta} ~ \big( \Lambda_{\rm f}^2 ( M_0 - M_{\rm B} ) + {\mathcal E}_{\rm int}^{\text{patches}} ( M_{\rm B},\Theta ) \big) \,. \label{eq:conden} \end{align} In the second line we used $\tan\Theta \simeq \Theta$. We will show that ${\mathcal E}_{\rm int}^{\text{patches}}$ is positive (at least at the level of second-order perturbation theory). Its strength is determined by the size of $M_{\rm B}$, and vanishes as $M_{\rm B} \rightarrow 0$. Equation~(\ref{eq:conden}) can be understood in two ways. If we regard the incoherent sum of the one-patch actions as our unperturbed action, $M_{\rm B}$ at the unperturbed level is $\sim M_0$, and ${\mathcal E}_{\rm int}^{\text{patches}}$ provides relatively large, positive contributions. This means that the inter-patch interactions between different chiral spirals reduce the energy gain from creating the condensates at the patch boundaries. If we instead assume $M_{\rm B}\ll M_0$, then the one-patch contribution is $M_0 - M_{\rm B} \sim M_0 \sim \Lambda_{\rm f}$, while ${\mathcal E}_{\rm int}^{\text{patches}}$ is negligible. In both limiting cases, the coefficient of the $1/\Theta$ term is positive and of $O(\Lambda_{\rm f}^3)$. A precise estimate of the $1/\Theta$ term would require self-consistently solving the gap equation for $M_{\rm B}$ including ${\mathcal E}_{\rm int}^{\text{patches}}$, which is beyond the scope of this paper. Instead, we will give several indicative discussions to understand the inter-patch interactions at the patch boundaries, in both perturbative and non-perturbative manners.
\subsection{Preliminaries} \begin{figure}[tb] \vspace{0.2cm} \begin{center} \scalebox{0.6}[0.6] { \includegraphics[scale=.35]{patchint2.pdf} } \scalebox{0.6}[0.6] { \includegraphics[scale=.35]{patchint3.pdf} } \end{center} \vspace{0.2cm} \caption{(Left) The influence of the $i$-th chiral spiral mean field at the patch boundaries. Only the part of the Fermi surface close to the $i$-th patch is shown. The $i$-th chiral spiral can scatter a particle (hole) state in the $(i+1)$-th patch into a hole (particle) state in the $(i-1)$-th patch. Such processes are possible only for a particle and a hole close to the $i$-th patch boundary.\ \ (Right) A diagrammatic expression of the particle-condensate scattering. The vectors $\vec{p}$ and $\vec{k}$ must be close to each other. } \label{fig:patchint23} \vspace{1.0cm} \end{figure} As we have already seen, at large $N_{\rm c}$ the dominant correction terms come from condensate-condensate interactions mediated by virtual quark-hole excitations. Therefore we will take into account only such terms. Then we need to consider only vertices in which two patch indices are identical, such as the terms in ${\mathcal L}^{\rm int}_{i,j}$ or ${\mathcal L}^{\rm int}_{i,jk}$. Actually, because of the form factor, only a particle and a hole in the $(i\pm1)$-th patches can directly couple to the $i$-th chiral-spiral mean field. Therefore we only have to consider the following type of vertex (see also Fig.~\ref{fig:patchint23}), \begin{equation} \int_{p,k} \big\langle \bar{\psi}_{i+} (\vec{p} + 2Q\vec{n}_i ) ~ \psi_{i-} (\vec{p}) \big\rangle \, \bar{\psi}_{i-1,-} (\vec{k}) ~ \psi_{i+1,+} (\vec{k} + 2Q\vec{n}_i ) ~ \theta_{p,k}\,, \label{patchint1} \end{equation} where we have replaced a fermion bilinear in the $i$-th patch with the chiral spiral mean field.
Using the shifted momentum variables, $\delta \vec{p} = \vec{p} +Q \vec{n}_i$ and $\delta \vec{k}=\vec{k} + Q \vec{n}_{i-1}$, and the $\psi'$-representation, Eq.~(\ref{patchint1}) can be rewritten as \begin{align} & \int_{\delta p, \delta k} \! \big\langle \bar{\psi}'_{i+} (\delta\vec{p} ) ~ \psi'_{i-}(\delta \vec{p}) \big\rangle \, \bar{\psi}'_{i-1,-} (\delta \vec{k}) ~ \psi'_{i+1,+} \big(\, \delta \vec{k} + Q(2\vec{n}_i - \vec{n}_{i-1} - \vec{n}_{i+1} ) \, \big) \notag \\ & \qquad\qquad \times \theta \big(\, \Lambda_{\rm f}^2 - (\delta \vec{p} - \delta \vec{k} - Q \vec{n}_i + Q \vec{n}_{i-1} )^2 \,\big) ~ . \label{patchint2} \end{align} In perturbative computations, it is necessary to decompose momenta of the fermion field into the longitudinal and transverse components. Here let us briefly summarize necessary ingredients for the computations. Note that \begin{equation} \vec{n}_{i\pm1} = \cos 2\Theta ~\vec{n}_i \mp \sin 2\Theta ~\vec{n}_{i\perp} \,, \qquad \vec{n}_{i\pm1, \perp} = \pm \sin 2\Theta ~\vec{n}_i + \cos 2\Theta ~\vec{n}_{i\perp} \,. \end{equation} If we write $\delta \vec{k} = \delta k_{\parallel} \vec{n}_{i-1} + k_{\perp} \vec{n}_{i-1,\perp}$, then the momentum in $\psi'_{i+1,+}$ can be decomposed into the $\vec{n}_{i+1}$ and the $\vec{n}_{i+1,\perp}$ directions, \begin{align} & \delta \vec{k} + Q(2\vec{n}_i - \vec{n}_{i-1} -\vec{n}_{i+1} ) \notag \\ & = ~ \big[ ~ \delta k_{\parallel} \cos 4\Theta + k_{\perp} \sin 4\Theta + Q( 2\cos2\Theta - \cos4\Theta -1)~ \big] ~\vec{n}_{i+1} \notag \\ &\qquad + ~\big[ ~ - \delta k_{\parallel} \sin 4\Theta + k_{\perp} \cos 4\Theta + Q( - 2\sin2\Theta + \sin 4\Theta )~ \big] ~\vec{n}_{i+1,\perp} \notag \\ & \simeq ~( \delta k_\parallel + 4 k_\perp \Theta) ~\vec{n}_{i+1} ~+~ k_\perp ~ \vec{n}_{i+1, \perp} ~+ ~ O(\delta k_\parallel \Theta, ~ Q\Theta^2) \,, \end{align} where we did not explicitly write quantities which are much smaller than $\Lambda_{\rm f}$. 
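As a consistency check of the last step (an elementary small-$\Theta$ expansion), the trigonometric factors behave as
\begin{align}
Q\,( 2\cos2\Theta - \cos4\Theta -1) &= 4Q\Theta^2 + O(Q\Theta^4) \,, \notag \\
Q\,( -2\sin2\Theta + \sin 4\Theta) &= -8Q\Theta^3 + O(Q\Theta^5) \,,
\end{align}
while $\delta k_\parallel \cos4\Theta + k_\perp \sin 4\Theta = \delta k_\parallel + 4k_\perp \Theta + O(\delta k_\parallel \Theta^2,\, k_\perp \Theta^3)$, confirming that the neglected pieces are indeed of $O(\delta k_\parallel \Theta,\, Q\Theta^2)$ or smaller.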
Similarly, let us simplify the expression of the argument in the form factor. A decomposition of the momentum, \begin{align} & \delta \vec{p} - \delta \vec{k} - Q \vec{n}_i + Q \vec{n}_{i-1} \notag \\ &= ~ \big[ ~ \delta p_\parallel - \delta k_{\parallel} \cos 2\Theta - k_{\perp} \sin 2\Theta + Q( \cos2\Theta -1)~ \big] ~\vec{n}_{i} \notag \\ &\qquad + \big[ ~ p_\perp - k_\perp \cos 2\Theta + \delta k_\parallel \sin 2\Theta - Q \sin2\Theta ~ \big] ~\vec{n}_{i\perp} \notag \\ & \simeq ~ (\delta p_\parallel - \delta k_\parallel ) ~ \vec{n}_i ~ + ~ ( p_\perp - k_\perp - 2Q\Theta) ~ \vec{n}_{i\perp} ~+ ~ O(\delta k_\parallel \Theta, ~ Q\Theta^2) \,, \end{align} leads to \begin{equation} \theta \big( \, \Lambda_{\rm f}^2 - (\delta p_\parallel - \delta k_\parallel)^2 - ( p_\perp - k_\perp -2Q\Theta)^2 \,\big) \,. \end{equation} The $\theta$ function is non-zero only around the patch boundary, that is, $p_\perp \sim Q\Theta$ and $k_\perp \sim - Q \Theta$. Finally, let us note that the projection operators in one patch are different from those in other patches. Indeed, \begin{align} \hspace{-2em} \frac{1+\Gamma_{i+1,5} }{2} \cdot\frac{1- \Gamma_{i-1,5} }{2} &= \frac{1+\gamma_0 \gamma_{i+1,\parallel} }{2}\cdot \frac{1- \gamma_0 (\gamma_{i+1,\parallel} \cos2\Theta - \gamma_{i+1\perp} \sin 2\Theta ) }{2} \notag \\ &= \big(\, (1- \cos2\Theta) + \sin 2\Theta \gamma_0 \gamma_{i+1,\perp} \,\big) ~\frac{1+ \Gamma_{i+1,5} }{4}\,, \end{align} and \begin{equation} \hspace{-1.5em} \frac{1\pm\Gamma_{i+1,5} }{2} \cdot\frac{1\pm \Gamma_{i-1,5} }{2} = \frac{1\pm \Gamma_{i+1,5} }{4}~ \big(\,(1+ \cos2\Theta) \mp \sin 2\Theta ~\gamma_0 \gamma_{i+1, \perp} \,\big)\,. \end{equation} In what follows, these slight modifications provide only negligible contributions of $O(\Theta)$ (indeed $1-\cos2\Theta = O(\Theta^2)$ and $\sin 2\Theta = O(\Theta)$), so we can safely neglect them in the computations.
\subsection{Perturbative Consideration} The purpose of this subsection is to estimate the typical size of the patch-patch interactions and, more importantly, to investigate whether the interactions are attractive or repulsive. The latter can be done even without a detailed estimate of the gap near the patch boundaries. The perturbative computations proceed in almost exactly the same way as before. After taking the residue, we have an expression analogous to Eq.~(\ref{eqrestriction2}), \begin{equation} N_{\rm c} \int_{\delta k} M'^2_{\rm B} ( \delta k_\parallel, k_\perp + 2Q \Theta ) ~ \frac{ F_{\rm B}( \delta k_\parallel, 4 k_\perp \Theta) } {~\omega_B( \delta k_\parallel ) + \omega_B( \delta k_\parallel + 4k_\perp \Theta) ~} ~\ge ~0 \,, \label{eq:patchint} \end{equation} where $0\le F_{\rm B} \le 2$ as in Eq.~(\ref{F}), and $k_\perp \sim - Q\Theta$. The product of $M'_{\rm B}$ comes from the condensates (i.e.\ fermion loops) in the $i$-th patch, and the remaining piece comes from virtual particle-hole excitations. We emphasize that, for momentum conservation to be satisfied, both condensates must come from the same patch. The subscript B is attached as a reminder that the sizes of the gap and mass function near the boundaries may be considerably different from those far from the boundaries. The sign is positive, so this is an energy cost. Although this is nothing more than a generic feature of second-order perturbation theory, it indicates that boundary effects tend to reduce the magnitude of the condensates near the boundaries. It is very important to notice that the energy cost may be comparable to the energy gain from forming a condensate within the same phase space $\sim \Lambda_{\rm f}^2$. In contrast to the case of the subdominant terms, there is no $1/Q$ suppression here, because all fermions can move around near the Fermi surface during the virtual processes. Let us investigate the order of magnitude.
If we assume $M'_{\rm B} \gg Q\Theta^2$, we can make the approximation \begin{equation} \text{(Integrand in Eq.~(\ref{eq:patchint}))} \sim \frac{M_{\rm B}'^2}{ \sqrt{ \delta k_\parallel^2 + M_{\rm B}'^2 } } \bigg(1 - \frac{\delta k_\parallel \, k_\perp \Theta} {~\sqrt{ \delta k_\parallel^2 + M_{\rm B}'^2 }~} + \cdots \bigg) \,, \end{equation} and the integral of this integrand over the IR region then gives the approximate expression \begin{equation} \Lambda_{\rm f} M_{\rm B}'^2 \bigg[ \ln \bigg( \frac{\Lambda_{\rm f}}{M_{\rm B}'} \bigg) + \frac{2\, Q \Theta^2}{ \sqrt{ \Lambda_{\rm f}^2 + M_{\rm B}'^2~} } + \cdots \bigg] ~\sim~ M_{\rm B}'^2 ~ \bigg( \frac{1}{G} + O(Q\Theta^2) \bigg) ~ \sim ~ \Lambda_{\rm f}^3\,, \end{equation} where we have used $G \sim \Lambda_{\rm f}^{-1}$ and the parametric behavior of the mass gap, $M_{\rm B}' \sim \Lambda_{\rm f}\, \mathrm{e}^{-C/G\Lambda_{\rm f}}$, i.e.\ $\ln (\Lambda_{\rm f}/M_{\rm B}') \sim 1/(G\Lambda_{\rm f})$. This expression indeed confirms that the energy cost is of the same order as the energy gain from condensation effects within the same phase space. Unfortunately, no obvious expansion parameter has appeared in this perturbative expansion, so we have no good reason to discard higher-order diagrams. For a more reliable estimate, non-perturbative computations are necessary which simultaneously treat the patch-patch interactions and the mean-field problem near the patch boundaries. \subsection{Some Non-perturbative Considerations: A (1+1)-Dimensional Example of Two Chiral Spirals} \label{1+1Dexample} To gain some insight into the inter-patch interactions between several chiral spirals, let us consider the simplest (1+1)-dimensional example. We assume a mean field consisting of two chiral spirals with wavevectors $Q_0$ and $Q_1$, and examine how these two chiral spirals affect each other.
The mean-field eigenvalue equation is \begin{equation} \hspace{-2em} E_{{\rm MF}} ~ \Psi(x_\parallel) = \begin{pmatrix} \mathrm{i} \partial_\parallel ~&~ M_0\, \mathrm{e}^{2\mathrm{i} Q_0 x_\parallel} + M_1\, \mathrm{e}^{2\mathrm{i} Q_1 x_\parallel} \\ M_0\, \mathrm{e}^{-2\mathrm{i} Q_0 x_\parallel} + M_1\, \mathrm{e}^{-2\mathrm{i} Q_1 x_\parallel} ~&~ -\mathrm{i} \partial_\parallel \end{pmatrix} \Psi(x_\parallel) \,, \nonumber \\ \end{equation} where $\Psi (x_\parallel) = \big(\psi_{+}(x_\parallel), \psi_{-}(x_\parallel) \big)^T$ is defined as before. We rewrite this expression in the $\psi'$-representation in order to eliminate the oscillating factors. When there are two chiral spirals, the oscillating factors cannot both be eliminated simultaneously. If $M_0 > M_1$, it is better to eliminate $\mathrm{e}^{\pm 2\mathrm{i} Q_0 x_\parallel}$, as will become clear in the following. Using $\psi_\pm = \psi'_\pm\,\mathrm{e}^{\pm\mathrm{i} Q_0 x_\parallel}$ and squaring the Hamiltonian, we obtain the following Schr\"{o}dinger equation, \begin{equation} (E_{{\rm MF}} - Q_0)^2 ~ \Psi' (x_\parallel ) = \begin{pmatrix} {\mathcal H}'_{{\rm diag.}} ~&~ 2\delta Q M_1\, \mathrm{e}^{2\mathrm{i} \delta Q x_\parallel} \\ 2\delta Q M_1\, \mathrm{e}^{-2\mathrm{i} \delta Q x_\parallel} ~&~ {\mathcal H}'_{{\rm diag.}} \end{pmatrix} \Psi' (x_\parallel) \,, \end{equation} where $\delta Q= Q_1- Q_0$, and \begin{equation} {\mathcal H}'_{{\rm diag.}} = - \partial_\parallel^2 + (M_0 - M_1)^2 + 4M_0 M_1 \cos^2 \delta Q x_\parallel \,. \end{equation} The off-diagonal part of the Hamiltonian has an amplitude proportional to $\delta Q M_1$. Therefore, if $|\delta Q|$ or $M_1$ is small enough, one can ignore the off-diagonal part. Here one can see that our choice to eliminate $Q_0$ rather than $Q_1$ is suited to this approximation, since $M_0 > M_1$.
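The diagonal entry can be checked directly: in the $\psi'$-representation the mass term becomes $\Delta'(x_\parallel) = M_0 + M_1\,\mathrm{e}^{2\mathrm{i} \delta Q x_\parallel}$, and
\begin{equation}
|\Delta'|^2 = M_0^2 + M_1^2 + 2 M_0 M_1 \cos 2\delta Q x_\parallel = (M_0-M_1)^2 + 4 M_0 M_1 \cos^2 \delta Q x_\parallel \,,
\end{equation}
which is precisely the potential term in ${\mathcal H}'_{{\rm diag.}}$, while the $x_\parallel$-derivative acting on $\Delta'$ generates the off-diagonal amplitude $\propto 2\,\delta Q M_1$.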
\begin{figure}[tb] \begin{center} \hspace{-0.2cm} \scalebox{0.5}[0.5] { \includegraphics[scale=.36]{potQ0.pdf} } \hspace{0.3cm} \scalebox{0.5}[0.5] { \includegraphics[scale=.36]{potQ01.pdf} } \end{center} \vspace{0.2cm} \caption{The potential $V(x_\parallel)-Q_0$.\ \ (Left) The $\delta Q=0$ case. The potential is constant.\ \ (Right) The $\delta Q \neq 0$ case. The potential is oscillating, so a particle can localize around a valley of the potential. For very small $\delta Q$, the kinetic-energy cost is small. } \vspace{0.3cm} \label{fig:pot} \end{figure} The diagonal part has a positive oscillating potential whose period is set by $1/|\delta Q|$; see Fig.~\ref{fig:pot}. The point $\delta Q=0$ is singular: the potential becomes constant, and the energy spectrum is discontinuous compared with that at $\delta Q\neq 0$. If $|\delta Q|$ is small enough but non-zero, we can find eigenfunctions whose kinetic energy is very small, localized around the valleys of the potential, where it reduces to $(M_0-M_1)^2$. That is, \begin{equation} E \sim Q_0 \pm |M_0 - M_1| \,, \end{equation} for $0 < |\delta Q| \ll M_0$. When $M_0$ is comparable to $M_1$, they nearly cancel each other, making the effective gap small. This analysis indicates that two chiral spirals with different but similar wavevectors tend to reduce the energy gain in the single-particle contributions. If two chiral spirals have substantially different wavevectors, say $|\delta Q| \sim M$, then the valley of the potential becomes narrow, and the kinetic energy is $\sim M$. In such a case the inter-patch interactions do not strongly reduce the energy gain. This remark will become increasingly important when we consider the lower-density region, where higher harmonics of the chiral spirals start to contribute because of the subdominant terms. We will discuss this in the next section. \section{Discussion} \label{secdiscussion} So far we have used a four-Fermi interaction which is strong enough to induce chiral symmetry breaking near the Fermi surface.
At the same time, we have relied on the high-density approximation to keep our discussion simple enough to allow analytic insights. In reality, at $N_{\rm c}=3$, gluons will be screened at high density, reducing the magnitude of our coupling constant, so our approximations may need improvement to be realistic. Hence we try to extrapolate our insights at high density into the lower-density domain by arguing how the correction terms grow. Essentially, interweaving chiral spirals are disturbed by the transverse dynamics and the inter-patch interactions, and these effects become increasingly important at lower density. In this respect, we quote several detailed numerical studies on the low-density side in order to complement our current study. At the same time, we will make use of our perturbative corrections to interpret some results in the existing literature. In connection with our high-density approximation, it is very important to know how the $1/N_{\rm c}$ corrections grow at higher density. We will summarize the effects which we have ignored by using the large-$N_{\rm c}$ approximation. Another important question is what our interweaving chiral spirals look like in coordinate space. Up to $N_{\rm p}=3$, the chiral density has a periodic translational order and an orientational order that are classified by usual crystallography. Beyond $N_{\rm p}=3$, however, the chiral density wave is no longer periodic, but merely shows certain patterns with orientational symmetry. We briefly discuss these aspects, leaving several interesting questions open for future studies. The remaining topic is the instanton-induced interaction \cite{Kobayashi:1970ji}, which is frequently used to introduce strong diquark correlations \cite{Rapp:1997zu}. We will see that this interaction provides quite different effects below and above the strange-quark threshold.
\subsection{Comparisons with Other Works in the Low-Density Regime} \label{comparison} In Ref.~\cite{Rapp:2000zd}, the authors numerically studied chiral crystals in (3+1) dimensions in the density region $\mu_{\rm q} = 0.4-0.6\;\text{GeV}$, using the NJL model and a model with instanton-induced interactions. They studied the scalar-isoscalar channel with plane-wave oscillations $\sim \sigma \,\mathrm{e}^{\mathrm{i}\vec{Q} \cdot \vec{r} }$. In the NJL model, the strength of the four-Fermi interaction becomes weaker at higher density because of the model cutoff, so they found only small mass gaps, $\sim O(10)\;\text{MeV}$. On the other hand, the instanton-induced interaction is stronger near the Fermi surface, so it is possible to have a large mass gap of $O(\Lambda_{\rm QCD})$. Importantly, they showed that the creation of differently oriented chiral-density waves (crystals) does not provide much energy benefit, and that a chiral density wave developed in only one direction is energetically favored. This result can be interpreted as a consequence of inter-patch interactions, and is consistent with our current analysis. For a chiral density wave in one particular direction, there can be a better solution than the simple plane wave. Recently, such a solution in the (3+1)-dimensional NJL model was found by the authors of Ref.~\cite{Nickel:2009ke}, who used trial functionals motivated by (1+1)-dimensional studies \cite{Schon:2000qy}. The following results are deeply relevant to ours: (i) The solution is of a solitonic type at low density, and approaches the plane-wave type at high density. (ii) A quark-number modulation also occurs at low density; it smoothly approaches the uniform distribution as the density increases. (iii) The chiral spiral involving $\sigma$ and $\vec{\pi}$, which one might naively expect, is not energetically favored compared to a modulation of $\sigma$ alone, as long as $\mu_u=\mu_d$.
These statements are all consistent with our analyses, and in fact could have been inferred from our framework, as we now explain in order. (i) The deviation from the plane-wave solution is caused by the subdominant terms in our formulation, \begin{equation} \bar{\psi}_+' \Slash{\partial_\perp} \psi_-' \mathrm{e}^{-2\mathrm{i} Qx_\parallel} \,, \quad (\bar{\psi}'_+ \psi'_- )^2 \mathrm{e}^{ -4\mathrm{i} Q x_\parallel} \,,\quad \cdots \end{equation} which provide the higher harmonics necessary to construct solitonic solutions at low density\footnote{ In (1+1) dimensions at $T=0$, one can validate this discussion by investigating the Gross-Neveu (GN) model \cite{Gross:1974jv} with or without the continuous chiral symmetry. The former is free from the subdominant terms, the chiral spirals can appear at arbitrarily low density, and the quark density is always uniform. (At nonzero $T$, we have the twisted kink crystal in which the amplitude field also modulates \cite{Basar:2008im}.) On the other hand, in the version with discrete chiral symmetry, solitonic objects first appear beyond some critical chemical potential which is slightly lower than the constituent quark mass. As the density increases, the subdominant terms cease to disturb the chiral rotation, and the quark distributions smoothly approach those with chiral spirals and uniform quark density. See also Sec.~6 in Ref.~\cite{Kojo:2011fh}. }. As we discussed, these terms become unimportant as the density increases, recovering the plane-wave solutions. (ii) In the computation of the expectation value of the quark number, the non-perturbative mean-field propagator gives a uniform distribution, while the perturbations from the subdominant terms can generate a spatial modulation (for more explanation, see Sec.~\ref{coordinate}). The distribution approaches the uniform one as the density increases\footnote{ Explicit calculations will be reported elsewhere. }.
(iii) It is quite straightforward to extend our one-flavor studies to multi-flavor ones in terms of $\Phi = (u,d,\cdots)^T$, and we can easily infer that the chiral spirals should emerge as a rotation in the $U(1)$ quark number sector, \begin{equation} \langle \bar{\Phi}_+ \Phi_- \rangle = \Delta\, \mathrm{e}^{-2\,\mathrm{i} Qx_\parallel} \,, \qquad \langle \bar{\Phi}_- \Phi_+ \rangle = \Delta\, \mathrm{e}^{2\,\mathrm{i} Qx_\parallel} \,, \end{equation} which are equivalent to the following combination, \begin{equation} \langle \bar{\Phi} \Phi \rangle = 2\Delta \cos 2Qx_\parallel \,, \qquad \langle \bar{\Phi} i \gamma_0 \gamma_\parallel \Phi \rangle = 2\Delta \sin 2Qx_\parallel \,. \end{equation} The expectation value of the latter was not calculated in Ref.~\cite{Nickel:2009ke}, though. Here we have not found any particular mechanism to generate flavor rotations, at least in the high-density limit. Of course, once we had explicit flavor breaking coming from conditions such as charge neutrality and $\beta$-equilibrium\footnote{ These conditions are the driving mechanism that destabilizes homogeneous color-superconducting phases \cite{Huang:2004bg} into the crystalline states \cite{Alford:2000ze,Casalbuoni:2002pa,Kiriyama:2006ui,Gorbar:2005tx}. }, they would likely generate other chiral rotations as well. It would be interesting to consider astrophysical consequences of such inhomogeneous distributions. (See discussions on the implication for the glitch problem in Ref.~\cite{Alford:2000ze}, for example.) Assembling these works and the insights of this paper, let us infer what kind of calculations are desirable at low density. While the chiral crystals were not favored in the plane-wave Ansatz of Ref.~\cite{Rapp:2000zd}, we know that higher harmonics become increasingly important at lower density, as shown in Ref.~\cite{Nickel:2009ke} and by our perturbative calculations. They are also relevant for describing a localized quark number density. 
An interesting question is whether crystals including higher harmonics are energetically more favored than the one-dimensional solitonic configuration of Ref.~\cite{Nickel:2009ke}. As discussed at the end of the previous section, inter-patch interactions are strong for chiral spirals with close wavevectors. That argument, however, also implies that two chiral spirals with very different wavevectors do not strongly destroy one another. Thus, for configurations with higher harmonics, destruction due to inter-patch interactions might be tempered, so that the solitonic crystal structure might be energetically favored. \subsection{On the $1/N_{\rm c}$ Corrections} As the density increases, the $1/N_{\rm c}$ corrections grow because the increasing phase space around the Fermi surface enhances low-energy quark fluctuations\footnote{ Needless to say, when we try to include the $1/N_{\rm c}$ corrections, we have to restart all of the computations from the vacuum problem, in order to renormalize the theory including fermion loops. Here we are discussing the medium-induced modifications after the correct renormalization has been made. }. The enhancement is parametrically $\sim (\mu_{\rm q}/\Lambda_{\rm QCD})^{d-1}$, where $d$ is the number of spatial dimensions. Such effects are illustrated by the typical diagrams in Fig.~\ref{fig:1nc}. The diagrams (a) and (b) reduce the effective size of our coupling constant as the density increases. The diagram (c) modifies our resummation scheme, or the mean-field treatment shown in Fig.~\ref{fig:SDeq}. In the terminology of the four-Fermi interaction, we have treated the Hartree term while ignoring the Fock term in the large-$N_{\rm c}$ limit, and this treatment will be modified. In particular, let us note that in the diagram (c), the momenta of the loop, the incoming quark, and the outgoing quark need not all be close, in contrast to the leading-$N_{\rm c}$ contributions. 
Therefore, once the $1/N_{\rm c}$ contributions to the condensate become comparable to the leading order, they will violate the locality of the quark-condensate interactions in momentum space. This means that inter-patch interactions among chiral spirals occur not only near the patch boundaries but everywhere near the Fermi surface. The situation then becomes much more complicated than the one we have treated in this paper. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{0.3}[0.3] { \hspace{-0.6cm} \includegraphics[scale=.60]{a_1nc.pdf} } \scalebox{0.3}[0.3] { \hspace{1.5cm} \includegraphics[scale=.60]{b_1nc.pdf} } \scalebox{0.3}[0.3] { \hspace{1.5cm} \includegraphics[scale=.60]{c_1nc.pdf} } \end{center} \vspace{0.0cm} \caption{ Some diagrammatic examples of the $1/N_{\rm c}$ corrections for: (a) the gluon propagator, (b) the quark-gluon vertex, (c) our resummation scheme shown in the left panel of Fig.~\ref{fig:SDeq}. Modifications by three- and four-gluon vertices should exist as well. } \label{fig:1nc} \end{figure} The quantitative estimation of the $1/N_{\rm c}$ corrections is a highly non-linear problem. It should be quite sensitive to the effective masses of the quarks, while these masses are determined by non-perturbative interactions whose strength is reduced by screening effects from quarks. In this respect, it does matter whether we take into account the possibility of inhomogeneous condensates or not. Under the assumption of homogeneous chiral symmetry breaking, quarks lose the mass gaps originating from quark-antiquark condensation, and fluctuations would then rapidly grow at finite density. However, if we include the possibility of chiral symmetry breaking near the Fermi surface, the quarks acquire a mass gap through the constituent quark mass, suppressing the fluctuations near the Fermi surface. Calculations including the latter have not been done yet. The above issue is also related to the validity of the stationary phase approximation. 
Here we have to distinguish two kinds of fluctuations. One is the amplitude fluctuation, largely related to the size of the quark mass gap. The other is the phase fluctuation, which is massless. For the latter, one might suspect that, according to the Mermin-Wagner-Coleman arguments on lower dimensions \cite{Mermin:1966fe}, the IR fluctuations of the phases would be strong enough to destroy chiral spirals with quasi (1+1)-dimensional structure, and that quarks would therefore lose their mass gaps. A few remarks are in order. First, the absence of spontaneous symmetry breaking in real (1+1) dimensions is extremely sensitive to the {\it deep} IR region. Our phase fluctuations cannot access such a region since the Fermi surface always has finite curvature effects which provide the IR cutoff\footnote{ One might think that this cutoff argument would be inconsistent with our (1+1)-dimensional reduction of the gap equations. This is not so. The point is that the mass gap is much less sensitive to the {\it deep} infrared region --- even if we omit such small phase space, we can find the solution, like the NJL model in vacuum. } $\sim p_\perp^2/p_{\rm F}$. So even without using the large-$N_{\rm c}$ limit, we conclude that the chiral spirals have long-range order. This issue was already addressed in Ref.~\cite{Kojo:2010fe}. Second, even if the system has only quasi long-range order, the mass gap is not washed out. The absence of the condensate does not imply the presence of a gapless quark, as long as the amplitude field takes a finite expectation value. For a (1+1)-dimensional example, see \cite{Witten:1978qu}. So the primary question concerning the stationary phase approximation is whether the fields fluctuate strongly or not. Besides the fluctuation effects, we should also mention the possibility of the diquark condensate at $N_{\rm c}=3$, which could not be addressed in the large-$N_{\rm c}$ limit. The Meissner effect would change our non-perturbative forces. 
This issue is beyond our scope, and can be addressed only by examining the energy competition between the interweaving chiral spirals and the color-superconducting phases. \subsection{A Coordinate Space Structure of the Interweaving Chiral Spirals} \label{coordinate} So far we have not discussed coordinate space structures of the interweaving chiral spirals. This is because, as in BCS theory, the energy minimization at high density turns out to be sensitive only to the momentum space structures, at least in our model. This situation is very different from the determination of crystal structures in atomic physics \cite{solid}, where coordinate space descriptions are useful for the energy minimization. Nevertheless, it is certainly interesting to illustrate how the various densities of the interweaving chiral spirals look in coordinate space. In practice, the coordinate space considerations have potential relevance for the density domain close to baryonic matter, in which coordinate space descriptions of quarks are more appropriate than momentum space ones. For the quark number density, the distribution is simply uniform at the leading order of the high density expansion. This is because the quark number density built from the different patch contributions, \begin{equation} \bar{\psi} \gamma_0 \psi = \sum_{i=1}^{N_{\rm p}} \left( \bar{\psi}_{i+} \gamma_0 \psi_{i+} + \bar{\psi}_{i-} \gamma_0 \psi_{i-} \right)~, \end{equation} has no mixture of $(+,-)$ fields. The spatially modulating contributions start to appear only after the inclusion of perturbative corrections from subdominant terms such as $\bar{\psi}_+\, \mathrm{i}\, \Slash{\partial}_\perp \psi_-$, and thus are suppressed at high density. On the other hand, the distribution of the chiral density is nontrivial. 
After summing up the contributions from different patches, we have \begin{equation} \left\langle \bar{\psi} \psi (x) \right\rangle = \sum_{i=1}^{N_{\rm p}} \left\langle \bar{\psi}_{i+} \psi_{i-} (x) + \bar{\psi}_{i-} \psi_{i+} (x) \right\rangle \sim \Delta \sum_{i=1}^{N_{\rm p}} {\rm Re} \left( \mathrm{e}^{2 \mathrm{i} Q\vec{n}_i \cdot (\vec{x} - \vec{x}_i) } \right) ~, \end{equation} where we did not write the form factor dependence for notational simplicity, and we took into account $\vec{x}_i$ to make the amplitude field $\Delta$ real. Up to the patch number $N_{\rm p}=3$, the chiral density has a periodic structure and an orientational symmetry known in conventional crystallography. Shown in Fig.~\ref{fig:Crystals1} are the crystal structures of the chiral density, ``chiral density crystals'', for $N_{\rm p}=1,2,3$. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{1.0}[1.0] { \hspace{0.2cm} \includegraphics[scale=.30]{Crystals1.pdf} } \end{center} \caption{The crystal structures of the chiral density for $N_{\rm p}=1,2,3$. The corresponding shapes of the Fermi sea are also shown. (For simplicity, we chose $\vec{x}_i=\vec{0}$.) } \label{fig:Crystals1} \vspace{0.2cm} \end{figure} The situation is, however, different beyond $N_{\rm p}=3$. The chiral density then has only a $Z_{2N_{\rm p}}$ orientational symmetry, and does not possess a periodic structure. (A simple proof is given in Appendix~\ref{proof}.) The spatial distributions show certain repeating patterns, though. (See Fig.~\ref{fig:Crystals2}.) \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{1.0}[1.0] { \hspace{0.2cm} \includegraphics[scale=.30]{Crystals2.pdf} } \end{center} \caption{The structures of the chiral density for $N_{\rm p}=6$. (Left) The shape of the Fermi sea. (Right) The coordinate space distributions of the chiral density where the center of the figure is chosen as $\vec{x} = \vec{0}$. (For simplicity, we chose $\vec{x}_i=\vec{0}$.) 
Some circles connecting nodes are drawn as guidelines for viewing the patterns. } \label{fig:Crystals2} \vspace{0.2cm} \end{figure} A natural question is whether interweaving chiral spirals with $N_{\rm p} \ge 4$ can be classified by structures known in solid state physics. So far three types of solids are known: (i) amorphous (or glassy) structures without any long-range correlations, (ii) crystal structures with periodic translational order and long-range (near-neighbor bond) orientational order with special crystallographic discrete subgroups of the rotational group, and (iii) quasi-crystal structures that can have arbitrary types of orientational symmetry and the corresponding quasi-periodicity \cite{Shechtman84}. A nontrivial fact is that, in spite of the lack of periodicity, quasi-crystals are made of a finite number of unit cell species. In Fig.~\ref{fig:Penrose} we show a specific example of a two-dimensional quasi-crystal, known as the Penrose tiling. \begin{figure}[tb] \vspace{0.0cm} \begin{center} \scalebox{1.0}[1.0] { \hspace{0.2cm} \includegraphics[scale=.34]{Penrose.pdf} } \end{center} \caption{ The Penrose tiling as an example of a two-dimensional quasi-crystal. Only two types of tiles are needed to cover the entire space. There is no strict periodicity. (This figure is taken from Ref.~\cite{Penrose}.) } \label{fig:Penrose} \vspace{0.2cm} \end{figure} Interweaving chiral spiral states with $N_{\rm p} \ge 4$ do not have periodicity, so they are not crystals. Also, the interweaving chiral spirals have definite rotational properties and fixed sizes of wavevectors, so they presumably are not amorphous. It is not clear whether our interweaving chiral spirals belong to the remaining candidate, the quasi-crystals. In usual solids, the basic properties can be discussed by knowing the ion species and their locations. 
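The (non-)periodicity of the patch sum $\Delta \sum_i \cos\!\big(2Q\vec{n}_i\cdot(\vec{x}-\vec{x}_i)\big)$ can also be probed numerically. The following sketch is our own illustration, not part of the model itself: it assumes patch directions spread uniformly over the upper half circle, $\vec{x}_i=\vec{0}$, and $Q=1$. For $N_{\rm p}\le 3$ the wavevector projections are commensurate and the pattern is periodic; for $N_{\rm p}\ge 4$ incommensurate components such as $\cos(\pi/4)$ spoil strict periodicity, leaving only the $Z_{2N_{\rm p}}$ orientational symmetry.

```python
import math

def chiral_density(x, y, n_patches, Q=1.0):
    """Toy chiral density: sum of plane-wave condensates over patches.

    Patch directions n_i = (cos(pi*i/N_p), sin(pi*i/N_p)) cover the upper
    half circle; the pair (+n_i, -n_i) contributes a single cosine, so
    only N_p angles appear.  (Illustrative conventions, not the paper's.)
    """
    total = 0.0
    for i in range(n_patches):
        theta = math.pi * i / n_patches
        nx, ny = math.cos(theta), math.sin(theta)
        total += math.cos(2.0 * Q * (nx * x + ny * y))
    return total

# N_p = 3: the directions 0, 60, 120 degrees give projections x,
# (x + sqrt(3) y)/2, (-x + sqrt(3) y)/2, all commensurate, so the
# pattern repeats under x -> x + 2*pi/Q (a triangular lattice).
# N_p = 4 adds cos(pi/4) = sqrt(2)/2, and no common period exists.
```

One can verify, for instance, that `chiral_density(x, y, 3)` is unchanged under the shift `x -> x + 2*math.pi`, while no analogous shift works for `n_patches = 4`.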
The separation among ions is much larger than the size of the ions, so it is a good approximation to treat the ion density as a sum of $\delta$-functions. The electron distribution is determined accordingly. Then we can attach a unit cell to each ion, and such cell structures appear repeatedly. Therefore, studies of unit cells and their connections to other cells are enough to know the properties of crystals. The relevant point here is that the number of cell and bond species is finite. In the case of interweaving chiral spirals, it is tempting to assign the nodes of the interweaving chiral spirals as analogues of the positions of ions. The problem, however, is that unlike the electron density in atomic crystals, the chiral density around the nodes is not well-determined from the locations of the nodes, perhaps because the minimum energy configuration at high density is dominantly determined by momentum space structures. We expect that the number of varieties in cell shapes might be finite, but their contents might have many varieties. We leave further discussions about the classification of interweaving chiral spirals for future studies. We expect that such studies will become important for the region where the system changes from quark to baryonic matter, or where the appropriate description for energy minimization changes from the momentum space one to the coordinate space one. \subsection{On the Six-Fermi Interaction} Let us briefly mention the effects of the six-Fermi interaction that breaks the $U(1)_{\rm A}$ symmetry for three flavors. When we discuss the dynamics of $u,d$ quarks, we typically take the expectation value of $\bar{s} s$, and normalize the coupling constant as \begin{equation} \sim G_i ~(\bar{u} u) (\bar{d} d) (\bar{s} s) ~\longrightarrow~ \big( G_i \langle\bar{s} s\rangle \big) ~ (\bar{u} u) (\bar{d} d) = G_i^s ~(\bar{u} u) (\bar{d} d) \,, \end{equation} where we did not write any explicit structure of the Dirac $\gamma$ matrices. 
Thus at zero density, this interaction would simply renormalize the coupling constant of our model. At high density, however, the four-Fermi and the six-Fermi interactions show interesting differences. Let us first consider the following situation: $\mu \simeq \mu_u \simeq \mu_d \simeq \mu_s \gg \Lambda_{\rm QCD}$, where all light flavors form a quark Fermi sea. Using the $\psi'_\pm$-representation, the interaction looks like \begin{equation} \sim G_i \big(\, (\bar{u}'_+ u'_-) (\bar{d}'_+ d'_-) (\bar{s}'_+ s'_-) \mathrm{e}^{-6\mathrm{i} \mu_{\rm q} x_\parallel} + (\bar{u}'_+ u'_-) (\bar{d}'_+ d'_-) (\bar{s}'_- s'_+) \mathrm{e}^{-2\mathrm{i} \mu_{\rm q} x_\parallel} + \cdots \big)\,. \end{equation} Here let us note that all of the vertices must have oscillation factors, i.e., they are subdominant terms. These terms tend to disturb the chiral spiral formation, but eventually become negligible at very high density. On the other hand, at intermediate density such that only $u$ and $d$ quarks have a Fermi sea, the vertex $\langle \bar{s}'s' \rangle$ is uniform and does not include the oscillating factor. Then we can find the dominant four-Fermi interactions which help the formation of the chiral spirals in the $u,d$ channels. Actually, this is the situation discussed in the aforementioned work~\cite{Rapp:2000zd}. It is interesting to see whether any drastic changes happen near the strange-quark threshold. \section{Summary} \label{secsummary} In this paper we have argued how to construct the interweaving chiral spirals at high density. The patch size is determined by the balance between the energy gain from condensation and the energy cost from the deformed Fermi surface. The Fermi surface effects drive the spontaneous breaking of chiral, translational, and rotational invariance. A key ingredient of our discussion was that the interaction between quarks and the condensate, which is dominant at large $N_{\rm c}$, is local in momentum space. 
Because of this property, energy contributions could be characterized by the limited phase space for the scattering between quarks and the condensate. In particular, the complicated interplay between differently oriented chiral spirals happens only near the patch boundaries, so we could roughly estimate the energy cost simply from the phase space for patch-patch interactions. We have analyzed the (2+1)-dimensional model, for which the geometric shape of the Fermi sea is relatively simple. In higher dimensions, however, such simplicity is no longer present, since there are many ways to interweave chiral spirals. On the other hand, the principle for choosing the best shape should be relatively simple according to the arguments in this paper. We need to obtain the largest energy gain from condensation by maximizing the area of the Fermi surface until the kinetic energy cost becomes crucial, and at the same time, we have to minimize the length of the patch intersection lines to reduce inter-patch interactions. To find such a geometric shape would be a mathematically well-defined problem. The interweaving chiral spirals will survive in (3+1) dimensions, since the key point is the IR enhancement of the interaction, which is a fundamental property of QCD. This situation is quite different from charge or spin density waves in condensed matter physics, which are energetically favored in one-dimensional systems, but not in higher dimensions \cite{Dagotto}. The form of the chiral Lagrangian near the Fermi surface was derived in Ref.~\cite{Kojo:2010fe}, regarding the system near the Fermi surface as (1+1)-dimensional chains with transverse hopping. In the dispersion relation for the collective modes, the transverse kinetic terms are suppressed by powers of $\Lambda_{\rm f}/Q$. As the density increases, therefore, the spectrum will approach the (1+1)-dimensional one, so that the IR fluctuations will be stronger. 
We also anticipate that temperature effects strongly enhance the phase fluctuations \cite{Baym:1982ca}. Results on this issue will be reported in the future. \section*{Acknowledgments} We thank G.~Basar, G.~Dunne, E.J.~Ferrer, L.Y.~Glozman, V.~Incera, J.~Liao, S.~Nakamura, R.~Rapp, E.~Shuryak, G.~Torrieri, and I.~Zahed for useful comments and/or raising several important questions related to multiple (Quarkyonic) chiral spirals. Special thanks go to A.M.~Tsvelik for the collaboration \cite{Kojo:2010fe} with several of the present authors. We acknowledge the referee for constructive questions which have helped us to improve the original manuscript. T.K.\ is grateful to S.~Carignano and M.~Buballa for explaining their NJL model studies on the chiral crystals before publication. He also thanks the Asia Pacific Center for Theoretical Physics (APCTP) and Hashimoto Laboratory in RIKEN Nishina Center for their hospitality during his visits in March and April 2011. The research of Y.H.\ is supported by RIKEN and the Grant-in-Aid for the Global COE Program ``The Next Generation of Physics, Spun from Universality and Emergence'' from the Ministry of Education, Culture, Science and Technology (MEXT) of Japan; that of T.K.\ by the Postdoctoral Research Program of RIKEN; that of L.D.M.\ and R.D.P.\ by the U.S.\ Department of Energy under contract No.\ DE-AC02-98CH10886. R.D.P.\ also thanks the Alexander von Humboldt Foundation for their support.
\section{Introduction} Basic process algebra (BPA)~\cite{Baeten:1991:PA:103272} is a fundamental model of infinite state systems, with a famous counterpart in the theory of formal languages: context-free grammars in Greibach normal form, which generate all context-free languages. In 1987, Baeten, Bergstra and Klop~\cite{BaetenBergstraKlop1987,DBLP:journals/jacm/BaetenBK93} proved the surprising result that strong bisimilarity on normed BPA is decidable. This result is in sharp contrast to the classical fact that language equivalence is undecidable for context-free grammars~\cite{Hopcroft:1990:IAT:574901}. After this remarkable discovery, decidability and complexity issues of bisimilarity checking on infinite state systems have been intensively investigated. See~\cite{DBLP:conf/concur/JancarM99,Burkart00verificationon,DBLP:journals/iandc/MollerSS04,Srba2004,Kucera2006} for a number of surveys. As regards strong bisimilarity checking on normed BPA, H\"uttel and Stirling~\cite{DBLP:conf/lics/HuttelS91} improved the result of Baeten, Bergstra and Klop with a simplified proof relating the strong bisimilarity of two normed BPA processes to the existence of a successful tableau system. Later, Huynh and Tian~\cite{DBLP:journals/tcs/HuynhT94} showed that the problem is in $\Sigma_{2}^{\mathrm{P}}$, the second level of the polynomial hierarchy. Before long, another significant discovery was made by Hirshfeld, Jerrum and Moller~\cite{DBLP:journals/tcs/HirshfeldJM96}, who showed that the problem can even be decided in polynomial time, with complexity $\mathcal{O}(N^{13})$. The running time was later improved~\cite{DBLP:conf/mfcs/LasotaR06,DBLP:conf/fsttcs/CzerwinskiL10}. All these algorithms take the approach of partition refinement, relying on the unique decomposition property and some efficient way of equality checking on compressed long strings. 
It deserves special mention that Czerwi\'{n}ski and Lasota~\cite{DBLP:conf/fsttcs/CzerwinskiL10} created a different refinement scheme. This refinement scheme was previously used in developing a polynomial-time algorithm for checking strong bisimilarity on normed basic parallel processes (normed $\mathrm{BPP}$)~\cite{DBLP:journals/mscs/HirshfeldJM96}. In this way Czerwi\'{n}ski and Lasota improved the running time to $\mathcal{O}(N^5)$. Hitherto, the best algorithm was reported in~\cite{CzerwinskiPhD}, whose running time is $\mathcal{O}(N^4\mathrm{polylog}(N))$. In the presence of silent actions the picture is less clear. Even the decidability of weak bisimilarity is still open. A remarkable discovery was made recently by Fu~\cite{DBLP:conf/icalp/Fu13}: branching bisimilarity~\cite{GlabbeekW96}, a standard refined alternative to weak bisimilarity, is decidable on normed BPA. Very recently, Czerwi\'{n}ski and Jan\v{c}ar confirmed this problem to be in $\mathrm{NEXPTIME}$~\cite{DBLP:journals/corr/CzerwinskiJ14}. The current best lower bound for weak bisimilarity is the $\mathrm{EXPTIME}$-hardness established by Mayr~\cite{Mayr2005}, whose proof can be slightly modified to show $\mathrm{EXPTIME}$-hardness for branching bisimilarity as well. In retrospect one cannot help thinking that more attention should have been paid to branching bisimilarity. Going back to the original motivation for equivalence checking, one would agree that a specification $spec$ normally contains no silent actions, because silent actions are about how things are done. It follows that $spec$ is weakly bisimilar to an implementation $impl$ if and only if $spec$ is branching bisimilar to $impl$ (Theorem~5.8.18 in~\cite{Baeten:1991:PA:103272}). In addition, in the majority of practical examples, branching bisimilarity and weak bisimilarity coincide. 
What these observations tell us is that, as far as verification is concerned, branching bisimilarity ought to play a bigger role than weak bisimilarity, especially in situations where branching bisimilarity is easily decided. One major difficulty in checking weak or branching bisimilarity on normed $\mathrm{BPA}$ stems from the lack of nice structural properties such as the unique decomposition property. By forcing the final action of every process to be observable, we obtain an important subset of normed $\mathrm{BPA}$, called totally normed $\mathrm{BPA}$, in which the unique decomposition property still holds for branching bisimilarity. Bisimilarity checking on totally normed $\mathrm{BPA}$ also has a long history. In 1991, H\"uttel~\cite{DBLP:conf/cav/Huttel91} adapted the tableau construction developed in~\cite{DBLP:conf/lics/HuttelS91} to branching bisimilarity on totally normed BPA. Although H\"uttel's construction is not sound for weak bisimilarity, the relevant decidability can also be established~\cite{DBLP:journals/entcs/Hirshfeld96}. For the lower bound, $\mathrm{NP}$-hardness was established by St\v{r}\'{i}brn\'{a}~\cite{DBLP:journals/entcs/Stribrna98} for weak bisimilarity via a reduction from the knapsack problem. By inspecting St\v{r}\'{i}brn\'{a}'s proof, we see that the $\mathrm{NP}$-hardness still holds for any other bisimilarity, such as delay bisimilarity, $\eta$-bisimilarity, and even quasi-branching bisimilarity~\cite{GlabbeekW96}, except for branching bisimilarity. The requirement of branching bisimilarity that change-of-state silent actions must be explicitly bisimulated makes it impossible to realize nondeterminism by designing gadgets via a bisimulation game. These crucial observations inspire us to rethink the possibility of designing a more efficient algorithm for checking branching bisimilarity on totally normed $\mathrm{BPA}$. 
This paper provides a polynomial time algorithm for checking branching bisimilarity on totally normed $\mathrm{BPA}$. We thus have an instance in which branching bisimilarity and weak bisimilarity are both decidable but lie in different complexity classes. For brevity, in the rest of this paper, `branching bisimilarity' will usually be referred to as `bisimilarity'. We avoid using the term `strong bisimilarity', since strong bisimilarity can be interpreted as bisimilarity for `realtime' processes. A realtime process is a process which can perform no silent action. The algorithm developed in this paper takes a similar partition refinement approach, within the framework adopted in~\cite{DBLP:conf/fsttcs/CzerwinskiL10,CzerwinskiPhD}, which was designed to decide bisimilarity for realtime normed $\mathrm{BPA}$. That algorithm is called the CL algorithm in this paper. The final efficient implementation of our algorithm is a generalized version of the CL algorithm, in the sense that, for realtime systems, our algorithm and the CL algorithm are essentially the same. Our algorithm relies heavily on the technique of dynamic programming, which gives our implementation the same computational complexity as the CL algorithm. Although our algorithm seems very similar to the previous one, the technical details, including the definitions of the expansion and refinement operations and the theoretical development of the correctness, are considerably more difficult than for the CL algorithm. Without doubt, the consecutive silent transitions in the definition of branching bisimilarity cause severe problems in two respects: correctness and efficiency. Note that totally normedness guarantees that the number of consecutive silent actions is bounded by the number of constants. It is not hard to use this observation, together with the game theoretical view of branching bisimilarity, to design an algorithm which runs in polynomial space. 
However, the consecutive silent actions, which cause nondeterminism, would make checking the branching bisimulation property take exponential time if done naively. The only way to overcome this difficulty is a proper use of the technique of dynamic programming. When consecutive silent actions are eliminated by means of dynamic programming, we face a serious problem: why is the resulting algorithm still correct? In the setting of the CL algorithm for realtime normed $\mathrm{BPA}$, there is a pre-defined refinement operation, $\mathsf{Ref}(\equiv)$. In that setting, there is a canonical definition of expansion and a canonical definition of relative decreasing bisimilarity (in our terminology). The final refinement operation $\mathsf{Ref}(\equiv)$ is defined as the decreasing bisimilarity wrt.~the expansion of $\equiv$. The refined equivalence relation is then constructed by a greedy algorithm, in each step of which two memberships are efficiently tested. Therefore, the correctness of the CL algorithm is comparatively obvious. Unfortunately, it is unlikely, if not impossible, to carry this proof structure over from the CL algorithm to our algorithm, because there is no clear way to define an expansion relation like the one in the CL algorithm. Note that the expansion relation should be both correct and efficient. We made several aborted attempts before finally deciding to take another route. The correctness of the CL algorithm depends on a clearly defined refinement operation which relies on two steps: the expansion operation and the relative decreasing bisimilarity. Our crucial insight is that there is no need to separate these two steps. The main technical line is briefly outlined below. It takes several stages: \begin{itemize} \item First, for realtime systems, we define the refinement operation by combining into a cohesive whole the two steps of operation taken in the CL algorithm. 
In this way, we notice that we define exactly the same refinement operation as the CL algorithm. \item Then, the refinement operation defined in this way is smoothly generalized to the setting of branching bisimilarity. In this stage, our attention is centred on the properties of the refined relation; efficiency is not yet a concern. We prove that the refinement operation preserves congruence and the unique decomposition property. \item Next, a characterization theorem is established for the refined congruence. In this characterization, the consecutive silent actions are completely eliminated, which largely solves the problem of efficiency. Using this characterization, the correctness proof for realtime systems can be obtained, but for systems with silent actions it is not enough. \item Finally, the proof is finished by developing a simpler characterization which corresponds to our algorithm directly. In this stage, a special property of branching bisimilarity for processes in prime decomposition turns out to be quite useful. \end{itemize} The rest of the paper is organized as follows. Section~\ref{sec:Preliminaries} lays down the preliminaries. Section~\ref{sec:finite_representations} focuses on the unique decomposition property for branching bisimilarity on totally normed BPA. Then we describe our algorithm in Section~\ref{sec:naive-algorithm}. The suitable definition of the refinement steps is discussed in Section~\ref{sec:refinement_steps}, and the correctness proof is provided in Section~\ref{sec:correctness}. Finally, Section~\ref{sec:remark} gives additional remarks. \section{Preliminaries}\label{sec:Preliminaries} \subsubsection{Basic Process Algebra} A {\em basic process algebra} ($\mathrm{BPA}$) system is a triple $(\mathbf{C}, \mathcal{A}, \Delta)$, where $\mathbf{C} = \{X_1, \ldots, X_n\}$ is a finite set of process constants, $\mathcal{A}$ is a finite set of actions, and $\Delta$ is a finite set of transition rules. 
The {\em processes}, ranged over by $\alpha,\beta,\gamma,\delta$, are generated by the following grammar: \[ \alpha \ \Coloneqq \ \epsilon \ \mid \ X \ \mid \ \alpha_1 \cdot \alpha_2. \] The syntactic equality is denoted by $=$. We assume that the sequential composition $\alpha_1\cdot\alpha_2$ is associative up to $=$ and $\epsilon \cdot \alpha = \alpha \cdot \epsilon = \alpha$. Sometimes $\alpha \cdot \beta$ is shortened as $\alpha\beta$. The set of processes is exactly $\mathbf{C}^{*}$, the strings over $\mathbf{C}$. There can be a special symbol $\tau$ in $\mathcal{A}$ for silent transition. Typically, $\ell$ is used to denote actions, while $a$ is used to denote visible (i.e.~non-silent) actions. The transition rules in $\Delta$ are of the form $X \stackrel{\ell}{\longrightarrow} \alpha$. The following labelled transition rules define the operational semantics of the processes. \[ \begin{array}{c} \cfrac{X\stackrel{\ell}{\longrightarrow}\alpha\in\Delta}{X\stackrel{\ell}{\longrightarrow}\alpha} \ \qquad \cfrac{ \alpha\stackrel{\ell}{\longrightarrow}\alpha'}{\alpha\cdot\beta \stackrel{\ell}{\longrightarrow} \alpha' \cdot \beta} \end{array} \] The operational semantics is structural, meaning that $\alpha \cdot\beta\stackrel{\ell}{\longrightarrow}\alpha'\cdot\beta$ whenever $\alpha\stackrel{\ell}{\longrightarrow}\alpha'$. We write $\Longrightarrow $ for the reflexive transitive closure of $\stackrel{\tau}{\longrightarrow}$, and $\stackrel{\widehat{\ell}}{\Longrightarrow}$ for $\Longrightarrow\stackrel{\ell}{\longrightarrow}\Longrightarrow$ if $\ell\ne\tau$ and for $\Longrightarrow$ otherwise. A process $\alpha$ is {\em normed} if $\alpha \stackrel{\ell_1}{\longrightarrow} \dots \stackrel{\ell_n}{\longrightarrow} \epsilon$ for some $\ell_1, \dots, \ell_n$. A process $\alpha$ is {\em totally normed} if it is normed, and moreover, $\ell_n \neq \tau$ whenever $\alpha \stackrel{\ell_1}{\longrightarrow} \dots \stackrel{\ell_n}{\longrightarrow} \epsilon$.
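As a concrete illustration, the one-step semantics above can be coded directly: a process is a string of constants, and a transition of $\alpha = X\cdot\beta$ rewrites $X$ by the right-hand side of one of its rules. The sketch below is our own; the toy system and all names in it are invented for illustration only.

```python
# Sketch of the one-step operational semantics of BPA.
# A process is a tuple of constants; Delta maps each constant X to the
# list of its rules (l, alpha) standing for X --l--> alpha.

Delta = {
    "X": [("a", ("Y", "Y")), ("tau", ("Y",))],  # X --a--> Y.Y, X --tau--> Y
    "Y": [("b", ())],                           # Y --b--> epsilon
}

def steps(alpha):
    """One-step transitions of alpha = X . beta: by the structural rule,
    alpha --l--> alpha' . beta whenever X --l--> alpha' is in Delta."""
    if not alpha:                   # the empty process has no transitions
        return []
    head, tail = alpha[0], alpha[1:]
    return [(l, succ + tail) for (l, succ) in Delta[head]]
```

For instance, `steps(("X", "Y"))` lists the two transitions of $X\cdot Y$, both obtained from the rules for $X$, with the remainder $Y$ appended.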
A $\mathrm{BPA}$ definition $(\mathbf{C}, \mathcal{A}, \Delta)$ is (totally) normed if all processes defined in it are (totally) normed. We write (t)(n)BPA for the (totally) (normed) basic process algebra model. In other words, a $\mathrm{tnBPA}$ system is a $\mathrm{nBPA}$ system in which rules of the form $X \stackrel{\tau}{\longrightarrow} \epsilon$ are forbidden. We call a BPA system {\em realtime} if $\tau \not\in \mathcal{A}$. That is to say, a realtime system cannot perform silent actions. Clearly, realtime totally normed BPA is exactly realtime normed BPA. \subsubsection{Bisimulations and Bisimilarities} In the presence of silent actions, two well-known process equalities are the branching bisimilarity~\cite{GlabbeekW96} and the weak bisimilarity~\cite{Milner1989}. \begin{definition}\label{def:beq} Let $\mathcal{R}$ be a relation on processes. $\mathcal{R}$ is a {\em branching bisimulation} if the following hold whenever $\alpha \mathcal{R} \beta$: \begin{enumerate} \item If $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$, then either \begin{enumerate} \item $\ell = \tau$ and $\alpha'\mathcal{R}\beta$; or \item $\beta \Longrightarrow \beta''\stackrel{\ell}{\longrightarrow} \beta'$ and $\alpha'\mathcal{R} \beta'$ and $\alpha\mathcal{R}\beta''$ for some $\beta',\beta''$. \end{enumerate} \item If $\beta\stackrel{\ell}{\longrightarrow}\beta'$, then either \begin{enumerate} \item $\ell=\tau$ and $\alpha\mathcal{R}\beta'$; or \item $\alpha\Longrightarrow \alpha''\stackrel{\ell}{\longrightarrow} \alpha'$ and $\alpha'\mathcal{R} \beta'$ and $\alpha'' \mathcal{R} \beta$ for some $\alpha',\alpha''$. \end{enumerate} \end{enumerate} The {\em branching bisimilarity} $\simeq$ is the largest branching bisimulation.
\end{definition} \begin{definition} A relation $\mathcal{R}$ is a {\em weak bisimulation} if the following are valid: \begin{enumerate} \item Whenever $\alpha \mathcal{R}\beta$ and $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$, then $\beta\stackrel{\widehat{\ell}}{\Longrightarrow}\beta'$ and $\alpha'\mathcal{R}\beta'$ for some $\beta'$. \item Whenever $\alpha\mathcal{R}\beta$ and $\beta \stackrel{\ell}{\longrightarrow} \beta'$, then $\alpha\stackrel{\widehat{\ell}}{\Longrightarrow}\alpha'$ and $\alpha'\mathcal{R} \beta'$ for some $\alpha'$. \end{enumerate} The {\em weak bisimilarity} $\approx$ is the largest weak bisimulation. \end{definition} Both $\simeq$ and $\approx$ are congruence relations for (t)nBPA. We remark that transitivity of $\simeq$ is not straightforward according to Definition~\ref{def:beq}, because a branching bisimulation $\mathcal{R}$ as in Definition~\ref{def:beq} need not be transitive~\cite{DBLP:journals/ipl/Basten96}. To solve this problem, van Glabbeek and Weijland~\cite{GlabbeekW96} introduced a slightly different notion called {\em semi-branching bisimulation}. \begin{definition}\label{def:semi-beq} Let $\mathcal{R}$ be a relation on processes. $\mathcal{R}$ is a {\em semi-branching bisimulation} if the following hold whenever $\alpha \mathcal{R} \beta$: \begin{enumerate} \item If $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$, then either \begin{enumerate} \item $\ell = \tau$ and $\beta \Longrightarrow \beta'$ for some $\beta'$ such that $\alpha\mathcal{R}\beta'$ and $\alpha'\mathcal{R}\beta'$; or \item $\beta \Longrightarrow \beta''\stackrel{\ell}{\longrightarrow} \beta'$ and $\alpha'\mathcal{R} \beta'$ and $\alpha\mathcal{R}\beta''$ for some $\beta',\beta''$.
\end{enumerate} \item If $\beta\stackrel{\ell}{\longrightarrow}\beta'$, then either \begin{enumerate} \item $\ell=\tau$ and $\alpha \Longrightarrow \alpha'$ for some $\alpha'$ such that $\alpha'\mathcal{R}\beta$ and $\alpha'\mathcal{R}\beta'$; or \item $\alpha\Longrightarrow \alpha''\stackrel{\ell}{\longrightarrow} \alpha'$ and $\alpha'\mathcal{R} \beta'$ and $\alpha'' \mathcal{R} \beta$ for some $\alpha',\alpha''$. \end{enumerate} \end{enumerate} \end{definition} Then it is easy to establish the following facts: \begin{enumerate} \item A branching bisimulation is a semi-branching bisimulation. \item The relational composition of two semi-branching bisimulations is again a semi-branching bisimulation. \item The largest semi-branching bisimulation is an equivalence. \item The largest semi-branching bisimulation is a branching bisimulation. \end{enumerate} It follows that the largest semi-branching bisimulation is the same as $\simeq$, the largest branching bisimulation. If the system under consideration is realtime, then the branching bisimilarity and the weak bisimilarity coincide; both are then called the {\em strong bisimilarity}, denoted by $\sim$ in the literature. In this paper, branching bisimilarity is often abbreviated as {\em bisimilarity}. If the system is realtime, we also use the term bisimilarity to indicate strong bisimilarity. However, we tend to use the term `branching bisimilarity' when discussing its relationship with the weak bisimilarity. The following lemma, first noticed by van Glabbeek and Weijland~\cite{GlabbeekW96}, plays a fundamental role in the study of bisimilarity. \begin{lemma}\label{computation-lemma} If $\alpha \Longrightarrow \alpha'\Longrightarrow \alpha'' \simeq \alpha$ then $\alpha'\simeq \alpha$. \end{lemma} Let $\approxeq$ be a process equivalence. A silent action $\alpha\stackrel{\tau}{\longrightarrow}\alpha'$ is {\em state-preserving} with regard to $\approxeq$ if $\alpha'\approxeq \alpha$; it is {\em change-of-state} with regard to $\approxeq$ if $\alpha'\not\approxeq \alpha$.
Branching bisimilarity strictly refines weak bisimilarity in the sense that only state-preserving silent actions can be ignored; a change-of-state must be explicitly bisimulated. Suppose that $\alpha \simeq \beta$ and $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$ is matched by the transition sequence $\beta \stackrel{\tau}{\longrightarrow} \dots \stackrel{\tau}{\longrightarrow} \beta^{i} \stackrel{\tau}{\longrightarrow} \dots \stackrel{\tau}{\longrightarrow}\beta'' \stackrel{\ell}{\longrightarrow} \beta'$. By definition one has $\alpha \simeq \beta''$. It follows from Lemma~\ref{computation-lemma} that $\alpha \simeq \beta^{i}$, meaning that all silent actions in $\beta\Longrightarrow \beta''$ are necessarily state-preserving. This property fails for the weak bisimilarity, as the following example demonstrates. \begin{example}\label{example} Consider the $\mathrm{tnBPA}$ system whose rules are defined by \[ \{X \stackrel{b}{\longrightarrow} \epsilon, \ X \stackrel{\tau}{\longrightarrow} X', \ X'\stackrel{a}{\longrightarrow} \epsilon, \ X \stackrel{a}{\longrightarrow} \epsilon;\ Y \stackrel{b}{\longrightarrow} \epsilon, \ Y\stackrel{\tau}{\longrightarrow} Y', \ Y' \stackrel{a}{\longrightarrow} \epsilon\}. \] One has $X \approx Y$. However $ X \not\simeq Y$ since $Y\not\simeq Y'$. \end{example} \subsubsection{Norm} Given a $\mathrm{tnBPA}$ system $(\mathbf{C}, \mathcal{A}, \Delta)$, we associate with every constant $X$ a natural number $\mathtt{norm}(X)$, the {\em norm} of $X$, defined as the least $k$ such that $X \Longrightarrow \stackrel{a_1}{\longrightarrow} \Longrightarrow \dots \Longrightarrow\stackrel{a_k}{\longrightarrow} \epsilon$. Silent actions contribute zero to the norm. $\mathtt{norm}$ is extended to processes by taking $\mathtt{norm}(\epsilon) = 0$ and $\mathtt{norm}(X \cdot \alpha) = \mathtt{norm}(X) + \mathtt{norm}(\alpha)$. \begin{lemma}\label{lem:normal_one} In a $\mathrm{tnBPA}$ system, $\mathtt{norm}(\alpha) = 0$ if and only if $\alpha = \epsilon$.
\end{lemma} A transition $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$ is {\em decreasing}, denoted by $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$, if either $\ell \neq \tau$ and $\mathtt{norm}(\alpha) = \mathtt{norm}(\alpha') + 1$, or $\ell = \tau$ and $\mathtt{norm}(\alpha) = \mathtt{norm}(\alpha')$. The notion of decreasing transitions formalizes the intuition that a transition can be extended to a path which witnesses the norm of $\alpha$. \subsubsection{Standard Input} For technical convenience, we require the input $\mathrm{tnBPA}$ system $(\mathbf{C}, \mathcal{A}, \Delta)$ to be {\em standard}, which has the following two additional properties: \begin{enumerate} \item The constants in $\mathbf{C} = \{ X_i\}_{i=1}^{n} $ are ordered by non-decreasing norm, that is: \[ \mathtt{norm}(X_1) \leq \mathtt{norm}(X_2) \leq \ldots \leq \mathtt{norm}(X_n). \] \item Let $\mathbf{C}_i$ be the set $\{X_1, X_2, \ldots, X_i\}$ for $i=0,1,\ldots, n$. In particular, $\mathbf{C}_0 = \emptyset$ and $\mathbf{C}_n = \mathbf{C}$. Whenever $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$, we require that $\alpha \in \mathbf{C}_{i-1}^{*}$. This property does not hold in general because of the existence of loops like $X_i \Longrightarrow X_j \Longrightarrow X_i$. In this case we have $X_i \simeq X_j$ by Lemma~\ref{computation-lemma}, and we can transform the system by contracting $X_i$ and $X_j$ into one constant (removing $X_j$ and substituting all occurrences of $X_j$ in $\Delta$ by $X_i$) and eliminating the loop rules. All loops can be eliminated in this way. (By total normedness, $X \stackrel{\tau}{\Longrightarrow}_{\mathrm{dec}} X \cdot Y$ is impossible.) Afterwards, we specify a partial order ${\preceq} \subseteq \mathbf{C} \times \mathbf{C}$ such that $X \prec X'$ if and only if either $\mathtt{norm}(X) < \mathtt{norm}(X')$ or $X' \Longrightarrow_{\mathrm{dec}} X$.
Then the order of the constants is chosen to be any total order which extends $\prec$. This can be done by computing the `dependency graph' and then calling an algorithm for topological sort. \end{enumerate} The size of a $\mathrm{tnBPA}$ system $(\mathbf{C}, \mathcal{A}, \Delta)$ is denoted by $|\Delta|$. A procedure is said to be {\em efficient} if it runs in polynomial time. The above discussion confirms that any $\mathrm{tnBPA}$ system can be efficiently transformed to a standard one without increasing its size. \begin{lemma}\label{lem:normal_two} For every $\mathrm{tnBPA}$ system $(\{X_1, X_2, \ldots, X_n\}, \mathcal{A}, \Delta)$, there is a standard $\mathrm{tnBPA}$ system $(\{X_1', X_2',\ldots,X_m'\}, \mathcal{A}, \Delta')$ computable in at most $\mathcal{O}(|\Delta|^2)$ time, in which $m \leq n$ and $|\Delta'| \leq |\Delta|$. \end{lemma} From now on, the input $\mathrm{tnBPA}$ system is supposed to be standard, and is fixed as $(\mathbf{C}, \mathcal{A}, \Delta)$ where $\mathbf{C} = \{X_1, X_2, \ldots, X_n\}$. We will throughout use $n$ to denote the size of $\mathbf{C}$, and $N$ to denote the size of the related $\mathrm{tnBPA}$ system. The problem is formally defined as follows: \begin {center}\small \begin{tabular}{|rp{9.5cm}|}\hline Problem: \quad & \textsc{Branching Bisimilarity on tnBPA} \\ Instance: \quad & A standard tnBPA system $(\mathbf{C} = \{X_i\}_{i=1}^{n}, \mathcal{A}, \Delta)$, and $\alpha, \beta \in \mathbf{C}^{*}$. \\ Question: \quad & $\alpha \simeq \beta$? \\ \hline \end{tabular} \end {center} We restate the important property for standard systems as the following lemma. \begin{lemma}\label{lem:decreasing_transition} If $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$, then $\alpha \in \mathbf{C}_{i-1}^{*}$. \end{lemma} \subsubsection{Other Conventions} We will always use notation $\equiv$ to denote an equivalence/congruence relation on $\mathbf{C}^{*}$.
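To illustrate the standardization above, the norms of the constants can be computed by a least-fixed-point iteration in which a silent action costs $0$ and a visible action costs $1$; sorting the constants by the resulting norms yields an ordering compatible with requirement 1. The sketch below is our own, with an invented toy system.

```python
import math

# Least-fixed-point computation of norms; a silent action costs 0 and a
# visible action costs 1, matching the definition of norm above.
# The system Delta is an invented example.

Delta = {
    "X1": [("a", ())],                          # norm(X1) = 1
    "X2": [("tau", ("X1",)), ("b", ("X1",))],   # norm(X2) = 0 + norm(X1) = 1
    "X3": [("c", ("X1", "X2"))],                # norm(X3) = 1 + 1 + 1 = 3
}

def norms(Delta):
    norm = {X: math.inf for X in Delta}
    changed = True
    while changed:                              # Bellman-Ford style iteration
        changed = False
        for X, rules in Delta.items():
            for (l, alpha) in rules:
                cost = (0 if l == "tau" else 1) + sum(norm[Y] for Y in alpha)
                if cost < norm[X]:
                    norm[X], changed = cost, True
    return norm

norm = norms(Delta)
order = sorted(norm, key=norm.get)   # non-decreasing norm (requirement 1)
```

The order produced here only satisfies requirement 1; extending it to a total order refining $\prec$ additionally needs the dependency graph and a topological sort, as described above.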
An equivalence/congruence relation $ \equiv$ is {\em norm-preserving} if $\mathtt{norm}(\alpha) = \mathtt{norm}(\alpha')$ whenever $\alpha \equiv \alpha'$. In this paper, all the equivalence/congruence relations are supposed to be norm-preserving. This fact is not always explicitly stated. \section{Finite Representations}\label{sec:finite_representations} In this section, we propose a convenient way of representing bisimilarity and the approximating congruences \footnote{The proofs in this section are a generalization of the corresponding work for realtime normed BPA, e.g.~\cite{DBLP:journals/tcs/HirshfeldJM96}. Readers familiar with those works may skim this part.}. From an algebraic point of view, the set of processes of $\mathrm{tnBPA}$ is exactly the free monoid generated by $\mathbf{C}$. The question is how to represent a congruence relation on $\mathbf{C}^{*}$. We will show that the bisimilarity $\simeq$ is a very special congruence. Not only is it finitely generated, but it enjoys a highly structured property called the {\em unique decomposition property}. \subsection{Unique Decomposition Property of $\simeq$} The unique decomposition property plays a central role in all the algorithms for bisimilarity checking on realtime $\mathrm{nBPA}$. This important property also holds for bisimilarity on $\mathrm{tnBPA}$. Recall that a congruence $\equiv$ is {\em norm-preserving} if $\mathtt{norm}(\alpha) = \mathtt{norm}(\beta)$ whenever $\alpha \equiv \beta$. The following lemma is a direct consequence of Definition~\ref{def:beq}. \begin{lemma}\label{lem:norm-preserving} $\simeq$ is a norm-preserving congruence. \end{lemma} Let ${\equiv} \subseteq \mathbf{C}^{*} \times \mathbf{C}^{*}$ be an arbitrary norm-preserving congruence. Intuitively, a constant $X_i$ is a composite if $X_i \equiv \alpha\beta$ for some $\alpha, \beta \neq \epsilon$. In this case we also have $\mathtt{norm}(\alpha), \mathtt{norm}(\beta) < \mathtt{norm}(X_i)$ from Lemma~\ref{lem:normal_one}.
For technical convenience we will define $X_i$ to be a {\em composite} modulo $\equiv$ if $X_i \equiv \alpha$ for some $\alpha \in \mathbf{C}_{i-1}^{*}$. Otherwise, $X_i$ is called a {\em prime} modulo $\equiv$. Let $\mathbf{P} \subseteq \mathbf{C}$ be the set of primes modulo $\equiv$. By Lemma~\ref{lem:norm-preserving} and the well-foundedness of natural numbers, every $X_i \in \mathbf{C}$ has a {\em prime decomposition} $\alpha \in \mathbf{P}^{*}$ such that $X_i \equiv \alpha$. We say that $\equiv$ has the unique decomposition property, or simply $\equiv$ is {\em decompositional}, if every process has exactly one prime decomposition. It is now time to establish the unique decomposition property of $\simeq$. The following Lemma~\ref{lem:right-cancellation} and Theorem~\ref{thm:unique-decomposition} are standard, as in the case of bisimilarity for realtime $\mathrm{nBPA}$~\cite{DBLP:journals/tcs/HirshfeldJM96}. The {\em right cancellation} property is established first. \begin{lemma}[Right Cancellation]\label{lem:right-cancellation} $\alpha\gamma \simeq \beta\gamma$ entails $\alpha \simeq \beta$. \end{lemma} \begin{proof} $\{(\alpha, \beta): \alpha\gamma \simeq \beta\gamma \mbox{ for some }\gamma\}$ is a bisimulation. \qed \end{proof} \begin{theorem}[Unique Decomposition Property of $\simeq$]\label{thm:unique-decomposition} $\simeq$ is decompositional. Let $X_{i_1} \ldots X_{i_p}$ and $X_{j_1} \ldots X_{j_q}$ be two prime decompositions such that $X_{i_1} \ldots X_{i_p} \simeq X_{j_1} \ldots X_{j_q}$. Then, $p=q$ and $X_{i_t} \simeq X_{j_t}$ for every $1 \leq t \leq p$. \end{theorem} \begin{proof} Assume on the contrary that $X_{i_1} \ldots X_{i_p}$ and $X_{j_1} \ldots X_{j_q}$ are two different prime decompositions with the least norm such that \[ X_{i_1} \ldots X_{i_p} \simeq X_{j_1} \ldots X_{j_q} .
\] Suppose that \begin{equation}\label{eqn:proof_UDP1} X_{i_1} \ldots X_{i_p} \Longrightarrow_{\mathrm{dec}} \stackrel{a}{\longrightarrow}_{\mathrm{dec}} \gamma X_{i_2} \ldots X_{i_p}. \end{equation} These actions must be bisimulated (matched) by \begin{equation}\label{eqn:proof_UDP2} X_{j_1} \ldots X_{j_q} \Longrightarrow_{\mathrm{dec}} \stackrel{a}{\longrightarrow}_{\mathrm{dec}} \delta X_{j_2} \ldots X_{j_q} \end{equation} for some $\delta$ such that $\gamma X_{i_2} \ldots X_{i_p} \simeq \delta X_{j_2} \ldots X_{j_q}$. Since the norms of $\gamma X_{i_2} \ldots X_{i_p}$ and $\delta X_{j_2} \ldots X_{j_q}$ are strictly smaller, we have $X_{i_p} \simeq X_{j_q}$ by the minimality assumption. Now by the right cancellation lemma, $X_{i_1} \ldots X_{i_{p-1}} \simeq X_{j_1} \ldots X_{j_{q-1}}$. This contradicts the minimal norm assumption. \qed \end{proof} In the other direction, the right and left cancellation properties are implied by the unique decomposition property. \begin{lemma}\label{lemma:udp_to_cancellation} Let $\equiv$ be decompositional. Then $\alpha\gamma \equiv \beta\gamma$ (or $\gamma\alpha \equiv \gamma\beta$) implies $\alpha \equiv \beta$. \end{lemma} \begin{remark} The proofs of Lemma~\ref{lem:right-cancellation} and Theorem~\ref{thm:unique-decomposition} are standard~\cite{BaetenBergstraKlop1987,DBLP:journals/tcs/HirshfeldJM96}. Although the proof is fairly straightforward, it heavily depends on {\em branching} bisimilarity and {\em total} normedness. For example, in the above proof, when actions coming from $X_{i_1}$ in (\ref{eqn:proof_UDP1}) are matched by the actions in (\ref{eqn:proof_UDP2}), the crucial point is that $X_{j_2}$ is never used. This cannot be proved in the case of weak bisimilarity, or in the case without total normedness. We will have the following two counterexamples if branching bisimilarity is replaced by {\em weak} bisimilarity, or if the condition of total normedness is dropped.
\end{remark} \begin{example}\label{example:weak_bisimilarity_decom} This counterexample is borrowed from~\cite{DBLP:conf/cav/Huttel91}. Consider the tnBPA system $(\{X, Y, B, A\}, \{a\}, \Delta)$, with \[ \Delta = \{ X \stackrel{a}{\longrightarrow} Y, Y \stackrel{a}{\longrightarrow} \epsilon, Y \stackrel{\tau}{\longrightarrow} X, A \stackrel{a}{\longrightarrow} \epsilon, A \stackrel{\tau}{\longrightarrow} B, B \stackrel{a}{\longrightarrow} \epsilon \}. \] Clearly, $A Y \approx B Y $ but $A \not\approx B$. The right cancellation property fails, and so does the unique decomposition property. \end{example} \begin{example} Consider the nBPA system $(\{X\}, \{a\}, \Delta)$, with \[ \Delta = \{ X \stackrel{a}{\longrightarrow} X, X \stackrel{\tau}{\longrightarrow} \epsilon\}. \] Clearly, $X \simeq XX \simeq XXX \simeq \ldots$. The unique decomposition property fails in this example merely because of the existence of {\em idempotent} processes. \end{example} \subsection{Decomposition Bases}\label{subset:decomposition_base} A decompositional congruence over $\mathbf{C}^{*}$ can be represented by a decomposition base. A {\em decomposition base} $\mathcal{B}$ is a pair $(\mathbf{P}, \mathbf{E})$, in which $\mathbf{P} \subseteq \mathbf{C}$ specifies the set of primes, and $\mathbf{E}$ is a finite set of equations of the form $X = \alpha_X$ for every $X \in \mathbf{C} - \mathbf{P}$ and $\alpha_X \in \mathbf{P}^{*}$. The equation $X = \alpha_X$ expresses the fact that every composite $X$ is equal to a string of primes $\alpha_X$, the {\em prime decomposition} of $X$. The congruence relation generated by $\mathcal{B}$ is denoted by $\stackrel{\mathcal{B}}{\equiv}$. The {\em prime decomposition} of a process $\alpha$ with regard to $\mathcal{B}$ is denoted by $\mathtt{dcmp}_{\mathcal{B}}(\alpha)$.
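Concretely, a decomposition base can be stored as a set of rewrite rules, and $\mathtt{dcmp}$ computed by substituting each composite by its prime decomposition. The sketch below is our own; the base (with $X_1, X_2$ prime and $X_3 = X_1 \cdot X_2$) is an invented example.

```python
# A decomposition base (P, E) viewed as a string rewriting system.
# Since each alpha_X is already a string of primes, a single
# substitution pass brings any process to its normal form.

P = {"X1", "X2"}
E = {"X3": ("X1", "X2")}        # equations X = alpha_X for the composites

def dcmp(alpha):
    """Prime decomposition of a process, given as a tuple of constants."""
    out = []
    for X in alpha:
        out.extend((X,) if X in P else E[X])
    return tuple(out)

def equiv(alpha, beta):
    """Two processes are related by the base iff their prime
    decompositions coincide."""
    return dcmp(alpha) == dcmp(beta)
```

For instance, `equiv(("X3",), ("X1", "X2"))` holds, while two distinct primes are never related.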
Formally, we set $\mathtt{dcmp}_{\mathcal{B}}(X) = X$ when $X \in \mathbf{P}$, and $\mathtt{dcmp}_{\mathcal{B}}(X) = \alpha_X$ whenever the equation $X = \alpha_X$ is in $\mathbf{E}$. The domain of $\mathtt{dcmp}_{\mathcal{B}}$ is extended to $\mathbf{C}^{*}$ naturally by setting $\mathtt{dcmp}_{\mathcal{B}}(\epsilon) = \epsilon$ and $\mathtt{dcmp}_{\mathcal{B}}(\alpha\cdot \beta) = \mathtt{dcmp}_{\mathcal{B}}(\alpha) \cdot \mathtt{dcmp}_{\mathcal{B}}(\beta)$. The following lemma makes checking $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ fairly easy, by merely computing the prime decompositions of $\alpha$ and $\beta$. \begin{lemma}\label{lem:dcmp} $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ if and only if $\mathtt{dcmp}_{\mathcal{B}}(\alpha) = \mathtt{dcmp}_{\mathcal{B}}(\beta)$. \end{lemma} In the rest of the paper, every congruence $\stackrel{\mathcal{B}}{\equiv}$ generated by a decomposition base $\mathcal{B} = (\mathbf{P}, \mathbf{E})$ is assumed to be norm-preserving. Thus we must have $\mathtt{norm}(X) = \mathtt{norm}(\alpha_X)$ if the equation $X = \alpha_X$ is in $\mathbf{E}$. The following lemma formalizes the important observation that prime constants do not have state-preserving silent actions. \begin{lemma}\label{lem:tau_prime} Let $\mathcal{B}=(\mathbf{P}, \mathbf{E})$ be a decomposition base, and $X_i \in \mathbf{P}$. If $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$, then $X_i \not \stackrel{\mathcal{B}}{\equiv} \alpha$. \end{lemma} \begin{proof} According to Lemma~\ref{lem:decreasing_transition}, $\alpha \in \mathbf{C}_{i-1}^{*}$. \begin{enumerate} \item Suppose $\ell =\tau$ and $\mathtt{norm}(X_i) = \mathtt{norm}(\alpha)$. In this case, if $X_i \stackrel{\mathcal{B}}{\equiv} \alpha$, then, since $X_i$ is prime, Lemma~\ref{lem:dcmp} gives $X_i = \mathtt{dcmp}_{\mathcal{B}}(X_i) = \mathtt{dcmp}_{\mathcal{B}}(\alpha)$. This is a contradiction, because $\mathtt{dcmp}_{\mathcal{B}}(\alpha)$ is a string over $\mathbf{C}_{i-1}$ and hence cannot be $X_i$.
\item Suppose $\ell \neq \tau$ and $\mathtt{norm}(X_i) = \mathtt{norm}(\alpha) + 1$. Then we cannot have $X_i \stackrel{\mathcal{B}}{\equiv} \alpha$ because $\stackrel{\mathcal{B}}{\equiv}$ is norm-preserving. \qed \end{enumerate} \end{proof} The above property can be lifted from constants to processes, using Lemma~\ref{lemma:udp_to_cancellation}. \begin{lemma}\label{lem:tau_prime_string} Let $\mathcal{B}=(\mathbf{P}, \mathbf{E})$ be a decomposition base, and $\alpha \in \mathbf{P}^{*}$. If $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \gamma$, then $\alpha \not \stackrel{\mathcal{B}}{\equiv} \gamma$. \end{lemma} \begin{remark} Algebraically, a decomposition base $\mathcal{B}$ can be understood as a {\em finite presentation} of a monoid. In fact, $\mathcal{B}$ specifies the quotient monoid $\mathbf{C}^{*} / \stackrel{\mathcal{B}}{\equiv}$. Moreover, the unique decomposition property says that the quotient monoid $\mathbf{C}^{*} / \stackrel{\mathcal{B}}{\equiv}$ is a free monoid. From a computational point of view, $\mathcal{B}$ is a {\em string rewriting system}. The rewriting rules are exactly the equations in $\mathbf{E}$, read from left to right. The strings in {\em normal form} are exactly those in $\mathbf{P}^{*}$, the free monoid generated by $\mathbf{P}$. Every composite can be reduced to its prime decomposition, and any $\alpha \in \mathbf{C}^{*}$ has a normal form. The Church-Rosser property is guaranteed by the unique decomposition property, which makes checking $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ fairly easy by merely rewriting $\alpha$ and $\beta$ to their normal forms. \end{remark} \section{Description of the Algorithm}\label{sec:naive-algorithm} This section serves as the description of our algorithm. The algorithm takes the partition refinement approach. It is a generalized version of the one in~\cite{DBLP:conf/fsttcs/CzerwinskiL10}, which we call the CL algorithm.
However, unlike the original CL algorithm, the correctness of our algorithm is not obvious and is much more difficult to prove. This is the reason why we describe the algorithm before we prove its correctness. During the description, we also show some properties and requirements which make the algorithm work. A few properties are not proved until Section~\ref{sec:refinement_steps}. \subsection{Partition Refinements with Decomposition Bases} In order to decide whether $\alpha \simeq \beta$, we start with an initial congruence relation $\equiv_0$, and iteratively refine it. The refinement operation will be denoted by $\mathsf{Ref}$. By taking $\equiv_{i+1} = \mathsf{Ref}(\equiv_i)$, we have a sequence of congruence relations \[ {\equiv_0}, {\equiv_1} , {\equiv_2} , \ldots \] which satisfy \[ {\equiv_0} \supseteq {\equiv_1} \supseteq {\equiv_2} \supseteq \ldots. \] The correctness of the refinement operation adopted in this paper depends on the following requirements: \begin{enumerate} \item ${\simeq} \subseteq {\equiv_0}$. \item $\mathsf{Ref}({\simeq}) = {\simeq}$. \item If ${\simeq} \subsetneq {\equiv}$, then $ {\simeq} \subseteq \mathsf{Ref}({\equiv}) \subsetneq {\equiv}$. \end{enumerate} Once the sequence becomes stable, say ${\equiv_i} = {\equiv_{i+1}}$, we have ${\simeq} = {\equiv_i}$. \begin{remark} The refinement operation taken in this paper leads to a monotonic sequence $\{\equiv_i\}_{i \in \omega}$. Namely, \[ {\equiv_0} \supseteq {\equiv_1} \supseteq {\equiv_2} \supseteq \ldots. \] This property is not necessary in a general framework of refinement. One alternative is to replace the third requirement above by the following two: \begin{enumerate} \item[3'.] $\mathsf{Ref}$ is monotone. $\mathsf{Ref}({\equiv}) \subseteq \mathsf{Ref}({\equiv}')$ whenever ${\equiv} \subseteq {\equiv}'$. \item[4'.] If ${\simeq} \subsetneq {\equiv}$, then $\mathsf{Ref}({\equiv}) \neq {\equiv}$. 
\end{enumerate} \end{remark} In the algorithm, the congruences $\simeq$ and $\equiv_i$ are all represented by decomposition bases. That is, all the intermediate $\equiv_i$ must be decompositional congruences. In the following, we will develop an implementation of the refinement steps in polynomial time. On the whole, the algorithm is an iteration: \begin{center} \small \begin{tabular}{|p{12cm}|}\hline\vspace{-2ex} \begin{itemize} \item[1.] Compute the initial base $\mathcal{B}_{\mathrm{init}}$ and set $\mathcal{B} = \mathcal{B}_{\mathrm{init}}$. \item[2.] Compute the base $\mathcal{B}'$ from $\mathcal{B}$. \item[3.] If $\mathcal{B}'$ equals $\mathcal{B}$ then halt and return $\mathcal{B}$. \item[4.] Assign new base $\mathcal{B}'$ to $\mathcal{B}$ and go to step 2. \vspace{-1ex} \end{itemize} \\ \hline \end{tabular} \end{center} Evidently, the algorithm relies on the base $\mathcal{B}_{\mathrm{init}}$ of the initial congruence $\equiv_0$ and on the refinement step, computing $\mathcal{B}'$ from $\mathcal{B}$. \subsection{Outline of the Algorithm}\label{subsec:naive-algorithm} The framework of the algorithm is described in Fig.~\ref{Efficient_Algorithm}. \subsubsection{Initial Congruence} The base $\mathcal{B}_{\mathrm{init}} = (\mathbf{P}_{\mathrm{init}}, \mathbf{E}_{\mathrm{init}})$ of the initial congruence $\equiv_0$ is set as: \begin{itemize} \item $\mathbf{P}_{\mathrm{init}} = \{X_1\}$, \item $\mathbf{E}_{\mathrm{init}}$ contains $X_i = \underbrace{X_1 \cdot X_1 \cdot \ldots \cdot X_1}_{\mathtt{norm}(X_i)\textrm{ times}}$ for every $i > 1$. \end{itemize} For $\equiv_0$, we have the following properties. \begin{lemma} $\alpha \equiv_0 \beta$ if and only if $\mathtt{norm}(\alpha) = \mathtt{norm}(\beta)$. \end{lemma} \begin{lemma} \begin{enumerate} \item ${\equiv_0} \supseteq {\simeq}$. \item $\equiv_0$ is a norm-preserving and decompositional congruence.
\end{enumerate} \end{lemma} \subsubsection{Properties of Refinement Steps} In order to understand the framework of the algorithm, we need to investigate the relationship between $\mathcal{B}' = (\mathbf{P}', \mathbf{E}') $ and $\mathcal{B} = (\mathbf{P}, \mathbf{E})$ in step~2. It will follow from the algorithm that ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$. Under this condition, we have the following key observation. \begin{lemma}\label{lem:PP} Let $\mathcal{B} = (\mathbf{P}, \mathbf{E})$ and $\mathcal{B}' = (\mathbf{P}', \mathbf{E}') $ be two decomposition bases. \begin{enumerate} \item If ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$, then $\mathbf{P} \subseteq \mathbf{P}'$. \item If $\mathbf{P}' = \mathbf{P}$, then $\mathcal{B}' = \mathcal{B}$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item Suppose $X_i \not\in \mathbf{P}'$; we show $X_i \not\in \mathbf{P}$. Since $X_i \not\in \mathbf{P}'$, there is an equation ${X_i = \alpha}$ in $\mathbf{E}'$ for some $\alpha \in \mathbf{C}_{i-1}^{*}$, which means $X_i \stackrel{\mathcal{B}'}{\equiv} \alpha$. Because ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$, we have $X_i \stackrel{\mathcal{B}}{\equiv} \alpha$, which means that $X_i$ is not a prime modulo $ \stackrel{\mathcal{B}}{\equiv}$. That is, $X_i \not\in \mathbf{P}$. \item Suppose that $\mathcal{B}' \neq \mathcal{B}$. Then there is some $X_i$ such that $\mathtt{dcmp}_{\mathcal{B}}(X_i) \neq \mathtt{dcmp}_{\mathcal{B}'}(X_i)$.
We have $ X_i \stackrel{\mathcal{B}}{\equiv} \mathtt{dcmp}_{\mathcal{B}}(X_i) $ and $ X_i \stackrel{\mathcal{B}'}{\equiv} \mathtt{dcmp}_{\mathcal{B}'}(X_i) $. Since ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$, we have $X_i \stackrel{\mathcal{B}}{\equiv} \mathtt{dcmp}_{\mathcal{B}'}(X_i)$, thus $\mathtt{dcmp}_{\mathcal{B}}(X_i) \stackrel{\mathcal{B}}{\equiv} \mathtt{dcmp}_{\mathcal{B}'}(X_i)$. Since $\mathtt{dcmp}_{\mathcal{B}}(X_i)$ and $\mathtt{dcmp}_{\mathcal{B}'}(X_i)$ are both in $\mathbf{P}^{*}$, we have $\mathtt{dcmp}_{\mathcal{B}}(X_i) = \mathtt{dcmp}_{\mathcal{B}'}(X_i)$, a contradiction. \qed \end{enumerate} \end{proof} According to Lemma~\ref{lem:PP}, we call constants in $\mathbf{P}$ {\em old primes} and constants in $\mathbf{P}' \setminus \mathbf{P}$ {\em new primes}. During the iterative procedure of refinement, once a constant becomes prime, it remains a prime thereafter. If at a certain step of the iteration there is no new prime to add, the algorithm terminates. Thus we have the following property. \begin{proposition} There can be at most $n$ steps of iteration in the algorithm. \end{proposition} This confirms the termination of the algorithm and provides an implementation of step~3 by checking whether there are new primes. It remains to study the implementation of step~2.
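The overall control flow of the iteration (steps 1--4 above) can be sketched as follows; this is only a skeleton of our own, where `initial_base` and `refine` are placeholders for the initial-congruence construction and for step 2.

```python
# Control-flow skeleton of the partition-refinement framework.
# initial_base() builds B_init; refine(B) computes B' from B (step 2).

def fixpoint(initial_base, refine):
    B = initial_base()
    while True:
        B_new = refine(B)
        if B_new == B:          # step 3: no new primes, sequence stable
            return B            # the returned base represents the result
        B = B_new               # step 4: continue with the refined base
```

Since each round either adds a new prime or terminates the loop, at most $n$ iterations are performed, matching the proposition above.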
\begin{figure}[tbp] \begin{center} \begin{tabular}{|p{12cm}|}\hline\vspace{0ex} \textbf{\normalsize Framework of the algorithm:} \begin{enumerate} \item Initialize $\mathcal{B} = (\mathbf{P}, \mathbf{E})$; \item $\mathbf{P}' \coloneqq \mathbf{P}$; $\mathbf{E}' \coloneqq \mathbf{E}$; \item \textbf{repeat} \item \qquad $\mathbf{P} \coloneqq \mathbf{P}'$; $\mathbf{E} \coloneqq \mathbf{E}'$; $\mathbf{E}' \coloneqq \emptyset$; \item \qquad \textbf{for} each $X_i \in \mathbf{C}\setminus \mathbf{P}$ \textbf{do} \item \qquad\qquad $s \coloneqq \mathtt{dcmp}_{(\mathbf{P}', \mathbf{E}')}(\alpha_i)$; \item \qquad\qquad $flag \coloneqq \mathbf{true}$; \item \qquad\qquad $k \coloneqq \mathtt{lpfindex}_{(\mathbf{P}, \mathbf{E})}(X_i)$; \item \qquad\qquad \textbf{for} each $X_j \in \{ \mathtt{lpf}_{(\mathbf{P}, \mathbf{E})}(X_i) \} \cup (\{X_{k+1}, \ldots, X_{i-1}\} \cap (\mathbf{P}' \setminus \mathbf{P}))$ \textbf{do} \item \qquad\qquad\qquad \textbf{if} $\mathtt{lpftest}_{(\mathbf{P}', \mathbf{E}')}(X_i, X_j)$ \textbf{then} \item \qquad\qquad\qquad\qquad $\mathbf{E}' \coloneqq \mathbf{E}' \cup \{ X_i = X_j \cdot \mathtt{sffx}(\mathtt{norm}(X_i) - \mathtt{norm}(X_j); s) \}$; \item \qquad\qquad\qquad\qquad $flag \coloneqq \mathbf{false}$; \item \qquad\qquad\qquad \textbf{end if} \item \qquad\qquad \textbf{end for} \item \qquad\qquad \textbf{if} $flag$ \textbf{then} \item \qquad\qquad\qquad $\mathbf{P}' \coloneqq \mathbf{P}'\cup \{X_i\}$; \item \qquad\qquad \textbf{end if} \item \qquad \textbf{end for} \item \textbf{until} $\mathbf{P} = \mathbf{P}'$ \vspace{-1ex} \end{enumerate}\\ \hline \end{tabular}\vspace{-1ex} \end{center} \caption{Framework of Efficient Algorithm}\label{Efficient_Algorithm} \end{figure} \subsubsection{Computing $\mathcal{B}'$ from $\mathcal{B}$} Computation of $\mathcal{B}'$ proceeds as follows. First we assign $\mathbf{P}' = \mathbf{P}$ and $\mathbf{E}' = \emptyset$. Then we add appropriate constants to $\mathbf{P}'$ and appropriate equations to $\mathbf{E}'$.
For every $i = 2, \ldots, n$ with $X_i \in \mathbf{C} \setminus \mathbf{P}$, we check whether there exists $\delta \in (\mathbf{P}' \cap \mathbf{C}_{i-1})^{*}$ such that $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$. If not, we add $X_i$ to $\mathbf{P}'$; otherwise we add the equation $X_i = \delta$ to $\mathbf{E}'$. We emphasize that at the time $X_i$ is treated, we already know whether $X_j \in \mathbf{P}'$ and what $\mathtt{dcmp}_{\mathcal{B}'} (X_j)$ is for every $j<i$. The efficient computation of $\mathcal{B}'$ from $\mathcal{B}$ relies on the following three aspects: \begin{enumerate} \item The candidates $\delta$ for testing $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ must be `small'. \item We need a correct and efficient way of deciding whether $(X_i, \delta)$ can be put into $\mathbf{E}'$, i.e.~$X_i \stackrel{\mathcal{B}'}{\equiv} \delta$. \item We need an efficient representation of, and efficient operations on, strings. \end{enumerate} The representation and operations on long strings can be implemented in a systematic way and will be discussed in Section~\ref{subsec:longstring}. For the moment, we suppose that all the operations on strings appearing in the algorithm are polynomial-time computable. \subsection{Small Set of Candidates}\label{subsec:candidates} Now we confirm that, for every $X_i$, only a small number of $\delta$'s are required to determine whether $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$. In the case of realtime $\mathrm{nBPA}$, this is a significant insight of the CL algorithm, for it greatly reduces the expense of the algorithm. We take the same approach here; its justification will be confirmed later. Let $\mathcal{B} = (\mathbf{P}, \mathbf{E})$ be a decomposition base. We say that prime constant $X_j \in \mathbf{P}$ is the {\em leftmost prime factor} of $X_i$ wrt.~$\mathcal{B}$, denoted by $\mathtt{lpf}_{\mathcal{B}}(X_i) = X_j$, if $\mathtt{dcmp}_{\mathcal{B}}(X_i) = X_j\cdot \gamma$ for some $\gamma$.
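For concreteness, the prime decomposition $\mathtt{dcmp}$ and the leftmost prime factor can be sketched in Python as follows. The data layout (base equations as a dictionary from composite constants to tuples of constants) is hypothetical, chosen only for illustration.

```python
def dcmp(sym_string, primes, eqs):
    """Prime decomposition wrt. a base (primes, eqs).

    `eqs` maps each composite constant to the string (a tuple) it
    equals in the base; prime constants decompose as themselves.
    """
    out = []
    stack = list(reversed(sym_string))
    while stack:
        x = stack.pop()
        if x in primes:
            out.append(x)
        else:
            # Expand the composite; push its symbols so the leftmost
            # one is popped first.
            stack.extend(reversed(eqs[x]))
    return tuple(out)

primes = {"X1", "X2"}
eqs = {"X3": ("X2", "X1"), "X4": ("X3", "X3")}
d = dcmp(("X4",), primes, eqs)   # decomposition of X4
```

Here $\mathtt{dcmp}(X_4) = X_2 X_1 X_2 X_1$, so the leftmost prime factor of $X_4$ is simply the head symbol $X_2$ of the decomposition.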
Clearly, $\mathtt{lpf}_{\mathcal{B}}(X_i)$ is unique. Now fix one decreasing transition rule $X_i \stackrel{\ell_i}{\longrightarrow}_{\mathrm{dec}} \alpha_i$ ($\ell_i = \tau$ is allowed) for every $X_i \in \mathbf{C}$. We use $\mathtt{sffx}(h; \alpha)$ to denote the suffix of string $\alpha$ with norm $h$. Note that $\mathtt{sffx}(h; \alpha)$ is undefined unless $\alpha$ has such a suffix with norm $h$. \begin{proposition}\label{prop:lpf1} Let $\mathcal{B}$ be a decomposition base such that $\stackrel{\mathcal{B}}{\equiv}$ is a decreasing branching bisimulation (Definition~\ref{def:dec_bisimulation_expansion}, Section~\ref{subsec:Expansion_in_general}). If $\mathtt{lpf}_{\mathcal{B}}(X_i) = X_j$, then \[ X_i \stackrel{\mathcal{B}}{\equiv} X_j \cdot \mathtt{sffx}(\mathtt{norm}(X_i)-\mathtt{norm}(X_j); \mathtt{dcmp}_{\mathcal{B}} (\alpha_i)). \] \end{proposition} \begin{proof} From $\mathtt{lpf}_{\mathcal{B}}(X_i) = X_j$, we have $X_i \stackrel{\mathcal{B}}{\equiv} X_j \cdot \alpha$ for some $\alpha$ satisfying $\mathtt{norm}(\alpha) = \mathtt{norm}(X_i)-\mathtt{norm}(X_j)$. Since $\stackrel{\mathcal{B}}{\equiv}$ is a decreasing branching bisimulation, we consider the transition $X_i \stackrel{\ell_i}{\longrightarrow}_{\mathrm{dec}} \alpha_i$. There are two cases: \begin{itemize} \item $\ell_i = \tau$ and $\alpha_i \stackrel{\mathcal{B}}{\equiv} X_j \cdot\alpha$. In this case, let $\beta = X_j$; then $\alpha_i \stackrel{\mathcal{B}}{\equiv} \beta \cdot \alpha$. \item $\ell_i \neq \tau$ or $\alpha_i \not\stackrel{\mathcal{B}}{\equiv} X_j \cdot\alpha$. In this case, we have $X_j \Longrightarrow_{\mathrm{dec}} \stackrel{\ell_i}{\longrightarrow}_{\mathrm{dec}} \beta$ such that $\alpha_i \stackrel{\mathcal{B}}{\equiv} \beta\cdot\alpha$. \end{itemize} In either case, we have $\alpha_i \stackrel{\mathcal{B}}{\equiv} \beta\cdot\alpha$ for some $\beta$.
According to the fact that $\stackrel{\mathcal{B}}{\equiv}$ is decompositional, we get $\mathtt{dcmp}_{\mathcal{B}}(\alpha_i) = \mathtt{dcmp}_{\mathcal{B}}(\beta) \cdot \mathtt{dcmp}_{\mathcal{B}}(\alpha)$, and consequently $\mathtt{dcmp}_{\mathcal{B}}(\alpha) = \mathtt{sffx}(\mathtt{norm}(\alpha); \mathtt{dcmp}_{\mathcal{B}} (\alpha_i)) = \mathtt{sffx}(\mathtt{norm}(X_i)-\mathtt{norm}(X_j); \mathtt{dcmp}_{\mathcal{B}} (\alpha_i))$, hence $\alpha \stackrel{\mathcal{B}}{\equiv} \mathtt{sffx}(\mathtt{norm}(X_i)-\mathtt{norm}(X_j); \mathtt{dcmp}_{\mathcal{B}} (\alpha_i))$. Recalling that $X_i \stackrel{\mathcal{B}}{\equiv} X_j \cdot \alpha$, we get $X_i \stackrel{\mathcal{B}}{\equiv} X_j \cdot\mathtt{sffx}(\mathtt{norm}(X_i)-\mathtt{norm}(X_j); \mathtt{dcmp}_{\mathcal{B}} (\alpha_i))$. \qed \end{proof} Assume that ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$. Comparing $\mathtt{lpf}_{\mathcal{B}'}(X_i)$ with $\mathtt{lpf}_{\mathcal{B}}(X_i)$, there are two possibilities: either $\mathtt{lpf}_{\mathcal{B}'}(X_i) = \mathtt{lpf}_{\mathcal{B}}(X_i)$ or $\mathtt{lpf}_{\mathcal{B}'}(X_i) \neq \mathtt{lpf}_{\mathcal{B}}(X_i)$. If $\mathtt{lpf}_{\mathcal{B}'}(X_i) \neq \mathtt{lpf}_{\mathcal{B}}(X_i)$, the following property confirms that $\mathtt{lpf}_{\mathcal{B}'}(X_i)$ must be a new prime. \begin{proposition}\label{prop:lpf2} Assume that ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$. Let $X_{j'} = \mathtt{lpf}_{\mathcal{B}'}(X_i)$ and $X_j = \mathtt{lpf}_{\mathcal{B}}(X_i)$. If $j' \neq j$, then $j' > j$ and $X_{j'} \in \mathbf{P}' \setminus \mathbf{P}$. \end{proposition} \begin{proof} Assume on the contrary that $X_{j'} \in \mathbf{P}$. Then $X_i \stackrel{\mathcal{B}}{\equiv} X_j \cdot \gamma \stackrel{\mathcal{B}}{\equiv} X_{j'} \cdot \gamma'$, which violates the unique decomposition property of $\stackrel{\mathcal{B}}{\equiv}$.
\qed \end{proof} Now we can explain the algorithm framework in Fig.~\ref{Efficient_Algorithm}. The \textbf{repeat} block at line~3 realizes the procedure of iteration. At every iteration, $\mathcal{B} = (\mathbf{P}, \mathbf{E})$ is updated to $\mathcal{B}' = (\mathbf{P}', \mathbf{E}')$. During an iteration, every constant $X_i$ that is currently composite is treated in the fixed index order via the outer \textbf{for} block at line~5. Note that, when $X_i$ is treated, $\mathtt{dcmp}_{\mathcal{B}'} (\alpha_i)$ can be determined. Then the inner \textbf{for} block at line~9 is used for discovering a new decomposition of $X_i$ for $\mathcal{B}'$ by determining the leftmost prime factor $X_j$ of $X_i$. By Proposition~\ref{prop:lpf2}, $X_j$ can be unchanged (in the case $X_j = \mathtt{lpf}_{(\mathbf{P}, \mathbf{E})}(X_i)$), or be a new prime less than $X_i$ (in the case $X_j \in (\mathbf{P}' \setminus \mathbf{P})$ and $\mathtt{lpfindex}_{\mathcal{B}}(X_i) < j < i$), or be $X_i$ itself (in the case no $X_j$ is found in the inner \textbf{for} block at line~9). In the last case, the variable $flag$, which is set at line~7, remains $\mathbf{true}$, and $X_i$ is added to the set $\mathbf{P}'$ of new primes (line~16). The operation $\mathtt{lpfindex}_{\mathcal{B}}(X_i)$ returns the index $k$ such that $\mathtt{lpf}_{\mathcal{B}}(X_i) = X_k$. Using Proposition~\ref{prop:lpf1} and Proposition~\ref{prop:lpf2}, the set of candidates can be confined to strings of the form $X_j \cdot \mathtt{sffx}(\mathtt{norm}(X_i)-\mathtt{norm}(X_j); \mathtt{dcmp}_{\mathcal{B}'} (\alpha_i))$. Note that, in the inner \textbf{for} block, the procedure $\mathtt{lpftest}_{\mathcal{B}'}(X_i,X_j)$ is used to check whether $X_j$ is the leftmost prime factor of $X_i$ modulo $\stackrel{\mathcal{B}'}{\equiv}$.
In fact, it tests whether \begin{equation}\label{eqn:testing_Xi_alpha} X_i \stackrel{\mathcal{B}'}{\equiv} X_j \cdot \mathtt{sffx}(\mathtt{norm}(X_i)-\mathtt{norm}(X_j); \mathtt{dcmp}_{\mathcal{B}'} (\alpha_i)). \end{equation} In the rest of this paper, the right-hand side of Equation~(\ref{eqn:testing_Xi_alpha}) is denoted by $\delta$. We remark that $\delta \in (\mathbf{P}' \cap \mathbf{C}_{i-1})^{*}$. Our goal is to find an efficient way to check whether $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$. \begin{remark} The small number of candidates for $\delta$ relies on Proposition~\ref{prop:lpf1}, which requires that $\stackrel{\mathcal{B}'}{\equiv}$ be a decreasing bisimulation. The definition of decreasing bisimulation will be introduced in Section~\ref{subsec:Expansion_in_general}. According to the refinement operation defined in Section~\ref{sec:refinement_steps}, $\stackrel{\mathcal{B}'}{\equiv}$ is assured to be a decreasing bisimulation. \end{remark} \subsection{Efficient Way of Testing $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$} The algorithm framework described in Fig.~\ref{Efficient_Algorithm} gives an efficient implementation of partition refinement on the unique decomposition congruences. Up to now, however, we have not discussed what the refinement operation is, nor how to realize it efficiently, i.e.~how $\mathtt{lpftest}_{(\mathbf{P}', \mathbf{E}')}(X_i, X_j)$ at line~10 is implemented. Now we present the details: an efficient way to check whether $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$. In this way, we define $\mathcal{B}'$ from $\mathcal{B}$ via the algorithm. The whole test is described in Fig.~\ref{Checking_EXP}. In later sections, we have further discussions on this implementation. For now, we only remark that, in the situation of realtime $\mathrm{nBPA}$, this implementation coincides with the CL algorithm. The proof of correctness is deferred to Section~\ref{sec:refinement_steps}.
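The construction of the candidate $\delta$ can be sketched in Python as follows (the data layout is hypothetical): `sffx` scans the decomposition from the right, accumulating norms, and fails when no suffix has exactly the requested norm, matching the partiality of $\mathtt{sffx}(h; \alpha)$.

```python
def sffx(h, alpha, norm):
    """Suffix of `alpha` with total norm h; None if no such suffix exists.

    `norm` maps each constant to its (positive) norm.
    """
    total, k = 0, len(alpha)
    while total < h and k > 0:
        k -= 1
        total += norm[alpha[k]]
    return tuple(alpha[k:]) if total == h else None

norm = {"X1": 1, "X2": 2}
dcmp_alpha_i = ("X1", "X2", "X1")   # say dcmp_{B'}(alpha_i)
# Candidate for X_i with norm 5 and leftmost prime factor X_j = X2 (norm 2):
delta = ("X2",) + sffx(5 - 2, dcmp_alpha_i, norm)
```

Note that `sffx(2, dcmp_alpha_i, norm)` is `None` here, since the suffix norms of $X_1 X_2 X_1$ are only $0$, $1$, $3$, and $4$; this is exactly the undefined case of $\mathtt{sffx}$.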
\begin{figure}[tbp] \begin{center} \begin{tabular}{|p{12cm}|}\hline \vspace{0ex} \textbf{\normalsize Checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$:} \begin{enumerate} \item \textbf{test} $\mathtt{dcmp}_{\mathcal{B}}(X_i) = \mathtt{dcmp}_{\mathcal{B}}(\delta)$. if so, goto step~2; else \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} for every $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$, we have \begin{enumerate} \item either $\ell = \tau$ and $\mathtt{dcmp}_{\mathcal{B}'}(\alpha) = \delta$; \item or $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta$ for some $\beta$ and $\mathtt{dcmp}_{\mathcal{B}'} (\alpha) = \mathtt{dcmp}_{\mathcal{B}'} (\beta)$. \end{enumerate} If so, goto step~3; else, \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} for every $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha$, we have $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta$ for some $\beta$ and $\mathtt{dcmp}_{\mathcal{B}} (\alpha) = \mathtt{dcmp}_{\mathcal{B}} (\beta)$. If so, goto step~4; else, \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} whether $X_i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha$ for some $\alpha$ such that $\mathtt{dcmp}_{\mathcal{B}'} (\alpha) = \delta $. If so, goto step~7; else, goto step~5. \vspace{1ex} \item \textbf{test} for every $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta$, we have $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$ for some $\alpha$ such that $\mathtt{dcmp}_{\mathcal{B}'}(\alpha) = \mathtt{dcmp}_{\mathcal{B}'} (\beta)$. If so, goto step~6; else, \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} for every $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta$, we have $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha$ for some $\alpha$ such that $\mathtt{dcmp}_{\mathcal{B}}(\alpha) = \mathtt{dcmp}_{\mathcal{B}} (\beta)$. 
If so, goto step~7; else, \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{accept} $(X_i, \delta)$. \vspace{-1ex} \end{enumerate} \\ \hline \end{tabular}\vspace{-1ex} \end{center} \caption{Checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$}\label{Checking_EXP} \end{figure} We now state two properties which are needed to make the whole framework of Fig.~\ref{Efficient_Algorithm} work. \begin{lemma} In every iteration of Fig.~\ref{Efficient_Algorithm}, we get a decomposition base $\mathcal{B}'$ from $\mathcal{B}$. The following hold: \begin{enumerate} \item ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$. \item ${\stackrel{\mathcal{B}'}{\equiv}}$ is a decreasing bisimulation. \end{enumerate} \end{lemma} \begin{proof} Item~1 follows directly from Fig.~\ref{Checking_EXP}. Item~2 will be discussed in detail in Section~\ref{sec:refinement_steps}. \qed \end{proof} \subsection{Operations on Long Strings}\label{subsec:longstring} In the algorithm, we meet quite a few operations on strings whose length can be exponential. Thus we need an efficient way to represent and manipulate them. This sort of improvement actually appears in all the previous work on strong bisimilarity checking on normed $\mathrm{BPA}$. There are many different ways to do so, and our situation requires nothing special. Thus we only sketch the idea and provide some literature. In the previous work~\cite{DBLP:journals/tcs/HirshfeldJM96,DBLP:conf/mfcs/LasotaR06,DBLP:conf/fsttcs/CzerwinskiL10}, a long string is represented by a {\em straight-line program} (SLP), a context-free grammar (typically in Chomsky normal form) which generates only one word. The efficient algorithms rely on an efficient implementation of equality checking on SLP-compressed strings, which is typically implemented (as a special case) by an efficient algorithm of compressed pattern matching such as~\cite{DBLP:conf/cpm/MiyazakiST97,DBLP:conf/dagstuhl/Lifshits06}.
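To give an impression of why SLP compression matters, here is a minimal Python sketch of an SLP node; the class layout is our own illustration, and equality below is checked by naive expansion, whereas the algorithms cited above work directly on the compressed representation in polynomial time.

```python
class SLP:
    """Straight-line program node: a grammar deriving exactly one word.

    A node is either a terminal character or the concatenation of two
    nodes, so a word of length 2**k needs only O(k) nodes.  Decompressing
    (`expand`) is exponential in general; efficient algorithms avoid it.
    """
    def __init__(self, char=None, left=None, right=None):
        self.char, self.left, self.right = char, left, right
        self.length = 1 if char is not None else left.length + right.length

    def expand(self):
        """Return the single word the program derives."""
        if self.char is not None:
            return self.char
        return self.left.expand() + self.right.expand()

# Build (ab)^8 with only a handful of nodes by repeated doubling.
ab = SLP(left=SLP(char="a"), right=SLP(char="b"))
w = ab
for _ in range(3):
    w = SLP(left=w, right=w)
```

Three doublings yield a word of length $16$ from seven nodes; in general $k$ doublings describe a word of length $2^{k+1}$ in $\mathcal{O}(k)$ nodes, which is why lengths and norms stay polynomial-size even when the strings themselves are exponential.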
Lohrey~\cite{DBLP:journals/gcc/Lohrey12} gives a nice survey on algorithms on SLP-compressed strings. One deficiency of the above scheme is that the procedure for string equality checking is called every time two strings need to be compared, and previous computations are completely ignored. In~\cite{DBLP:journals/algorithmica/MehlhornSU97} and its improved version~\cite{DBLP:conf/soda/AlstrupBR00}, a data structure for finite sets of strings is maintained, which supports concatenation, splitting, and equality checking operations. Czerwi\'{n}ski~\cite{CzerwinskiPhD} uses this technique to improve his previous algorithm~\cite{DBLP:conf/fsttcs/CzerwinskiL10}. \subsection{Analysis of Time Complexity} Now we give a very brief discussion of the time complexity of the whole algorithm. Some less important factors are deliberately neglected. Readers are referred to Czerwi\'{n}ski~\cite{CzerwinskiPhD}. Consider the algorithm described in Fig.~\ref{Efficient_Algorithm}. The dominating factor is the operation $\mathtt{lpftest}_{(\mathbf{P}', \mathbf{E}')}(X_i, X_j)$ at line~10. We claim that there are in total $\mathcal{O}(n^2)$ invocations of $\mathtt{lpftest}$. In the implementation of $\mathtt{lpftest}$, we call the procedures described in Fig.~\ref{Checking_EXP}. The procedure treats processes as {\em normed strings}. Therefore, the time consumed depends on the costs of the operations on normed strings. We suppose that there are three operations on `normed' strings: $\mathtt{Concatenate}(\sigma_1, \sigma_2)$, $\mathtt{Split}(\sigma, h)$, and $\mathtt{Equal}(\sigma_1, \sigma_2)$, which take time $\mathsf{C}(N)$, $\mathsf{S}(N)$, and $\mathsf{E}(N)$, respectively. As claimed in~\cite{CzerwinskiPhD}, the best implementation achieves $\mathsf{C}(N) = \mathcal{O}(N\cdot \mathrm{polylog}N)$, $\mathsf{S}(N) = \mathcal{O}(N\cdot\mathrm{polylog}N)$, and $\mathsf{E}(N) = \mathcal{O}(\mathrm{polylog}N)$. Consider the procedures in Fig.~\ref{Checking_EXP}.
The most time-consuming part is still the matching part, which can perform $\mathcal{O}(N^2)$ $\mathtt{Equal}$ operations. This makes the total time of checking branching bisimilarity no different from that of checking strong bisimilarity. The overall running time is $\mathcal{O}(N^4\cdot\mathrm{polylog}N)$. \section{The Refinement Operation}\label{sec:refinement_steps} Now, we start to discuss the correctness of the algorithm. In order to prove the correctness, we need to answer two questions: \begin{enumerate} \item What is the refinement operation corresponding to a step of iteration in our algorithm? \item How can our algorithm be derived from the refinement operation? \end{enumerate} In this section we answer the first question; the second question will be answered in Section~\ref{sec:correctness}. Actually, how to define the refinement operation for our algorithm is not clear at first glance. Thus we review the refinement operation adopted in the CL algorithm in Section~\ref{subsec:Expansion_Pre}. Then we find another way to define and understand the refinement operation in Section~\ref{subsec:Expansion_Another_undestanding}. Following this understanding, in Section~\ref{subsec:Expansion_in_general} we define the refinement operation which turns out to be suitable for our algorithm, and then show some basic properties. \subsection{The Refinement Operation for Realtime $\mathrm{nBPA}$}\label{subsec:Expansion_Pre} Before going into the tricky part of our definition of the refinement relation, let us review the reason why the algorithm is correct for realtime $\mathrm{nBPA}$. This special case is comparatively easy. For convenience, we describe the procedure of checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ for realtime $\mathrm{nBPA}$ in Fig.~\ref{Checking_EXP_REALTIME}.
This is nothing but a special case of~Fig.~\ref{Checking_EXP}, and it is a slightly simplified version of the corresponding procedure in the CL algorithm. \begin{figure}[tbp] \begin{center} \begin{tabular}{|p{12cm}|}\hline \vspace{0ex} \textbf{\normalsize Checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ in the case of REALTIME systems:} \begin{enumerate} \item \textbf{test} $\mathtt{dcmp}_{\mathcal{B}}(X_i) = \mathtt{dcmp}_{\mathcal{B}}(\delta)$. If so, goto step~2; else \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} for every $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$, we have \begin{quote} $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta$ for some $\beta$ and $\mathtt{dcmp}_{\mathcal{B}'} (\alpha) = \mathtt{dcmp}_{\mathcal{B}'} (\beta)$. \end{quote} If so, goto step~3; else, \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} for every $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha$, we have \begin{quote} $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta$ for some $\beta$ and $\mathtt{dcmp}_{\mathcal{B}} (\alpha) = \mathtt{dcmp}_{\mathcal{B}} (\beta)$. \end{quote} If so, goto step~4; else, \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} for every $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta$, we have \begin{quote} $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$ for some $\alpha$ such that $\mathtt{dcmp}_{\mathcal{B}'}(\alpha) = \mathtt{dcmp}_{\mathcal{B}'} (\beta)$. \end{quote} If so, goto step~5; else, \textbf{reject} $(X_i, \delta)$. \vspace{1ex} \item \textbf{test} for every $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta$, we have \begin{quote} $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha$ for some $\alpha$ such that $\mathtt{dcmp}_{\mathcal{B}}(\alpha) = \mathtt{dcmp}_{\mathcal{B}} (\beta)$. \end{quote} If so, goto step~6; else, \textbf{reject} $(X_i, \delta)$.
\vspace{1ex} \item \textbf{accept} $(X_i, \delta)$. \vspace{-1ex} \end{enumerate} \\ \hline \end{tabular}\vspace{-1ex} \end{center} \caption{Checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ for Realtime Systems}\label{Checking_EXP_REALTIME} \end{figure} First we review the framework of the correctness proof for the CL algorithm. In the case of bisimilarity for realtime $\mathrm{nBPA}$, we can define the following well-known {\em expansion} relation directly from the definition of bisimulation. \begin{definition}\label{def:sexp} Let $\mathcal{R}$ be a binary relation on \textbf{realtime} processes. The {\em expansion} of $\mathcal{R}$, $\mathsf{Exp}(\mathcal{R})$, contains all pairs $(\alpha, \beta)$ satisfying the following conditions: \begin{enumerate} \item Whenever $\alpha \stackrel{a}{\longrightarrow} \alpha'$, then $\beta \stackrel{a}{\longrightarrow} \beta'$ and $\alpha'\mathcal{R}\beta'$ for some $\beta'$. \item Whenever $\beta \stackrel{a}{\longrightarrow} \beta'$, then $\alpha \stackrel{a}{\longrightarrow} \alpha'$ and $\alpha'\mathcal{R} \beta'$ for some $\alpha'$. \end{enumerate} \end{definition} For realtime systems, a relation $\mathcal{R}$ is a bisimulation if and only if $\mathcal{R} \subseteq \mathsf{Exp}(\mathcal{R})$. Bisimilarity $\simeq$ is the largest relation $\mathcal{R}$ which satisfies $\mathcal{R} = \mathsf{Exp}(\mathcal{R})$. Definition~\ref{def:sexp} is well-behaved in the sense that ${\mathsf{Exp}(\equiv)} \cap {\equiv} \subsetneq {\equiv}$ if ${\equiv}$ is not a bisimulation, and $\mathsf{Exp}(\equiv)$ is a norm-preserving congruence provided that $\equiv$ is. However, we cannot simply define the refinement relation $\mathsf{Ref}(\equiv)$ to be ${\mathsf{Exp}(\equiv)} \cap {\equiv}$, because ${\mathsf{Exp}(\equiv)} \cap {\equiv}$ may not be a decompositional congruence even if $\equiv$ is.
In other words, we cannot always find a $\mathcal{B}'$ such that ${\stackrel{\mathcal{B}'}{\equiv}} = {\mathsf{Exp}(\stackrel{\mathcal{B}}{\equiv})} \cap {\stackrel{\mathcal{B}}{\equiv}}$. The way to solve this problem is to find a decompositional congruence ${\stackrel{\mathcal{B}'}{\equiv}} = \mathsf{Ref}(\stackrel{\mathcal{B}}{\equiv})$ which lies between $\simeq$ and ${\mathsf{Exp}(\stackrel{\mathcal{B}}{\equiv})} \cap {\stackrel{\mathcal{B}}{\equiv}}$. The way suggested in~\cite{DBLP:journals/mscs/HirshfeldJM96} is to let $\mathsf{Ref}(\equiv)$ be the decreasing bisimilarity wrt.~${\mathsf{Exp}(\equiv)} \cap {\equiv}$. \begin{definition}\label{def:relative_realtime} Let $\mathcal{R}$ be a relation on \textbf{realtime} processes. $\mathcal{R}$ is a {\em decreasing bisimulation} if the following hold whenever $\alpha \mathcal{R} \beta$: \begin{enumerate} \item Whenever $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$, then $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ such that $\alpha'\mathcal{R} \beta'$. \item Whenever $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$, then $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$ such that $\alpha'\mathcal{R} \beta'$. \end{enumerate} Let $\equiv$ be a norm-preserving congruence. The {\em decreasing bisimilarity wrt. $\equiv$}, denoted by $\simeq_{\mathrm{dec}}^{\equiv}$, is the largest decreasing bisimulation contained in $\equiv$. \end{definition} We do not pause to justify the definition of the relation $\simeq_{\mathrm{dec}}^{\equiv}$. The fact is that $\simeq_{\mathrm{dec}}^{\equiv}$ is a congruence, and moreover, it satisfies the following: \begin{enumerate} \item $\simeq_{\mathrm{dec}}^{\equiv}$ is decompositional if $\equiv$ is right-cancellative. \item $\mathsf{Exp}(\equiv)$, and also ${\mathsf{Exp}(\equiv)} \cap {\equiv}$, is right-cancellative if ${\equiv}$ is decompositional.
\end{enumerate} According to these two facts, $\simeq_{\mathrm{dec}}^{{\mathsf{Exp}(\equiv)} \cap {\equiv}}$ is decompositional whenever ${\equiv}$ is. From here, we can define $\mathsf{Ref}(\equiv)$ to be $\simeq_{\mathrm{dec}}^{{\mathsf{Exp}(\equiv)} \cap {\equiv}}$. In order to get a characterization of $\simeq_{\mathrm{dec}}^{{\mathsf{Exp}(\equiv)} \cap {\equiv}}$, we need the following expansion relation for decreasing bisimilarity. \begin{definition} Let $\mathcal{R}$ be a binary relation on \textbf{realtime} processes. The {\em decreasing expansion} of $\mathcal{R}$, $\mathsf{Exp}_{\mathrm{dec}}(\mathcal{R})$, contains all pairs $(\alpha, \beta)$ satisfying the following conditions: \begin{enumerate} \item Whenever $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$, then $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ and $\alpha'\mathcal{R} \beta'$ for some $\beta'$. \item Whenever $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$, then $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$ and $\alpha'\mathcal{R} \beta'$ for some $\alpha'$. \end{enumerate} \end{definition} Then we can establish the following important property for realtime $\mathrm{nBPA}$: $(\alpha, \beta) \in \simeq_{\mathrm{dec}}^{\equiv}$ if and only if \begin{center} $\alpha \equiv \beta$ and $(\alpha, \beta) \in \mathsf{Exp}_{\mathrm{dec}}({\simeq_{\mathrm{dec}}^{\equiv}})$. \end{center} From this fact, considering that ${\mathsf{Ref}(\equiv)} = {\simeq_{\mathrm{dec}}^{{\mathsf{Exp}(\equiv)}\cap {\equiv}}}$, we have: $(\alpha, \beta) \in \mathsf{Ref}(\equiv)$ if and only if \begin{center} $\alpha \equiv \beta$ and $(\alpha,\beta) \in {\mathsf{Exp}(\equiv)}$ and $(\alpha, \beta) \in \mathsf{Exp}_{\mathrm{dec}}(\mathsf{Ref}(\equiv))$. 
\end{center} Now, to prove ${\stackrel{\mathcal{B}'}{\equiv}} = {\mathsf{Ref}(\stackrel{\mathcal{B}}{\equiv})}$, it suffices to prove: \begin{center} $\alpha \stackrel{\mathcal{B}'}{\equiv} \beta$ if and only if $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ and $(\alpha,\beta) \in \mathsf{Exp}(\stackrel{\mathcal{B}}{\equiv})$ and $(\alpha, \beta) \in \mathsf{Exp}_{\mathrm{dec}}(\stackrel{\mathcal{B}'}{\equiv})$. \end{center} According to this characterization, we clearly have ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$. Now it is time to explain that the procedure in Fig.~\ref{Checking_EXP_REALTIME} is actually based on this characterization. Suppose we want to check whether $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$. It suffices to check the following three conditions: \begin{enumerate} \item $X_i \stackrel{\mathcal{B}}{\equiv} \delta$. \item $(X_i, \delta) \in \mathsf{Exp}_{\mathrm{dec}}(\stackrel{\mathcal{B}'}{\equiv})$. \item $(X_i,\delta) \in \mathsf{Exp}(\stackrel{\mathcal{B}}{\equiv})$. \end{enumerate} Notice that these three conditions are deliberately arranged in the above order. Now we study the procedure described in~Fig.~\ref{Checking_EXP_REALTIME}. Step~1 corresponds to Condition~1: checking $X_i \stackrel{\mathcal{B}}{\equiv} \delta$. Step~2 and Step~4 correspond to Condition~2: checking $(X_i, \delta) \in \mathsf{Exp}_{\mathrm{dec}}(\stackrel{\mathcal{B}'}{\equiv})$. Step~3 and Step~5 partly correspond to Condition~3: checking $(X_i,\delta) \in \mathsf{Exp}(\stackrel{\mathcal{B}}{\equiv})$. In Step~3 and Step~5, we find that only increasing transitions are treated. This is because the decreasing transitions are already treated in Step~2 and Step~4, where stricter requirements are tested, since ${\stackrel{\mathcal{B}'}{\equiv}} \subseteq {\stackrel{\mathcal{B}}{\equiv}}$.
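As a finite-state illustration of expansion-based refinement (nBPA processes are infinite-state, so this is only an analogue; all names below are hypothetical), the following Python sketch computes strong bisimilarity on a finite labelled transition system by deleting pairs that fail the expansion clauses until the relation equals its own expansion.

```python
def bisimilarity(states, trans):
    """Compute strong bisimilarity on a finite LTS by refinement.

    `trans` maps each state to a set of (label, successor) pairs.
    Starting from the full relation, delete every pair violating the
    expansion conditions; the greatest fixpoint is bisimilarity.
    """
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in sorted(rel):
            # Clause 1: every move of p is matched by q.
            fwd = all(any(a == b and (p2, q2) in rel for (b, q2) in trans[q])
                      for (a, p2) in trans[p])
            # Clause 2: every move of q is matched by p.
            bwd = all(any(a == b and (p2, q2) in rel for (a, p2) in trans[p])
                      for (b, q2) in trans[q])
            if not (fwd and bwd):
                rel.discard((p, q))
                changed = True
    return rel

# Toy system: t and v are deadlocked, hence bisimilar; s and u are not,
# since u can reach the diverging state w.
trans = {
    "s": {("a", "t")},
    "t": set(),
    "u": {("a", "v"), ("a", "w")},
    "v": set(),
    "w": {("a", "w")},
}
rel = bisimilarity(["s", "t", "u", "v", "w"], trans)
```

The final relation is the greatest fixpoint and does not depend on the deletion order; the BPA algorithm replaces the explicit pair set by a decomposition base precisely because the state space there is infinite.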
\subsection{Another Understanding of the Refinement Operation}\label{subsec:Expansion_Another_undestanding} The characterization of the refinement operation given in Section~\ref{subsec:Expansion_Pre} works well. However, we currently do not know how to generalize this characterization to non-realtime systems. The main problem is that we cannot find a feasible way to define the expansion relation. This is because the technique of dynamic programming is used in the algorithm, which makes the expansion of $\stackrel{\mathcal{B}}{\equiv}$, if there is a way to define it at all, depend not only on $\stackrel{\mathcal{B}}{\equiv}$ but also on $\stackrel{\mathcal{B}'}{\equiv}$. This fact makes it very difficult to generalize the correctness proof in the way taken for the CL algorithm. Thus we hope to find a better way to prove the correctness of our algorithm. Before doing this for non-realtime systems, we first make the attempt for realtime systems. That is, we develop another characterization of the refinement operation for the procedure in Fig.~\ref{Checking_EXP_REALTIME}. The basic idea is to integrate the three parts into a single concept, which we call {\em decreasing bisimilarity with expansion}. To avoid confusion, readers are advised to set aside the terminology and notation of Section~\ref{subsec:Expansion_Pre}: the following terminology and notation may look similar, but their meanings are different. We do not provide proofs for the lemmas and theorems below, because they are special cases of those in Section~\ref{subsec:Expansion_in_general}. \begin{definition}\label{def:dec_bisimulation_expansion_realtime} Let $\equiv$ be a norm-preserving congruence on \textbf{realtime} processes, and let ${\mathcal{R}} \subseteq {\equiv}$ be a relation on \textbf{realtime} processes.
We say $\mathcal{R}$ is a {\em decreasing bisimulation with expansion of} $\equiv$ if the following conditions hold whenever $\alpha \mathcal{R} \beta$: \begin{enumerate} \item Whenever $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$, \begin{enumerate} \item if $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$, then $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ for some $\beta'$ such that $\alpha'\mathcal{R} \beta'$; \item if $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha'$, then $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$ for some $\beta'$ such that $\alpha' \equiv \beta'$. \end{enumerate} \item Whenever $\beta \stackrel{\ell}{\longrightarrow} \beta'$, \begin{enumerate} \item if $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$, then $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$ for some $\alpha'$ such that $\alpha'\mathcal{R} \beta'$; \item if $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$, then $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha'$ for some $\alpha'$ such that $\alpha'\equiv \beta'$. \end{enumerate} \end{enumerate} The {\em decreasing bisimilarity with expansion of $\equiv$}, denoted by $\simeq^{\equiv}$, is the largest decreasing bisimulation with expansion of ${\equiv}$. \end{definition} The following lemma confirms the validity of Definition~\ref{def:dec_bisimulation_expansion_realtime}. \begin{lemma}\label{lem:property_decreasing_bisimulation_expansion_realtime} The following properties hold: \begin{enumerate} \item The identity relation is a decreasing bisimulation with expansion of $\equiv$. \item Let $\mathcal{R}$ be a decreasing bisimulation with expansion of $\equiv$. Then, $\mathcal{R}^{-1}$ is also a decreasing bisimulation with expansion of $\equiv$. \item Let $\mathcal{R}_1$ and $\mathcal{R}_2$ be two decreasing bisimulations with expansion of $\equiv$.
Then, $\mathcal{R}_1 \circ \mathcal{R}_2$ is also a decreasing bisimulation with expansion of $\equiv$. \item Let $\{\mathcal{R}_{\lambda}\}_{\lambda \in I}$ be a family of decreasing bisimulations with expansion of $\equiv$. Then, $\bigcup_{\lambda \in I} \mathcal{R}_{\lambda}$ is a decreasing bisimulation with expansion of $\equiv$. \end{enumerate} \end{lemma} According to Lemma~\ref{lem:property_decreasing_bisimulation_expansion_realtime}, $\simeq^{\equiv}$ is an equivalence relation. According to Definition~\ref{def:dec_bisimulation_expansion_realtime}, any decreasing bisimulation with expansion of $\equiv$ must be norm-preserving, thus $\simeq^{\equiv}$ is also norm-preserving. Moreover, we have \begin{lemma} $\simeq^{\equiv}$ is a norm-preserving congruence. \end{lemma} Now we can define $\mathsf{Ref}(\equiv) = {\simeq^{\equiv}}$. The validity depends on the following two properties. \begin{lemma} \begin{enumerate} \item $ {\simeq^{\simeq}} = {\simeq}$. \item If ${\simeq} \subsetneq {\equiv}$, then $ {\simeq} \subseteq {\simeq^{\equiv}} \subsetneq {\equiv}$. \end{enumerate} \end{lemma} The unique decomposition property of $\simeq^{\equiv}$ can be established in the same way as that of $\simeq$, but it relies on the right cancellation property of ${\equiv}$. \begin{theorem}[Unique Decomposition Property of $\simeq^{\equiv}$]\label{thm:unique-decomposition-relative-realtime} Let ${\equiv}$ be a norm-preserving congruence which is right-cancellative. Then, $\simeq^{\equiv}$ is decompositional. \end{theorem} It is not hard to establish the following characterization theorem for $\simeq^{\equiv}$. \begin{theorem}\label{lem:char_relative_bisimilarity_realtime} Let $\alpha$, $\beta$ be realtime $\mathrm{nBPA}$ processes.
Then, $\alpha \simeq^{\equiv} \beta$ if and only if $\alpha \equiv \beta$ and \begin{enumerate} \item Whenever $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$, \begin{enumerate} \item if $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$, then $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ for some $\beta'$ such that $\alpha' \simeq^{\equiv} \beta'$; \item if $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha'$, then $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$ for some $\beta'$ such that $\alpha' \equiv \beta'$. \end{enumerate} \item Whenever $\beta \stackrel{\ell}{\longrightarrow} \beta'$, \begin{enumerate} \item if $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$, then $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$ for some $\alpha'$ such that $\alpha' \simeq^{\equiv}\beta'$; \item if $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$, then $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha'$ for some $\alpha'$ such that $\alpha'\equiv \beta'$. \end{enumerate} \end{enumerate} \end{theorem} When $\mathsf{Ref}(\equiv)$ is defined as $\simeq^{\equiv}$, namely ${\stackrel{\mathcal{B}'}{\equiv}} = {\simeq^{\stackrel{\mathcal{B}}{\equiv}}}$, using Theorem~\ref{lem:char_relative_bisimilarity_realtime}, we obtain exactly the procedure for checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ in Fig.~\ref{Checking_EXP_REALTIME}. It should be stressed that $\simeq^{\equiv}$ defined in Definition~\ref{def:dec_bisimulation_expansion_realtime} is exactly the same as the relation ${\simeq_{\mathrm{dec}}^{{\mathsf{Exp}(\equiv)}\cap {\equiv}}}$ defined in Section~\ref{subsec:Expansion_Pre}. They are two different understandings of the same refinement operation. \subsection{The Refinement Operation for Non-realtime Systems}\label{subsec:Expansion_in_general} We have spent a lot of space discussing the correctness of the algorithm for realtime processes.
The reason is that we want to generalize this approach to show the correctness of our algorithm for checking branching bisimilarity of totally normed BPA. It turns out that the classical proof for the CL algorithm cannot be generalized directly, so we found another characterization of the refinement operation in Section~\ref{subsec:Expansion_Another_undestanding}. This characterization, as expected, can be used to show the correctness of our algorithm described in Fig.~\ref{Checking_EXP}. In this section we discuss the refinement operation in detail. We start from the notion of {\em decreasing bisimilarity with expansion}. \begin{definition}\label{def:dec_bisimulation_expansion} Let $\equiv$ be a norm-preserving congruence on processes, and let ${\mathcal{R}} \subseteq {\equiv}$ be a relation on processes. We say $\mathcal{R}$ is a {\em decreasing bisimulation with expansion of} $\equiv$ if the following conditions hold whenever $\alpha \mathcal{R} \beta$: \begin{enumerate} \item Whenever $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$, then \begin{enumerate} \item either $\ell = \tau$ and $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^m$ for some $m \geq 0$ and $\beta^1,\ldots,\beta^m$ such that $\alpha'\mathcal{R} \beta^m$ and $\alpha \mathcal{R} \beta^k$ for every $1 \leq k \leq m$; \item or $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^m \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ for some $m \geq 0$ and $\beta^1,\ldots,\beta^m$ and $\beta'$ such that $\alpha'\mathcal{R} \beta'$ and
$\alpha \mathcal{R} \beta^k$ for every $1 \leq k \leq m$. \item or $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^m \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$ for some $m \geq 0$ and $\beta^1,\ldots,\beta^m$ and $\beta'$ such that $\alpha' \equiv \beta'$ and $\alpha \mathcal{R} \beta^k$ for every $1 \leq k \leq m$. \end{enumerate} \item Whenever $\beta \stackrel{\ell}{\longrightarrow} \beta'$, then \begin{enumerate} \item either $\ell=\tau$ and $\alpha \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^m$ for some $m \geq 0$ and $\alpha^1,\ldots,\alpha^m$ such that $\alpha^m\mathcal{R} \beta'$ and $\alpha^k \mathcal{R} \beta$ for every $1 \leq k \leq m$; \item or $\alpha \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^m \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$ for some $m \geq 0$ and $\alpha^1,\ldots,\alpha^m$ and $\alpha'$ such that $\alpha'\mathcal{R} \beta'$ and $\alpha^k \mathcal{R} \beta$ for every $1 \leq k \leq m$.
\item or $\alpha \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha^m \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha'$ for some $m \geq 0$ and $\alpha^1,\ldots,\alpha^m$ and $\alpha'$ such that $\alpha'\equiv \beta'$ and $\alpha^k \mathcal{R} \beta$ for every $1 \leq k \leq m$. \end{enumerate} \end{enumerate} The {\em decreasing bisimilarity with expansion of $\equiv$}, denoted by $\simeq^{\equiv}$, is the largest decreasing bisimulation with expansion of ${\equiv}$. If a relation ${\mathcal{R}} \subseteq {\equiv}$ satisfies the above conditions except for 1(c) and 2(c) whenever $\alpha \,\mathcal{R}\,\beta$, then we call $\mathcal{R}$ a {\em decreasing bisimulation}. \end{definition} Some explanation of Definition~\ref{def:dec_bisimulation_expansion} is in order. Firstly, assume for instance that $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$. The corresponding transition is either increasing or decreasing. If the transition is increasing, the only possibility is to match it as in item~1(c). If the transition is decreasing, there are two subcases. Item~1(a) corresponds to the situation where $\ell = \tau$ and this silent transition is vacantly matched. Item~1(b) corresponds to the situation where either $\ell$ is not silent, or the silent transition is explicitly matched. Whenever $\alpha \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha'$, we cannot tell in advance which of item~1(a) or item~1(b) should be chosen. So we first test condition~1(a), and if 1(a) does not hold then we test condition~1(b).
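As a toy illustration of this matching discipline, the following Python sketch searches for a match of a single transition $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$ on a finite-state system. The encoding is purely illustrative and not the paper's data structure: transitions are stored as `(label, kind, target)` triples with kind `"dec"` or `"inc"`, and the relations $\mathcal{R}$ and ${\equiv}$ are given as explicit sets of pairs.

```python
def find_match(alpha, alpha1, l, kind, beta, R, equiv, trans):
    """Search a matching sequence from beta for the transition
    alpha --l--> alpha1, following clauses 1(a)-1(c): a chain of
    decreasing silent steps, every intermediate state of which must
    itself be related to alpha."""
    seen, stack = {beta}, [beta]
    while stack:
        b = stack.pop()
        # clause 1(a): a decreasing silent step may be matched vacantly
        if kind == "dec" and l == "tau" and (alpha1, b) in R:
            return True
        for (l2, k2, b2) in trans.get(b, []):
            # clause 1(b): explicit match of a decreasing transition
            if kind == "dec" and l2 == l and k2 == "dec" and (alpha1, b2) in R:
                return True
            # clause 1(c): match of an increasing transition, up to equiv
            if kind == "inc" and l2 == l and k2 == "inc" and (alpha1, b2) in equiv:
                return True
            # extend the silent prefix only through states related to alpha
            if l2 == "tau" and k2 == "dec" and (alpha, b2) in R and b2 not in seen:
                seen.add(b2)
                stack.append(b2)
    return False
```

Testing clause 1(a) before 1(b) at each state reflects the test order just discussed.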
Secondly, when a transition $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$ is matched by $\beta$, Definition~\ref{def:dec_bisimulation_expansion} takes a different style from Definition~\ref{def:beq}, the common definition of branching bisimulation. Consider the condition~1(b) for example. In this case we require the matching sequence of $\beta$ to be \[ \beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^m \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta' \] such that $\alpha'\mathcal{R} \beta'$ and $\alpha \mathcal{R} \beta^i$ for every $1 \leq i \leq m$. That is, every intermediate $\beta^i$ must be related to $\alpha$. In Definition~\ref{def:beq}, however, we take the simplified matching sequence of $\beta$: \[ \beta \stackrel{\tau}{\Longrightarrow}_{\mathrm{dec}} \beta'' \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta' \] such that $\alpha'\mathcal{R} \beta'$ and $\alpha \mathcal{R} \beta''$. The reason is explained as follows. In the normal definition of branching bisimulation, although we do not require $\alpha \mathcal{R} \beta^i$ for every intermediate $\beta^i$, the largest bisimulation, $\simeq$, satisfies the Computation Lemma (Lemma~\ref{computation-lemma}).
Thus if $\mathcal{R}$ is replaced by $\simeq$, namely if $\alpha \simeq \beta$ and $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$ is matched by \[ \beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta^m \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta' \] such that $\alpha' \simeq \beta'$ and $\alpha \simeq \beta^m$, then we immediately have $\alpha \simeq \beta^i$ for every $1 \leq i \leq m$. But at present we cannot establish the Computation Lemma for $\simeq^{\equiv}$, since this property depends on another equivalence $\equiv$, and in Definition~\ref{def:dec_bisimulation_expansion} we do not impose any restrictions on $\equiv$. Thus the Computation Lemma cannot be established if normal-style matchings are taken in Definition~\ref{def:dec_bisimulation_expansion}. One way of defining decreasing bisimulation with expansion of $\equiv$ is therefore to strengthen the relevant requirements. We take this style not because we need the Computation Lemma, but because we need the conditions appearing in Definition~\ref{def:dec_bisimulation_expansion} to be close to the conditions checked by the algorithm. Thirdly, the `semi-branching' style (see Definition~\ref{def:semi-beq}) is taken in the case of vacant matching. This is not necessary but is helpful for showing the transitivity of $\simeq^{\equiv}$. The following lemma confirms that the relation $\simeq^{\equiv}$ is well-defined. \begin{lemma}\label{lem:property_decreasing_bisimulation_expansion} The following properties hold: \begin{enumerate} \item The identity relation is a decreasing bisimulation with expansion of $\equiv$. \item Let $\mathcal{R}$ be a decreasing bisimulation with expansion of $\equiv$. Then, $\mathcal{R}^{-1}$ is also a decreasing bisimulation with expansion of $\equiv$.
\item Let $\mathcal{R}_1$ and $\mathcal{R}_2$ be two decreasing bisimulations with expansion of $\equiv$. Then, $\mathcal{R}_1 \circ \mathcal{R}_2$ is also a decreasing bisimulation with expansion of $\equiv$. \item Let $\{\mathcal{R}_{\lambda}\}_{\lambda \in I}$ be a set of decreasing bisimulations with expansion of $\equiv$. Then, $\bigcup_{\lambda \in I} \mathcal{R}_{\lambda}$ is a decreasing bisimulation with expansion of $\equiv$. \end{enumerate} \end{lemma} According to Lemma~\ref{lem:property_decreasing_bisimulation_expansion}, $\simeq^{\equiv}$ is an equivalence relation. Since $\simeq^{\equiv}$ is a decreasing bisimulation with expansion of $\equiv$ according to Definition~\ref{def:dec_bisimulation_expansion}, we have \begin{lemma}\label{lem:decreasing_bisimulation_inequality} ${\simeq^{\equiv}} \subseteq {\equiv}$. \end{lemma} According to Definition~\ref{def:dec_bisimulation_expansion}, any decreasing bisimulation with expansion of $\equiv$ must be norm-preserving, thus $\simeq^{\equiv}$ is also norm-preserving. Moreover, $\simeq^{\equiv}$ is a congruence. \begin{lemma} $\simeq^{\equiv}$ is a norm-preserving congruence. \end{lemma} \begin{proof} We only show that $\simeq^{\equiv}$ is a congruence. Let \[ \mathcal{S} = \{(\alpha_1 \cdot \alpha_2, \beta_1\cdot \beta_2) \;|\; \alpha_1 \simeq^{\equiv} \beta_1 \mbox{ and } \alpha_2 \simeq^{\equiv} \beta_2\} \cup {\simeq^{\equiv}}. \] We show $\mathcal{S}$ is a decreasing bisimulation with expansion of $\equiv$. This is done by checking the conditions in Definition~\ref{def:dec_bisimulation_expansion} for every $(\alpha_1 \cdot \alpha_2, \beta_1\cdot \beta_2) \in \mathcal{S}$. If $\alpha_1 = \epsilon = \beta_1$, the case is trivial. If $\alpha_1 \neq \epsilon$ and $\beta_1 \neq \epsilon$, the proof is done by case analysis. We study only two cases; the other cases are similar.
\begin{itemize} \item Suppose there is a transition $\alpha_1\alpha_2 \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha_1'\alpha_2$; we shall find the matching from $\beta_1\beta_2$. Remember that $\alpha_1 \simeq^{\equiv} \beta_1$; thus every transition $\alpha_1 \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha_1'$ has a matching from $\beta_1$. Say, we have the matching: \[ \beta_1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^m \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta_1' \] such that $\alpha_1' \simeq^{\equiv} \beta_1'$ and $\alpha_1 \simeq^{\equiv} \beta_1^i$ for every $1 \leq i \leq m$. Then we have \[ \beta_1\beta_2 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^1\beta_2 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^i\beta_2 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^m\beta_2 \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta_1'\beta_2. \] According to the definition of $ \mathcal{S}$ and the fact $\alpha_2 \simeq^{\equiv} \beta_2$, we have $(\alpha_1'\alpha_2, \beta_1'\beta_2) \in \mathcal{S}$, and $(\alpha_1\alpha_2, \beta_1^i\beta_2) \in \mathcal{S}$ for every $1 \leq i \leq m$. \item Suppose there is a transition $\alpha_1\alpha_2 \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha_1'\alpha_2$; we shall find the matching from $\beta_1\beta_2$. Remember that $\alpha_1 \simeq^{\equiv} \beta_1$; thus every transition $\alpha_1 \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha_1'$ has a matching from $\beta_1$.
Say, we have the matching: \[ \beta_1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^m \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta_1' \] such that $\alpha_1' \equiv \beta_1'$ and $\alpha_1 \simeq^{\equiv} \beta_1^i$ for every $1 \leq i \leq m$. Then we have \[ \beta_1\beta_2 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^1\beta_2 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^i\beta_2 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1^m\beta_2 \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta_1'\beta_2. \] According to the definition of $ \mathcal{S}$ and the fact $\alpha_2 \simeq^{\equiv} \beta_2$, we have $(\alpha_1\alpha_2, \beta_1^i\beta_2) \in \mathcal{S}$ for every $1 \leq i \leq m$. Knowing ${\simeq^{\equiv}} \subseteq {\equiv}$, we have $\alpha_2 \equiv \beta_2$, and by congruence of ${\equiv}$ we have $\alpha_1'\alpha_2 \equiv \beta_1'\beta_2$. \qed \end{itemize} \end{proof} Now we can define the refinement operation $\mathsf{Ref}(\equiv)$ as ${\simeq^{\equiv}}$. The validity of this definition depends on the following lemma. \begin{lemma} The following two properties hold. \begin{enumerate} \item $ {\simeq^{\simeq}} = {\simeq}$. \item If ${\simeq} \subsetneq {\equiv}$, then $ {\simeq} \subseteq {\simeq^{\equiv}} \subsetneq {\equiv}$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item At first, we show that ${\simeq} \subseteq {\simeq^{\equiv}}$ for every ${\equiv} \supseteq {\simeq}$. As a special case, we have ${\simeq} \subseteq {\simeq^{\simeq}}$.
Assume that $\alpha \simeq \beta$; then we can check the conditions in Definition~\ref{def:dec_bisimulation_expansion}, taking $\mathcal{R} = {\simeq}$. This is routine work, by applying the Computation Lemma (Lemma~\ref{computation-lemma}). To see why ${\simeq^{\simeq}} \subseteq {\simeq}$, we notice ${\simeq^{\equiv}} \subseteq {\equiv}$ (Lemma~\ref{lem:decreasing_bisimulation_inequality}), and take ${\equiv}$ to be $\simeq$. \item By the proof of the first item and Lemma~\ref{lem:decreasing_bisimulation_inequality}, we already have $ {\simeq} \subseteq {\simeq^{\equiv}} \subseteq {\equiv}$ whenever ${\simeq} \subseteq {\equiv}$. Now we assume further that ${\simeq} \subseteq {\simeq^{\equiv}} = {\equiv}$; we will show that ${\simeq} = {\simeq^{\equiv}} = {\equiv}$. It suffices to show that $ {\simeq^{\equiv}} = {\equiv}$ is a branching bisimulation (Definition~\ref{def:beq}). Because ${\simeq^{\equiv}}$ is a decreasing bisimulation with expansion of $\equiv$, it satisfies the conditions in Definition~\ref{def:dec_bisimulation_expansion}. By taking $\mathcal{R}$ to be both ${\simeq^{\equiv}}$ and $\equiv$, we see that the bisimulation property in Definition~\ref{def:beq} can be inferred. \qed \end{enumerate} \end{proof} The unique decomposition property of $\simeq^{\equiv}$ can be established in the same way as that of $\simeq$, but relies on the right cancellation property of ${\equiv}$. \begin{theorem}[Unique Decomposition Property of $\simeq^{\equiv}$]\label{thm:unique-decomposition-relative} Let ${\equiv}$ be a norm-preserving congruence which is right-cancellative. Then, $\simeq^{\equiv}$ is decompositional. \end{theorem} \begin{proof} It suffices to show that $\{(\alpha, \beta): \alpha\gamma \simeq^{\equiv} \beta\gamma \mbox{ for some }\gamma\}$ is a decreasing branching bisimulation wrt.~$\equiv$. In the proof the right cancellativity of ${\equiv}$ is used. Then the proof goes in the same way as in Theorem~\ref{thm:unique-decomposition}.
\qed \end{proof} According to Theorem~\ref{thm:unique-decomposition-relative}, $\simeq^{\equiv}$ is decompositional whenever $\equiv$ is right-cancellative. This is the key property for defining the refinement operation. Now, our refinement operation $\mathsf{Ref}(\equiv)$ can be defined as $\simeq^{\equiv}$. \section{The Correctness of the Algorithm}\label{sec:correctness} In this section we will show that the $\mathcal{B}'$ constructed from $\mathcal{B}$ during an iteration is exactly the decomposition base of $\mathsf{Ref}(\stackrel{\mathcal{B}}{\equiv}) = {\simeq^{\stackrel{\mathcal{B}}{\equiv}}}$ defined in Section~\ref{subsec:Expansion_in_general}. \subsection{The Characterization of ${\simeq^{\equiv}}$}\label{subsec:Expansion_characterization} Recall that in Section~\ref{subsec:Expansion_Another_undestanding} we remarked that the procedure in Fig.~\ref{Checking_EXP_REALTIME} is correct for realtime systems. There the proof was straightforward, because the procedure checks exactly the conditions in the characterization theorem (Theorem~\ref{lem:char_relative_bisimilarity_realtime}), which are exactly the conditions in Definition~\ref{def:dec_bisimulation_expansion_realtime}. However, this is not the case now, and there are a number of subtleties. In the following, we will develop some terminologies which make it easier for us to formulate our results. First we need an adequate notion of `expansion' relation which is suitable for Definition~\ref{def:dec_bisimulation_expansion} and close to the testing procedure. We call this notion {\em compound expansion}. \begin{definition}\label{def:compound_expansion} Let $\equiv$ be a norm-preserving congruence on processes, and let ${\mathcal{R}} \subseteq {\equiv}$ be a relation on processes.
The {\em compound expansion} wrt.~$\mathcal{R}$ and $\equiv$, denoted by $\mathsf{ComExp}_{\equiv}(\mathcal{R})$, contains all pairs $(\alpha, \beta)$ which satisfy $\alpha \equiv \beta$ and the following conditions: \begin{enumerate} \item Whenever $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$, then either \begin{enumerate} \item $\ell = \tau$ and $\alpha' \mathcal{R} \beta$; or \item $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ and $\alpha'\mathcal{R} \beta'$ for some $\beta'$; or \item $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$ and $\alpha' \equiv \beta'$ for some $\beta'$; or \item $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta''$ and $\alpha \mathcal{R} \beta''$ for some $\beta''$. \end{enumerate} \item Whenever $\beta \stackrel{\ell}{\longrightarrow} \beta'$, then either \begin{enumerate} \item $\ell=\tau$ and $\alpha \mathcal{R} \beta'$; or \item $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha'$ and $\alpha'\mathcal{R} \beta'$ for some $\alpha'$; or \item $\alpha \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha'$ and $\alpha' \equiv \beta'$ for some $\alpha'$; or \item $\alpha \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha''$ and $\alpha'' \mathcal{R} \beta$ for some $\alpha''$. \end{enumerate} \end{enumerate} \end{definition} The correctness of Definition~\ref{def:compound_expansion} is confirmed by the following lemmas. \begin{lemma}\label{lem:decreasing_branching_bisimulation_oneside_contain} If $\mathcal{R}$ is a decreasing bisimulation with expansion of $\equiv$ (see Definition~\ref{def:dec_bisimulation_expansion}), then $\mathcal{R} \subseteq \mathsf{ComExp}_{\equiv}(\mathcal{R})$. In particular, ${\simeq^{\equiv}} \subseteq \mathsf{ComExp}_{\equiv}(\simeq^{\equiv})$. \end{lemma} \begin{proof} This fact follows by comparing the conditions in Definition~\ref{def:compound_expansion} and Definition~\ref{def:dec_bisimulation_expansion}.
When $\mathcal{R}$ is a decreasing bisimulation with expansion of $\equiv$ and $\alpha \mathcal{R} \beta$, $(\alpha,\beta)$ satisfies the conditions in Definition~\ref{def:dec_bisimulation_expansion}. Then we can find that $(\alpha,\beta)$ also satisfies the conditions in Definition~\ref{def:compound_expansion}. \qed \end{proof} \begin{lemma}\label{lem:decreasing_branching_bisimulation_twoside_contain} $\mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ is a decreasing bisimulation with expansion of $\equiv$. In particular, $ \mathsf{ComExp}_{\equiv}(\simeq^{\equiv}) \subseteq {\simeq^{\equiv}}$. \end{lemma} \begin{proof} At first, remember that $\mathsf{ComExp}_{\equiv}(\mathcal{R}) \subseteq {\equiv}$ according to Definition~\ref{def:compound_expansion}. This is the prerequisite of $\mathsf{ComExp}_{\equiv}(\mathcal{R})$ being a decreasing bisimulation with expansion of $\equiv$. This fact will be implicitly used in the remaining proof. Let $(\alpha, \beta) \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ and $\alpha \stackrel{\ell}{\longrightarrow} \alpha'$. According to the definition of $\mathsf{ComExp}_{\equiv}$ (Definition~\ref{def:compound_expansion}), there are four cases: \begin{enumerate} \item $\ell = \tau$ and $\alpha' \simeq^{\equiv} \beta$. In this case, we have $(\alpha', \beta) \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ according to Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain}. Thus condition~1(a) of Definition~\ref{def:dec_bisimulation_expansion} holds (with $m=0$). \item $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ and $\alpha' \simeq^{\equiv} \beta'$ for some $\beta'$. In this case, we have $(\alpha', \beta') \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ according to Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain}. This is the special case of the condition~1(b) of Definition~\ref{def:dec_bisimulation_expansion} in which $m = 0$. 
Thus condition~1(b) of Definition~\ref{def:dec_bisimulation_expansion} holds. \item $\beta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$ and $\alpha' \equiv \beta'$ for some $\beta'$. This is the special case of the condition~1(c) of Definition~\ref{def:dec_bisimulation_expansion} in which $m = 0$. Thus condition~1(c) of Definition~\ref{def:dec_bisimulation_expansion} holds. \item $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta''$ and $\alpha \simeq^{\equiv} \beta''$ for some $\beta''$. In this case, we have $(\alpha, \beta'') \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ according to Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain}. We can now apply the induction hypothesis to the pair $(\alpha, \beta'')$. Note that this case cannot repeat forever, since each decreasing silent transition strictly decreases the norm. Eventually, case~1, case~2, or case~3 must happen. \begin{itemize} \item If case~1 eventually happens, then we have $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_m$ for some $m > 0$ and $\beta_1,\ldots,\beta_m$ such that $\alpha' \simeq^{\equiv} \beta_m$ and $(\alpha, \beta_k)\in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ for every $1 \leq k \leq m$. Now we also have $(\alpha', \beta_m) \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ according to Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain}. Consequently condition~1(a) of Definition~\ref{def:dec_bisimulation_expansion} holds (with $m > 0$).
\item If case~2 eventually happens, then we get $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_m \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta'$ for some $m > 0$ and $\beta_1,\ldots,\beta_m$ and $\beta'$ such that $(\alpha', \beta') \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ and $(\alpha, \beta_k) \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ for every $1 \leq k \leq m$, according to Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain}. Consequently condition~1(b) of Definition~\ref{def:dec_bisimulation_expansion} holds (with $m > 0$). \item If case~3 eventually happens, then we get $\beta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_1 \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta_m \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta'$ for some $m > 0$ and $\beta_1,\ldots,\beta_m$ and $\beta'$ such that $\alpha' \equiv \beta'$ and $(\alpha, \beta_k) \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ for every $1 \leq k \leq m$, according to Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain}. Consequently condition~1(c) of Definition~\ref{def:dec_bisimulation_expansion} holds (with $m > 0$). \qed \end{itemize} \end{enumerate} \end{proof} From Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain} and Lemma~\ref{lem:decreasing_branching_bisimulation_twoside_contain}, we conclude the following important characterization of $\simeq^{\equiv}$. \begin{theorem}\label{thm:char_relative_bisimilarity} $\alpha \simeq^{\equiv} \beta$ if and only if $(\alpha, \beta) \in \mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$.
\end{theorem} \begin{remark} The converse of Lemma~\ref{lem:decreasing_branching_bisimulation_oneside_contain} also holds. That is, if a relation $\mathcal{R}$ satisfies $\mathcal{R} \subseteq \mathsf{ComExp}_{\equiv}(\mathcal{R})$, then $\mathcal{R}$ is a decreasing bisimulation with expansion of $\equiv$. According to this fact and Theorem~\ref{thm:char_relative_bisimilarity}, the congruence ${\simeq^{\equiv}}$ is the greatest fixpoint of $\mathsf{ComExp}_{\equiv}$. Thus the congruence ${\simeq^{\equiv}}$ can be completely characterized via the operation $\mathsf{ComExp}_{\equiv}$. Readers may have noticed that the conditions in Definition~\ref{def:compound_expansion} are quite different from the conditions in Definition~\ref{def:dec_bisimulation_expansion}. In Definition~\ref{def:dec_bisimulation_expansion} we lay stress on getting a congruence relation from a congruence relation. On the other hand, in Definition~\ref{def:compound_expansion}, the purpose is to give a characterization which makes the conditions easy to check in the algorithm. We do not need $\mathsf{ComExp}_{\equiv}(\mathcal{R})$ to satisfy many desirable properties. The difference between these two definitions must be highlighted, because it does not arise in the case of realtime $\mathrm{nBPA}$; the existence of silent actions does make things difficult. However, according to Theorem~\ref{thm:char_relative_bisimilarity}, $\mathsf{ComExp}_{\equiv}({\simeq^{\equiv}})$ is indeed a desirable congruence. \end{remark} \subsection{The Correctness of the Algorithm}\label{subsec:Expansion_correctness} Theorem~\ref{thm:char_relative_bisimilarity} gives us a potential way to get an implementation of the refinement operation. That is, it provides a potential way to implement $\mathtt{lpftest}_{(\mathbf{P}', \mathbf{E}')}(X_i, X_j)$ at line~10. In the following discussion, for convenience we presuppose that $\stackrel{\mathcal{B}'}{\equiv}$ is equal to ${\simeq^{\stackrel{\mathcal{B}}{\equiv}}}$.
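Before going on, the greatest-fixpoint reading of Theorem~\ref{thm:char_relative_bisimilarity} can be illustrated on a toy finite-state system. The Python sketch below is hypothetical and is not the paper's algorithm (which works on infinite-state nBPA via decomposition bases): transitions are `(label, kind, target)` triples, relations are explicit sets of pairs, and $\mathsf{ComExp}_{\equiv}$ is iterated downward from ${\equiv}$ until it stabilizes.

```python
def comexp(R, equiv, trans):
    """One application of ComExp: keep the pairs of `equiv` whose
    transitions can be answered via the four clauses of the compound
    expansion, in both directions."""
    def answers(a, b, R, E):
        for (l, kind, a1) in trans.get(a, []):
            bs = trans.get(b, [])
            ok = (l == "tau" and (a1, b) in R)                           # (a)
            ok = ok or any(l == l2 and k2 == "dec" and (a1, b1) in R
                           for (l2, k2, b1) in bs)                       # (b)
            ok = ok or any(l == l2 and k2 == "inc" and (a1, b1) in E
                           for (l2, k2, b1) in bs)                       # (c)
            ok = ok or any(l2 == "tau" and k2 == "dec" and (a, b1) in R
                           for (l2, k2, b1) in bs)                       # (d)
            if not ok:
                return False
        return True

    inv = lambda S: {(y, x) for (x, y) in S}
    return {(a, b) for (a, b) in equiv
            if answers(a, b, R, equiv) and answers(b, a, inv(R), inv(equiv))}

def refine(equiv, trans):
    """Iterate ComExp downward from `equiv`; on a finite system the
    iteration reaches the greatest fixpoint."""
    R = set(equiv)
    while True:
        R2 = comexp(R, equiv, trans)
        if R2 == R:
            return R
        R = R2
```

Since each condition uses membership in `R` positively, `comexp` is monotone, so the iteration shrinks at every step and terminates on a finite state space.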
We will develop more properties of $\stackrel{\mathcal{B}'}{\equiv}$. According to Theorem~\ref{thm:char_relative_bisimilarity}, checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ is equivalent to checking $(X_i, \delta) \in \mathsf{ComExp}_{\stackrel{\mathcal{B}}{\equiv}}(\stackrel{\mathcal{B}'}{\equiv})$. Note at first that $ \mathsf{ComExp}_{\stackrel{\mathcal{B}}{\equiv}}(\stackrel{\mathcal{B}'}{\equiv})$ concerns the relation $\stackrel{\mathcal{B}'}{\equiv}$ itself, and $\mathcal{B}'$ is not completely known at the moment. Fortunately, we have the following two critical observations. \begin{description} \item[Observation~1.] At the moment of testing on the pair $(X_i,\delta)$, we already know the base $\mathcal{B}$ and the part of $\mathcal{B}'$ concerning the constants with indices less than $i$. Thus we can suppose that $\mathtt{dcmp}_{\mathcal{B}} (X_i)$ is known for every $i$ such that $1 \leq i \leq n$, and $\mathtt{dcmp}_{\mathcal{B}'} (X_j)$ is known for every $j$ such that $1 \leq j < i$. Therefore we are able to answer whether $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ for any $\alpha,\beta \in \mathbf{C}^{*}$, and whether $\alpha \stackrel{\mathcal{B}'}{\equiv} \beta$ for any $\alpha,\beta \in \mathbf{C}_{i-1}^{*}$. \item[Observation~2.] Whenever decreasing transitions are concerned, say $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta$, according to Lemma~\ref{lem:decreasing_transition}, we have $\beta \in \mathbf{C}_{i-1}^{*}$. \end{description} With these two observations, we can develop the efficient procedure for checking $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ for realtime systems (recall Theorem~\ref{lem:char_relative_bisimilarity_realtime}). But at present, the situation is more complicated. In the presence of silent actions, the above two observations do not directly lead to an efficient procedure. Consecutive silent actions do cause inconvenience. Consider the following scenario.
Assume we want to show $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ and let $X_i \stackrel{\ell}{\longrightarrow} \alpha$ be a transition which is required to be matched by $\delta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta'' \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \ldots \stackrel{\ell}{\longrightarrow} \beta$ with $X_i \stackrel{\mathcal{B}'}{\equiv} \beta''$. In this situation we still do not know whether $X_i \stackrel{\mathcal{B}'}{\equiv} \beta''$ because $\mathtt{dcmp}_{\mathcal{B}'} (X_i)$ still needs computing. To handle this difficulty, we need some other techniques. Before doing this, we notice the following critical observation: \begin{description} \item[Observation~3.] Since $\delta \in \mathbf{P}'^{*}$, according to Lemma~\ref{lem:tau_prime_string}, $\delta$ has no transition of the form $\delta \stackrel{\tau}{\longrightarrow} \beta'$ which satisfies $\delta \stackrel{\mathcal{B}'}{\equiv} \beta'$. \end{description} Whenever $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$, the above critical observation gives rise to the following lemma. \begin{lemma}\label{lem:critical} Assume ${\stackrel{\mathcal{B}'}{\equiv}} = {\simeq^{\stackrel{\mathcal{B}}{\equiv}}}$. If $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ and $\delta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta$, then we do \textbf{not} have $X_i \stackrel{\mathcal{B}'}{\equiv} \beta$. \end{lemma} According to Lemma~\ref{lem:critical}, we can make the following two assertions. First, when a transition $\delta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta$ is matched by $X_i$, vacant matching cannot happen. Second, when a transition $X_i \stackrel{\ell}{\longrightarrow} \alpha$ is matched by $\delta$, `state-preserving' silent transitions cannot occur. With these two assertions, Theorem~\ref{thm:char_relative_bisimilarity} can be rewritten as follows.
\begin{theorem}\label{thm:exp_relative_membership_simplified} Let $\mathcal{B} = (\mathbf{P}, \mathbf{E})$ and $\mathcal{B}' = (\mathbf{P}', \mathbf{E}')$ be two decomposition bases which validate ${\stackrel{\mathcal{B}'}{\equiv}} = {\simeq^{\stackrel{\mathcal{B}}{\equiv}}}$. Assume $\delta \in {\mathbf{P}'}_{i-1}^{*}$. Then $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ if and only if $X_i \stackrel{\mathcal{B}}{\equiv} \delta$ and the following conditions are satisfied: \begin{enumerate} \item Whenever $X_i \stackrel{\ell}{\longrightarrow} \alpha$, then either \begin{enumerate} \item $\ell = \tau$ and $\alpha \stackrel{\mathcal{B}'}{\equiv} \delta$; or \item $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta$ and $\alpha \stackrel{\mathcal{B}'}{\equiv} \beta$ for some $\beta$; or \item $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta$ and $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ for some $\beta$. \end{enumerate} \item Either $X_i \stackrel{\tau}{\longrightarrow} \alpha$ and $\alpha \stackrel{\mathcal{B}'}{\equiv} \delta$ for some $\alpha$; or, whenever $\delta \stackrel{\ell}{\longrightarrow} \beta$, either \begin{enumerate} \item $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$ and $\alpha \stackrel{\mathcal{B}'}{\equiv} \beta$ for some $\alpha$, or \item $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha$ and $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ for some $\alpha$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} Remember that Theorem~\ref{thm:char_relative_bisimilarity} confirms that $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ if and only if $(X_i, \delta) \in \mathsf{ComExp}_{\stackrel{\mathcal{B}}{\equiv}}(\stackrel{\mathcal{B}'}{\equiv})$.
According to Definition~\ref{def:compound_expansion}, $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$ if and only if $X_i \stackrel{\mathcal{B}}{\equiv} \delta$ and: \begin{enumerate} \item Whenever $X_i \stackrel{\ell}{\longrightarrow} \alpha$, then either \begin{enumerate} \item $\ell = \tau$ and $\alpha \stackrel{\mathcal{B}'}{\equiv} \delta$; or \item $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \beta$ and $\alpha \stackrel{\mathcal{B}'}{\equiv} \beta$ for some $\beta$; or \item $\delta \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \beta$ and $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ for some $\beta$; or \item $\delta \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \beta'$ and $X_i \stackrel{\mathcal{B}'}{\equiv} \beta'$ for some $\beta'$. \end{enumerate} \item Whenever $\delta \stackrel{\ell}{\longrightarrow} \beta$, then either \begin{enumerate} \item $\ell=\tau$ and $X_i \stackrel{\mathcal{B}'}{\equiv} \beta$; or \item $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{dec}} \alpha$ and $\alpha \stackrel{\mathcal{B}'}{\equiv} \beta$ for some $\alpha$; or \item $X_i \stackrel{\ell}{\longrightarrow}_{\mathrm{inc}} \alpha$ and $\alpha \stackrel{\mathcal{B}}{\equiv} \beta$ for some $\alpha$; or \item $X_i \stackrel{\tau}{\longrightarrow}_{\mathrm{dec}} \alpha'$ and $\alpha' \stackrel{\mathcal{B}'}{\equiv} \beta$ for some $\alpha'$. \end{enumerate} \end{enumerate} Now, making use of Lemma~\ref{lem:critical}, we conclude that cases~1(d) and~2(a) cannot happen. The conditions above then become exactly the conditions in Theorem~\ref{thm:exp_relative_membership_simplified}. \qed \end{proof} Compared with Theorem~\ref{thm:char_relative_bisimilarity}, Theorem~\ref{thm:exp_relative_membership_simplified} has a great advantage.
When we need to determine whether $X_i \stackrel{\mathcal{B}'}{\equiv} \delta$, according to Theorem~\ref{thm:exp_relative_membership_simplified}, we only need to check several conditions which depend only on $\mathcal{B}$ and the part of $\mathcal{B}'$ in which only constants with index less than $i$ are involved. Thus we can use this fact to construct $\mathcal{B}'$ in a `bottom-up' way, which is exactly the procedure described in Fig.~\ref{Checking_EXP}. The proof of correctness of the algorithm is now finished. \section{Remark}\label{sec:remark} \subsection{Other Bisimilarities on Totally Normed $\mathrm{BPA}$} Compared with branching bisimilarity, other bisimilarities tend to be more flexible, and they are currently known to be NP-hard on $\mathrm{tnBPA}$. In the case of weak bisimilarity, two different problems deserve consideration. First, it is no longer decompositional, as is shown in Example~\ref{example:weak_bisimilarity_decom}. Second, it is capable of encoding NP-complete problems due to its more flexible matching style. There is a variant of weak bisimilarity called delay bisimilarity, which is still decompositional on $\mathrm{tnBPA}$. Using the unique decomposition property, we can confirm that delay bisimilarity is in PSPACE. The method is simply to guess a decomposition base $\mathcal{B} = (\mathbf{P}, \mathbf{E})$ and check that $\stackrel{\mathcal{B}}{\equiv}$ is a delay bisimulation. Still, the bisimulation property needs to be carefully defined. In any case, it is technically much easier than checking branching bisimilarity. Finally, we conjecture that deciding bisimilarities other than branching bisimilarity on $\mathrm{tnBPA}$ is PSPACE-complete. \subsection{On Branching Bisimilarity Checking} In the situation where silent transitions are treated as unobservable, branching bisimilarity has aroused the interest of researchers.
In most cases, previous decidability and complexity results for weak bisimilarity still hold for branching bisimilarity. There are two remarkable exceptions. The decidability of branching bisimilarity was established by Czerwi\'{n}ski, Hofman and Lasota~\cite{DBLP:conf/concur/CzerwinskiHL11} on normed BPP, and by Fu~\cite{DBLP:conf/icalp/Fu13} on normed BPA. In these two cases, the decidability of weak bisimilarity is unknown. Recently, we have proven that branching (and weak) bisimilarity is undecidable on every model above BPA and BPP in the PRS hierarchy, even in the normed case~\cite{DBLP:conf/icalp/YinFHHT14}. It is believed that branching bisimilarity is easier to decide than weak bisimilarity. Until now, there has been no concrete instance to support this belief. This paper provides an interesting one. We expect that more instances will be discovered in the future. \vspace*{4.5mm}\noindent {\bf Acknowledgement}. The author would like to thank S{\l}awomir Lasota for bringing to my attention the work of Czerwi\'{n}ski~\cite{CzerwinskiPhD}, the current fastest algorithm for checking strong bisimilarity on normed $\mathrm{BPA}$, and to thank the members of BASICS for their helpful discussions on related topics. \bibliographystyle{plain}
\section{Introduction} Let $(M^n, g)$ be a smooth, compact Riemannian manifold without boundary of dimension $n\geq 3$. The Yamabe problem \cite{Ya} is to find a metric conformal to $g$ which has constant scalar curvature. This problem was solved by Yamabe, Trudinger, Aubin and Schoen in \cite{Ya, Tr, A, S}. A different approach to the Yamabe problem is the Yamabe flow, which was proposed by Hamilton \cite{H}. Denote by $R_g$ the scalar curvature of $g$ and by $r_g$ the mean value of $R_g$, i.e. $$r_g=\frac{\int_M R_g dV_g}{\int_M dV_g}.$$ Consider the following parabolic equation \begin {equation} \label{1.2} \frac{\partial g}{\partial t}=-(R_g-r_g)g. \end {equation} Hamilton showed the short time existence for \eqref{1.2} in \cite{H}. Chow \cite{Chow} proved that \eqref{1.2} converges to a metric of constant scalar curvature provided that the initial metric is locally conformally flat and has positive Ricci curvature. In \cite{Ye}, Ye obtained uniform a priori $C^1$ bounds for the solution of \eqref{1.2} on any conformally flat manifold, and showed that \eqref{1.2} converges smoothly to a metric of constant scalar curvature. Ye also proved that the Yamabe flow \eqref{1.2} exists for all time and converges smoothly to a unique limit of constant scalar curvature provided that the initial metric is scalar negative or scalar flat. Using the general concentration-compactness result \cite{Struwe}, Schwetlick and Struwe \cite{SS} proved the convergence of the Yamabe flow when $3\leq n\leq 5$ provided that the initial energy is below a certain threshold. In \cite{B05}, Brendle proved the convergence of the flow for arbitrary initial energy. CR geometry, which is the abstract model of real hypersurfaces in complex manifolds, has many analogies with the geometry of Riemannian manifolds.
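As an aside on the normalization in \eqref{1.2} (a standard computation, not carried out above): subtracting the average $r_g$ makes the flow volume-preserving.

```latex
% Evolution of the volume form along the normalized Yamabe flow (1.2):
% from \partial_t g_{ij} = -(R_g - r_g) g_{ij} and g^{ij} g_{ij} = n,
\frac{\partial}{\partial t}\, dV_g
  = \frac{1}{2}\, g^{ij}\,\frac{\partial g_{ij}}{\partial t}\, dV_g
  = -\frac{n}{2}\,(R_g - r_g)\, dV_g ,
% so, by the definition of the average r_g,
\frac{d}{dt}\int_M dV_g = -\frac{n}{2}\int_M (R_g - r_g)\, dV_g = 0 .
```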
Many mathematicians have made outstanding contributions in this field, such as Chern and Moser \cite{CM}, Fefferman \cite{Feff}, Folland \cite{F}, Folland and Stein \cite{FS}, Jerison and Lee \cite{JL-JDG, JL-JAMS, JL89}, Tanaka \cite{T}, and Webster \cite{W}. Jerison and Lee \cite{JL-JDG} studied a Yamabe type problem on CR manifolds. To distinguish it from the Riemannian Yamabe problem, it is called the CR Yamabe problem. Suppose that $(M,\th)$ is a compact strictly pseudo-convex CR manifold of real dimension $2n+1$ with a given contact form $\th$. The CR Yamabe problem is to find a contact form $\tilde{\th}$ conformal to $\th$ such that its Webster scalar curvature is constant. If we define a new contact form $\tilde{\th}=u^{\frac{2}{n}}\th$, where $u>0$, and denote by $\tilde{R}$ ($R$, resp.) the pseudo-Hermitian Webster scalar curvature with respect to the contact form $\tilde{\th}$ ($\th$, resp.), then the CR Yamabe problem is reduced to solving the following CR Yamabe equation \begin{equation}\label{CRYE} -(2+\frac{2}{n})\triangle_b u+Ru=\tilde{R}u^{1+\frac{2}{n}}, \end{equation} where $\triangle_b$ is the sub-Laplacian of $M$. The CR Yamabe invariant is defined as $$\l(M,\th)=\inf\{\frac{\int_M [(2+\frac{2}{n})\|\nabla_{\th} u\|^2+Ru^2]dV_{\th}}{(\int_M u^{2+\frac{2}{n}}dV_{\th})^{\frac{n}{n+1}}}:u>0,u\in S^2_1(M)\}.$$ Here $dV_{\th}$ is the volume form with respect to the contact form $\th$, and $S^2_1(M)$ is the Folland-Stein space, which is the completion of $C^1(M)$ with respect to the norm $$||u||_{S^2_1(M)}=(\int_M(|\nabla_{\th} u|^2_{\th}+|u|^2)dV_{\th})^{\frac{1}{2}}.$$ Jerison and Lee \cite{JL-JDG} solved the CR Yamabe problem when $n\geq 2$ and $M$ is not locally CR equivalent to the sphere. The remaining cases were solved by Gamara \cite{Ga}, and Gamara and Yacoub \cite{GY}.\\ Since $\l(M,\th)$ is determined by the CR structure, which is independent of the choice of $\th$, we denote it by $\l(M)$ from now on.
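The numerator of the quotient defining $\l(M,\th)$ is just the total Webster scalar curvature of $\tilde{\th}$; the following short computation (standard, and implicit above; it uses the volume transformation $dV_{\tilde{\th}}=u^{2+\frac{2}{n}}dV_{\th}$) records this:

```latex
% Multiply the CR Yamabe equation (with \tilde{R} the Webster scalar
% curvature of \tilde{\theta} = u^{2/n}\theta) by u and integrate by parts:
\int_M \Big[(2+\tfrac{2}{n})\,\|\nabla_{\theta} u\|^2 + R\,u^2\Big]\, dV_{\theta}
  = \int_M \tilde{R}\, u^{2+\frac{2}{n}}\, dV_{\theta}
  = \int_M \tilde{R}\, dV_{\tilde{\theta}} .
% In particular, the quotient defining \lambda(M,\theta) equals the average
% of \tilde{R} times \mathrm{Vol}(M,\tilde{\theta})^{\frac{1}{n+1}}.
```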
It is natural to ask whether we can solve the CR Yamabe problem by a parabolic argument. Namely, as an analogue of the Yamabe flow on a Riemannian manifold, one can construct the CR Yamabe flow as follows: \begin{equation}\label{CRYF} \frac{\partial}{\partial t}\tilde{\th}(t)=-(\tilde{R}-\tilde{r})\tilde{\th}(t). \end{equation} Here $\tilde{r}$ is the average value of the pseudohermitian scalar curvature $\tilde{R}$, defined by $$\tilde{r}=\frac{\int_M \tilde{R}dV_{\tilde{\th}}}{\int_M dV_{\tilde{\th}}}.$$ The CR Yamabe flow was first studied by Chang and Cheng \cite{CC}. They proved the short time existence in all dimensions and obtained a Harnack type inequality in dimension three. Zhang \cite{Z} proved the long time existence and convergence in the case $\l(M)<0$. In the case $\l(M)>0$, Ho \cite{Ho12} proved the long time existence in all dimensions, and the convergence when $M$ is the sphere. Recently, Ho and the authors \cite{HSW} proved the convergence when $n=1$. For a given contact form $\th_0$ on $M$, we say $\tilde{\th}$ is conformal to $\th_0$ if there is a positive function $f$ such that $$\tilde{\th}=f\th_0.$$ Let $[\th_0]$ be the conformal class of a given contact form $\th_0$ on $M$. If $\l(M)=0$, then we can find a contact form $\th\in [\th_0]$ with flat pseudohermitian scalar curvature. Without loss of generality, we may assume it is $\th_0$ itself. We consider the following CR Yamabe flow: \begin{equation}\label{CRYF1} \left\{ \begin{array}{ll} \frac{\partial}{\partial t}\tilde{\th}(t)&=-(\tilde{R}-\tilde{r})\tilde{\th}(t),\\[0.2cm] \tilde{\th}(t)&=u^{\frac{2}{n}}(t)\th_0,\\ \tilde{\th}(t)|_{t=0}&=\th. \end{array} \right. \end{equation} Here $\th$ may be $\th_0$ or some other fixed contact form from the conformal class $[\th_0]$, i.e.
$$\th=u(\cdot,0)^{\frac{2}{n}}\th_0.$$ In this paper, we follow the idea of Ye \cite{Ye} (pages 45-47) to prove the following main theorem: \begin {thm}\label {main} Let $(M,\th_0)$ be a smooth, strictly pseudo-convex $2n+1$ dimensional compact CR manifold. Suppose $\l(M)=0$. Then the CR Yamabe flow \eqref {CRYF1} exists for all time, and converges exponentially to a contact form with flat pseudo-Hermitian scalar curvature. \end {thm} The convergence argument depends on a Poincar\'e inequality and a CR Gagliardo-Nirenberg type inequality. In Section 2, we recall some basic concepts in CR geometry and derive a global version of the Poincar\'e inequality on CR manifolds. In Section 3, we prove the long time existence and exponential convergence of the CR Yamabe flow \eqref {CRYF1}. In the appendix, we prove a Gagliardo-Nirenberg type interpolation inequality in CR geometry. \section{Preliminaries and Notations} Let $M$ be an orientable, real, $(2n+1)$-dimensional manifold. A CR structure on $M$ is given by a complex $n$-dimensional subbundle $T_{1,0}$ of the complexified tangent bundle ${\mathbb C}TM$ of $M$, satisfying $T_{1,0}\cap T_{0,1}=\{0\}$, where $T_{0,1}=\bar{T}_{1,0}$. We assume the CR structure is formally integrable, that is, $T_{1,0}$ satisfies the Frobenius condition $[T_{1,0}, T_{1,0}]\subset T_{1,0}$. Set $G=Re(T_{1,0}\oplus T_{0,1})$. Then $G$ is a real $2n$-dimensional sub-bundle of $TM$, and $G$ carries a natural complex structure map $J: G\rightarrow G$ given by $J(V+\bar{V})=\sqrt{-1}(V-\bar{V})$ for $V\in T_{1,0}$. Let $E\subset T^{\ast}M$ denote the real line bundle $G^{\bot}$. Since $M$ is orientable and the complex structure $J$ induces an orientation on $G$, $E$ has a global non-vanishing section. A choice of such a 1-form $\th$ is called a pseudo-Hermitian structure on $M$.
Associated with such a $\theta$, the real symmetric bilinear form $L_\theta$ on $G$ defined by \begin{equation*} L_\theta(V,W)=d\theta(V,JW),~~V,W\in G, \end{equation*} is called the {\it Levi form} of $\theta$. $L_\theta$ extends by complex linearity to $\mathbb{C}G$, and induces a Hermitian form on $T_{1,0}$, which we write \begin{equation*} L_\theta(V,\bar W)=-\sqrt{-1} d\theta(V,\bar W),~~V,W\in T_{1,0}. \end{equation*} If $\theta$ is replaced by $\tilde\theta=f\theta$, $L_\theta$ changes conformally by $L_{\tilde\theta}=fL_\theta$. We assume that $M$ is {\it{strictly pseudo-convex}}, that is, $L_\theta$ is positive definite for a suitable $\theta$. In this case, $\theta$ defines a contact structure on $M$, and we call $\theta$ a contact form. We then define the volume form on $M$ as $dV_{\theta}=\theta\wedge (d\theta)^n$. We can choose a unique vector field $T$, called the characteristic direction, such that $\theta(T)=1$, $d\theta(T, \cdot)=0$, and $TM=G\oplus \mathbb{R}T$. We can then define a co-frame $\{\theta, \theta^1, \theta^2, \cdots, \theta^n\}$ satisfying $\theta^\alpha(T)=0$, which is called an admissible coframe. Its dual frame $\{T, Z_1, Z_2, \cdots, Z_n\}$ is called an admissible frame. In this co-frame, we have $d\theta=\sqrt{-1} h_{\alpha\bar\beta}\theta^\alpha\wedge\theta^{\bar\beta}$, where $h_{\alpha\bar\beta}$ is a Hermitian matrix. $h_{\a\bar{\b}}$ and $h^{\a\bar{\b}}$ are used to lower and raise the indices. The sub-Laplacian operator $\triangle_b$ is defined by $$\int_M (\triangle_b u)fdV_{\th}=-\int_M\langle du,df\rangle_{\th}dV_{\th},$$ for all smooth functions $f$. Here $\langle\cdot,\cdot\rangle_{\th}$ is the inner product induced by $L_{\th}$. We denote $|\nabla_{\th}u|^2=\langle du,du\rangle_{\th}$. Tanaka \cite{T} and Webster \cite{W} showed that there is a natural connection on the bundle $T_{1,0}$ adapted to a pseudo-Hermitian structure, which is called the Tanaka-Webster connection. To define this connection, we choose an admissible co-frame $\{\th^{\a}\}$ and dual frame $\{Z_{\a}\}$ for $T_{1,0}$.
Then there are uniquely determined 1-forms $\omega_{\a\bar{\b}}$, $\tau_{\a}$ on $M$, satisfying \begin{eqnarray} d\theta^\alpha &=& \omega^\alpha_\beta\wedge\theta^\beta+\theta\wedge\tau^\alpha,\\ dh_{\alpha\bar\beta} &=& h_{\alpha\bar\gamma}\omega_{\bar\beta}^{\bar\gamma}+\omega_{\alpha}^{\gamma}h_{\gamma\bar\b},\\ \tau_\alpha\wedge\theta^\alpha &=& 0. \end{eqnarray} From the third equation, we can find $A_{\alpha\gamma}$, such that $$\tau_\alpha=A_{\alpha\gamma}\theta^\gamma$$ and $A_{\alpha\gamma}=A_{\gamma\alpha}$. Here $A_{\alpha\gamma}$ is called the pseudohermitian torsion. With this connection, the covariant differentiation is defined by $$ \nabla Z_\alpha=\omega_\alpha^\beta\otimes Z_\beta,~~~~\nabla Z_{\bar\alpha}=\omega_{\bar\alpha}^{\bar\beta}\otimes Z_{\bar\beta},~~~~\nabla T=0. $$ $\{\omega^{\alpha}_\beta\}$ are called connection 1-forms. For a smooth function $f$ on $M$, we write $f_\alpha=Z_\alpha f,~~f_{\bar\alpha}=Z_{\bar\alpha} f,~~f_0=Tf$, so that $df=f_\alpha \theta_\alpha+f_{\bar\alpha} \theta_{\bar\alpha}+f_0 \theta$. The second covariant differential $\nabla^2 f$ is the 2-tensor with components \begin{equation*} \begin{split} f_{\alpha\beta} &=\overline{\bar f_{\bar\alpha\bar\beta}}=Z_\beta Z_\alpha f-\omega_\alpha^\gamma(Z_\beta) Z_\gamma f, ~~f_{\alpha\bar\beta} =\overline{\bar f_{\bar\alpha\beta}}=Z_{\bar\beta} Z_\alpha f-\omega_\alpha^\gamma(Z_{\bar\beta}) Z_\gamma f,\\ f_{0\alpha} &=\overline{\bar f_{0\bar\alpha}}=Z_\alpha Tf,~~f_{\alpha0}=\overline{\bar f_{\bar\alpha 0}}=TZ_\alpha f-\omega_\alpha^\gamma(T) Z_\gamma f,~~f_{00}=T^2 f. 
\end{split} \end{equation*} The connection forms also satisfy $$ d\omega_\beta^\alpha-\omega_\beta^\gamma\wedge\omega_\gamma^\alpha=\frac{1}{2}R_{\beta~~\rho\sigma}^{~~\alpha}\theta^\rho\wedge\theta^{\sigma}+ \frac{1}{2}R_{\beta~~\bar\rho\bar\sigma}^{~~\alpha}\theta^{\bar\rho}\wedge\theta^{\bar\sigma} +R_{\beta~~\rho\bar\sigma}^{~~\alpha}\theta^\rho\wedge\theta^{\bar\sigma}+R_{\beta~~\rho 0}^{~~\alpha}\theta^\rho\wedge\theta-R_{\beta~~\bar\sigma 0}^{~~\alpha}\theta^{\bar\sigma}\wedge\theta. $$ We call $R_{\beta\bar\alpha\rho\bar\sigma}$ the pseudohermitian curvature. Contractions of the pseudohermitian curvature yield the pseudohermitian Ricci curvature $R_{\rho\bar\sigma}=R_{\alpha~~\rho\bar\sigma}^{~~\alpha}$, or $R_{\rho\bar\sigma}=h^{\alpha\bar\beta}R_{\alpha\bar\beta\rho\bar\sigma}$, and the pseudohermitian scalar curvature $R=h^{\rho\bar\sigma}R_{\rho\bar\sigma}$. The sub-Laplacian operator in this connection can be expressed by \begin{equation} \Delta_b u=u^\alpha_\alpha+u^{\bar\alpha}_{\bar\alpha}. \end{equation} If we define $\tilde{\th}=u^{\frac{2}{n}}\th$, then we have $$\tilde{\triangle}_b f=u^{-(1+\frac{2}{n})}(u\triangle_b f+2\langle du,df\rangle_{\th}),$$ where $\tilde{\triangle}_b$ is the sub-Laplacian operator with respect to the contact form $\tilde{\th}$ (see (2.4) in \cite{Ho12} for example). If we set $$\tilde{u}=r^{-1}u,$$ then we have the following CR transformation law $$(-(2+\frac{2}{n})\tilde{\triangle}_b +\tilde{R})\tilde{u}=r^{-1-\frac{2}{n}}(-(2+\frac{2}{n})\triangle_b +R)u.$$ If we substitute $r=u$, then we get the CR Yamabe equation \eqref{CRYE}. If $\{W_1,\cdots,W_n\}$ is a frame for $T_{1,0}$ over some open set $U\subset M$ which is orthonormal with respect to the given pseudo-Hermitian structure on $M$, we call $\{W_1,\cdots,W_n\}$ a pseudo-Hermitian frame. Then $\{W_1,\cdots,W_n,\overline{W}_1,\cdots, \overline{W}_n, T\}$ forms a local frame for $\mathbb{C}TM$.
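To spell out the substitution $r=u$ made above (a one-line verification, not written out in the text): with $r=u$ we have $\tilde{u}=r^{-1}u=1$, so the left-hand side of the transformation law is simply $\tilde{R}$, and the law reads

```latex
% With \tilde{u} = 1, \tilde{\triangle}_b \tilde{u} = 0, hence
\tilde{R} = u^{-1-\frac{2}{n}}\Big(-(2+\tfrac{2}{n})\triangle_b u + R\,u\Big) ,
% and multiplying through by u^{1+\frac{2}{n}} recovers the CR Yamabe
% equation \eqref{CRYE}:
-(2+\tfrac{2}{n})\triangle_b u + R\,u = \tilde{R}\, u^{1+\frac{2}{n}} .
```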
Now let $U$ be a relatively compact open subset of a normal coordinate neighborhood, with contact form $\th$ and pseudo-Hermitian frame $\{W_1,\cdots, W_n\}$. Let $X_j={\rm{Re}} W_j$ and $X_{j+n}={\rm{Im}} W_j$. Denote $X^{\a}=X_{\a_1}\cdots X_{\a_k}$, where $\a=(\a_1,\cdots,\a_k)$, and $l(\a)=k$. Define the norm $$\|f\|_{S_k^p(U)}=\sup_{l(\a)\leq k}\|X^{\a}f\|_{L^p(U)}.$$ The Folland-Stein space $S_k^p(U)$ is the completion of $C_0^{\infty}$ with respect to the norm $\|\cdot\|_{S_k^p(U)}$ (see \cite{FS}). We now use the notations of \cite{FS}, as follows. Denote by $H^k$ the Hilbert space $S_k^2$. Define $$\Gamma_{\b}(U)=\{f\in C^0(\bar{U}):|f(x)-f(y)|\leq C\r(x,y)^{\b} \},$$ with norm $$||f||_{\Gamma_{\b}(U)}=\sup_{x\in U}|f(x)|+\sup_{x,y\in U}\frac{|f(x)-f(y)|}{\r(x,y)^{\b}}.$$ For any integer $k\geq 1$ and $k<\b<k+1$, define $$\Gamma_{\b}(U)=\{f\in C^0(\bar{U}):X^{\a}f\in \Gamma_{\b-k}(U),l(\alpha)\leq k \},$$ with norm $$||f||_{\Gamma_{\b}(U)}=\sup_{x\in U}|f(x)|+\sup_{x,y\in U,l(\a)\leq k}\frac{|X^{\a}f(x)-X^{\a}f(y)|}{\r(x,y)^{\b-k}}.$$ If we fix local coordinates $(z,t)=\Theta_{\xi}$ for a fixed point $\xi\in U$, the standard H\"{o}lder space $\Lambda_{\b}(U)$ is defined for $0<\b<1$ by $$\Lambda_{\b}(U)=\{f\in C^0(\bar{U}):|f(x)-f(y)|\leq C||x-y||^{\b} \},$$ with norm $$||f||_{\Lambda_{\b}(U)}=\sup_{x\in U}|f(x)|+\sup_{x,y\in U}\frac{|f(x)-f(y)|}{||x-y||^{\b}}.$$ For any integer $k\geq 1$ and $k<\b<k+1$, define $$\Lambda_{\b}(U)=\{f\in C^0(\bar{U}):(\partial/\partial x)^{\a}f\in \Lambda_{\b-k}(U),l(\a)\leq k \}.$$ Now, for a compact strictly pseudo-convex pseudo-Hermitian manifold $M$, choose a finite open covering $U_1, \cdots, U_m$, where each $U_j$ has the properties of $U$ above.
Choose a $C^{\infty}$ partition of unity $\{\varphi_j\}$ subordinate to this covering, and define $$S_k^p(M)=\{f\in L^1(M):\varphi_j f\in S_k^p(U_j) \};$$ $$\Gamma_{\b}(M)=\{f\in C^0(M):\varphi_j f\in \Gamma_{\b}(U_j) \};$$ $$\Lambda_{\b}(M)=\{f\in C^0(M):\varphi_j f\in \Lambda_{\b}(U_j) \}.$$ Then we have the following lemma; see \cite{FS}, or Proposition 5.7 in \cite{JL-JDG}: \begin{lemma}\label{FS1} For each positive non-integer $\b$, each $r$ with $1<r<\infty$, and each integer $k\geq 1$, there exists a constant $C$ such that for every $f\in C_0^{\infty}(U)$,\\ (1) $||f||_{\Gamma_{\b}(U)}\leq C ||f||_{S_k^r(U)}$, where $\frac{1}{r}=\frac{k-\b}{2n+2};$\\ (2) $||f||_{\Lambda_{\b/2}}\leq ||f||_{\Gamma_{\b}(U)}$;\\ (3) $||f||_{S_2^r(U)}\leq C (||\triangle_b f||_{L^r(U)}+||f||_{L^r(U)});$\\ (4) $||f||_{\Gamma_{\b+2}(U)}\leq C(||\triangle_b f||_{\Gamma_{\b}(U)}+||f||_{\Gamma_{\b}(U)}).$\\ The constants $C$ depend only on the frame constants. \end{lemma} We immediately have the following corollary. \begin{coro} \label{FS2} Let $(M,\th)$ be a smooth, strictly pseudo-convex $2n+1$ dimensional compact CR manifold without boundary. Then there is an integer $k>0$ such that $H^k(M)$ embeds into $C^0(M)$. \end{coro} \begin{proof} This is a direct consequence of Lemma \ref{FS1} (1) and the fact that $\Gamma_{\b}(M)\subset C^0(M)$. \end{proof} The following CR version of the Sobolev embedding theorem was given by Jerison and Lee \cite{JL-JDG}. \begin{prop}{\rm{(\cite{JL-JDG})}}\label{Theorem 2.1} Let $\frac{1}{s}=\frac{1}{r}-\frac{k}{2n+2}$, where $1<r<s<\infty$. Then we have $$S_k^r(M)\subset L^s(M).$$ \end{prop} Next we recall a CR version of the Poincar\'{e} inequality. In \cite{JL-JDG}, Jerison and Lee proved a Poincar\'{e} type inequality for compact, strictly pseudo-convex CR manifolds.
\begin {thm}(See \cite{JL-JDG}, Proposition 5.13) \label{p} Let $(M,\th_0)$ be a compact, strictly pseudo-convex CR manifold, let $U$ be a relatively compact open subset of a normal coordinate neighborhood of $(M,\th_0)$, and let $B_r\subset U$ be a ball of radius $r$. Then for any $f$ satisfying $|\nabla_{\th_0} f|\in L^q(B_r)$, $1<q<\infty$, there exists a constant $C$ independent of $f$ such that \begin{equation}\label{p1} \int _{B_r}|f(x)-f_{B_r}|^q dV_{\th_0}\leq C r^q\int _{B_r}|\nabla_{\th_0}f|^qdV_{\th_0}, \end{equation} where $f_{B_r}=\frac{\int _{B_r}f(x)dV_{\th_0}}{\int _{B_r}dV_{\th_0}}$. \end {thm} As a corollary of Theorem \ref{p}, we have \begin {lemma}\label{p2} Under the conditions of Theorem \ref{p}, we have the following Poincar\'{e} type inequality: $$\int _{B_r}|f(x)|^2 dV_{\th_0}\leq C\int _{B_r}|\nabla_{\th_0}f|^2dV_{\th_0},$$ where $C$ is a positive constant independent of $f$. \end {lemma} \begin{proof} We choose $v(x)$ satisfying $f(x)=v(x)-v_{B_r}$. Since $|\nabla_{\th_0}f|^2=|\nabla_{\th_0}v|^2$, this lemma follows from Theorem \ref{p} by letting $q=2$. \end{proof} By the above Poincar\'{e} inequalities, we know that for any $x_0\in M$ there exists a ball $B_r{(x_0)}$ on which the above Poincar\'{e} inequalities hold. Since $(M,\th_0)$ is compact, we can then obtain the following global Poincar\'{e} inequalities, which are corollaries of Theorem \ref{p} and Lemma \ref{p2}. \begin{coro}\label{p3} Under the conditions of Theorem \ref{p}, for any $f\in C^{\infty}(M)$, we have the following global Poincar\'{e} inequality: \begin{equation}\label{p33} \int _M|f(x)-\bar{f}|^2 dV_{\th_0}\leq C\int_M|\nabla_{\th_0}f|^2dV_{\th_0}, \end{equation} where $C$ is a positive constant independent of $f$, and $\bar{f}=\frac{\int _M f(x)dV_{\th_0}}{\int _MdV_{\th_0}}$.
\end{coro} \begin{coro} Under the conditions of Theorem \ref{p}, for any $f\in C^{\infty}(M)$, we have the following global Poincar\'{e} inequality: \begin{equation}\label{p4} \int _M|f(x)|^2 dV_{\th_0}\leq C\int_M|\nabla_{\th_0}f|^2dV_{\th_0}, \end{equation} where $C$ is a positive constant independent of $f$. \end{coro} Now we prove the following theorem, which is a Poincar\'{e} type inequality. \begin{thm}\label{p5} Let $(M,\th_0)$ be a compact, strictly pseudo-convex CR manifold. For any $f\in C^{\infty}(M)$, we have the following global Poincar\'{e} type inequality: $$\|\nabla_{\th_0} f\|_{L^2(M,\th_0)}\leq C\|\triangle_b f\|_{L^2(M,\th_0)},$$ for some $C>0$ independent of $f$. \end{thm} \begin{proof} From Proposition 5.7(c) in \cite{JL-JDG}, we know that there is a positive constant $C$ independent of $f$ such that $$\|f\|_{S_2^2(M,\th_0)}\leq C(\|\triangle_b f\|_{L^2(M,\th_0)}+\|f\|_{L^2(M,\th_0)}).$$ Therefore we obtain \begin{equation}\label{2nd} \|\nabla_{\th_0} f\|_{L^2(M,\th_0)}\leq C(\|\triangle_b f\|_{L^2(M,\th_0)}+\|f\|_{L^2(M,\th_0)}). \end{equation} We prove the inequality by contradiction. Suppose the inequality in the theorem is not true. Then there exists a sequence $\{f_j\}$ such that $$j\|\triangle_b f_j\|_{L^2(M,\th_0)}\leq \|\nabla_{\th_0} f_j\|_{L^2(M,\th_0)}.$$ By \eqref{2nd}, we have $$ \|\nabla_{\th_0} f_j\|_{L^2(M,\th_0)}\leq C(\|\triangle_b f_j\|_{L^2(M,\th_0)}+\|f_j\|_{L^2(M,\th_0)}).$$ We may require that $\|\nabla_{\th_0} f_j\|_{L^2(M,\th_0)}=1$ for every $j$. Thus, as $j$ tends to infinity, we have $$\|\triangle_b f_j\|_{L^2(M,\th_0)}\rightarrow 0.$$ Let $u_j=f_j-\bar{f_j}$, where $\bar{f_j}=\frac{\int_M f_j dV_{\th_0}}{\int_M dV_{\th_0}}$. By \eqref{p33}, we have $$\|f_j-\bar{f_j}\|_{L^2(M,\th_0)}\leq C\|\nabla_{\th_0} f_j\|_{L^2(M,\th_0)}\leq C .$$ Then there is a subsequence of $\{u_j\}$ which converges weakly in $S_2^2$; we may assume it is $\{u_j\}$ itself.
Then we have $u_j\rightarrow u$ in the $S_1^2$ sense for some $u$, and $$\int_M |\nabla_{\th_0} u_j|^2 dV_{\th_0}=-\int_M u_j\triangle_b u_jdV_{\th_0}\le \|u_j\|_{L^2(M, \theta_0)}\cdot\|\triangle_b u_j\|_{L^2(M, \theta_0)}\rightarrow 0$$ as $j\rightarrow \infty$, which means $\|\nabla_{\th_0} u\|_{L^2(M,\th_0)}=0$. This contradicts the fact that $\|\nabla_{\th_0} u_j\|_{L^2(M,\th_0)}=\|\nabla_{\th_0} f_j\|_{L^2(M,\th_0)}=1$. \end{proof} At the end of this section, we recall some basic properties of the CR Yamabe flow \eqref {CRYF}. Under this flow, we have the following evolution equations \cite{Ho12}. \begin {lemma} Under the CR Yamabe flow \eqref{CRYF}, we have \\(1) $\frac{\partial}{\partial t}dV_{\tilde{\th}}=-(n+1)(\tilde{R}-\tilde{r})dV_{\tilde{\th}};$ \\(2) $\frac{\partial}{\partial t}u=-\frac{n}{2}(\tilde{R}-\tilde{r})u$; \\(3) $\frac{d\tilde{r}}{dt}=-n\int_M (\tilde{R}-\tilde{r})^2dV_{\tilde{\th}};$ \\(4) $\frac{\partial}{\partial t}\tilde{R}=(n+1)\tilde{\triangle}_b\tilde{R}+(\tilde{R}-\tilde{r})\tilde{R}.$ \end {lemma} We also need the following lemmata, which were proved in \cite{Ho12} (Propositions 3.1, 3.3 and 3.4). \begin {lemma}\label{v} The volume of $M$ does not change under the CR Yamabe flow. \end {lemma} \begin {lemma}\label{r} The function $t\mapsto \tilde{r}(t)$ is bounded from below and non-increasing under \eqref{CRYF}. \end {lemma} \section {Scalar flat case of the CR Yamabe flow} By the CR Yamabe equation \eqref {CRYE}, we can reduce the CR Yamabe flow \eqref {CRYF} to the following evolution equation of the conformal factor: \begin{equation}\label{1} \frac{\partial}{\partial t} u^{\frac{n+2}{n}}=\frac{(n+2)(n+1)}{n}(\triangle_b u+\frac{n}{2n+2}\tilde{r} u^{\frac{n+2}{n}}) \end{equation} with $u(\cdot,0)^{\frac{2}{n}}\th_0=\th$. We have the following lemma. \begin {lemma}\label {3.1} Under the conditions of Theorem \ref {main}, $\tilde{r} \geq 0$ for all time.
\end {lemma} \begin {proof} Since $\th_0$ has vanishing pseudo-Hermitian scalar curvature, the definition of $\l (M)$ gives $$\l(M)=\inf\{\frac{\tilde{r}\int_M u^{2+\frac{2}{n}}dV_{\th_0}}{(\int_M u^{2+\frac{2}{n}}dV_{\th_0})^{\frac{n}{n+1}}}:u>0,u\in S^2_1(M)\}.$$ Since $\lambda(M)=0$, we therefore have $\tilde{r} \geq 0$. \end {proof} Then we have the following corollary: \begin{coro} Under the conditions of Theorem \ref {main}, if $\th=\th_0$, then the CR Yamabe flow \eqref {CRYF1} exists for all time, and $\tilde{r}\equiv 0$, $u\equiv 1$. \end{coro} \begin {proof} This is a direct consequence of Lemmata \ref{3.1} and \ref{r}. \end {proof} Now we prove the following theorem. \begin {thm} \label {3.2} Under the conditions of Theorem \ref{main}, for any $T>0$, there exists a constant $C(T)$ such that $u_{\min}(0)\le u(x,t)\leq C(T)$ for $t\in [0,T]$. \end {thm} \begin {proof} Since $M$ is compact, we denote by $x(t)$ the set of points in $M$ where $u_{\min}(t)$ is attained. Then we have \begin{eqnarray*} \frac{du_{\min}^{\frac{n+2}{n}}}{dt}(t) &\geq & \inf \{\frac{\partial}{\partial t}(u^{\frac{n+2}{n}})(x,t):x\in x(t)\}\\ &=& \inf \{\frac{(n+2)(n+1)}{n}(\triangle_b u+\frac{n}{2n+2}\tilde{r} u^{\frac{n+2}{n}}(t)):x\in x(t)\}\\ &\geq & \frac{n+2}{2} \tilde{r} u_{\min}^{\frac{n+2}{n}}(t)\\ &\geq & 0, \end{eqnarray*} which means $$u_{\min}(t)\geq u_{\min} (0).$$ Similarly we get $$\frac{du_{\max} ^{\frac{n+2}{n}}}{dt}(t)\leq \frac{n+2}{2} \tilde{r} u_{\max} ^{\frac{n+2}{n}}(t)\leq \frac{n+2}{2} \tilde{r}(0) u_{\max} ^{\frac{n+2}{n}}(t).$$ Therefore, we obtain $$u_{\min}(0)\leq u(x,t)\leq u_{\max} (0)e^{\frac{n}{2}\tilde{r}(0)t}.$$ \end {proof} \begin {thm} Under the conditions of Theorem \ref {main}, there exists a constant $C>0$ independent of $T$ such that $$\frac{1}{C}\leq u(x,t) \leq C$$ for any $T>0$ and any $t\in [0,T]$. \end {thm} \begin {proof} First we show that the function $f(t):=(\frac{u_{\max}(t)}{u_{\min}(t)})^{\frac{n+2}{n}}$ is non-increasing.
In fact, for any $h>0$, we have \begin{eqnarray*} \frac{f(t+h)-f(t)}{h} &=& \frac{1}{h}(\frac{u_{\max} ^{\frac{n+2}{n}}(t+h)}{u_{\min} ^{\frac{n+2}{n}}(t+h)}-\frac{u_{\max}^{\frac{n+2}{n}}(t)}{u_{\min}^{\frac{n+2}{n}}(t)})\\ &=& \frac {1}{u_{\min}^{\frac{n+2}{n}}(t+h)}\frac{u_{\max}^{\frac{n+2}{n}}(t+h)-u_{\max}^{\frac{n+2}{n}}(t)}{h}\\ & &-\frac{u_{\max}^{\frac{n+2}{n}}(t)}{u_{\min}^{\frac{n+2}{n}}(t+h)u_{\min}^{\frac{n+2}{n}}(t)}\frac{u_{\min}^{\frac{n+2}{n}}(t+h)-u_{\min}^{\frac{n+2}{n}}(t)}{h}. \end{eqnarray*} Thus we have \begin{eqnarray*} & &\limsup\limits_{h\rightarrow 0} \frac{f(t+h)-f(t)}{h}\\ &=&\limsup\limits_{h\rightarrow 0} ( \frac {1}{u_{\min}^{\frac{n+2}{n}}(t+h)}\frac{u_{\max}^{\frac{n+2}{n}}(t+h)-u_{\max}^{\frac{n+2}{n}}(t)}{h}\\ & &-\frac{u_{\max}^{\frac{n+2}{n}}(t)}{u_{\min}^{\frac{n+2}{n}}(t+h)u_{\min}^{\frac{n+2}{n}}(t)}\frac{u_{\min}^{\frac{n+2}{n}}(t+h)-u_{\min}^{\frac{n+2}{n}}(t)}{h})\\ &\leq &\limsup\limits_{h\rightarrow 0} \frac {1}{u_{\min}^{\frac{n+2}{n}}(t+h)}\frac{u_{\max}^{\frac{n+2}{n}}(t+h)-u_{\max}^{\frac{n+2}{n}}(t)}{h}\\ & &-\liminf\limits_{h\rightarrow 0} \frac{u_{\max}^{\frac{n+2}{n}}(t)}{u_{\min}^{\frac{n+2}{n}}(t+h)u_{\min}^{\frac{n+2}{n}}(t)}\frac{u_{\min}^{\frac{n+2}{n}}(t+h)-u_{\min}^{\frac{n+2}{n}}(t)}{h}\\ &\leq & \frac{1}{u_{\min}^{\frac{n+2}{n}}(t)}\frac{du_{\max}^{\frac{n+2}{n}}}{dt}(t)-\frac{u_{\max} ^{\frac{n+2}{n}}(t)}{(u_{\min}^{\frac{n+2}{n}}(t))^2}\frac{du_{\min}^{\frac{n+2}{n}}}{dt}(t)\\ &\leq & \frac{1}{u_{\min}^{\frac{n+2}{n}}(t)}\frac{n+2}{2} \tilde{r} u_{\max}^{\frac{n+2}{n}}(t)-\frac{u_{\max} ^{\frac{n+2}{n}}(t)}{(u_{\min}^{\frac{n+2}{n}}(t))^2}\frac{n+2}{2} \tilde{r} u_{\min}^{\frac{n+2}{n}}(t)\\ &=& 0. \end{eqnarray*} Then we get \begin {equation} \label {2} \frac{u_{\max}(t)}{u_{\min}(t)}\leq \frac{u_{\max}(0)}{u_{\min}(0)}. \end {equation} It has been shown in Lemma \ref{v} that the volume is invariant under the CR Yamabe flow.
We therefore have $$\text{Vol}(M,\th)=\int_M u^{2+\frac{2}{n}}dV_{\th_0}\geq u_{\min} ^{2+\frac{2}{n}}\text{Vol}(M,\th_0),$$ thus $$u_{\min}(t)\leq (\frac{\text{Vol}(M,\th)}{\text{Vol}(M,\th_0)})^{\frac{n}{2n+2}}.$$ Putting these together, we obtain $$u_{\max}(t)\leq \frac{u_{\max}(0)}{u_{\min}(0)}(\frac{\text{Vol}(M,\th)}{\text{Vol}(M,\th_0)})^{\frac{n}{2n+2}}.$$ \end {proof} Once we have the $C^0$ estimate of $u(x,t)$, we may use the same argument as in \cite{HSW} (page 12) to show that all higher order derivatives of $u(x,t)$ are uniformly bounded on $[0,\infty)$. Then $u(t)$ converges to a smooth function $u_{\infty}$ as $t\rightarrow \infty$. Next we show that this convergence is exponential; in fact, we will show that $u_{\infty}$ is a constant. We first prove the following lemma. \begin {lemma} Under the condition of Theorem \ref {main}, $\tilde{r}\rightarrow 0$ as $t\rightarrow \infty$. \end {lemma} \begin {proof} If $\tilde{r}\geq C>0$ for some positive constant $C$, then from the proof of Theorem \ref{3.2} we get \begin{eqnarray*} \frac{du_{\min}^{\frac{n+2}{n}}}{dt}(t)&\geq & \frac{n+2}{2} \tilde{r} u_{\min}^{\frac{n+2}{n}}(t)\\ &\geq &C\cdot\frac{n+2}{2}\cdot u_{\min}^{\frac{n+2}{n}}(t). \end{eqnarray*} Thus $$u_{\min} ^{\frac{n+2}{n}}(t)\geq e^{\frac{n+2}{2} Ct}u_{\min}^{\frac{n+2}{n}}(0).$$ But this contradicts Theorem \ref{3.2}. Therefore we have $\tilde{r}\rightarrow 0$ as $t\rightarrow \infty$. \end {proof} Next we show that the convergence is exponential. \begin {lemma} Under the condition of Theorem \ref {main}, the pseudo-Hermitian scalar curvature $\tilde{r}(t)\rightarrow 0$ exponentially as $t\rightarrow \infty$.
\end {lemma} \begin {proof} Since $$\frac{\partial}{\partial t} u=(n+1)\triangle_b u\cdot u^{-\frac{2}{n}}+\frac{n}{2}\tilde{r} u,$$ we have $$\frac{1}{n+1}\frac{\partial}{\partial t} u=\triangle_b u\cdot u^{-\frac{2}{n}}+\frac{n}{2n+2}\tilde{r} u,$$ and $$\frac{1}{n+1}\frac{\partial}{\partial t} u \cdot\triangle_b u=(\triangle_b u)^2\cdot u^{-\frac{2}{n}}+\frac{n}{2n+2}\tilde{r} u\cdot\triangle_b u.$$ Integrating both sides of the above equality over $M$, we have $$\frac{1}{n+1}\int_M \frac{\partial}{\partial t} u \cdot\triangle_b udV_{\th_0}=\int_M(\triangle_b u)^2\cdot u^{-\frac{2}{n}}dV_{\th_0}+\frac{n}{2n+2}\tilde{r} \int_M u\cdot\triangle_b udV_{\th_0}.$$ Since \begin{eqnarray*} \frac{1}{n+1}\int_M \frac{\partial}{\partial t} u \cdot\triangle_b udV_{\th_0} &=& -\frac{1}{n+1} \int_M \nabla_{\th_0} u\cdot \nabla_{\th_0} (\frac{\partial}{\partial t} u)dV_{\th_0} \\ &= & -\frac{1}{2n+2}\int_M \frac{\partial}{\partial t} |\nabla_{\th_0} u|^2 dV_{\th_0} \\ &=& -\frac{1}{2n+2}\frac{d}{dt} \int_M |\nabla_{\th_0} u|^2 dV_{\th_0}, \end{eqnarray*} we get \begin{equation}\label{3} \frac{1}{n+1}\frac{d}{dt} \int_M |\nabla_{\th_0} u|^2 dV_{\th_0}=-2\int_M(\triangle_b u)^2\cdot u^{-\frac{2}{n}}dV_{\th_0}+\frac{n}{n+1}\tilde{r} \int_M |\nabla_{\th_0} u|^2dV_{\th_0}. \end{equation} By Theorem \ref{p5}, we have \begin{equation}\label{4} \parallel\nabla_{\th_0} u\parallel _{L^2(M,\th_0)}\leq C \parallel\triangle_b u\parallel_{L^2(M,\th_0)}. \end{equation} Here $C$ is some positive constant independent of $u$. By \eqref {4}, we have \begin{eqnarray*} \int_M(\triangle_b u)^2\cdot u^{-\frac{2}{n}}dV_{\th_0}&\geq & \frac{1}{u_{\max}^{\frac{2}{n}}}\int_M(\triangle_b u)^2dV_{\th_0} \\ &\geq & C\int_M |\nabla_{\th_0} u|^2 dV_{\th_0}, \end{eqnarray*} for some positive constant $C$.
Substituting this inequality into \eqref {3}, we get $$\frac{1}{n+1}\frac{d}{dt} \int_M |\nabla_{\th_0} u|^2 dV_{\th_0}\leq (\frac{n}{n+1}\tilde{r} -2C)\int_M |\nabla_{\th_0} u|^2 dV_{\th_0}.$$ Then for sufficiently large $t$, there exists a positive constant $A$ such that $$\frac{d}{dt} \log \int_M |\nabla_{\th_0} u|^2 dV_{\th_0}\leq (n+1)(\frac{n}{n+1}\tilde{r} -2C)\leq -A,$$ from which we get \begin{equation}\label{5} \tilde{r}(t)=\frac{\int_M (2+\frac{2}{n})|\nabla_{\th_0} u|^2dV_{\th_0}}{\text{Vol}(M,\th)}\leq C\cdot e^{-At}, \end{equation} for $t$ sufficiently large. \end {proof} From the proof of the above lemma, we also get $$ \|\nabla_{\th_0}u\|^2_{L^2(M, \th_0)}\le C\cdot e^{-At}, $$ which will be used later. Now we prove the following theorem: \begin {thm} Under the condition of Theorem \ref {main}, the solution $u(t)$ of the CR Yamabe flow \eqref{CRYF} converges to a constant at an exponential rate. \end {thm} \begin {proof} Since \begin{eqnarray*} \frac{d}{dt} \int_M u^{\frac{n+2}{n}}dV_{\th_0} &=& \int_M \frac{d}{dt}(u^{\frac{n+2}{n}})dV_{\th_0}\\ &=&\frac{(n+2)(n+1)}{n}\int_M \triangle_b udV_{\th_0}+\frac{n+2}{2} \tilde{r} \int_M u^{\frac{n+2}{n}} dV_{\th_0}\\ &=&\frac{n+2}{2} \tilde{r}\int_M u^{\frac{n+2}{n}}dV_{\th_0} \leq C\cdot e^{-At}\cdot\int_M u^{\frac{n+2}{n}}dV_{\th_0}, \end{eqnarray*} we see that $\int_M u^{\frac{n+2}{n}}dV_{\th_0}$ is bounded from above and non-decreasing, which means $$\lim \limits_{t\rightarrow \infty}\int_M u^{\frac{n+2}{n}}(x,t)dV_{\th_0}=L,$$ for some positive constant $L$. Hence, there exists a constant $C$ such that $$\frac{d}{dt} \int_M u^{\frac{n+2}{n}}dV_{\th_0} \leq C\cdot e^{-At}.$$ Then for $t_2>t_1$, and $t_1$ sufficiently large, we have \begin{eqnarray*} |\int_M u^{\frac{n+2}{n}}(x,t_2)dV_{\th_0}-\int_M u^{\frac{n+2}{n}}(x,t_1)dV_{\th_0}|&=&\int_M u^{\frac{n+2}{n}}(x,t_2)dV_{\th_0}-\int_M u^{\frac{n+2}{n}}(x,t_1)dV_{\th_0}\\ &\leq&C(e^{-At_1}-e^{-At_2}).
\end{eqnarray*} Letting $t_2\rightarrow \infty$, we get $$|\int_M u^{\frac{n+2}{n}}dV_{\th_0}-L|\leq C\cdot e^{-At}$$ for $t$ sufficiently large. By Corollary \ref{p3} and the H\"{o}lder inequality, we have $$\parallel u^{\frac{n+2}{n}}-\frac{1}{V}\int_M u^{\frac{n+2}{n}}dV_{\th_0}\parallel^2_{L^2(M,\th_0)}\leq C \parallel\nabla_{\th_0} u\parallel^2_{L^2(M,\th_0)}\leq C\cdot e^{-At}.$$ Let $f=u^{\frac{n+2}{n}}-\frac{1}{V}\int_M u^{\frac{n+2}{n}}dV_{\th_0}$; then $\int_M f dV_{\th_0}=0$. Applying Theorem \ref{it1} in the Appendix below with $a=\frac{1}{2}$, $p=q=r=2$, $j=k$ and $m=2k$, and using the fact that the higher order derivatives of $u$ are uniformly bounded for all $t\geq 0$, we get $$\parallel u^{\frac{n+2}{n}}-\frac{1}{V}\int_M u^{\frac{n+2}{n}}dV_{\th_0}\parallel_{H^k(M, \th_0)}\leq C\cdot e^{-At}.$$ Then by Corollary \ref{FS2}, we obtain $$| u^{\frac{n+2}{n}}-\frac{1}{V}\int_M u^{\frac{n+2}{n}}dV_{\th_0}| \leq C\cdot e^{-At}.$$ Hence $u^{\frac{n+2}{n}}\rightarrow \frac{L}{V}$ exponentially as $t \rightarrow \infty$. \end {proof} \section{Appendix} The Gagliardo-Nirenberg interpolation inequality is a result in the theory of Sobolev spaces that estimates the weak derivatives of a function. The estimates are in terms of $L^p$ norms of the function and its derivatives, and the inequality ``interpolates'' between various values of $p$ and orders of differentiation. The result is of particular importance in the theory of elliptic partial differential equations. It was proved by Nirenberg and Gagliardo, see \cite{N}. For the Riemannian case, a Gagliardo-Nirenberg type interpolation inequality was proved by Aubin (see \cite{A1}, Theorem 3.70). We were not able to find similar inequalities in the CR setting in the literature. In this section, we establish a Gagliardo-Nirenberg type inequality in CR geometry. Let $(M,\th)$ be a smooth, strictly pseudoconvex $(2n+1)$-dimensional compact CR manifold without boundary.
We choose an admissible coframe $\{\th^{\a}\}$ and dual frame $\{Z_{\a}\}$ for $T_{1,0}$. We adopt the same notations as in \cite{JL89}. Let $\alpha,\beta,\gamma,\cdots\in \{1,2,\cdots, n\}$, $a,b,c, \cdots\in \{1,2,\cdots, 2n\}$, and $\bar{\a}=\a+n$. We denote by $\nabla^{|j|}f$ the $j$-th covariant derivative of $f$ with respect to the Tanaka-Webster connection, in the sense that $$\parallel \nabla^{|j|}f\parallel^2=\nabla^{a_1}\nabla^{a_2}\cdots \nabla^{a_j} f\nabla_{a_1}\nabla_{a_2}\cdots \nabla_{a_j} f,$$ where $a_i\in \{1,2,\cdots, 2n\}$ and $\nabla_{a_i}$ means $\nabla_{Z_{a_i}}$. From now on we denote by $\parallel f \parallel_p$ the $L^p$ norm of $f$. By the solvability of the Poisson type equation $\triangle_b f=C$ (see \cite{KN}), we may define the Green's function $G_P(x)$ of the sub-Laplacian $\triangle_b$, which satisfies $$\triangle_b G_P(x)=\delta_P(x)-\frac{1}{V},$$ where $V$ is the volume of $(M,\th)$ and $\delta_P(x)$ is the Dirac function at $P$. For the general case of the Green's function see \cite{CMY}. By the definition of the Dirac function, we have \begin{equation}\label{G} \varphi (P)=\frac{1}{V}\int_M \f dV_{\th} + \int_M G_P(x)\triangle_b \f(x)dV_{\th}. \end{equation} We now prove the following theorem: \begin{thm}\label{it1} Let $(M,\th)$ be a smooth, strictly pseudoconvex $(2n+1)$-dimensional compact CR manifold without boundary. Let $q$, $r$ be real numbers with $1\leq q,r < \infty$, and let $j$, $m$ be integers with $0\leq j<m$. Then there exists a constant $K$ depending only on $n$, $m$, $j$, $q$, $r$ and $(M,\th)$, such that for all $f\in C^{\infty}(M)$ with $\int_M f\ dV_{\th}=0$, we have: $$\parallel \nabla^{|j|} f\parallel_p \leq K \parallel \nabla^{|m|} f \parallel_r^a \cdot \parallel f\parallel_q^{1-a}.$$ Here $\frac{1}{p}=\frac{j}{2n+2}+a(\frac{1}{r}-\frac{m}{2n+2})+(1-a)\frac{1}{q}$, for all $a$ in the interval $\frac{j}{m}\leq a <1$ for which $p$ is non-negative.
\end{thm} We follow the idea of Aubin in \cite{A1}. We first prove the following lemma: \begin{lemma}\label{it2} Let $(M,\th)$ be a smooth, strictly pseudoconvex $(2n+1)$-dimensional compact CR manifold without boundary, and let $p$, $q$ be real numbers satisfying $\frac{1}{p}=\frac{1}{q}-\frac{1}{2n+2}$, $1\leq q< 2n+2$. Then there exists a constant $K$ depending only on $p$, $q$, $n$ and $(M,\th)$ such that for any function $\f\in C^1(M)$ with $\int_M \f dV_{\th}=0$, we have $$\parallel\f \parallel_p\leq K \parallel \nabla \f\parallel_q.$$ \end{lemma} \begin{proof} Since $\int_M \f dV_{\th}=0$, by \eqref{G}, we have $$\f(P)=\int_M G_P(x)\triangle_b \f (x)dV_{\th},$$ from which we get \begin{eqnarray*} |\f(P)| &\leq & \int_M \parallel\nabla G_P\parallel \cdot \parallel \nabla \f\parallel dV_{\th}\\ &=& \int_M (\parallel\nabla G_P\parallel \cdot \parallel \nabla \f\parallel^q)^{\frac{1}{q}}\cdot \parallel\nabla G_P\parallel^{1-\frac{1}{q}} dV_{\th}\\ &\leq & (\int_M \parallel\nabla G_P\parallel \cdot \parallel \nabla \f\parallel^qdV_{\th})^{\frac{1}{q}}\cdot (\int_M \parallel\nabla G_P\parallel dV_{\th})^{1-\frac{1}{q}}. \end{eqnarray*} Here we have used the H\"{o}lder inequality. Then we obtain $$\parallel \f \parallel_q\leq \parallel \nabla \f \parallel_q \sup_{P\in M} \int_M \parallel \nabla G_P\parallel dV_{\th}.$$ Then by the Folland-Stein embedding theorem, we obtain $$\parallel \f\parallel_p \leq C(\parallel\nabla \f\parallel_q +\parallel \f \parallel_q)\leq K \parallel\nabla \f\parallel_q.$$ Here $K=C+C\cdot \sup_{P\in M} \int_M \parallel \nabla G_P\parallel dV_{\th}.$ \end{proof} Next, we prove the following lemma, which is a generalized Poincar\'{e} type inequality. \begin{lemma}\label{it3} Let $(M,\th)$ be a smooth, strictly pseudoconvex $(2n+1)$-dimensional compact CR manifold without boundary, and let $p$, $q$, $r$ be real numbers satisfying $1\leq q,r< \infty$, $p\geq 2$. Set $\frac{2}{p}=\frac{1}{q}+\frac{1}{r}$.
Then for any function $f\in C^{\infty}(M)$, we have: $$\parallel \nabla f\parallel_p^2\leq (\sqrt{2n}+|p-2|)\parallel f\parallel_q\cdot \parallel \nabla^{|2|}f\parallel_r.$$ \end{lemma} \begin{proof} By a direct computation, we have \begin{eqnarray*} \nabla^a(f\parallel \nabla f\parallel^{p-2}\nabla _a f) &= & \parallel \nabla f\parallel^p+f\parallel \nabla f\parallel^{p-2}\nabla ^a\nabla_a f\\ &+& (p-2)\parallel \nabla f\parallel^{p-4}f\nabla_{ab}f\nabla^a f\nabla^b f. \end{eqnarray*} In particular, if $p=2$, we have $\parallel \nabla f\parallel_2^2=-\int_M f\triangle_b fdV_{\th}$, and Lemma \ref{it3} is just the Poincar\'{e} type inequality we proved above. If $p>2$, we have $$\parallel \nabla f\parallel_p^p=-\int_M f\triangle_b f \parallel \nabla f\parallel^{p-2}dV_{\th}+(2-p)\int_M \parallel \nabla f\parallel^{p-4}f\nabla_{ab}f\nabla ^a f \nabla^b f dV_{\th}.$$ Since $|\triangle_b f|^2\leq 2n \parallel \nabla^{|2|}f\parallel^2$ and $|\nabla_{ab}f \nabla^a f \nabla^b f|\leq \parallel \nabla^{|2|}f \parallel \cdot\parallel \nabla f\parallel^2$, and noting that $\frac{1}{q}+\frac{1}{r}+\frac{p-2}{p}=1$, by the H\"{o}lder inequality we have $$\parallel \nabla f\parallel_p^p\leq (\sqrt{2n}+|p-2|)\parallel f\parallel_q\cdot \parallel \nabla^{|2|}f\parallel_r \cdot \parallel \nabla f\parallel_p^{p-2},$$ and the desired result follows. \end{proof} Now we prove Theorem \ref{it1}. First we note that once the two cases $j=0$, $m=1$ and $j=1$, $m=2$ are proved, the general case follows by induction, applying the inequality $$\parallel \nabla \parallel\nabla ^{|l|} f\parallel\parallel\leq \parallel \nabla ^{|l+1|} f\parallel,$$ which follows from the fact that the Tanaka-Webster connection is compatible with the inner product $\langle \cdot,\cdot \rangle_{\th}$, together with the Cauchy-Schwarz inequality. From Lemma \ref{it2}, we have $$\parallel f\parallel_s\leq C \parallel \nabla f\parallel_t,$$ where $\frac{1}{s}=\frac{1}{t}-\frac{1}{2n+2}>0.$ We first treat the case $j=0$, $m=1$.
By the H\"{o}lder inequality, we have $$\parallel f\parallel_p\leq \parallel f\parallel_s^a\parallel f\parallel_q^{1-a}.$$ Here $\frac{1}{p}=\frac{a}{s}+\frac{1-a}{q}$, i.e. $\frac{1}{p}-\frac{1}{q}=a(\frac{1}{s}-\frac{1}{q})$. Then we choose $t=r<2n+2$, from which we get $$\parallel f\parallel_p\leq C \parallel \nabla f\parallel_r^a \parallel f\parallel_q^{1-a},$$ which means $\frac{1}{p}=a(\frac{1}{r}-\frac{1}{2n+2})+(1-a)\frac{1}{q}$. If $r\geq 2n+2$, we choose $\mu$ such that $\frac{1}{ap}=\frac{1}{\mu}-\frac{1}{2n+2}$. Setting $h=|f|^{\frac{1}{a}}$, we have $$\parallel h\parallel_{ap}\leq C \parallel \nabla h\parallel_{\mu},$$ and again by the H\"{o}lder inequality, we have $$\parallel f\parallel_p^{\frac{1}{a}}\leq \frac{C}{a}\parallel \|\nabla f \| \cdot |f|^{\frac{1}{a}-1}\parallel_{\mu}\leq \frac{C}{a} \parallel \nabla f\parallel_r \cdot \parallel f\parallel_q ^{\frac{1}{a}-1},$$ and the desired result follows. Next we treat the case $j=1$, $m=2$. If $a=\frac{j}{m}=\frac{1}{2}$, Theorem \ref{it1} is just Lemma \ref{it3}. Then for $r<2n+2$ and $\frac{1}{2}<a<1$, the interpolation inequality follows from the H\"{o}lder inequality. If $r\geq 2n+2$, by induction, we apply the first case to $\parallel \nabla f\parallel$ and get $$\parallel \nabla f\parallel_p\leq C \parallel \nabla^{|2|}f\parallel_r^b \parallel \nabla f\parallel_s^{1-b},$$ where $\frac{1}{p}=\frac{1}{s}+b(\frac{1}{r}-\frac{1}{2n+2}-\frac{1}{s})>0$, $\frac{2}{s}=\frac{1}{r}+\frac{1}{q}$, and $a=\frac{1+b}{2}$, i.e. $$\frac{1}{p}=\frac{1}{2n+2}+a(\frac{1}{r}-\frac{2}{2n+2})+(1-a)\frac{1}{q},$$ and the proof is completed.
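As a quick sanity check on the exponent relation in Theorem \ref{it1} (our addition, not part of the original argument), one may verify the particular choice $a=\frac{1}{2}$, $p=q=r=2$, $j=k$, $m=2k$ used in the convergence proof above:

```latex
% Exponent relation of Theorem \ref{it1} with a = 1/2, p = q = r = 2, j = k, m = 2k:
\frac{j}{2n+2}+a\left(\frac{1}{r}-\frac{m}{2n+2}\right)+(1-a)\frac{1}{q}
  =\frac{k}{2n+2}+\frac{1}{2}\left(\frac{1}{2}-\frac{2k}{2n+2}\right)+\frac{1}{2}\cdot\frac{1}{2}
  =\frac{1}{4}+\frac{1}{4}=\frac{1}{2}=\frac{1}{p}.
```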
\section{Introduction} Early brain development encompasses many crucial structural and physiological modifications that have an influence on health later in life. Changes in T1 and T2 relaxation times may provide valuable clinical information about ongoing biological processes, as well as a better insight into the early stages of normal maturation~\cite{deoni_quantitative_2010}. Indeed, quantitative MRI (qMRI) has revealed biomarkers sensitive to subtle changes in brain microstructure that are characteristic of abnormal patterns and developmental schemes in newborns~\cite{dingwall_t2_2016,schneider_evolution_2016}. T1 and T2 mapping of the developing fetal brain would afford physicians new resources for pregnancy monitoring, including quantitative diagnostic support in equivocal situations and prenatal counselling, as well as postnatal management. Unfortunately, current relaxometry strategies require long scanning times that are not feasible in the context of \emph{in utero} fetal brain MRI due to unpredictable fetal motion in the womb~\cite{chen_t2_2018,gholipour_fetal_2014,leppert_t2_2009,travis_more_2019}. As such, very little work has explored \emph{in vivo} qMRI of the developing fetal brain. Myelination was characterized \emph{in utero} using a mono-point T1 mapping based on fast spoiled gradient echo acquisitions~\cite{abd_almajeed_myelin_2004}, and more recently by fast macromolecular proton fraction mapping~\cite{yarnykh_quantitative_2018}. T2* relaxometry of the fetal brain has been explored through fast single-shot multi-echo gradient echo-type echo-planar imaging (GRE-EPI)~\cite{vasylechko_t2_2015} and, more recently, through a slice-to-volume registration of 2D dual-echo multi-slice EPI with multiple time points reconstructed into a motion-free isotropic high-resolution (HR) volume~\cite{blazejewska_3d_2017}. To our knowledge, similar strategies have not been investigated for \emph{in utero} T2 mapping yet.
Today, super-resolution (SR) techniques have been adopted to take advantage of the redundancy between multiple T2-weighted (T2w) low-resolution (LR) series acquired in orthogonal orientations and thereby reconstruct a single isotropic HR volume of the fetal brain with reduced motion sensitivity for thorough anatomical exploration~\cite{ebner_automated_2020,gholipour_robust_2010,kainz_fast_2015,rousseau_super-resolution_2010,tourbier_efficient_2015}. In clinical routine, 2D thick slices are typically acquired in a few seconds using T2w multi-slice single-shot fast spin echo sequences~\cite{gholipour_fetal_2014}. We hypothesize that the combination of SR fetal brain MRI with the sensitivity of qMRI would enable reliable and robust 3D HR T2 relaxometry of the fetal brain~\cite{bano_model-based_2020,blazejewska_3d_2017}. In this context, we have explored the feasibility of repeatable, accurate and robust 3D HR T2 mapping from SR-reconstructed clinical fast T2w Half-Fourier Acquisition Single-shot Turbo spin Echo (HASTE) acquisitions with variable echo time (TE) on a quantitative MR phantom~\cite{keenan_kathryn_e_multi-site_2016}. \section{Methodology} \subsection{Model Fitting for T2 Mapping} The T2w contrast of an MR image is governed by an exponential signal decay characterized by the tissue-specific relaxation time, T2. Since any voxel within brain tissue may contain multiple components, a multi-exponential model is the closest to reality. However, it requires long acquisition times that are not acceptable in a fetal imaging context.
The common simplification of a single-compartment model~\cite{dingwall_t2_2016,leppert_t2_2009} allows for fitting the signal according to the following equation: \begin{equation} \hat{X}_{TE} = \mathcal{M}_{0}\, e^{-\frac{TE}{T2}}, \label{eq:model} \end{equation} where $\mathcal{M}_{0}$ is the equilibrium magnetization and $\hat{X}_{TE}$ is the signal intensity at the echo time TE at which the image is acquired. As illustrated in Figure~\ref{fig1}, the time constant T2 can be estimated in every voxel by fitting the signal decay over TE with this mono-exponential analytical model~\cite{milford_mono-exponential_2015}. We aim at estimating an HR 3D T2 map of the fetal brain with a prototype algorithm. Our strategy is based on SR reconstruction from orthogonal 2D multi-slice T2w clinical series acquired at variable TE (see complete framework in Figure~\ref{fig1}). For every $TE_{i}$, a motion-free 3D image $\mathbf{\hat{X}}_{TE_{i}}$ is reconstructed using a Total-Variation (TV) SR reconstruction algorithm~\cite{tourbier_sebastientourbiermialsuperresolutiontoolkit_2019,tourbier_efficient_2015} which solves: \begin{equation} \begin{aligned} \mathbf{\hat{X}}_{TE_{i}} = \arg\min_{\mathbf{X}} \ \frac{\lambda}{2} \sum_{kl} \| \underbrace{\mathbf{D}_{kl}\mathbf{B}_{kl}\mathbf{M}_{kl}}_{\mathbf{H}_{kl}} \mathbf{X} - \mathbf{X}_{kl,TE_{i}}^{LR}\|^2 + \|\mathbf{X}\|_{TV}, \end{aligned} \label{eq:sr} \end{equation} where the first term relates to data fidelity, $k$ being the $k$-th LR series $\mathbf{X}_{TE_{i}}^{LR}$ and $l$ the $l$-th slice. $\|\mathbf{X}\|_{TV}$ is a TV prior introduced to regularize the solution while $\lambda$ balances the trade-off between data fidelity and regularization ($\lambda=0.75$). $\mathbf{D}$ and $\mathbf{B}$ are linear downsampling and Gaussian blurring operators given by the acquisition characteristics.
$\mathbf{M}$, which encodes the rigid motion of slices, is set to the identity transform in the absence of motion. The model fitting described in Equation~\ref{eq:model} is computed in every voxel of an SR 3D volume estimated at echo time TE. T2 maps are computed using a non-linear least-squares optimization (MATLAB, MathWorks, R2019a). As shown in Figure~\ref{fig1}-B, the T2 signal decay may reveal an offset between the first echoes and the rest of the curve that can be explained by stimulated echoes~\cite{mcphee_limitations_2018} and the sampling order of the HASTE sequence. It is common practice to exclude from the fitting the first points that exhibit the pure spin echo without the stimulated echo contributions~\cite{milford_mono-exponential_2015}. \begin{figure}[hbt!] \centering \includegraphics[width=0.8\textwidth]{Lajous_et_al_Super-Resolution_T2_Mapping_Figure1.pdf} \caption{Evaluation framework. Reference T2 values of elements (a), (b) and (c) (blue dashed area) of the NIST phantom are measured by (A-1-a) single-echo spin echo (SE) and (A-1-b) multi-echo spin echo (MESE) sequences. (A-2) Low-resolution orthogonal HASTE images acquired at variable TE are SR-reconstructed into (A-3) an isotropic volume for every TE. (B) The signal decay as a function of TE is fitted in each voxel by a mono-exponential model. (C) Resulting voxel-wise T2 maps.} \label{fig1} \end{figure} \subsection{Validation Study} \subsubsection{Quantitative Phantom.} Our validation is based on the system standard model 130 that was established by the National Institute for Standards and Technology (NIST) of the United States in collaboration with the International Society for Magnetic Resonance in Medicine (ISMRM). It is produced by QalibreMD (Boulder, CO, USA) and is hereafter referred to as the NIST phantom~\cite{keenan_kathryn_e_multi-site_2016}. This quantitative phantom was originally developed to assess the repeatability and reproducibility of MRI protocols across vendors and sites.
Our study focuses on a region-of-interest (ROI) represented by a blue square in Figure~\ref{fig1}-A. It is centered on elements of the NIST phantom that have relaxometry properties close to those reported in the literature for \emph{in vivo} brain tissue of fetuses and preterm newborns at 1.5 T~\cite{blazejewska_3d_2017,hagmann_t2_2009,nossin-manor_quantitative_2013,vasylechko_t2_2015,yarnykh_quantitative_2018}, namely T2 values higher than 170 ms and 230 ms in grey matter and white matter respectively, and high T1 values. Accordingly, we focus on the following areas: (a) T2=428.3 ms, (b) T2=258.4 ms and (c) T2=186.1 ms, with a relatively high T1/T2 ratio (4.5-6.9), and which fall within a field-of-view similar to that of fetal MRI. \subsubsection{MR Imaging.} Acquisitions are performed at 1.5 T (MAGNETOM Sola, \linebreak[4]Siemens Healthcare, Erlangen, Germany), with an 18-channel body coil and a 32-channel spine coil (12 elements used). Three clinical T2w series of 2D thick slices are acquired in orthogonal orientations using an ultra-fast multi-slice HASTE sequence (TE=$90ms$, TR=$1200ms$, excitation/refocusing pulse flip angles of 90\textdegree/180\textdegree, interslice gap of 10\%, voxel size of $1.13 \times 1.13 \times 3.00 mm^3$). For consistency with the clinical fetal protocol, a limited field-of-view ($360 \times 360 mm^2$) centered on the above-referenced ROI is imaged. Each series contains 23 slices and is acquired in 28 seconds. We extend the TE of this clinical protocol in order to acquire additional sets of three orthogonal series, leading to six configurations with 4, 5, 6, 8, 10, or 18 TEs uniformly sampled over the range of 90 ms to 298 ms. The acquisition time is about 90 seconds per TE, thus the total acquisition time ranges from 6 minutes (4 TEs) to 27 minutes (18 TEs). Binary masks are drawn on each LR series for reconstruction of a SR volume at every TE, as illustrated in Figure~\ref{fig1}-C. 
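To make the voxel-wise fitting of Equation~\ref{eq:model} over such variable-TE acquisitions concrete, the following minimal sketch fits a mono-exponential decay to one synthetic voxel over the six-TE configuration (90-298 ms). It is our illustration only: the study itself used a MATLAB non-linear least-squares implementation, and all signal values below are simulated, not measured.

```python
import numpy as np
from scipy.optimize import curve_fit

# Six echo times (ms) uniformly sampled over 90-298 ms, as in the protocol
TES = np.linspace(90.0, 298.0, 6)

def mono_exp(te, m0, t2):
    """Mono-exponential decay model of Eq. (1): M0 * exp(-TE/T2)."""
    return m0 * np.exp(-te / t2)

def fit_t2(signal, tes=TES, skip_first=1):
    """Voxel-wise non-linear least-squares T2 fit.

    skip_first drops the earliest echo(es), which may be offset by
    stimulated-echo contributions (cf. Section 2.1)."""
    te, s = tes[skip_first:], signal[skip_first:]
    (m0, t2), _ = curve_fit(mono_exp, te, s, p0=(s[0], 200.0))
    return m0, t2

# Synthetic voxel mimicking element (b) of the NIST phantom (T2 = 258 ms)
rng = np.random.default_rng(0)
clean = mono_exp(TES, 1000.0, 258.0)
noisy = clean + rng.normal(0.0, 2.0, TES.size)
m0_hat, t2_hat = fit_t2(noisy)
print(t2_hat)  # close to 258 ms
```

In practice the same fit would simply be repeated independently in every voxel of the SR volumes to produce the T2 map.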
\subsubsection{Gold-Standard Sequences for T2 Mapping.} A conventional single-echo spin echo (SE) sequence with variable TE is used as a reference for validation (TR=$5000ms$, 25 TEs sampled from 10 to $400ms$, voxel size of $0.98 \times 0.98 \times 6.00 mm^3$). A single 2D slice is imaged in 17.47 minutes for a given TE, which corresponds to a total acquisition time of more than 7 hours. As recommended by the phantom manufacturer, an alternative multi-echo spin echo (MESE) sequence is used for comparison purposes (TR=$5000ms$, 32 TEs equally sampled from 13 to $416 ms$, voxel size of $0.98 \times 0.98 \times 6.00 mm^3$). The total acquisition time to image the same 2D slice is 16.05 minutes. Gold-standard and HASTE acquisitions are made publicly available in our repository \cite{lajous_dataset_2020} for further reproducibility and validation studies. \subsubsection{Evaluation Procedure.} We evaluate the accuracy of the proposed 3D SR T2 mapping framework with regard to T2 maps obtained from HASTE, MESE and SE acquisitions. Since only a single 2D coronal slice is imaged by the SE and MESE sequences, quantitative measures are computed on the corresponding slice of the coronal 2D HASTE series and 3D SR images. At a voxel-wise level, T2 standard deviation (SD) and R\textsuperscript{2} are computed to evaluate the fitting quality. A region-wise analysis is conducted over the three ROIs previously denoted as (a), (b) and (c). An automated segmentation of these areas in the HASTE and SR images is performed by a Hough transform followed by a one-pixel dilation. Mean T2 values $\pm$ SD are estimated within each ROI. The relative error in T2 estimation is computed using SE measurements as reference values. It is defined as the difference in T2 measures between either HASTE or SR and the corresponding SE reference value normalized by the SE reference value.
This metric is used to evaluate the accuracy of the studied T2 mapping technique as well as its robustness to noise (see also Supplementary Material - Table I). We run the same MRI protocol (SE, MESE and HASTE) on three different days in order to study the repeatability of T2 measurements. The relative error in T2 estimation between two independent experiments (i.e., on two different days) is calculated as described above, using every measure in turn as a reference. Thus, we are able to evaluate the mean absolute percentage error $|\Delta\varepsilon|$ as the average of relative absolute errors in T2 estimation over all possible reference days. The coefficient of variation (CV) for T2 quantification represents the variability (SD) relative to the mean fitted T2 value. \section{Results} \subsection{3D Super-Resolution T2 Mapping} Voxel-wise T2 maps as derived from one coronal HASTE series and from the 3D SR reconstruction of three orthogonal HASTE series are shown in Figure~\ref{fig2} together with associated standard deviation maps. HASTE series show Gibbs ringing in the phase-encoding direction at the interface of the different elements. Since SE and MESE images are corrupted in a similar way across all TEs, a homogeneous T2 map is recovered for every element of the phantom (Figure~\ref{fig2}). Instead, as HASTE acquisitions rely on a variable k-space sampling for every TE, resulting T2 maps are subject to uncompensated Gibbs artifacts that cannot easily be corrected due to reconstruction of HASTE images by partial Fourier techniques~\cite{kellner_gibbs-ringing_2016}. Interestingly though, Gibbs ringing is much less pronounced in the SR reconstructions where it is probably attenuated by the combination of orthogonal series. Of note, \emph{in vivo} data are much less prone to this artifact. \begin{figure}[htb!] 
\centering \includegraphics[width=0.64\textwidth]{Lajous_et_al_Super-Resolution_T2_Mapping_Figure2.pdf} \caption{Comparison of voxel-wise T2 maps and T2 SD maps estimated from SE, MESE, HASTE and corresponding SR reconstruction at variable TE} \label{fig2} \end{figure} \subsection{Repeatability Study} As highlighted in Table~\ref{tab1}, T2 estimation is highly repeatable over independent measurements, with a mean CV of less than 4\% for T2 quantification from HASTE acquisitions and of less than 8\% for SR with 5 TEs. The mean absolute percentage error is less than 5\% in HASTE images and less than 10\% in SR. \begin{table} \caption{Repeatability of T2 mapping strategies between three independent experiments. Mean fitted T2 value $\pm$ SD, CV, mean absolute difference and mean absolute percentage error in T2 estimation are presented. The lowest difference for each (ROI, method) pair is shown in bold.} \label{tab1} \centering \begin{tiny} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & SE & \multicolumn{6}{c|}{HASTE/SR}\\ \hline \multicolumn{2}{|c|}{TEs} & 25 & 4 & 5 & 6 & 8 & 10 & 18\\ \hline \multirow{3}{*}{\rotatebox{90}{T2(ms)}} & (a) & 380$\pm$7 & 422$\pm$2/288$\pm$7 & 427$\pm$12/312$\pm$21 & 425$\pm$2/352$\pm$13 & 451$\pm$4/409$\pm$18 & 454$\pm$0/403$\pm$20 & 451$\pm$2/407$\pm$12\\ & (b) & 256$\pm$5 & 314$\pm$6/188$\pm$8 & 304$\pm$12/223$\pm$11 & 315$\pm$4/258$\pm$5 & 337$\pm$6/287$\pm$8 & 335$\pm$9/288$\pm$4 & 333$\pm$10/297$\pm$4\\ & (c) & 187$\pm$2 & 252$\pm$9/159$\pm$10 & 247$\pm$2/180$\pm$14 & 249$\pm$4/207$\pm$6 & 267$\pm$1/225$\pm$5 & 267$\pm$6/223$\pm$4 & 263$\pm$3/233$\pm$4\\ \hline \multirow{3}{*}{\rotatebox{90}{CV(\%)}} & (a) & 1.8 & 0.4/\textbf{2.3} & 2.8/6.7 & 0.5/3.8 & 1.0/4.4 & \textbf{0.0}/5.0 & 0.4/3.0\\ & (b) & 1.8 & 1.8/4.4 & 3.9/4.8 & \textbf{1.3}/2.0 & 1.8/2.9 & 2.8/1.6 & 2.9/\textbf{1.2}\\ & (c) & 1.0 & 3.6/6.2 & 0.8/7.7 & 1.5/2.9 & \textbf{0.3}/2.4 & 2.2/\textbf{1.7} & 1.1/1.9\\ \hline
\multirow{3}{*}{\rotatebox{90}{$|\Delta$T2$|$(ms)}} & (a) & 9.1 & 1.9/\textbf{8.9} & 15.8/24.4 & 2.7/17.8 & 5.2/23.3 & \textbf{0.2}/26.8 & 2.6/15.4\\ & (b) & 6.1 & 7.7/11.1 & 14.2/13.8 & \textbf{4.9}/6.0 & 7.7/10.8 & 11.4/5.6 & 12.8/\textbf{4.2}\\ & (c) & 2.1 & 12.1/11.8 & 2.6/16.4 & 4.5/7.7 & \textbf{1.1}/7.2 & 6.9/\textbf{4.9} & 3.8/5.3\\ \hline \multirow{3}{*}{\rotatebox{90}{$|\Delta\varepsilon|(\%)$}} & (a) & 2.4 & 0.4/\textbf{3.1} & 3.7/7.7 & 0.6/5.1 & 1.2/5.7 & \textbf{0.0}/6.7 & 0.6/3.8\\ & (b) & 2.4 & 2.4/5.9 & 4.6/6.3 & \textbf{1.6}/2.3 & 2.3/3.8 & 3.4/1.9 & 3.9/\textbf{1.4}\\ & (c) & 1.1 & 4.8/7.3 & 1.1/9.4 & 1.8/3.7 & \textbf{0.4}/3.2 & 2.6/\textbf{2.2} & 1.4/2.3 \\ \hline \end{tabular} \end{tiny} \end{table} \subsection{Impact of the Number of Echo Times on T2 Measurements} In an effort to optimize the acquisition scheme, especially regarding energy deposition and reasonable acquisition time in the context of fetal examination, we investigate the influence of the number of TEs on the T2 estimation accuracy. As T2 quantification is highly repeatable throughout independent measurements, the following results are derived from an arbitrarily selected experiment. T2 estimation by both clinical HASTE acquisitions and corresponding SR reconstructions demonstrates a high correlation with reference SE values over the 180-400 ms range of interest (Supplementary Material - Figure I). Bland-Altman plots presented in Figure~\ref{fig3} report the agreement between HASTE-/SR-based T2 quantification and SE reference values in the three ROIs. The average error in T2 estimation from HASTE series is almost the same across all configurations. The difference in T2 measurements is independent of the studied ROI. Conversely, the average error in SR T2 quantification varies with the number of TEs, the smallest average difference being for 6 TEs. In a given configuration, the difference in T2 measurements depends on the targeted value. \begin{figure}[htb!]
\centering \includegraphics[width=0.8\textwidth]{Lajous_et_al_Super-Resolution_T2_Mapping_Figure3.pdf} \caption{Bland-Altman plots of differences in T2 quantification between HASTE / corresponding SR and reference SE in three ROIs for various numbers of TEs} \label{fig3} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=0.8\textwidth]{Lajous_et_al_Super-Resolution_T2_Mapping_Figure4.pdf} \caption{Relative error in T2 quantification according to the method and number of TEs as compared to reference SE measurements} \label{fig4} \end{figure} Figure~\ref{fig4} displays the relative error in T2 quantification from MESE, HASTE and SR as compared to SE measurements according to the number of TEs acquired. MESE provides T2 quantification with a small dispersion, but with low out-of-plane resolution and prohibitive scanning times in the context of fetal MRI. In the following, its 16\% average relative error is considered an acceptable reference error level. HASTE-based T2 quantification overestimates T2 values in the range of interest by around 25\%. As for MESE-based T2 mapping, such an overestimation can be attributed to stimulated echo contamination~\cite{mcphee_limitations_2018}. In the case of SR, quantification errors vary with the number of TEs acquired. Below six echoes, SR underestimates T2 values on average and enables a dramatic improvement in T2 quantification over the HASTE-based technique, but only for T2 values less than 200 ms. Above six TEs, SR exhibits approximately the same average error as MESE. For six echoes, SR outperforms HASTE and MESE over the whole range of T2 values studied, with a relative error of less than 11\%. Furthermore, preliminary results on T2 quantification from HASTE images corrupted by higher levels of noise and their corresponding SR reconstructions in this optimized set-up using six echoes demonstrate the robustness of the proposed SR T2 mapping technique (Supplementary Material - Table I).
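The repeatability metrics used in this section, the CV and the mean absolute percentage error over all possible reference days, can be sketched as follows. The three T2 values are hypothetical stand-ins for repeated measurements of one ROI, not data taken from Table~\ref{tab1}:

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation: SD relative to the mean fitted T2, in %."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

def mean_abs_percentage_error(values):
    """Average absolute relative error over all ordered pairs of experiments,
    using every measurement in turn as the reference."""
    v = np.asarray(values, dtype=float)
    errs = [abs((a - b) / b) for i, a in enumerate(v)
            for j, b in enumerate(v) if i != j]
    return 100.0 * float(np.mean(errs))

# Hypothetical T2 estimates (ms) for one ROI over three independent days
t2_days = [258.0, 252.0, 261.0]
print(cv_percent(t2_days), mean_abs_percentage_error(t2_days))
```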
Overall, SR substantially outperforms HASTE for T2 quantification. \section{Conclusion} This work demonstrates the feasibility of repeatable, accurate and robust 3D isotropic HR T2 mapping in a reasonable acquisition time based on SR-reconstructed clinical fast spin echo MR acquisitions. We show that SR-based T2 quantification performs accurately in the range of interest for fetal brain studies (180-430 ms) as compared to gold-standard methods based on SE or MESE. Moreover, it could be straightforwardly translated to the clinic, since only the TE of the HASTE sequence routinely used in fetal exams needs to be adapted. A pilot study will be conducted on an adult brain to replicate these results \emph{in vivo}. Although our study focuses on static data, the robustness of SR techniques to motion makes us hypothesize that 3D HR T2 mapping of the fetal brain is feasible. The number of TEs required for accurate T2 quantification in this context remains to be explored. \subsubsection{Acknowledgements.} This work was supported by the Swiss National Science Foundation through grants 141283 and 182602, the Center for Biomedical Imaging (CIBM) of the UNIL, UNIGE, HUG, CHUV and EPFL, the Leenaards and Jeantet Foundations, and the Swiss Heart Foundation. The authors would like to thank Yasser Alem\'an-G\'omez for his help in handling nifti images. \bibliographystyle{splncs04}
\section{NEUTRINO - ELECTRON SCATTERING} Neutrino-electron scattering ($\rm{\nu_e}(\rm{\bar{\nu_e}}) - e^{-}$) is a fundamental electroweak process which plays an important role in neutrino oscillation studies, in probing the electroweak parameters of the Standard Model (SM), and in the study of neutrino properties such as the electromagnetic moments and the charge radius\cite{pdg06}. The differential cross section for $\rm{\bar{\nu_e}}-e^{-}$ scattering can be written as\cite{pdg06,kayser79}: \begin{eqnarray} \frac{d\sigma _{SM}}{dT}(\rm{\bar{\nu_e}} e)& = & \frac{G_{F}^{2}m_{e}}{2\pi } \left[\left(g_{V}-g_{A}\right) ^{2}+\left( g_{V}+g_{A}+2\right) ^{2}\left(1- \frac{T}{E_{\nu }}\right) ^{2}-(g_{V}-g_{A})(g_{V}+g_{A}+2)\frac{m_{e}T} {E_{\nu}^{2}}\right] \label{eq_cross} \end{eqnarray} where $T$ is the kinetic energy of the recoil electron, $E_{\nu }$ is the incident neutrino energy, and $g_{V}$, $g_{A}$ are coupling constants given by $ g_{V}=-\frac{1}{2}+2\sin ^{2}\theta _{W} $ and $ g_{A}=-\frac{1}{2}$. The total cross section for $\rm{\bar{\nu_e}}-e^{-}$ scattering can be written as \begin{eqnarray} \sigma _{SM} = \int_{T}\int_{E_{\nu }}\frac{d\sigma _{SM}} {dT}\frac{d\phi}{dE_{\nu}}dE_{\nu}dT = \frac{G_{F}^{2}m_{e}}{2\pi}\left\{ \begin{array}{c} \left(g_{V}-g_{A}\right) ^{2}I_{1}+\left(g_{V}+g_{A}+2\right)^{2}I_{2} \\ -(g_{V}-g_{A})(g_{V}+g_{A}+2)I_{3} \end{array}\right\}\text{ \ \ }\label{eq_cross2} \end{eqnarray} where $I_{1}$, $I_{2}$ and $I_{3}$ are the integrals of 1, $\left( 1-T/E_{\nu }\right) ^{2}$ and $m_{e}T/E_{\nu }^{2}$, respectively, over the antineutrino spectrum and the recoil energy of the electron. In low-energy neutrino studies, the electron-mass-dependent term $I_{3}$ in Eq.~\ref{eq_cross2} must be retained because of its significant contribution to the cross section\cite{kayser79}. 
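To make Eq.~(\ref{eq_cross}) concrete, the following Python sketch evaluates the SM differential cross section at reactor energies. It is purely illustrative: the constants ($G_F$, $m_e$, $\sin^2\theta_W$, and the $\hbar c$ conversion factor) are standard assumed values, not inputs from this experiment, and the function name is ours.

```python
import math

# Physical constants (natural units, GeV); standard values assumed here.
G_F = 1.16637e-5        # Fermi constant [GeV^-2]
M_E = 0.51099895e-3     # electron mass [GeV]
HBARC2 = 0.3893793e-27  # (hbar*c)^2 [GeV^2 cm^2], converts GeV^-2 -> cm^2

def dsigma_dT(T, E_nu, sin2_theta_w=0.2312):
    """SM differential cross section dsigma/dT for anti-nu_e + e- scattering
    (the bracketed expression of Eq. (1)), in cm^2/GeV.
    T: recoil kinetic energy [GeV]; E_nu: antineutrino energy [GeV]."""
    gV = -0.5 + 2.0 * sin2_theta_w
    gA = -0.5
    prefactor = G_F**2 * M_E / (2.0 * math.pi)
    bracket = ((gV - gA)**2
               + (gV + gA + 2.0)**2 * (1.0 - T / E_nu)**2
               - (gV - gA) * (gV + gA + 2.0) * M_E * T / E_nu**2)
    return prefactor * bracket * HBARC2

# Example: a 4 MeV reactor antineutrino producing a 1 MeV recoil electron
print(dsigma_dT(1e-3, 4e-3))
```

The result is of order $10^{-42}~\mathrm{cm^2/GeV}$, the tiny scale that necessitates massive detectors and long exposures.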
The weak mixing angle ($\rm{sin ^2 \theta_{W}}$) has been measured precisely at high energies (10-100~GeV) at accelerators, and at lower energies with M{\o}ller scattering and atomic parity violation experiments\cite{pdg06}. The $\rm{\nu_e}(\rm{\bar{\nu_e}}) - e^{-}$ interactions have the additional unique feature of being sensitive to the contributions of the charged current (CC), the neutral current (NC) and their interference (INT). The cross sections of $\rm{\nu_e} - e^{-}$ scattering have been measured at accelerators\cite{lampf}. For reactor $\rm{\bar{\nu_e}}-e^{-}$ scattering, the existing data are either controversial\cite{reines,vogelengel} or have large uncertainties\cite{munuexpt}. There is much room for improvement, and this work is an attempt to bridge this gap. \section{EXPERIMENTAL SET-UP} An important component of the TEXONO research program is the study of $\rm{\bar{\nu_e}}-e^{-}$ elastic scattering in the MeV reactor neutrino range. The neutrino laboratory is located at the Kuo-Sheng Nuclear Power Plant, at a distance of 28~m from a reactor core with 2.9~GW of thermal power; the total flux is about $6.4\times 10^{12} ~ cm^{-2}s^{-1}$. The details of the neutrino source and the neutrino spectrum are discussed in Ref.~\cite{texono11}. The CsI(Tl) scintillation detector array is enclosed by $4\pi$ low-activity passive shielding materials with a total mass of 50~tons, as well as a layer of active cosmic-ray veto (CRV) plastic scintillator panels. The entire target space is covered by a plastic bag flushed with dry nitrogen to suppress background due to the diffusion of radioactive radon gas\cite{texono12}. \begin{figure}[hbt] \begin{minipage}{18pc} \includegraphics[width=15pc]{csi_det.eps} \caption{\label{csiarray} Schematic diagram of the CsI(Tl) crystal scintillator array. } \end{minipage}\hspace{2pc}% \begin{minipage}{18pc} \includegraphics[width=15pc]{csi_specs.eps} \caption{\label{spectra} Measured spectra at the various stages of background suppression. 
} \end{minipage} \end{figure} The CsI(Tl) crystals were arranged as a $12\times 9$ array matrix inside an OFHC copper box, as shown schematically in Figure~\ref{csiarray}. The detector consisted of 100 crystals giving a total mass of 200~kg. Each single-crystal module has a hexagonal cross-section with a 2~cm side, a length of 40~cm and a mass of 2~kg. The light output was read out at both ends of the crystal by PMTs with low-activity glass of 29~mm diameter. The properties, advantages and performance of the prototype modules of the CsI(Tl) scintillating crystal detector were documented elsewhere\cite{texono12,texono13,texono21}. These properties make crystal scintillators well suited to low-energy neutrino experiments. The PMT signals were recorded by 20~MHz Flash Analog-to-Digital-Converters (FADCs) running on a VME-based data acquisition system\cite{texono24}. The sum of the two PMT signals gives the energy of the event, while their difference provides information on the longitudinal ``Z'' position. An energy resolution of $< 10\%$ FWHM and a Z-resolution of $\sim$2~cm at 660~keV, as well as excellent $\alpha$/$\gamma$ event identification by pulse shape discrimination (PSD), were demonstrated in prototype studies\cite{texono21}. \section{DATA ANALYSIS} Neutrino-induced candidate events were selected through the suppression of: (a) cosmic-ray and anti-Compton background by CRV and multiplicity cuts, (b) accidental and $\alpha$-events by PSD, and (c) external background by a Z-position cut. The spectra at the various stages of background rejection are displayed in Figure~\ref{spectra}. In situ calibration was achieved using the measured $\gamma$-lines from $^{137}$Cs, $^{40}$K and $^{208}$Tl. A signal-to-background ratio of $\sim$1/15 at 3~MeV was achieved. The spectra measured during the Reactor OFF periods constituted a background measurement. 
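The double-ended readout described above admits a simple toy model. Assuming, purely for illustration (this is not the collaboration's actual reconstruction, and the attenuation length below is invented), exponential light attenuation along the 40~cm crystal, the geometric mean of the two PMT amplitudes tracks the deposited energy while their log-ratio tracks the longitudinal Z position:

```python
import math

# Toy model of double-ended CsI(Tl) readout. The attenuation length is an
# illustrative assumption, not a measured detector parameter.
L = 40.0        # crystal length [cm]
LAMBDA = 100.0  # assumed effective light-attenuation length [cm]

def pmt_signals(E, z):
    """Amplitudes seen by the two end PMTs for energy E deposited at depth z."""
    q1 = E * math.exp(-z / LAMBDA)
    q2 = E * math.exp(-(L - z) / LAMBDA)
    return q1, q2

def reconstruct(q1, q2):
    """Invert the toy model: the geometric mean of the amplitudes recovers
    the energy, and their log-ratio recovers the longitudinal Z position."""
    E = math.sqrt(q1 * q2) * math.exp(L / (2.0 * LAMBDA))
    z = (L - LAMBDA * math.log(q1 / q2)) / 2.0
    return E, z

print(reconstruct(*pmt_signals(1.0, 10.0)))  # recovers approximately (1.0, 10.0)
```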
The internal contaminations of the $^{238}$U and $^{232}$Th series were measured\cite{yfzhu} and found to be negligible compared to the observed background rates. Residual backgrounds in the relevant 3$-$6~MeV range are either cosmic-ray induced or due to coincidences of $\gamma$-emissions following $^{208}$Tl decays. Their intensities were evaluated from the {\it in situ} multi-hit samples and the $^{208}$Tl-2614~keV lines, as well as from simulation studies, and the results provide a second background measurement. The background from both methods was subsequently combined (BKG) and subtracted from the candidate Reactor-ON samples. \section{PHYSICS RESULTS} A total of 31874.7/7860.1~kg-day of Reactor ON/OFF data was recorded, and the combined ON$-$BKG residual spectrum is displayed in Figure~\ref{residual}, from which various electroweak parameters were derived. Only events with energy above 3~MeV, beyond the $^{208}$Tl end-point, were used for the physics analysis. There is an excess in the residual spectrum corresponding to $\sim$400 neutrino-induced events. The uncertainties cited in what follows are only statistical. Intensive studies of systematic effects are underway. \subsection{Cross Section and Electroweak Parameters} Denoting the measured event rate as \begin{equation} R _{exp}=\zeta \cdot R_{SM} \end{equation} where $R_{SM}$ is the SM-predicted value, the residual spectrum of Figure~\ref{residual} corresponds to $ \zeta = 1.00 \pm 0.32 (stat) $ with $\chi^{2}/dof = 9.78/9$, giving \begin{equation} \rm{sin ^2 \theta_{W}} = 0.24 \pm 0.05 (stat) ~~~ . \end{equation} The allowed region in the $g_V - g_A$ plane is depicted in Fig. \ref{gvga}. The accuracy is comparable to that achieved in accelerator-based $\rm{\nu_e} - e$ scattering experiments\cite{lampf}. \begin{figure}[hbt] \begin{minipage}{18pc} \includegraphics[width=15pc]{chi2.eps} \caption{\label{residual} The combined ON-BKG residual spectrum. 
The best-fit curve is identical to the Standard Model prediction. } \end{minipage}\hspace{2pc}% \begin{minipage}{18pc} \includegraphics[width=15pc]{gv_ga.eps} \caption{\label{gvga} The 1-$\sigma$ allowed region in $ g_{V}-g_{A}$ space, together with the corresponding $\rm{sin ^2 \theta_{W}}$. } \end{minipage} \end{figure} Residual spectra from OFF-BKG data were extracted and used to validate the background understanding and the analysis procedures. The fractional deviation (OFF-BKG)/OFF = 0.011 $\pm$ 0.018 at $\chi^{2}/dof = 8.23/9 $ is consistent with zero, indicating good control of the background and systematic effects. \begin{table}[hbt] \caption{The expected $\zeta$ ratios for the different interference scenarios, compared to the measured value.} \label{tab_inter} \begin{tabular}{lc} \\ \hline \hline Interference & $\zeta$ \\ \hline Destructive($\eta=1$) & 1 \\ Constructive($\eta=-1$) & 2.46 \\ No Interference($\eta=0$) & 1.73 \\ \hline Measurement & $1.00 \pm 0.32 (stat) $ \\ \hline \hline \end{tabular} \end{table} To study the interference term, the event rate is parametrized as \begin{equation} R_{exp} = R^{CC} + R^{NC} + \eta \cdot R^{INT} \label{eq_sm_cni} \end{equation} where $R^{CC/NC/INT}$ are the SM charged-current, neutral-current and interference contributions, respectively. Table~\ref{tab_inter} shows the expectations on $\zeta$ for the possible cases. The measured value of $\zeta$ verifies the SM prediction of destructive interference. \subsection{Magnetic Moment and Neutrino Charge Radius} The existence of a neutrino magnetic moment ($\rm{\mu_{\bar{\nu}_{e}}}$) would contribute an additional term\cite{vogelengel,munureview} to the cross section of Eq.~\ref{eq_cross}: \begin{equation} \left( \frac{d\sigma }{dT}\right) _{\mu _{\nu}}=\frac{\pi \alpha _{em}^{2}\mu _{\nu}^{2}}{m_{e}^{2}}\left[ \frac{1-T/E_{\nu}}{T}\right] \label{eq_mm} ~~. 
\end{equation} Parametrizing the measured event rate as \begin{equation} R_{exp} = R_{SM} + \kappa^{2} \cdot R(\mu_{\nu} = 10^{-10}~\mu_B) ~~ , \end{equation} the best-fit value of $ \kappa^{2} = -0.52 \pm 2.74$ at $\chi^{2}/dof = 9.79/9 $ was obtained. A limit of \begin{equation} \rm{\mu_{\bar{\nu}_{e}}} < 2.0 \times 10^{-10} \times \mu_{B} \end{equation} at 90\% CL was derived. A finite neutrino charge radius $\rm{\langle r_{\bar{\nu}_e}^2\rangle}$ would lead to radiative corrections\cite{vogelengel,rashba} which modify the electroweak parameters as follows: \begin{equation} g_{V} \rightarrow -\frac{1}{2}+2\rm{sin ^2 \theta_{W}} + (2\sqrt{2}\pi\alpha_{em}/3G_F) \rm{\langle r_{\bar{\nu}_e}^2\rangle} ~~~~~ ; ~~~~~ \rm{sin ^2 \theta_{W}} \rightarrow \rm{sin ^2 \theta_{W}} + (\sqrt{2}\pi\alpha_{em}/3 G_F)\rm{\langle r_{\bar{\nu}_e}^2\rangle} \label{eq_new_sin2} \end{equation} where $\alpha_{em}$ and $G_F$ are the fine-structure and Fermi constants, respectively. A result of \begin{equation} \rm{\langle r_{\bar{\nu}_e}^2\rangle} = (0.12 \pm 2.07) \times 10^{-32} ~cm^{2} \label{eq_charge_rad2} \end{equation} at $\chi^{2}/dof = 9.82/9$ was derived accordingly.
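The size of the shift in Eq.~\ref{eq_new_sin2} can be checked numerically. The sketch below (with standard values of $\alpha_{em}$, $G_F$ and $\hbar c$ assumed, not taken from the text, and with a function name of our own) converts a charge radius in $\mathrm{cm^2}$ into the induced shift of $\rm{sin ^2 \theta_{W}}$:

```python
import math

# Standard constants assumed (not from the text)
ALPHA = 7.2973525693e-3   # fine-structure constant
G_F = 1.16637e-5          # Fermi constant [GeV^-2]
HBARC = 1.973269804e-14   # hbar*c [GeV cm], used to convert cm^2 -> GeV^-2

def sin2w_shift(r2_cm2):
    """Shift of sin^2(theta_W) induced by a neutrino charge radius <r^2>
    given in cm^2, following the radiative-correction formula above."""
    r2_gev = r2_cm2 / HBARC**2  # <r^2> expressed in GeV^-2
    return (math.sqrt(2.0) * math.pi * ALPHA / (3.0 * G_F)) * r2_gev

# A charge radius at the 1e-32 cm^2 level shifts sin^2(theta_W) by ~0.02
print(sin2w_shift(1e-32))
```

This percent-level sensitivity of $\rm{sin ^2 \theta_{W}}$ per $10^{-32}~\mathrm{cm^2}$ is what translates the electroweak fit into a charge-radius constraint at that scale.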
\section{Introduction} There is a long and rich history of optimal transport (OT) problems, initiated by Gaspard Monge (1746–1818), a French mathematician, in the 18th century. During recent decades, OT problems have found fruitful applications in our daily lives \cite{villani2008optimal}. Consider the resource allocation problem illustrated in Fig.~\ref{fig:1}. Suppose that an operator runs $n$ warehouses and $m$ factories. Each warehouse contains a certain amount of valuable raw material, i.e., the resources, that is needed by the factories to run properly. Furthermore, each factory has a certain demand for raw material. Suppose the total amount of resources in the warehouses equals the total demand for raw material in the factories. The operator aims to move all the resources from the warehouses to the factories, such that all the demands of the factories are met and the total transport cost is as small as possible. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.95\textwidth]{figure/transport.png} \end{tabular} \caption{Illustration of the resource allocation problem. The resources in the warehouses are marked in blue, and the demand of each factory is marked in red. }\label{fig:1} \end{center} \vspace*{-0.2in} \end{figure} The resource allocation problem is a typical OT problem in practice. To put such problems in mathematical language, one can regard the totality of the resources and the totality of the demands as two probability distributions. For example, the resources from the warehouses in Fig.~1 can be regarded as a non-uniform discrete distribution supported on three discrete points, each of which represents the geographical location of a particular warehouse. OT methods aim to find a transport map (or plan) between these two probability distributions with the minimum transport cost. Formal definitions of the transport map, the transport plan, and the transport cost will be given in Section~2. 
Nowadays, many modern statistical and machine learning problems can be recast as finding the optimal transport map (or plan) between two probability distributions. For example, domain adaptation \cite{muzellec2019subspace,courty2017optimal,flamary2019concentration} aims to learn a well-trained model from a source data distribution and transfer this model to a target data distribution. Another example is deep generative models \cite{goodfellow2014generative,meng2019large,arjovsky2017wasserstein,chen2018optimal}, which aim to map a fixed distribution, e.g., the standard Gaussian or uniform distribution, to the underlying population distribution of the genuine samples. During recent decades, OT methods have been reinvigorated in a remarkable proliferation of modern data science applications, including machine learning \cite{alvarez2018structured, courty2017optimal,peyre2019computational, arjovsky2017wasserstein, canas2012learning, flamary2018wasserstein,meng2019large}, statistics \cite{del2019central, cazelles2018geodesic,panaretos2019statistical}, and computer vision \cite{ferradans2014regularized,rabin2014adaptive, su2015optimal,peyre2019computational}. Although OT finds a large number of applications in practice, its computation meets challenges in the big data era. Traditional methods estimate the optimal transport map (OTM) by solving differential equations \cite{brenier1997homogenized,benamou2002monge} or by solving a linear programming problem \cite{rubner1997earth,pele2009fast}. Consider two $p$-dimensional samples, each with $n$ observations. The calculation of the OTM between these two samples using these traditional methods requires $O(n^3\log(n))$ computational time \cite{peyre2019computational,seguy2017large}. Such a sizable computational cost hinders the broad applicability of optimal transport methods. 
To alleviate the computational burden of OT, a large body of work has been dedicated to developing efficient computational tools in the recent decade. One class of methods, starting from \cite{cuturi2013sinkhorn}, considers solving a regularized OT problem instead of the original one. By utilizing the Sinkhorn algorithm (detailed in Section~3), the computational cost of solving such a regularized problem can be reduced to $O(n^2\log(n))$, which is a significant reduction from $O(n^3\log(n))$. Based on this idea, various computational tools have been developed to solve the regularized OT problem as quickly as possible \cite{altschuler2017near,peyre2019computational}. By combining the Sinkhorn algorithm and the idea of low-rank matrix approximation, \cite{altschuler2019massively} recently proposed an efficient algorithm with a computational cost that is approximately proportional to $n$. Although not covered in this paper, regularization-based optimal transport methods even appear to have better theoretical properties than their unregularized counterparts; see \cite{genevay2017learning,montavon2016wasserstein,rigollet2018entropic} for details. Another class of methods aims to estimate the OTM efficiently using random or deterministic projections. These so-called projection-based methods tackle the problem of estimating a $p$-dimensional OTM by breaking it down into a series of subproblems, each of which finds a one-dimensional OTM using projected samples \cite{pitie2005n,pitie2007automated,bonneel2015sliced,rabin2011wasserstein}. The subproblems can be easily solved since, under some mild conditions, the one-dimensional OTM is equivalent to sorting. The projection-based methods reduce the computational cost of calculating OTMs from $O(n^3\log(n))$ to $O(Kn\log(n))$, where $K$ is the number of iterations until convergence. With the help of these computational tools, OT methods have been widely applied in biomedical research. 
Taking single-cell RNA sequencing data as an example, OT methods can be used to study developmental time courses to infer ancestor-descendant fates for cells and help researchers better understand the molecular programs that guide differentiation during development. As another example, OT methods can be used as data augmentation tools for increasing the number of observations, and thus improve the accuracy and stability of various downstream analyses. The rest of the paper is organized as follows. We start in Section~2 by introducing the essential background of the OT problem. In Section~3, we present the details of regularization-based OT methods and their extensions. Section~4 is devoted to projection-based OT methods, including both random projection methods and deterministic projection methods. In Section~5, we show several applications of OT methods to real-world problems in biomedical research. \section{Background of the Optimal Transport Problem} In the aforementioned resource allocation problem, the goal is to transport the resources in the warehouses to the factories with the least cost, say the total fuel consumption of trucks. Here, the resources in the warehouses and the demands of the factories can be regarded as discrete distributions. We now introduce the following example, which extends the discrete setting to the continuous setting. Suppose there is a worker who has to move a large pile of sand using a shovel in his hand. The goal of the worker is to erect with all that sand a target pile with a prescribed shape, say a sandcastle. Naturally, the worker wishes to minimize the total ``effort'', which, intuitively, in the physical sense, can be regarded as the ``work'', i.e., the product of force and displacement. 
The French mathematician Gaspard Monge (1746–1818) considered such a problem and formulated it as a general mathematical problem, i.e., the optimal transport problem \cite{villani2008optimal,peyre2019computational}: among all the possible transport maps $\phi$ between two probability measures $\mu$ and $\nu$, how does one find the one with the minimum transport cost? Mathematically, the optimal transport problem can be formulated as follows. Let $\mathscr{P}(\mathbb{R}^p)$ be the set of Borel probability measures in $\mathbb{R}^p$, and let \begin{eqnarray*} \mathscr{P}_2(\mathbb{R}^p)=\left\{\mu\in\mathscr{P}(\mathbb{R}^p)\Big|\int||x||^2\mbox{d}\mu(x)<\infty\right\}. \end{eqnarray*} For $\mu,\nu\in\mathscr{P}_2(\mathbb{R}^p)$, let $\Phi$ be the set of all the so-called measure-preserving maps $\phi:\mathbb{R}^p\rightarrow\mathbb{R}^p$, such that $\phi_{\#}(\mu) = \nu$ and $\phi^{-1}_{\#}(\nu) = \mu.$ Here, $\#$ represents the push-forward operator, such that for any measurable $\Omega\subset \mathbb{R}^p$, $\phi_{\#}(\mu)(\Omega)=\mu(\phi^{-1}(\Omega))$. Among all the maps in $\Phi$, the optimal transport map defined under a cost function $c(\cdot,\cdot)$ is \begin{eqnarray}\label{Monge_map} \phi^\dagger :=\underset{\phi \in \Phi}{\mbox{arg inf}} \int_{\mathbb{R}^p} c(x, \phi(x)) \mbox{d}\mu(x). \end{eqnarray} One popular choice for the cost function is $c(x,y)=\|x-y\|^2$, with which Equation~(\ref{Monge_map}) becomes \begin{eqnarray}\label{Monge_map2} \phi^\dagger :=\underset{\phi \in \Phi}{\mbox{arg inf}} \int_{\mathbb{R}^p} \|x-\phi(x)\|^2 \mbox{d}\mu(x). \end{eqnarray} Equation~(\ref{Monge_map2}) is called the Monge formulation, and its solution $\phi^\dagger$ is called the optimal transport map (OTM), or the Monge map. 
The well-known Brenier's Theorem \cite{brenier1991polar} states that, when the cost function is $c(x,y)=\|x-y\|^2$, if at least one of $\mu$ and $\nu$ has a density with respect to the Lebesgue measure, then the OTM $\phi^\dagger$ in Equation~(\ref{Monge_map2}) exists and is unique. In other words, the OTM $\phi^\dagger$ may not exist, i.e., the solution of Equation~(\ref{Monge_map2}) may not be a map, when the conditions of Brenier's Theorem are not met. To overcome this limitation, Kantorovich \cite{kantorovich1942translation} considered the following set of ``couplings'', \begin{eqnarray}\label{eqn:coupling} &\mathcal{M}(\mu,\nu)=\{\pi\in\mathscr{P}(\mathbb{R}^p\times\mathbb{R}^p) \mbox{ } s.t.\mbox{ } \forall\mbox{ } \text{Borel set} \mbox{ } A, B\subset\mathbb{R}^p ,\nonumber\\ &\pi(A\times\mathbb{R}^p)=\mu(A),\mbox{ } \pi(\mathbb{R}^p\times B)=\nu(B) \}. \end{eqnarray} Intuitively, a coupling $\pi\in\mathcal{M}(\mu,\nu)$ is a joint distribution of $\mu$ and $\nu$, whose two marginal distributions are equal to $\mu$ and $\nu$, respectively. Instead of finding the OTM, Kantorovich formulated the optimal transport problem as finding the optimal coupling, \begin{eqnarray}\label{K_form} \pi^* := \underset{\pi \in \mathcal{M}(\mu, \nu)}{\mbox{arg inf}} \int \|x- y \|^2 \mbox{d}\pi(x,y). \end{eqnarray} Equation~(\ref{K_form}) is called the Kantorovich formulation (with $L_2$ cost), and its solution $\pi^*$ is called the optimal transport plan (OTP). The key difference between the Monge formulation and the Kantorovich formulation is that the latter does not require the solution to be a one-to-one map, as illustrated in Fig.~\ref{fig:2}. The Kantorovich formulation is more realistic in practice than the Monge formulation. Take the resource allocation problem described in Section~1 as an example. 
It is unreasonable to assume that there always exists a one-to-one map between warehouses and factories that meets all the demands of the factories. The optimal solution of such resource allocation problems is thus usually an OTP instead of an OTM. Note that, although the Kantorovich formulation is more flexible than the Monge formulation, it can be shown that when the OTM exists, the OTP is equivalent to the OTM. \begin{figure}[b] \sidecaption \includegraphics[scale=.45]{figure/otm_otp.png} \caption{Comparison between the optimal transport map (OTM) and the optimal transport plan (OTP). Left: an illustration of an OTM, which is a one-to-one map. Right: an illustration of an OTP, which is not necessarily a map.} \label{fig:2} \end{figure} Closely related to the optimal transport problem is the so-called Wasserstein distance. Intuitively, if we think of the optimal transport problem (either in the Monge formulation or the Kantorovich formulation) as an optimization problem, then the Wasserstein distance is simply the optimal objective value of this optimization problem, up to a power transform. Suppose the OTM $\phi^\dagger$ exists; the Wasserstein distance of order $k$ is then defined as \begin{eqnarray}\label{W_dist} W_k(\mu, \nu):=\left(\int_{\mathbb{R}^p} \|X- \phi^\dagger(X) \|^k \mbox{d}\mu \right)^{1/k}. \end{eqnarray} Let $\{\bm{x}_i\}_{i=1}^n$ and $\{\bm{y}_i\}_{i=1}^n$ be two samples generated from $\mu$ and $\nu$, respectively. One can thus estimate $\phi^\dagger$ using these two samples, and we let $\widehat{\phi}^\dagger$ denote the corresponding estimator. The Wasserstein distance $W_k(\mu, \nu)$ can then be estimated by \begin{eqnarray*} \widehat{W}_k(\mu, \nu):= \left(\frac{1}{n} \sum\limits_{i=1}^n \|\bm{x}_i- \widehat{\phi}^\dagger(\bm{x}_i) \|^k \right)^{1/k}. \end{eqnarray*} The Wasserstein distance with respect to the Kantorovich formulation can be defined analogously. 
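In one dimension the OTM simply matches order statistics (the sorting equivalence mentioned in Section~1), so the plug-in estimator $\widehat{W}_2$ above has a closed form obtained by sorting. A minimal illustrative sketch (the helper name \texttt{w2\_1d} is ours):

```python
import numpy as np

def w2_1d(x, y):
    """Empirical 2-Wasserstein distance between two equal-size 1-D samples.
    In 1-D the optimal transport map pairs order statistics, so the plug-in
    estimator reduces to sorting both samples and averaging squared gaps."""
    xs, ys = np.sort(np.asarray(x, dtype=float)), np.sort(np.asarray(y, dtype=float))
    return np.sqrt(np.mean((xs - ys) ** 2))

x = np.array([0.0, 1.0])
y = np.array([2.0, 1.0])
print(w2_1d(x, y))  # pairs 0<->1 and 1<->2, giving sqrt((1+1)/2) = 1.0
```

For a sanity check, the 1-D Gaussian case has the closed form $W_2\big(N(m_1,\sigma_1^2), N(m_2,\sigma_2^2)\big)=\sqrt{(m_1-m_2)^2+(\sigma_1-\sigma_2)^2}$, which the estimator recovers for large samples.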
We refer to \cite{weed2019sharp,del2019central2,panaretos2019statistical} and the references therein for theoretical properties of Wasserstein distances. Unless otherwise specified, we focus on the $L_2$ norm throughout this paper, i.e., $k=2$ in Equation~(\ref{W_dist}), and we abbreviate $W_2(\mu,\nu)$ as $W(\mu,\nu)$. \section{Regularization-based Optimal Transport Methods} In this section, we introduce a family of numerical schemes for approximating solutions to the Kantorovich formulation~(\ref{K_form}). Such numerical schemes add a regularization penalty to the original optimal transport problem, and one can then solve the regularized problem instead. Such a regularization-based approach has long been studied in the nonparametric regression literature to balance the trade-off between the goodness-of-fit of the model and the roughness of a nonlinear function \cite{gu2013smoothing,ma2015efficient,zhang2018statistical,meng2020more}. Cuturi first introduced the regularization approach to OT problems \cite{cuturi2013sinkhorn} and showed that the regularized problem can be solved using a simple alternating minimization scheme, requiring $O(n^2\log(n)p)$ computational time. Moreover, it can be shown that the solution of the regularized OT problem approximates the solution of its unregularized counterpart well. We call such numerical schemes regularization-based optimal transport methods. We now present the details and some extensions of these methods. \subsection{Computational Cost for OT Problems} We first introduce how to calculate the empirical Wasserstein distance by solving a linear program. Let $\bm{p}$ and $\bm{q}$ be two probability distributions supported on a discrete set $\{\bm{x}_i\}_{i=1}^n$, where $\bm{x}_i\in\Omega$ for $i=1,\ldots,n$, and $\Omega\subset\mathbb{R}^p$ is bounded. 
We identify $\bm{p}$ and $\bm{q}$ with the vectors located on the simplex $$\Delta_n := \left\{\bm{v} \in \mathbb{R}^n: \mbox{ }\sum_{i=1}^n \bm{v}_i = 1, \mbox{ and }\bm{v}_i\geq0, \mbox{ } i=1,\ldots,n \right\},$$ whose entries denote the weight each distribution assigns to the points of $\{\bm{x}_i\}_{i=1}^n$. Let $\mathbf{C}\in\mathbb{R}^{n\times n}$ be the pair-wise distance matrix, where $\mathbf{C}_{ij} = \|\bm{x}_i-\bm{x}_j\|^2$, and let $\mathbf{1}_n$ be the all-ones vector with $n$ elements. Recall the definition of a coupling in Equation~(\ref{eqn:coupling}); analogously, we denote by $\mathcal{M}(\bm{p}, \bm{q})$ the set of coupling matrices between $\bm{p}$ and $\bm{q}$, i.e., \begin{eqnarray*} \mathcal{M}(\bm{p}, \bm{q})=\left\{ \mathbf{P}\in\mathbb{R}^{n\times n}:\mbox{ } \mathbf{P}\mathbf{1}_n=\bm{p},\mbox{ } \mathbf{P}^\T\mathbf{1}_n=\bm{q} \right\}. \end{eqnarray*} For brevity, this paper focuses on square matrices $\mathbf{C}$ and $\mathbf{P}$, since extensions to rectangular cases are straightforward. Let $\langle \cdot,\cdot \rangle$ denote the sum of the element-wise products, such that, for any two matrices $\mathbf{A},\mathbf{B}\in\mathbb{R}^{n\times n}$, $\langle \mathbf{A},\mathbf{B} \rangle=\sum_{i=1}^n\sum_{j=1}^n \mathbf{A}_{ij}\mathbf{B}_{ij}$. According to the Kantorovich formulation in Equation~(\ref{K_form}), the Wasserstein distance between $\bm{p}$ and $\bm{q}$, i.e., $W(\bm{p}, \bm{q})$, can thus be calculated by solving the following optimization problem \begin{eqnarray}\label{OT_dist} \underset{\mathbf{P} \in \mathcal{M}(\bm{p},\bm{q})}{\min} \left \langle \mathbf{P},\mathbf{C} \right\rangle, \end{eqnarray} which is a linear program with $O(n)$ linear constraints. The minimizer of the optimization problem~(\ref{OT_dist}) is called the optimal coupling matrix. 
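Problem~(\ref{OT_dist}) can be handed directly to an off-the-shelf LP solver by flattening $\mathbf{P}$ into a vector and encoding the row- and column-sum constraints as equalities. A minimal sketch using \texttt{scipy.optimize.linprog} (the wrapper name \texttt{ot\_lp} is ours):

```python
import numpy as np
from scipy.optimize import linprog

def ot_lp(p, q, C):
    """Solve the discrete OT problem min_P <P, C> s.t. P 1 = p, P^T 1 = q
    by linear programming over the flattened coupling matrix."""
    n, m = C.shape
    # Row-sum constraints: sum_j P_ij = p_i, for each row i
    A_row = np.kron(np.eye(n), np.ones((1, m)))
    # Column-sum constraints: sum_i P_ij = q_j, for each column j
    A_col = np.kron(np.ones((1, n)), np.eye(m))
    A_eq = np.vstack([A_row, A_col])
    b_eq = np.concatenate([p, q])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m), res.fun  # optimal coupling and <P*, C>

# Two uniform two-point distributions: mass 0.5 at {0, 1} and at {1, 2}
x = np.array([0.0, 1.0]); y = np.array([1.0, 2.0])
C = (x[:, None] - y[None, :]) ** 2
P, cost = ot_lp(np.full(2, 0.5), np.full(2, 0.5), C)
print(cost)  # squared W2 distance = 1.0 (coupling 0->1 and 1->2)
```

On this toy example the optimal coupling is $\mathrm{diag}(0.5, 0.5)$; the cubic scaling of such generic LP solvers in $n$ is what motivates the Sinkhorn approach of the next subsection.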
Note that when the OTM exists, the optimal coupling matrix $\mathbf{P}$ is a sparse matrix, with exactly one non-zero element in each row and each column. Practical algorithms for solving the problem~(\ref{OT_dist}) through linear programming require a computational time of the order $O(n^3 \log(n))$ for fixed $p$ \cite{peyre2019computational}. Such a sizable computational cost hinders the broad applicability of OT methods in practice for datasets with large sample sizes. \subsection{Sinkhorn Distance} To alleviate the computational burden of OT problems, \cite{cuturi2013sinkhorn} considered a variant of the minimization problem in Equation~(\ref{OT_dist}), which can be solved within $O(n^2\log(n)p)$ computational time using the Sinkhorn scaling algorithm, originally proposed in \cite{sinkhorn1967diagonal}. The solution of such a variant is called the Sinkhorn ``distance''\footnote{We use quotations here since it is not technically a distance; see Section 3.2 of \cite{cuturi2013sinkhorn} for details. The quotes are dropped henceforth.}, defined as \begin{eqnarray}\label{sink_dist} W_\eta(\bm{p}, \bm{q}) = \underset{\mathbf{P} \in \mathcal{M}(\bm{p},\bm{q})}{\min} \left \langle \mathbf{P},\mathbf{C} \right\rangle-\eta^{-1}H(\mathbf{P}), \end{eqnarray} where $\eta>0$ is the regularization parameter, and $H(\mathbf{P})=\sum_{i=1}^n\sum_{j=1}^n\mathbf{P}_{ij}\log(1/\mathbf{P}_{ij})$ is the Shannon entropy of $\mathbf{P}$. We adopt the standard convention that $0\log(1/0) = 0$ in the Shannon entropy. We present a fundamental definition as follows \cite{sinkhorn1967diagonal}. 
\begin{definition} Given $\bm{p}$, $\bm{q} \in \Delta_n$ and $\mathbf{K}\in \mathbb{R}^{n\times n}$ with positive entries, the Sinkhorn projection $\Pi_{\mathcal{M}(\bm{p},\bm{q})} (\mathbf{K})$ of $\mathbf{K}$ onto $\mathcal{M}(\bm{p}, \bm{q})$ is the unique matrix in $\mathcal{M}(\bm{p}, \bm{q})$ of the form $\mathbf{D}_1\mathbf{K}\mathbf{D}_2$ for positive diagonal matrices $\mathbf{D}_1,\mathbf{D}_2\in\mathbb{R}^{n\times n}$. \end{definition} Let $\mathbf{P}^\eta$ be the minimizer, i.e., the optimal coupling matrix, of the optimization problem~(\ref{sink_dist}). Throughout the paper, all matrix exponentials and logarithms will be taken entrywise, i.e., $(e^\mathbf{A})_{ij} :=e^{\mathbf{A}_{ij}}$ and $(\log \mathbf{A})_{ij} := \log \mathbf{A}_{ij}$ for any matrix $\mathbf{A} \in\mathbb{R}^{n\times n}$. \cite{cuturi2013sinkhorn} built a simple but key connection between the Sinkhorn distance and the Sinkhorn projection (note that multiplying the objective by $\eta>0$ does not change the minimizer), \begin{align}\label{sink_dist2} \mathbf{P}^\eta &= \underset{\mathbf{P} \in \mathcal{M}(\bm{p},\bm{q})}{\mbox{argmin}}\left \langle \mathbf{P}, \mathbf{C} \right\rangle-\eta^{-1}H(\mathbf{P}) \nonumber\\ &= \underset{\mathbf{P} \in \mathcal{M}(\bm{p},\bm{q})}{\mbox{argmin}}\left \langle \eta\mathbf{C}, \mathbf{P} \right\rangle-H(\mathbf{P}) \nonumber\\ &=\underset{\mathbf{P} \in \mathcal{M}(\bm{p},\bm{q})}{\mbox{argmin}}\left \langle -\log \left(e^{-\eta\mathbf{C}}\right),\mathbf{P} \right\rangle-H(\mathbf{P}) \nonumber\\ &= \Pi_{\mathcal{M}(\bm{p},\bm{q})}\left(e^{-\eta\mathbf{C}}\right). \end{align} Equation~(\ref{sink_dist2}) shows that the minimizer of the optimization problem~(\ref{sink_dist}) takes the form $\mathbf{D}_1(e^{-\eta\mathbf{C}})\mathbf{D}_2$, for some positive diagonal matrices $\mathbf{D}_1,\mathbf{D}_2\in\mathbb{R}^{n\times n}$, as illustrated in Fig.~\ref{fig:sinksolve}. 
Moreover, it can be shown that the minimizer in Equation~(\ref{sink_dist2}) exists and is unique due to the strict convexity of $-H(\mathbf{P})$ and the compactness of $\mathcal{M}(\bm{p}, \bm{q})$. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.95\textwidth]{figure/sinksolve.png} \end{tabular} \caption{The minimizer of the regularized optimal transport problem~\eqref{sink_dist} takes the form $\mathbf{D}_1(e^{-\eta\mathbf{C}})\mathbf{D}_2$, for some unknown positive diagonal matrices $\mathbf{D}_1$ and $\mathbf{D}_2$. }\label{fig:sinksolve} \end{center} \vspace*{-0.2in} \end{figure} Based on Equation~(\ref{sink_dist2}), \cite{cuturi2013sinkhorn} proposed a simple iterative algorithm, also known as the Sinkhorn-Knopp algorithm, to approximate $\mathbf{P}^\eta$. Let $x_i, y_i, p_i, q_i$ be the $i$-th elements of the vectors $\bm{x},\bm{y},\bm{p}$, and $\bm{q}$, respectively, for $i=1,\ldots,n$. For simplicity, we now use $\mathbf{A}$ to denote the matrix $e^{-\eta\mathbf{C}}$. Intuitively, the Sinkhorn-Knopp algorithm works as an alternating projection procedure that renormalizes the rows and columns of $\mathbf{A}$ in turn, so that they match the desired row and column marginals $\bm{p}$ and $\bm{q}$. Specifically, at each step it either modifies all the rows of $\mathbf{A}$, multiplying the $i$-th row by $(p_i/\sum_{j=1}^n\mathbf{A}_{ij})$ for $i=1,\ldots,n$, or performs the analogous operation on the columns. Here, $\sum_{j=1}^n \mathbf{A}_{ij}$ is simply the $i$-th row sum of $\mathbf{A}$. Analogously, we also use $\sum_{i=1}^n \mathbf{A}_{ij}$ to denote the $j$-th column sum of $\mathbf{A}$. The standard convention that $0/0 = 1$ is adopted in the algorithm if it occurs. The algorithm terminates when the matrix $\mathbf{A}^{[k]}$, obtained after the $k$-th iteration, is sufficiently close to the polytope $\mathcal{M}(\bm{p},\bm{q})$. The pseudocode for the Sinkhorn-Knopp algorithm is shown in Algorithm~\ref{alg:ALG3}.
\begin{algorithm} \caption{{\sc Sinkhorn($\mathbf{A}, \mathcal{M}(\bm{p},\bm{q}),\epsilon$)}} \label{alg:ALG3} \begin{algorithmic} \State \textbf{Initialize:} $k\leftarrow 0$;\enspace $\mathbf{A}^{[0]}\leftarrow \mathbf{A}/\|\mathbf{A}\|_1$;\enspace $\bm{x}^{[0]}\leftarrow\mathbf{0}$;\enspace $\bm{y}^{[0]}\leftarrow\mathbf{0}$\enspace \Repeat \State $k\leftarrow k+1$ \State \textbf{if} $k$ is odd \textbf{then} \State \qquad $x_i\leftarrow\log(p_i/\sum_{j=1}^n\mathbf{A}^{[k-1]}_{ij})$, \enspace for $i=1,\ldots,n$ \State \qquad $\bm{x}^{[k]}\leftarrow\bm{x}^{[k-1]}+\bm{x}$;\enspace $\bm{y}^{[k]}\leftarrow\bm{y}^{[k-1]}$ \State \textbf{else} \State \qquad $y_j\leftarrow\log(q_j/\sum_{i=1}^n\mathbf{A}^{[k-1]}_{ij})$, \enspace for $j=1,\ldots,n$ \State \qquad $\bm{y}^{[k]}\leftarrow\bm{y}^{[k-1]}+\bm{y}$;\enspace $\bm{x}^{[k]}\leftarrow\bm{x}^{[k-1]}$ \State $\mathbf{D}_1\leftarrow \mbox{diag}(\exp(\bm{x}^{[k]}))$;\enspace $\mathbf{D}_2\leftarrow \mbox{diag}(\exp(\bm{y}^{[k]}))$ \State$\mathbf{A}^{[k]}=\mathbf{D}_1\mathbf{A}\mathbf{D}_2$ \Until $\mbox{dist}(\mathbf{A}^{[k]}, \mathcal{M}(\bm{p},\bm{q}))\leq\epsilon$ \State \textbf{Output:} $\mathbf{P}^\eta = \mathbf{A}^{[k]}$ \end{algorithmic} \end{algorithm} One question remaining for Algorithm~\ref{alg:ALG3} is how to choose the size of $\eta$, which balances the trade-off between computation time and estimation accuracy. Specifically, under the parameterization in Equation~(\ref{sink_dist}), a larger $\eta$ yields a more accurate estimate of the Wasserstein distance at the price of a longer computation time \cite{genevay2019sample}. Algorithm~\ref{alg:ALG3} requires a computational cost of the order $O(n^2\log(n)pK)$, where $K$ is the number of iterations. It is known that $K=O(\epsilon^{-2})$ iterations suffice for Algorithm~\ref{alg:ALG3} to achieve the desired accuracy.
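To make the scaling loop concrete, below is a minimal NumPy sketch of Algorithm~\ref{alg:ALG3} (our own simplified implementation, not the reference code of \cite{cuturi2013sinkhorn}); the toy cost matrix, the marginals, and the choice $\eta=10$ are ours.

```python
import numpy as np

def sinkhorn(C, p, q, eta=10.0, tol=1e-8, max_iter=10_000):
    """Minimal Sinkhorn-Knopp iteration: alternately rescale the rows and
    columns of A = exp(-eta * C) so that they match the marginals p and q."""
    A = np.exp(-eta * C)
    u = np.ones_like(p)                    # row scalings,    D1 = diag(u)
    v = np.ones_like(q)                    # column scalings, D2 = diag(v)
    for _ in range(max_iter):
        u = p / (A @ v)                    # fix the row sums
        v = q / (A.T @ u)                  # fix the column sums
        P = u[:, None] * A * v[None, :]
        # stop once P is close enough to the transport polytope M(p, q)
        if np.abs(P.sum(axis=1) - p).sum() + np.abs(P.sum(axis=0) - q).sum() < tol:
            break
    return P

# Toy example: two uniform histograms on four points of a line.
x = np.arange(4.0)
C = (x[:, None] - x[None, :]) ** 2         # squared-distance cost matrix
p = q = np.full(4, 0.25)
P = sinkhorn(C, p, q)
cost = (P * C).sum()                       # regularized transport cost
```

The two update lines are the alternating row and column renormalizations described above, written multiplicatively in terms of the scaling vectors $u$ and $v$, so that the returned coupling has the form $\mbox{diag}(u)\,e^{-\eta\mathbf{C}}\,\mbox{diag}(v)$.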
Recently, \cite{altschuler2017near} proposed a new greedy coordinate descent variant of the Sinkhorn algorithm with the same theoretical guarantees and a significantly smaller number of iterations. With the help of Algorithm~\ref{alg:ALG3}, the regularized optimal transport problem can be solved reliably and efficiently in cases when $n\approx 10^4$ \cite{cuturi2013sinkhorn,genevay2016stochastic}. \subsection{Sinkhorn Algorithms with the Nystr$\ddot{\mbox{o}}$m Method} Although the Sinkhorn-Knopp algorithm has already yielded impressive algorithmic benefits, its computational complexity and memory usage are of the order of $n^2$, since the algorithm involves the calculation of the $n\times n$ matrix $e^{-\eta\mathbf{C}}$. Such a quadratic computational cost makes the calculation of Sinkhorn distances prohibitively expensive on datasets with millions of observations. To alleviate the computational burden, \cite{altschuler2019massively} proposed to replace the computation of the entire matrix $e^{-\eta\mathbf{C}}$ with a low-rank approximation. Computing such approximations is a problem that has long been studied in machine learning under different names, including the Nystr$\ddot{\mbox{o}}$m method \cite{williams2001using,wang2013improving}, sparse greedy approximations \cite{smola2000sparse}, incomplete Cholesky decomposition \cite{fine2001efficient}, and CUR matrix decomposition \cite{mahoney2009cur}. These methods have drawn great attention in the subsampling literature due to their close relationship to the {\it algorithmic leveraging} approach \cite{ma2015leveraging,meng2017effective,zhang2018statistical,ma2020asymptotic}, which has been widely applied in linear regression models \cite{mahoney2011randomized,drineas2012fast,ma2015statistical}, logistic regression \cite{wang2018optimal}, and streaming time series \cite{xie2019online}.
Among the aforementioned low-rank approximation methods, the Nystr$\ddot{\mbox{o}}$m method is arguably the most extensively used one in the literature \cite{wang2015practical,mahoney2016lecture}. We now briefly introduce the Nystr$\ddot{\mbox{o}}$m method, followed by the fast Sinkhorn algorithm proposed in \cite{altschuler2019massively} that utilizes the Nystr$\ddot{\mbox{o}}$m method for low-rank matrix approximation. Let $\mathbf{K}\in\mathbb{R}^{n\times n}$ be the matrix that we aim to approximate. Let $s<n$ be a positive integer, $\mathbf{S}$ be an $n\times s$ column selection matrix \footnote{A column selection matrix is a matrix whose elements all equal zero, except that there is exactly one element in each column that equals one.}, and $\mathbf{R}=\mathbf{K}\mathbf{S}\in\mathbb{R}^{n\times s}$ be the so-called sketch matrix of $\mathbf{K}$. In other words, $\mathbf{R}$ is a matrix that contains certain columns of $\mathbf{K}$. Consider the optimization problem \begin{eqnarray}\label{nystrom} \widetilde{\mathbf{X}}=\underset{\mathbf{X}\in\mathbb{R}^{s\times s}}{\mbox{argmin}}\|\mathbf{S}^\T(\mathbf{K}-\mathbf{R}\mathbf{X}\mathbf{R}^\T)\mathbf{S}\|^2_F, \end{eqnarray} where $\|\cdot\|_F$ denotes the Frobenius norm. Equation~(\ref{nystrom}) suggests that the matrix $\mathbf{R}\widetilde{\mathbf{X}}\mathbf{R}^\T$ can be utilized as a low-rank approximation of $\mathbf{K}$, since such a matrix is, with respect to the sketched norm above, the closest one to $\mathbf{K}$ among all the positive semi-definite matrices that have rank at most $s$. Let $(\cdot)^+$ denote the Moore-Penrose inverse of a matrix. It is known that the minimizer of the optimization problem~(\ref{nystrom}) takes the form \begin{eqnarray*} \widetilde{\mathbf{X}} = (\mathbf{S}^\T\mathbf{R})^+(\mathbf{S}^\T\mathbf{K}\mathbf{S})(\mathbf{R}^\T\mathbf{S})^+ = (\mathbf{S}^\T\mathbf{K}\mathbf{S})^+; \end{eqnarray*} see \cite{wang2015practical} for technical details.
Consequently, we have the following low-rank approximation of $\mathbf{K}$, \begin{eqnarray*} \mathbf{K}\approx \mathbf{R}(\mathbf{S}^\T\mathbf{K}\mathbf{S})^+\mathbf{R}^\T, \end{eqnarray*} and such an approximation is called the Nystr$\ddot{\mbox{o}}$m method, as illustrated in Fig.~\ref{fig:nystron}. It is known that the Nystr$\ddot{\mbox{o}}$m method is highly efficient and can reliably be run on problems of size $n\approx 10^6$ \cite{wang2015practical}. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.95\textwidth]{figure/nystron.png} \end{tabular} \caption{Illustration for the Nystr$\ddot{\mbox{o}}$m method. }\label{fig:nystron} \end{center} \vspace*{-0.2in} \end{figure} Algorithm~\ref{alg:ALG4} introduces NYS-SINK \cite{altschuler2019massively}, i.e., the Sinkhorn algorithm implemented with the Nystr$\ddot{\mbox{o}}$m method. The notations are analogous to the ones in Algorithm~\ref{alg:ALG3}. \begin{algorithm} \caption{{\sc Nys-sink($\mathbf{A}, \mathcal{M}(\bm{p},\bm{q}),\epsilon$,$s$)}} \label{alg:ALG4} \begin{algorithmic} \State \textbf{Input:} $\mathbf{A}$, $\bm{p}$, $\bm{q}$, $s$ \State \textit{Step 1:} Calculate the Nystr$\ddot{\mbox{o}}$m approximation of $\mathbf{A}$ (with rank $s$), denoted by $\widetilde{\mathbf{A}}.$ \State \textit{Step 2:} $\widetilde{\mathbf{P}}^\eta$ = {\sc Sinkhorn($\widetilde{\mathbf{A}}, \mathcal{M}(\bm{p},\bm{q}),\epsilon$)} \State \textbf{Output:} $\widetilde{\mathbf{P}}^\eta$ \end{algorithmic} \end{algorithm} Algorithm~\ref{alg:ALG4} requires a memory cost of the order $O(ns)$ and a computational cost of the order $O(ns^2p)$. When $s\ll n$, these costs are significant reductions compared with $O(n^2)$ and $O(n^2\log(n)p)$ for Algorithm~\ref{alg:ALG3}, respectively. \cite{altschuler2019massively} reported that Algorithm~\ref{alg:ALG4} can reliably be run on problems of size $n\approx 10^6$ on a single laptop.
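As a small numerical illustration (with our own toy data and the arbitrary choices $n=50$, $s=15$, $\eta=1$), the sketch below forms the approximation $\mathbf{R}(\mathbf{S}^\T\mathbf{K}\mathbf{S})^{+}\mathbf{R}^\T$ for a Gibbs kernel matrix $e^{-\eta\mathbf{C}}$ using uniformly subsampled columns:

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom approximation K ~ R (S^T K S)^+ R^T, where R = K[:, idx] holds
    the sampled columns and S^T K S is the corresponding principal submatrix."""
    R = K[:, idx]
    W = K[np.ix_(idx, idx)]
    # truncated pseudo-inverse for numerical stability on a near-singular W
    return R @ np.linalg.pinv(W, rcond=1e-10) @ R.T

# Gibbs kernel e^{-eta*C} of points on a line; its spectrum decays quickly,
# so a modest rank already gives an accurate approximation.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
C = (x[:, None] - x[None, :]) ** 2
K = np.exp(-1.0 * C)                               # eta = 1

idx = rng.choice(50, size=15, replace=False)       # uniform column subsampling
K_tilde = nystrom(K, idx)
rel_err = np.linalg.norm(K - K_tilde) / np.linalg.norm(K)
```

Uniform subsampling corresponds to a randomly chosen column selection matrix $\mathbf{S}$; the weighted and recursive sampling schemes discussed below typically perform better when the spectrum decays slowly.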
There are two fundamental questions when implementing the Nystr$\ddot{\mbox{o}}$m method in practice: (1) how to decide the size of $s$; and (2) given $s$, how to construct the column selection matrix $\mathbf{S}$. For the latter question, we refer to \cite{gittens2016revisiting} for an extensive review of how to construct $\mathbf{S}$ through weighted random subsampling. There also exist recursive strategies \cite{musco2017recursive} for a potentially more effective construction of $\mathbf{S}$. For the former question, various data-driven strategies have been proposed to determine a size of $s$ that is adaptive to the low-dimensional structure of the data. These strategies are developed under different model setups, including kernel ridge regression \cite{gittens2016revisiting,musco2017recursive,calandriello2020analysis}, kernel K-means \cite{he2018kernel,wang2019scalable}, and so on. Recently, \cite{an2021efficient} further improved the efficiency through Nesterov's smoothing technique. Returning to the optimal transport problem of our interest, \cite{altschuler2019massively} assumed the data lie on a low-dimensional manifold, and the authors developed a data-driven strategy to determine the effective dimension of such a manifold. \section{Projection-based Optimal Transport Methods} In the cases when $n\gg p$, one can utilize projection-based optimal transport methods for potentially faster calculation as well as smaller memory consumption, compared with regularization-based optimal transport methods. These projection-based methods build upon the key fact that the empirical one-dimensional OTM under the $L_2$ norm is equivalent to sorting. Utilizing such a fact, the projection-based OT methods tackle the problem of estimating a $p$-dimensional OTM by breaking it down into a series of subproblems, each of which finds a one-dimensional OTM using projected samples \cite{pitie2005n,pitie2007automated,bonneel2015sliced,rabin2011wasserstein}.
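To illustrate the one-dimensional building block, here is a small sketch (with toy data of our own) of the sort-and-match construction for two samples of equal size:

```python
import numpy as np

def otm_1d(x, y):
    """Empirical one-dimensional OTM under the L2 cost for equal-size samples:
    send the i-th smallest point of x to the i-th smallest point of y."""
    order = np.argsort(x)          # ranks of the source points
    mapped = np.empty_like(x)
    mapped[order] = np.sort(y)     # i-th smallest x -> i-th smallest y
    return mapped

x = np.array([0.3, -1.2, 2.0, 0.7])
y = np.array([5.0, 6.0, 7.0, 8.0])
mapped = otm_1d(x, y)              # [6., 5., 8., 7.]: the ranks of x are kept
```

Because only a sort is needed, each one-dimensional subproblem costs $O(n\log(n))$, which is the source of the per-iteration costs quoted below.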
The projection direction can be selected either at random or deterministically, based on different criteria. Generally speaking, the computational cost of these projection-based methods is approximately proportional to $n$, and their memory cost is of the order $O(np)$, a significant reduction from $O(n^2)$ when $p\ll n$. We will cover some representatives of the projection-based OT methods in this section. \subsection{Random Projection OT Method} The random projection method, also called the Radon probability density function (PDF) transformation method, was first proposed in \cite{pitie2005n} for transferring the color between different images. Intuitively, an image can be represented as a three-dimensional sample in the RGB color space, in which each pixel of the image is an observation. The goal of color transfer is to find a transport map $\phi$ such that the color of the transformed source image follows the same distribution as the color of the target image. Although the map $\phi$ does not have to be the OTM in this problem, the random projection method proposed in \cite{pitie2005n} can be regarded as an estimation method for OTM. The random projection method is built upon the fact that two PDFs are identical if their marginal distributions along all possible one-dimensional projection directions are identical. Since it is impossible to consider all possible projection directions in practice, the random projection method utilizes the Monte Carlo method and considers a sequence of randomly generated projection directions. The details of the random projection method are summarized in Algorithm~\ref{alg:ALG1}. The computational cost of Algorithm~\ref{alg:ALG1} is of the order $O(n\log(n)pK)$, where $K$ is the number of iterations until convergence. We illustrate Algorithm~\ref{alg:ALG1} in Fig.~\ref{fig:alg1}.
\begin{algorithm}[ht] \caption{Random projection method for OTM} \label{alg:ALG1} \begin{algorithmic} \State \textbf{Input:} the source matrix $\mathbf{X}\in \mathbb{R}^{n\times p}$ and the target matrix $\mathbf{Y}\in \mathbb{R}^{n\times p}$ \State $k\leftarrow 0$,\enspace $\mathbf{X}^{[0]}\leftarrow \mathbf{X}$ \Repeat \State (a) generate a random projection direction $\bm{\zeta}_k\in\mathbb{R}^p$ \State (b) find the one-dimensional OTM $\phi^{(k)}$ that matches $\mathbf{X}^{[k]}\bm{\zeta}_k$ to $\mathbf{Y}\bm{\zeta}_k$ \State (c) $\mathbf{X}^{[k+1]}\leftarrow \mathbf{X}^{[k]}+(\phi^{(k)}(\mathbf{X}^{[k]}\bm{\zeta}_k)-\mathbf{X}^{[k]}\bm{\zeta}_k)\bm{\zeta}_k^\T$ \State (d) $k\leftarrow k+1$ \Until converge \State The final estimator is given by $\widehat{\phi}:\mathbf{X}\rightarrow\mathbf{X}^{[k]}$ \end{algorithmic} \end{algorithm} \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.95\textwidth]{figure/alg1.png} \end{tabular} \caption{Illustration of Algorithm~\ref{alg:ALG1}. In the $k$-th iteration, a random projection direction $\bm{\zeta}_k$ is generated, and the one-dimensional OTM that matches the projected sample $\mathbf{X}^{[k]}\bm{\zeta}_k$ to $\mathbf{Y}\bm{\zeta}_k$ is calculated. }\label{fig:alg1} \end{center} \vspace*{-0.2in} \end{figure} Instead of randomly generating the projection directions using the Monte Carlo method, one can also generate a sequence of projection directions with ``low-discrepancy'', i.e., directions that are distributed as dispersed as possible on the unit sphere. The low-discrepancy sequence has been widely applied in the field of quasi-Monte Carlo and has been extensively employed for numerical integration \cite{owen2003quasi} and subsampling in big data \cite{meng2020more}. We refer to \cite{lemieux2009book,leobacher2014introduction,dick2013high,glasserman2013monte} for more in-depth discussions on quasi-Monte Carlo methods.
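A minimal NumPy sketch of Algorithm~\ref{alg:ALG1} is given below; for simplicity it runs a fixed number of iterations instead of a formal convergence check, and the toy Gaussian samples are our own.

```python
import numpy as np

def otm_1d(x, y):
    # one-dimensional OTM under the L2 cost: sort-and-match (equal sample sizes)
    order = np.argsort(x)
    mapped = np.empty_like(x)
    mapped[order] = np.sort(y)
    return mapped

def random_projection_otm(X, Y, n_iter=200, seed=0):
    """Sketch of the random projection method: repeatedly fix the 1-d marginal
    of the current iterate along a random unit direction to match that of Y."""
    rng = np.random.default_rng(seed)
    Xk = X.copy()
    for _ in range(n_iter):
        z = rng.normal(size=X.shape[1])
        z /= np.linalg.norm(z)                 # random direction zeta_k
        proj = Xk @ z
        # step (c): move each point along z by its 1-d transport displacement
        Xk += np.outer(otm_1d(proj, Y @ z) - proj, z)
    return Xk

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))                          # source sample
Y = rng.normal(size=(300, 2)) + np.array([5.0, 5.0])   # shifted target sample
X_mapped = random_projection_otm(X, Y)
```

After enough iterations the mapped sample approximately matches the target distribution; for instance, its sample mean converges to that of $\mathbf{Y}$, since each iteration matches the projected means exactly along the sampled direction.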
It is reported in \cite{pitie2005n} that using a low-discrepancy sequence of projection directions yields a potentially faster convergence rate. Closely related to the random projection method is the sliced method. The sliced method modifies the random projection method by considering a large set of random directions from $\mathbb{S}^{d-1}$ in each iteration, where $\mathbb{S}^{d-1}$ is the unit sphere in $\mathbb{R}^{d}$. The ``mean map'' of the one-dimensional OTMs over these random directions is taken as a component of the final estimate of the desired OTM. Let $L$ be the number of projection directions considered in each iteration. Consequently, the computational cost of the sliced method is of the order $O(n\log(n)pKL)$, where $K$ is the number of iterations until convergence. Although the sliced method is $L$ times slower than the random projection method, in practice it is usually observed that the former yields a more robust estimate than the latter. We refer to \cite{bonneel2015sliced,rabin2011wasserstein} for more implementation details of the sliced method. \subsection{Projection Pursuit OT Method} Although the random projection method works reasonably well in practice, for moderate or large $p$ it suffers from slow or even no convergence due to the nature of randomly selected projection directions. To address this issue, \cite{meng2019large} introduced a novel statistical approach to estimate large-scale OTMs \footnote{The code is available at https://github.com/ChengzijunAixiaoli/PPMM.}. The proposed method, named projection pursuit Monge map (PPMM), combines the ideas of projection pursuit \cite{friedman1981projection} and sufficient dimension reduction \cite{li2018sufficient}. The projection pursuit technique is similar to boosting in that it searches for the next optimal direction based on the residual of the previous ones.
In each iteration, PPMM aims to find the ``optimal" projection direction, guided by sufficient dimension reduction techniques, instead of using a randomly selected one. Utilizing these informative projection directions, it is reported in \cite{meng2019large} that the PPMM method yields a significantly faster convergence rate than the random projection method. We now introduce some essential background on sufficient dimension reduction techniques, followed by the details of the PPMM method. Consider a regression problem with a univariate response $T$ and a $p$-dimensional predictor $Z$. Sufficient dimension reduction techniques aim to reduce the dimension of $Z$ while preserving its regression relation with $T$. In other words, such techniques seek a set of linear combinations of $Z$, say $\mathbf{B}^\T Z$ with some projection matrix $\mathbf{B}\in~\mathbb{R}^{p \times q}$ ($q<p$), such that $T$ depends on $Z$ only through $\mathbf{B}^{\T}Z$, i.e., \begin{eqnarray}\label{eqn:sdr} T \mathrel{\text{{$\perp\mkern-10mu\perp$}}} Z | \mathbf{B}^{\T}Z. \end{eqnarray} Let ${\cal S}(\mathbf{B})$ denote the column space of $\mathbf{B}$. We call ${\cal S}(\mathbf{B})$ a sufficient dimension reduction subspace (s.d.r. subspace) if $\mathbf{B}$ satisfies Formulation~(\ref{eqn:sdr}). Moreover, if the intersection of all possible s.d.r. subspaces is still an s.d.r. subspace, we call it the central subspace and denote it as ${\cal S}_{T|Z}$. Note that the central subspace is the s.d.r. subspace with the minimum number of dimensions. Some popular sufficient dimension reduction techniques include sliced inverse regression (SIR) \cite{li1991sliced}, principal Hessian directions (PHD) \cite{li1992principal}, the sliced average variance estimator (SAVE) \cite{cook1991sliced}, and directional regression (DR) \cite{li2007directional}, among others. Under some regularity conditions, it can be shown that these methods induce an s.d.r. subspace that equals the central subspace.
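As a concrete illustration of how such a direction can be extracted, the sketch below implements a simplified SAVE step for the binary slice variable that labels the two samples; it follows the textbook form of the SAVE kernel matrix and is not necessarily the exact estimator used in \cite{meng2019large}.

```python
import numpy as np

def save_direction(X, Y):
    """Simplified SAVE step for the binary source-vs-target label: take the top
    eigenvector of M = sum_s p_s (I - Cov(Z_std | slice s))^2 computed on the
    standardized pooled sample, then map it back to the original scale."""
    Z = np.vstack([X, Y])
    n, p = Z.shape
    w, U = np.linalg.eigh(np.cov(Z, rowvar=False))
    root_inv = U @ np.diag(w ** -0.5) @ U.T        # Sigma^{-1/2}
    Zs = (Z - Z.mean(axis=0)) @ root_inv           # standardized pooled sample
    M = np.zeros((p, p))
    for S in (Zs[: X.shape[0]], Zs[X.shape[0]:]):  # the two slices
        D = np.eye(p) - np.cov(S, rowvar=False)
        M += (S.shape[0] / n) * D @ D
    v = np.linalg.eigh(M)[1][:, -1]                # top eigenvector of M
    xi = root_inv @ v                              # back to original coordinates
    return xi / np.linalg.norm(xi)

# Two samples differing only in the variance of the first coordinate:
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
Y = rng.normal(size=(2000, 3)) * np.array([3.0, 1.0, 1.0])
xi = save_direction(X, Y)   # close to +/- e_1, the axis with the discrepancy
```

Because SAVE compares slice covariances, the recovered direction here is the axis along which the two samples' variances differ, which is exactly the kind of ``informative'' direction PPMM projects onto.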
Consider estimating the OTM between a source sample and a target sample. One can form a regression problem using these two samples, i.e., add a binary response variable by labeling them as 0 and 1, respectively. The PPMM method utilizes sufficient dimension reduction techniques to select the most ``informative'' projection direction. Here, we call a projection direction $\bm{\xi}$ the most informative one if the projected samples have the most substantial ``discrepancy.'' The discrepancy can be measured by the difference of the $k$th-order moments or central moments. For example, the SIR method measures the discrepancy using the difference of means, while the SAVE method measures the discrepancy using the difference of variances. The authors in \cite{meng2019large} considered the SAVE method and showed that the most informative projection direction is the eigenvector corresponding to the largest eigenvalue of the projection matrix $\mathbf{B}$ estimated by SAVE. The detailed algorithm for PPMM is summarized in Algorithm \ref{alg:ALG2} as follows.
\begin{algorithm} \caption{Projection pursuit Monge map (PPMM)} \label{alg:ALG2} \begin{algorithmic} \State \textbf{Input:} two matrices $\mathbf{X}\in \mathbb{R}^{n\times p}$ and $\mathbf{Y}\in \mathbb{R}^{n\times p}$ \State $k\leftarrow 0$,\enspace $\mathbf{X}^{[0]}\leftarrow \mathbf{X}$ \Repeat \State (a) calculate the most informative projection direction $\bm{\xi}_k\in\mathbb{R}^p$ between $\mathbf{X}^{[k]}$ and $\mathbf{Y}$ using SAVE \State (b) find the one-dimensional OTM $\phi^{(k)}$ that matches $\mathbf{X}^{[k]}\bm{\xi}_k$ to $\mathbf{Y}\bm{\xi}_k$ \State (c) $\mathbf{X}^{[k+1]}\leftarrow \mathbf{X}^{[k]}+(\phi^{(k)}(\mathbf{X}^{[k]}\bm{\xi}_k)-\mathbf{X}^{[k]}\bm{\xi}_k)\bm{\xi}_k^\T$ \State (d) $k\leftarrow k+1$ \Until converge \State The final estimator is given by $\widehat{\phi}:\mathbf{X}\rightarrow\mathbf{X}^{[k]}$ \end{algorithmic} \end{algorithm} The computational cost for Algorithm~\ref{alg:ALG2} mainly resides in steps (a) and (b). Within each iteration, steps (a) and (b) require computational costs of the order $O(np^2)$ and $O(n\log(n))$, respectively. Consequently, the overall computational cost for Algorithm~\ref{alg:ALG2} is of the order $O(Knp^2+Kn\log(n))$, where $K$ is the number of iterations. Although not theoretically guaranteed, it is reported in \cite{meng2019large} that $K$ is approximately proportional to $p$ in practice, in which case the computational cost for PPMM becomes $O(np^3+n\log(n)p)$. Compared with the computational cost for the Sinkhorn algorithm, i.e., $O(n^2\log(n)p)$, PPMM has a lower order of computational cost when $p\ll n$. We illustrate Algorithm~\ref{alg:ALG2} in Fig.~\ref{fig:alg2}. Although not covered in this section, the PPMM method can be easily extended to calculate the OTP with minor modifications \cite{meng2019large}. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.95\textwidth]{figure/alg2.png} \end{tabular} \caption{Illustration of Algorithm~\ref{alg:ALG2}.
The left panel shows that in the $k$-th iteration, the most informative projection direction $\bm{\xi}_k$ is calculated by SAVE. The right panel shows that the one-dimensional OTM is calculated to match the projected sample $\mathbf{X}^{[k]}\bm{\xi}_k$ to $\mathbf{Y}\bm{\xi}_k$. }\label{fig:alg2} \end{center} \vspace*{-0.2in} \end{figure} \section{Applications in Biomedical Research} In this section, we present some cutting-edge applications of optimal transport methods in biomedical research. We first present how optimal transport methods can be utilized to identify developmental trajectories of single cells \cite{schiebinger2019optimal}. We then review a novel method for augmenting single-cell RNA-seq data \cite{marouf2020realistic}. The method utilizes the technique of generative adversarial networks (GAN), which is closely related to optimal transport methods, as we will discuss later. \subsection{Identify Development Trajectories in Reprogramming} The rapid development of single-cell RNA sequencing (scRNA-seq) technologies has enabled researchers to identify cell types in a population. These technologies help researchers to answer some fundamental questions in biology, including how individual cells differentiate to form tissues, how tissues function in a coordinated and flexible fashion, and which gene regulatory mechanisms support these processes \cite{tanay2017scaling}. Although scRNA-seq technologies have been opening up new ways to tackle the aforementioned questions, other questions remain. Since these technologies require destroying cells in the course of sequencing their gene expression profiles, researchers cannot follow the expression of the same cell across time. Without further analysis, researchers thus cannot answer questions such as: what was the origin of certain cells at earlier stages, and what are their possible fates at later stages? Which regulatory programs control the dynamics of cells, and how?
To answer these questions, one natural solution is to develop computational tools to connect the cells within different time points into a continuous cell trajectory. In other words, although different cells are recorded at each time point, the goal is to identify, for each cell, the ones that are analogous to its origins at earlier stages and to its fates at later stages. A large number of methods have been developed to achieve this goal; see \cite{kester2018single,saelens2019comparison,farrell2018single,fischer2019inferring} and the references therein. A novel approach was proposed in \cite{schiebinger2019optimal} to reconstruct cell trajectories. The authors model the differentiating population of cells as a stochastic process on a high-dimensional expression space. Recall that different cells are recorded independently at different time points. Consequently, what is unknown to the researchers is the joint distribution of expression of the unobserved cells between different pairs of time points. To infer how the differentiation process evolves over time, the authors assume the expression of each cell changes within a relatively small range over short periods. Based on such an assumption, one can thus infer the differentiation process through optimal transport methods, which naturally give the transport map, with the minimum transport cost, between the two distributions corresponding to two time points. Figure~\ref{fig:trajectory} illustrates the idea of searching for the ``cell trajectories''. The gene expression $\mathbf{X}_t$ of any set of cells at time $t$ can be transported to a later time point $t+1$ according to the OTP from the distribution over $\mathbf{X}_t$ to the distribution over the cells at time $t+1$. Analogously, $\mathbf{X}_t$ can be transported from a former time point $t-1$ by reversing the OTP from the distribution over $\mathbf{X}_t$ to the distribution over the cells at time $t-1$ (the left and middle panels in Fig.~\ref{fig:trajectory}).
The trajectory combines the transportation between any two neighboring time points (the right panel in Fig.~\ref{fig:trajectory}). Thus, the OTP helps to infer the differentiation process of cells at any time along the trajectory. \begin{figure}[h] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.95\textwidth]{figure/trajectory0.png} \end{tabular} \caption{Illustration for cell trajectories along time. Left: cells at each time point. Middle: the OTP between distributions over cells at neighboring time points. Right: cell trajectories based on the OTP. }\label{fig:trajectory} \end{center} \vspace*{-0.2in} \end{figure} The authors in \cite{schiebinger2019optimal} used optimal transport methods to calculate the differentiation process between consecutive time points and then composed all the transport maps together to obtain the cell trajectories over long time intervals. The authors also considered unbalanced transport \cite{chizat2018scaling} for modeling cellular proliferation, i.e., cell growth and death. Analyzing around 315,000 cell profiles sampled densely across 18 days, the authors found that reprogramming unleashes a much wider range of developmental programs and subprograms than previously characterized. \subsection{Data Augmentation for Biomedical Data} Recent advances in scRNA-seq technologies have enabled researchers to measure the expression of thousands of genes at the same time and to scrutinize the complex interactions in biological systems. Despite wide applications, such technologies may fail to quantify all the complexity in biological systems in cases when the number of observations is relatively small, due to economic or ethical considerations or simply because the number of available patients is small \cite{munafo2017manifesto}. A small sample size leads to biased results, since a small sample may not be a good representative of the population.
Such a problem arises not only in biomedical research but also in various other fields, including computer vision and deep learning, which require a considerable quantity and diversity of data during the training process \cite{lecun2015deep,goodfellow2016deep}. In these fields, data augmentation is a widely applied strategy to alleviate the problem of small sample sizes without actually collecting new data. In computer vision, some elementary algorithms for data augmentation include cropping, rotating, and flipping; see \cite{shorten2019survey} for a survey. These algorithms, however, may not be suitable for augmenting data in biomedical research. Compared with these elementary algorithms, a more sophisticated approach for data augmentation is to use generative models, including generative adversarial nets (GAN) \cite{goodfellow2014generative} and the ``decoder'' network in variational autoencoders \cite{kingma2013auto}, among others. Generative models aim to generate ``fake'' samples that are indistinguishable from the genuine ones. The fake samples then can be used, alongside the genuine ones, in downstream analysis to artificially increase sample sizes. Generative models have been widely used for generating realistic images \cite{dosovitskiy2016generating,liu2017auto}, songs \cite{blaauw2016modeling,engel2017neural}, and videos \cite{liang2017dual,vondrick2016generating}. Many variants of the GAN method have been proposed recently, and of particular interest is the Wasserstein GAN \cite{arjovsky2017wasserstein}, which utilizes the Wasserstein distance instead of the Jensen--Shannon divergence in the standard GAN for measuring the discrepancy between two samples. The authors showed that the Wasserstein GAN yields a more stable training process than the standard GAN, since the Wasserstein distance appears to be a more suitable metric than the Jensen--Shannon divergence in GAN training.
Nowadays, GANs have been widely used for data augmentation in various biomedical research \cite{frid2018synthetic, frid2018gan,madani2018chest}. Recently, \cite{marouf2020realistic} proposed a novel data augmentation method for scRNA-seq data. The proposed method, called single-cell GAN, is developed based on the Wasserstein GAN. The authors showed that the proposed method improves downstream analyses such as the detection of marker genes, the robustness and reliability of classifiers, and the assessment of novel analysis algorithms, resulting in a potential reduction of the number of animal experiments and of costs. Note that generative models are closely related to optimal transport methods. Intuitively, a generative model is equivalent to finding a transport map from random noise with a simple distribution, e.g., a Gaussian or uniform distribution, to the underlying population distribution of the genuine sample. Recent studies suggest that optimal transport methods outperform the Wasserstein GAN for approximating probability measures in some special cases \cite{lei2019geometric,lei2020geometric}. Consequently, researchers may consider using optimal transport methods instead of GAN models for data augmentation in biomedical research for potentially better performance. \section*{Acknowledgment} The authors would like to acknowledge the support from the U.S. National Science Foundation under grants DMS-1903226 and DMS-1925066, and the U.S. National Institutes of Health under grant R01GM122080.
\section{Introduction} A fundamental problem in fractal geometry is how projections affect dimension. Recall the classical Marstrand--Mattila projection theorem: Let $E\subset \mathbb{R}^{n}, n\geq2,$ be a Borel set with Hausdorff dimension $s$. \begin{itemize} \item If $s\leq m$, then the orthogonal projection of $E$ onto almost all $m$-dimensional subspaces has Hausdorff dimension $s$. \item If $s>m$, then the orthogonal projection of $E$ onto almost all $m$-dimensional subspaces has positive $m$-dimensional Lebesgue measure. \end{itemize} In 1954 Marstrand \cite{Marstrand} proved this projection theorem in the plane. In 1975 Mattila \cite{Mattila1975} proved it in general dimension via Kaufman's 1968 potential-theoretic methods \cite{Kaufman}. Furthermore, the bounds on the dimensions of the exceptional sets of projections have been well studied. We state these results in the following form, Theorem \ref{thm:M}, which comes from a recent survey paper of Falconer, Fraser, and Jin \cite{Falconer}, and from Mattila \cite[Corollary 5.12]{Mattila}. Theorem \ref{thm:M} $(a)$ was proved by Kaufman \cite{Kaufman} for $m=1, n=2$, and by Mattila \cite{Mattila1975} for general $1\leq m<n$. The estimate $(b)$ was proved by Falconer \cite{Falconer1982}, and all known proofs depend on the Fourier transform. The estimate $(c)$ was proved by Peres and Schlag \cite{Peres} under their generalized projections. \begin{theorem}[Bounds on the dimensions of the exceptional sets]\label{thm:M} Let $E \subset \mathbb{R}^{n}$ be a Borel set and $s=\dim E$. (a) If $s\leq m$ and $t \in (0, s]$, then \[ \dim \{V \in G(n,m) : \dim \pi_{V} (E) < t \} \leq m(n - m) -(m-t). \] (b) If $s> m$, then \[ \dim \{V \in G(n,m) : \mathcal{H}^{m}(\pi_{V} (E)) = 0\} \leq m(n - m) -(s-m). \] (c) If $s>2m,$ then \[ \dim \{V \in G(n,m) : \pi_{V}(E) \text{ has empty interior }\} \leq m(n - m) -(s-2m).
\] \end{theorem} Here $G(n,m)$, the Grassmannian manifold, denotes the collection of all $m$-dimensional linear subspaces of $\mathbb{R}^{n}$. The notation $\dim E$ denotes the Hausdorff dimension of the set $E$, and $\pi_{V}: \mathbb{R}^{n}\rightarrow V$ stands for the orthogonal projection onto $V$. Recall that the Grassmannian manifold $G(n,m)$ has dimension $m(n-m)$; this is why it appears for comparison in the above estimates. Note that the upper bound in $(b)$ is sharp, while the knowledge of the upper bounds in $(a)$ and $(c)$ is not complete; see Mattila \cite[Chapter 5]{Mattila} for more details. Recently there has been a growing interest in studying finite field versions of some classical problems arising from Euclidean spaces. For instance, there are finite field Kakeya sets (also called Besicovitch sets) \cite{Dvir}, \cite{Green}, \cite{Wolff}, \cite{Zhang}, and the finite field Erd\H{o}s/Falconer distance problem \cite{IsoevichRudnev}, \cite{Tao1}, etc. Motivated by the above works, we study projections in vector spaces over finite fields. We first fix some notation. Let $\mathbb{F}_p$ denote the finite field with $p$ elements, where $p$ is prime, and let $\mathbb{F}_{p}^{n}$ be the $n$-dimensional vector space over this field. The number of $m$-dimensional linear subspaces of $\mathbb{F}_{p}^{n}$ is ${n \choose m}_{p}$, the so-called Gaussian binomial coefficient; see \cite{Cameron}, \cite{Ko} for more details. Note that to obtain an $m$-dimensional subspace, it is sufficient to choose $m$ linearly independent vectors. For the first vector we have $p^{n}-1$ choices from $\mathbb{F}_{p}^{n}$ (every vector except the zero vector); for the second vector we have $p^{n}-p$ choices, so that the second vector is independent of the first one; and so on. In the end we have $(p^{n}-1)(p^{n}-p)\cdots (p^{n}-p^{m-1})$ choices in total. 
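As a quick sanity check (an illustration in Python, not part of the argument), the count of ordered tuples of linearly independent vectors can be verified by brute force for small parameters, here with $p=3$, $n=3$, $m=2$:

```python
from itertools import product

def span_size(vectors, p):
    """Number of distinct linear combinations of the given vectors over F_p."""
    span = set()
    for coeffs in product(range(p), repeat=len(vectors)):
        combo = tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) % p
                      for i in range(len(vectors[0])))
        span.add(combo)
    return len(span)

def count_independent_tuples(p, n, m):
    """Count ordered m-tuples of linearly independent vectors in F_p^n."""
    vecs = list(product(range(p), repeat=n))
    # a tuple is independent iff its span has the full p^m elements
    return sum(1 for tup in product(vecs, repeat=m)
               if span_size(tup, p) == p ** m)

p, n, m = 3, 3, 2
predicted = 1
for i in range(m):
    predicted *= p ** n - p ** i   # (p^n - 1)(p^n - p) = 26 * 24
assert count_independent_tuples(p, n, m) == predicted
```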
Note that for each $m$-dimensional subspace, there are $(p^{m}-1)(p^{m}-p)\cdots (p^{m}-p^{m-1})$ choices which generate (or span) the same subspace. It follows that (see \cite[Theorem 6.3]{Cameron}, \cite{Ko} for more details) \begin{equation}\label{eq:bino} {n \choose m}_{p}=\frac{(p^{n}-1)(p^{n}-p)\cdots (p^{n}-p^{m-1})}{(p^{m}-1)(p^{m}-p)\cdots (p^{m}-p^{m-1})}. \end{equation} One interesting fact about the Gaussian binomial coefficient is that \begin{equation*} {n \choose m}_{p} =(1+o(1)) p^{m(n-m)}, \end{equation*} where $o(1)$ denotes a quantity that goes to zero as $p$ goes to infinity. However, the following estimate is enough for our purpose. Throughout the paper, we assume that the prime number $p$ is large enough that \begin{equation}\label{eq:condition} p^{m(n-m)} \leq {n \choose m}_{p} \leq 2 p^{m(n-m)}. \end{equation} Note that the exponent $m(n-m)$ is the dimension of the real Grassmannian manifold. For convenience, we use the same notation $G(n,m)$ to denote the collection of all $m$-dimensional linear subspaces of $\mathbb{F}_{p}^{n}$. Before we give the definition of projections in vector spaces over finite fields, let us recall the orthogonal projections in Euclidean spaces. Let $E\subset \mathbb{R}^{n}$ and let $V$ be a subspace of $\mathbb{R}^{n}$. Then the orthogonal projection of $E$ onto $V$ is defined as \[ \pi_{V}(E)=\{x\in V: (x+V^{\perp})\cap E \neq \emptyset\}, \] where $V^{\perp}$ is the orthogonal complement of $V$. Note that the vector space $\mathbb{F}_{p}^{n}$ is not an inner product space (in general). For instance, Lagrange's (four square) theorem, that every natural number is the sum of four squares, implies that $\mathbb{F}_{p}^{n}$ is never an inner product space for $n\geq 4$. Therefore, the word orthogonal makes no sense in these spaces, and we need a new way to define `projection' in $\mathbb{F}_{p}^{n}$. The following is one such choice. 
Let $E$ be a subset of $\mathbb{F}_{p}^{n}$ and let $W$ be a non-trivial subspace of $\mathbb{F}_{p}^{n}$. Let $\pi^{W}(E)$ denote the collection of cosets of $W$ which intersect $E$, i.e., \[ \pi^{W}(E)=\{x+W: E\cap (x+W) \neq \emptyset, x\in \mathbb{F}_{p}^{n}\}. \] In this paper we are interested in the cardinality of $\pi^{W}(E)$. Let $|J|$ denote the cardinality of a set $J$. Observe that if $E\subset \mathbb{R}^{n}$ is a finite set (i.e., $|E|<\infty$) then \[ |\pi_{W^{\perp}}(E)|=|\pi^{W}(E)|. \] Observe also that for any set $E \subset \mathbb{F}_{p}^{n}$ and $W\in G(n,n-m)$ (by Lagrange's group theorem), \[ |\pi^{W}(E)|\leq \min\{|E|, p^{m}\}. \] In analogy with Theorem \ref{thm:M}, we have the following finite field version. \begin{theorem}\label{thm:maintheorem} Let $E \subset \mathbb{F}_{p}^{n}$. Then (a) for any $N<\frac{1}{2}|E|$, \[ |\{W\in G(n,n-m): |\pi^{W} (E)|\leq N\}| \leq 4 p^{(n-m)m-m}N; \] (b) for any $\delta \in (0,1)$, \[ |\{W\in G(n,n-m): |\pi^{W} (E)|\leq \delta p^{m}\}| \leq 2\left(\frac{\delta}{1-\delta}\right) p^{m(n-m)+m}|E|^{-1}. \] \end{theorem} We note that Theorem \ref{thm:maintheorem} $(a)$ for $m=n-1$ follows from Orponen's pair argument \cite[Estimate (2.1)]{Orponen0}, and Theorem \ref{thm:maintheorem} $(b)$ for $m=1,n=2$ follows from Murphy and Petridis \cite[Corollary 1]{MurphyPetridis}. We immediately have the following corollary via special choices of $N$ and $\delta$ in Theorem \ref{thm:maintheorem}. \begin{corollary}\label{cor:mainclaim} Let $E \subset \mathbb{F}_{p}^{n}$ with $|E|=p^{s}$. (a) If $s\leq m$ and $t \in (0, s]$, then \[ | \{W \in G(n,n-m) : |\pi^{W} (E)| \leq p^{t}/10 \}| \leq \frac{1}{2} p^{m(n - m) -(m - t)}. \] (b) If $s> m$, then \[ | \{W \in G(n,n-m) : |\pi^{W} (E)| \leq p^{m}/ 10 \}| \leq \frac{1}{2} p^{m(n - m) -(s-m)}. \] (c) If $s>2m$, then \[ | \{W \in G(n,n-m) :|\pi^{W} (E)|\neq p^{m} \}| \leq 4 p^{m(n - m) -(s-2m). 
\] \end{corollary} Corollary \ref{cor:mainclaim} $(c)$ follows from the choice $\delta=\frac{p^{m}-1}{p^{m}}$, together with the easy fact that if $|\pi^{W}(E)|>p^{m}-1$ then $|\pi^{W}(E)|=p^{m}$ (i.e., $E$ intersects each coset of $W$). Note that the exponents in Corollary \ref{cor:mainclaim} are the same as in Theorem \ref{thm:M}, and $|\pi^{W}(E)|=p^{m}$ corresponds to the existence of interior points in Theorem \ref{thm:M} $(c)$. I do not know whether these bounds are sharp. We formulate the following finite field version of a conjecture on the dimension bound of the exceptional set in the Euclidean plane. For more background, and partial improvements on the size of the exceptional set in the Euclidean plane, see \cite[Chapter 5]{Mattila}, \cite[Theorem 1.2]{Oberlin}, \cite{Orponen0}, \cite[Proposition 1.11]{Orponen1}, \cite{Orponen2}, \cite{Orponen3}. \begin{conjecture} Let $s/2\leq t\leq s\leq 1$ and let $E\subset \mathbb{F}_{p}^{2}$ satisfy $p^{s}/C\leq |E|\leq C p^{s}$. Then \[ | \{W \in G(2,1) : |\pi^{W} (E)|\leq p^{t} \} |\leq C(s,t, C) p^{2t-s}. \] \end{conjecture} In Euclidean space, there are various random fractal sets which have no exceptional set in the projection theorem; see \cite{SS} for more details and the references therein. For the finite field case, we study the projections of random sets in $\mathbb{F}_{p}^{n}$ (percolation on $\mathbb{F}_{p}^{n}$). We have the following results. \begin{theorem}\label{thm:small} For any $0<s\leq m$, there is a positive number $p_{0}=p_{0}(n,m,s)$ such that for any prime number $p\geq p_0$, there exists a subset $E\subset \mathbb{F}_{p}^{n}$ with $p^{s}/2\leq |E|\leq 2p^{s}$ such that \[ |\pi^{W}(E)|\geq |E|/24 \text{ for all $W\in G(n,n-m)$ }. 
\] \end{theorem} \begin{theorem}\label{thm:large} For any $m<s\leq n$, there is a positive number $p_{0}=p_{0}(n,m,s)$ such that for any prime number $p\geq p_0$, there exists a subset $E \subset \mathbb{F}_{p}^{n}$ with $p^{s}/2\leq |E|\leq 2p^{s}$ such that \[ |\pi^{W}(E)| = p^{m} \text{ for all $W\in G(n,n-m)$ }. \] \end{theorem} Fourier analysis plays an important role in many topics in fractal geometry. Furthermore, there are some results for which all known proofs depend on the Fourier transform, for example statement $(c)$ of Theorem \ref{thm:M}. Note that the proof of Theorem \ref{thm:maintheorem} $(b)$ also depends on the (discrete) Fourier transform. \begin{proposition}\label{pro:Fourier} Let $E\subset \mathbb{F}_{p}^{n}$ with $|\widehat{E}(\xi)|\leq C |E|^{\alpha}$ for all $\xi\neq 0$, where $\alpha \in [1/2, 1)$ and $C$ is a positive constant. (a) If $|E|\leq C_1 p^{\frac{m}{2-2\alpha}}$, then \[ |\pi^{W}(E)| \geq C_{2}|E|^{2-2\alpha} \text{ for all $W\in G(n,n-m)$ }. \] (b) If $|E|> C_1 p^{\frac{m}{2-2\alpha}}$, then \[ |\pi^{W}(E)| \geq p^{m}/2 \text{ for all $W\in G(n,n-m)$}. \] (c) If $|E|> C_3 p^{\frac{m}{1-\alpha}}$, then \[ |\pi^{W}(E)|=p^{m} \text{ for all } W\in G(n,n-m). \] The constants $C_{1}, C_{2}, C_{3}$ depend only on the constant $C$ and $\alpha$. \end{proposition} Here and in what follows, $\xi\neq 0$ means that $\xi$ is a non-zero vector of $\mathbb{F}_{p}^{n}$. For $E\subset \mathbb{F}_{p}^{n}$, we simply write $E(x)$ for the characteristic function of $E$, and $\widehat{E}$ for its discrete Fourier transform. Now we discuss the condition $\alpha \in [1/2, 1)$. By the definition of the Fourier transform (see \eqref{eq:dede}), for any subset $E\subset \mathbb{F}_{p}^{n}$, \begin{equation*} |\widehat{E}(\xi)|\leq |E| \text{ for all } \xi \neq 0. \end{equation*} For the case $\alpha=1/2$, Iosevich and Rudnev \cite{IsoevichRudnev} called these sets Salem sets. 
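The trivial bound $|\widehat{E}(\xi)|\leq |E|$, together with the Plancherel identity used throughout the paper, is easy to confirm numerically. The following Python sketch (a sanity check on an arbitrary small set, not part of the argument) does so in $\mathbb{F}_{5}^{2}$:

```python
import cmath
from itertools import product

p, n = 5, 2
E = [(0, 0), (1, 2), (3, 3), (4, 1)]   # an arbitrary small set in F_5^2

def fourier(E, xi, p):
    """widehat{E}(xi) = sum_{x in E} e(-x . xi), where e(t) = exp(2 pi i t / p)."""
    total = 0j
    for x in E:
        dot = sum(a * b for a, b in zip(x, xi)) % p
        total += cmath.exp(-2j * cmath.pi * dot / p)
    return total

coeffs = {xi: fourier(E, xi, p) for xi in product(range(p), repeat=n)}
# Plancherel: sum_xi |widehat{E}(xi)|^2 = p^n |E|
assert abs(sum(abs(c) ** 2 for c in coeffs.values()) - p ** n * len(E)) < 1e-9
# trivial bound |widehat{E}(xi)| <= |E| for xi != 0
assert all(abs(c) <= len(E) + 1e-9 for xi, c in coeffs.items() if xi != (0, 0))
```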
Formally, a subset $E\subset \mathbb{F}_{p}^{n}$ is called a $(C, s)$ Salem set if $p^{s}/C\leq |E|\leq Cp^{s}$ and, for any $\xi\neq 0$, \[ |\widehat{E}(\xi)|\leq C \sqrt{|E|}. \] Iosevich and Rudnev \cite{IsoevichRudnev} introduced finite field Salem sets in their study of the Erd\H{o}s/Falconer distance problem in vector spaces over finite fields. Note that this is a finite field version of the Salem sets in Euclidean spaces; see \cite[Chapter 3]{Mattila} for more details on Salem sets in Euclidean spaces. Roughly speaking, the Fourier coefficients of Salem sets have the `best possible' upper bound. This follows from the Plancherel identity, \[ \sum_{\xi \in \mathbb{F}_{p}^{n}}|\widehat{E}(\xi)|^{2}=p^{n}|E|. \] To be precise, let $E\subset \mathbb{F}_{p}^{n}$ with $|E|\leq p^{n}/2$ and $|\widehat{E}(\xi)|\leq |E|^{\alpha}$ for any $\xi\neq 0$. Then \[ p^{n}|E| -|E|^{2} = \sum_{\xi \neq 0}|\widehat{E}(\xi)|^{2}\leq p^{n}|E|^{2\alpha}. \] It follows that \[ 1/2\leq |E|^{2\alpha-1}. \] Thus $\alpha\geq 1/2$ provided $E$ is large enough. See \cite[Proposition 2.6]{Babai}, \cite{IsoevichRudnev} for more details. Observe that Proposition \ref{pro:Fourier} implies that finite field Salem sets have no exceptional directions in Corollary \ref{cor:mainclaim}. Furthermore, if there exists a $(C, s)$ Salem set where $C$ does not depend on $p$, then by Proposition \ref{pro:Fourier} we can obtain Theorems \ref{thm:small}-\ref{thm:large}. However, it seems that the only known examples of Salem sets in $\mathbb{F}_{p}^{n}$ are the discrete paraboloid and the discrete sphere, and both have size roughly $p^{n-1}$; see \cite{IsoevichRudnev} for more details. It is natural to ask whether there exists a $(C, s)$ Salem set in $\mathbb{F}_{p}^{n}$ for every (large) prime number $p$. 
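For the discrete parabola $\{(x,x^{2}): x\in \mathbb{F}_{p}\}\subset \mathbb{F}_{p}^{2}$, the two-dimensional instance of the paraboloid just mentioned, the Salem property reduces to the classical fact that quadratic Gauss sums have modulus $\sqrt{p}$. A small numerical check in Python (an illustration, not part of the argument):

```python
import cmath

p = 11
E = [(x, (x * x) % p) for x in range(p)]   # discrete parabola, |E| = p

def fourier(E, xi, p):
    total = 0j
    for x, y in E:
        dot = (x * xi[0] + y * xi[1]) % p
        total += cmath.exp(-2j * cmath.pi * dot / p)
    return total

mags = [abs(fourier(E, (a, b), p))
        for a in range(p) for b in range(p) if (a, b) != (0, 0)]
# Coefficients with b != 0 are Gauss sums of modulus sqrt(p); those with
# b = 0, a != 0 vanish. Hence the maximal nontrivial coefficient is sqrt(|E|).
assert abs(max(mags) - p ** 0.5) < 1e-9
assert min(mags) < 1e-9
```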
The above results and \cite[Problem 20]{Mattila2004} suggest that a $(C, s)$ Salem set exists only if $s$ is an integer. If we relax the condition in the definition of Salem sets, then the author \cite{Chen2017} and Hayes \cite{Hayes} showed the existence of (weak) Salem sets of any given size via random subsets of $\mathbb{F}_{p}^{n}$. A set $E$ is called a (weak) Salem set if there is a constant $C$ (which does not depend on $p$) such that \[ |\widehat{E}(\xi)|\leq C \sqrt{|E| \log p} \text{ for all } \xi \neq 0. \] See Chen \cite{ChenX} for a random construction of Salem sets in Euclidean space (also with a $\log$ factor). The structure of the paper is as follows. In Section \ref{sec:preliminaries}, we set up notation and lemmas for later use. We prove Theorem \ref{thm:maintheorem}, Theorems \ref{thm:small}-\ref{thm:large}, and Proposition \ref{pro:Fourier} in Section \ref{sec:main}, Section \ref{sec:random}, and Section \ref{sec:Fourier}, respectively. In the last section, we extend an identity of Murphy and Petridis \cite{MurphyPetridis}. We also give another definition of projections in $\mathbb{F}_{p}^{n}$, and show that our results still hold under this definition. \section{Preliminaries} \label{sec:preliminaries} For $1\leq m\leq n$, recall that $G(n,m)$ stands for the collection of all $m$-dimensional linear subspaces of $\mathbb{F}_{p}^{n}$. Let $A(n,m)$ denote the family of all $m$-dimensional planes, i.e., the translates of $m$-dimensional subspaces. Let $W\in G(n,n-m)$; then observe by Lagrange's group theorem that there are $p^{m}$ cosets of $W$. Let $x_{W,j}+W, 1\leq j\leq p^{m}$, be the distinct cosets of $W$. Let $G \subset G(n,n-m)$, and define \[ G'= \{x_{W,j}+ W: 1\leq j\leq p^{m}, W \in G\}. \] \noindent{\bf Outline of the method.} The method is an adaptation of the counting pairs argument of Orponen \cite[Estimate (2.1)]{Orponen0} to our setting. 
Let $E\subset \mathbb{F}_{p}^{n}$ and $W\in G(n,n-m)$. Then $|E| =\sum_{j=1}^{p^{m}} |E\cap (x_{W,j}+W)|,$ and the Cauchy-Schwarz inequality implies \begin{equation}\label{eq:pairs} |E|^{2}\leq |\pi^{W}(E)|\sum_{j=1}^{p^{m}} |E\cap (x_{W,j}+W)|^{2}. \end{equation} Note that $|E\cap (x_{W,j}+W)|^{2}$ is the number of pairs of points of $E$ inside $x_{W,j}+W$. Let $N\leq p^{m}$, and define \[ \Theta=\{W\in G(n,n-m): |\pi^{W} (E)|\leq N\}. \] Summing both sides of the estimate $\eqref{eq:pairs}$ over $W\in \Theta$, we obtain \begin{equation}\label{eq:argument} |\Theta| |E|^{2} \leq N\sum_{W\in \Theta}\sum_{j=1}^{p^{m}}|E\cap (x_{W,j}+W)|^{2}:=N\mathcal{E}(E,\Theta'). \end{equation} Therefore, the remaining problem is to estimate $\mathcal{E}(E, \Theta')$. \subsection{Counting pairs and energy arguments} Motivated by the above pairs argument of Orponen \cite[Estimate (2.1)]{Orponen0}, an incidence identity of Murphy and Petridis \cite[Lemma 1]{MurphyPetridis}, and the additive energy in additive combinatorics \cite[Chapter 2]{TaoVu}, we give the following definition, which plays a key role in the proof of Theorem \ref{thm:maintheorem}. \begin{definition} Let $E\subset \mathbb{F}_{p}^{n}$ and $ \mathcal{A} \subset A(n,k)$. Define the (generalized) energy of $E$ on $\mathcal{A}$ as \[ \mathcal{E}(E, \mathcal{A}) =\sum_{W\in \mathcal{A}} |E\cap W|^{2}. \] \end{definition} \begin{remark} Recall that for $A, B\subset \mathbb{F}_{p}$, the additive energy $E(A, B)$ between $A$ and $B$ is defined as the quantity \begin{equation*} E(A, B) = |\{(a, a', b, b') \in A \times A \times B \times B : a + b = a' + b'\}|. \end{equation*} Observe that in our notation \begin{equation*} E(A, B)=\mathcal{E}(A\times B, \mathcal{L}), \end{equation*} where \[ \mathcal{L}=\{\ell_{k}\}_{k=0}^{p-1}, \text{ and } \ell_{k}=\{(x,y)\in \mathbb{F}_{p}^{2} :x+y=k\}. \] For more details on additive energy, see \cite{TaoVu}. 
\end{remark} \subsection{Discrete Fourier transform} In the following we collect the basic facts about the Fourier transform related to our setting. For more details on discrete Fourier analysis, see Green \cite{Green}, Stein and Shakarchi \cite{Stein}. Let $f : \mathbb{F}_{p}^{n}\longrightarrow \mathbb{C}$ be a complex-valued function. Then the Fourier transform of $f$ at $\xi \in \mathbb{F}_{p}^{n}$ is defined as \begin{equation}\label{eq:dede} \widehat{f}(\xi)=\sum_{x\in \mathbb{F}_{p}^{n}} f(x)e(-x\cdot \xi), \end{equation} where $e(-x \cdot \xi)=e^{-\frac{2\pi i x \cdot\xi}{p}}$ and the dot product is \[ x\cdot\xi =x_1\xi_1+\cdots +x_n\xi_n\,(\text{mod}\, p). \] Recall the Plancherel identity, \begin{equation*} \sum_{\xi \in \mathbb{F}_{p}^{n}}|\widehat{f}(\xi)|^{2}=p^{n}\sum_{x\in \mathbb{F}_{p}^{n}} |f(x)|^{2}. \end{equation*} In particular, for a subset $E\subset \mathbb{F}_{p}^{n}$, we have \[ \sum_{\xi \in \mathbb{F}_{p}^{n}} |\widehat{E}(\xi)|^{2}=p^{n}| E|. \] In the rest of this subsection we establish Lemma \ref{lem:fff} (a `Plancherel identity on subspaces'). We first give some notation. For $W\in G(n,n-m)$, we define the `orthogonal complement' of $W$ as \[ Per(W):=\{x\in \mathbb{F}_{p}^{n}: x\cdot w=0 \,(\text{mod}\, p) \text{ for all } w\in W\}. \] Note that, unlike in Euclidean spaces, here $W\cap Per(W)$ can be a non-trivial subspace. For example, let $W=\text{span}\{(1,1)\}\subset \mathbb{F}_{2}^{2}$; then $Per(W)= W$. However, the rank-nullity theorem of linear algebra (or the theory of systems of linear equations) implies that for any subspace $W \subset\mathbb{F}_{p}^{n}$, \begin{equation}\label{eq:rank} \dim W+\dim Per(W)=n. 
\end{equation} The following result shows the connection between $|E\cap (x_{W,j}+W)|, 1\leq j\leq p^{m}$, and the Fourier transform of $E$; the identity \eqref{eq:kk} plays an important role in the proofs of Theorem \ref{thm:maintheorem} (b) and Proposition \ref{pro:Fourier}. \begin{lemma}\label{lem:fff} With the above notation, we have \begin{equation}\label{eq:kk} \sum_{j=1}^{p^{m}} | E \cap (x_{W,j}+W)|^{2}=p^{-m}\sum_{\xi\in Per(W)} |\widehat{E}(\xi)|^{2}. \end{equation} \end{lemma} \begin{proof} Let $\xi \in Per(W)$. Then \begin{equation*} \begin{aligned} \widehat{E}(\xi)&=\sum_{x\in \mathbb{F}_{p}^{n}}E(x)e(-x\cdot \xi)\\ &=\sum_{j=1}^{p^{m}} \sum_{ w\in W}E(x_{W,j}+w)e(-(x_{W,j}+w)\cdot\xi)\\ &=\sum_{j=1}^{p^{m}} |E\cap (x_{W,j}+W)|e(-x_{W,j}\cdot\xi). \end{aligned} \end{equation*} It follows that \begin{equation*} \begin{aligned} |\widehat{E}(\xi)|^{2}&=\sum_{j=1}^{p^{m}}|E\cap (x_{W,j}+W)|^{2} \\ &+\sum_{j\neq k} |E\cap (x_{W,j}+W)||E\cap (x_{W,k}+W)|e(-(x_{W,j}-x_{W,k})\cdot\xi). \end{aligned} \end{equation*} Note that $(x_{W,j}-x_{W,k}) \notin W $ for any $j\neq k$. Together with the following Lemma \ref{lem:character}, we obtain \[ \sum_{\xi\in Per(W)} e(-(x_{W,j}-x_{W,k})\cdot\xi)=0. \] Thus we complete the proof. \end{proof} \begin{lemma}\label{lem:character} Let $V\in G(n,k)$ and $x\not\in V$. Then \[ \sum_{y\in Per(V)} e(-x \cdot y)=0. \] \end{lemma} \begin{proof} We claim that there exists $y_{0}\in Per(V)$ such that $y_{0}\cdot x\neq 0$. Suppose, on the contrary, that $x\cdot y=0$ for every $y\in Per(V)$. It follows that \[ Per(V)\subset Per(V\cup \{x\}). \] Combining this with the estimate \eqref{eq:rank} (rank-nullity theorem), we obtain $n-k\leq n-k-1$, which is impossible. 
Since $Per(V)$ is a subspace and $e(-y_{0}\cdot x)\neq 1$, we obtain \[ e(-y_{0}\cdot x) \sum_{y\in Per(V)} e(-x \cdot y)=\sum_{y\in Per(V)} e(-x \cdot y), \] and hence $\sum_{y\in Per(V)} e(-x \cdot y)=0.$ \end{proof} \begin{remark} We do not know whether Lemma \ref{lem:character} also holds for vector spaces over general finite fields. In that case one would take a nonprincipal character instead of $e^{\frac{2 \pi i x}{p}}$. We note that Lemma \ref{lem:character} is the only place in this paper where the prime field $\mathbb{F}_{p}$ is needed. \end{remark} \subsection{Counting subspaces of $\mathbb{F}_{p}^{n}$} In the following, we collect some basic identities for ${n \choose m}_{p}$ for later use. For more details see \cite[Chapter 6]{Cameron}, \cite{Ko}. \begin{lemma} \label{lem:iidentity} Let $1\leq m\leq n$. (1) ${n \choose 0}_{p}:={n \choose n }_{p}=1$, \, ${n \choose m}_{p}={n \choose n-m }_{p}.$ (2) ${n \choose m}_{p}={n-1 \choose m }_{p} +p^{n-m}{n-1 \choose m-1 }_{p}.$ (3) ${n \choose m}_{p}={n-1 \choose m-1 }_{p} +p^{m}{n-1 \choose m }_{p}.$ \end{lemma} \begin{lemma}\label{lem:subspace} Let $\xi$ be a non-zero vector of $\mathbb{F}_{p}^{n}$. (1) $|\{V\in G(n,m): \xi\in V\}|={n-1 \choose m-1}_{p}$. (2) $|\{V\in G(n,m): \xi\in Per(V)\}|={n-1 \choose m}_{p}$. \end{lemma} \begin{proof} First note that if $m=1$ then $(1)$ holds. For the case $2\leq m<n$, note that to obtain an $m$-dimensional subspace which contains the given vector $\xi$, it is sufficient to choose another $m-1$ vectors such that these $m-1$ vectors and the vector $\xi$ span an $m$-dimensional subspace. For the choice of the first vector, we have $p^{n}-p$ choices from $\mathbb{F}_{p}^{n}$ (excluding the vectors of the subspace of $\mathbb{F}_{p}^{n}$ spanned by $\xi$). For the choice of the second vector, we have $p^{n}-p^{2}$ choices, to make sure that the second vector, the first vector, and the vector $\xi$ span a $3$-dimensional subspace. 
We continue in this way until we have chosen $m-1$ vectors. In the end we have $(p^{n}-p)(p^{n}-p^{2})\cdots (p^{n}-p^{m-1})$ choices in total. Note that for each $m$-dimensional subspace which contains the vector $\xi$, there are $(p^{m}-p)(p^{m}-p^{2})\cdots (p^{m}-p^{m-1})$ choices which generate (or span) the same subspace. It follows that (see \eqref{eq:bino} for the definition of ${n \choose m}_{p}$) the number of $m$-dimensional subspaces which contain $\xi$ is \[ \frac{(p^{n}-p)(p^{n}-p^{2})\cdots (p^{n}-p^{m-1})}{(p^{m}-p)(p^{m}-p^{2})\cdots (p^{m}-p^{m-1})}={n-1 \choose m-1}_{p}. \] To establish $(2)$, first note that $\dim Per(\xi)=n-1$. Observe that \[ \{V\in G(n,m): \xi\in Per(V)\} \] is the collection of all $m$-dimensional subspaces of $Per(\xi)$, from which the conclusion follows. \end{proof} \section{Proof of Theorem \ref{thm:maintheorem} }\label{sec:main} Let $\Theta \subset G(n,n-m)$. Recall that \[ \Theta'= \{x_{W,j}+ W: 1\leq j\leq p^{m}, W \in \Theta\}. \] \begin{lemma}\label{lem:keylemma} Let $E\subset \mathbb{F}_{p}^{n}$, $\Theta \subset G(n, n-m)$. Then \begin{equation} \mathcal{E}(E, \Theta') \leq \min \left\{ |E||\Theta|+2|E|^{2} p^{(n-m-1)m}, 2| E| p^{(n-m)m}+| E| ^{2}| \Theta| p^{-m} \right\}. \end{equation} \end{lemma} \begin{proof} We first show the estimate \[ \mathcal{E}(E, \Theta')\leq |E||\Theta|+2|E|^{2} p^{(n-m-1)m}. \] Let $x\in \mathbb{F}_{p}^{n}$. Since for any $W\in \Theta$ there is exactly one coset of $W$ which contains $x$, we obtain \[ \sum_{V\in \Theta'} V(x)=|\Theta|. \] By Lemma \ref{lem:subspace}, there are ${n-1 \choose n- m-1}_{p}$ subspaces of dimension $n-m$ containing a given non-zero vector of $\mathbb{F}_{p}^{n}$. 
Then \begin{equation*} \begin{aligned} \mathcal{E}(E, \Theta') &=\sum_{V\in \Theta'} \left(\sum_{x\in E}V(x) \right)^{2}\\ &=\sum_{V\in \Theta'} \left(\sum_{x\in E}V(x)+\sum_{x\neq y \in E} V(x)V(y) \right)\\ &\leq |E||\Theta|+|E|(|E|-1){n-1 \choose n-m-1}_{p}\\ &\leq |E||\Theta|+2|E|^{2}p^{(n-m-1)m}. \end{aligned} \end{equation*} Now we turn to the other estimate. Applying Lemma \ref{lem:subspace}, we have \begin{equation*} \begin{aligned} \sum_{W\in \Theta} \sum_{\xi\in Per(W)}|\widehat{E}(\xi)|^{2}-|\Theta||E|^{2} &\leq { n-1 \choose m-1}_{p}\sum_{\xi \in \mathbb{F}_{p}^{n}\backslash \{0\}} |\widehat{E}(\xi)|^{2}\\ &\leq { n-1 \choose m-1}_{p} p^{n}|E|. \end{aligned} \end{equation*} Together with Lemma \ref{lem:fff}, we obtain \begin{equation*} \begin{aligned} \mathcal{E}(E, \Theta') &=\sum_{W\in \Theta} \sum_{j=1}^{p^{m}}|E\cap(x_{W,j}+W)|^{2}\\ & =p^{-m}\sum_{W\in \Theta} \sum_{\xi\in Per(W)}|\widehat{E}(\xi)|^{2}\\ &=p^{-m}\left(\sum_{W\in \Theta} \sum_{\xi\in Per(W)}|\widehat{E}(\xi)|^{2}-|\Theta||E|^{2}+|\Theta||E|^{2}\right)\\ &\leq p^{-m}p^{n}|E| {n-1 \choose m-1}_{p} +|E|^{2}|\Theta| p^{-m}\\ &\leq 2p^{(n-m)m}|E|+|E|^{2}|\Theta| p^{-m}. \end{aligned} \end{equation*} Thus we complete the proof. \end{proof} \begin{remark} Let $|E|\leq p^{m}$ and $\Theta\subset G(n,n-m)$. Then by the estimate \eqref{eq:condition} and Lemma \ref{lem:iidentity} (1), \[ |\Theta| \leq { n\choose n-m}_{p}\leq 2p^{m(n-m)}. \] Thus \[ |E||\Theta|+2|E|^{2} p^{(n-m-1)m} \leq 2| E| p^{(n-m)m}+| E| ^{2}| \Theta| p^{-m}. \] Therefore, in the proof of Theorem \ref{thm:maintheorem}, if $|E|\leq p^{m}$ then we use the estimate \begin{equation}\label{eq:a} \mathcal{E}(E, \Theta') \leq |E||\Theta|+2|E|^{2} p^{(n-m-1)m}. \end{equation} For the case $|E|>p^{m}$, we use the estimate \begin{equation}\label{eq:b} \mathcal{E}(E, \Theta') \leq 2| E| p^{(n-m)m}+| E| ^{2}| \Theta| p^{-m}. 
\end{equation} \end{remark} \begin{proof}[Proof of Theorem \ref{thm:maintheorem}] First we prove $(a).$ Let \[ \Theta=\{W\in G(n,n-m): |\pi ^{W} (E)|\leq N\}. \] Applying the estimates \eqref{eq:argument} (outline of the method) and \eqref{eq:a}, we obtain \begin{equation*} \begin{aligned} |\Theta||E|^{2} &\leq \mathcal{E}(E,\Theta')N\\ &\leq (|E||\Theta|+2|E|^{2}p^{(n-m-1)m})N. \end{aligned} \end{equation*} It follows (recall that $N< |E|/2$) that \[ |\Theta|\leq 4p^{(n-m)m-m}N. \] Now we prove $(b)$. For $\delta \in (0,1)$, let \[ \Theta=\{W\in G(n,n-m): |\pi^{W} (E)|\leq \delta p^{m}\}. \] Applying the estimates \eqref{eq:argument} and \eqref{eq:b}, we obtain \begin{equation*} \begin{aligned} | \Theta||E|^{2} &\leq \mathcal{E}(E,\Theta')\delta p^{m}\\ &\leq (2|E|p^{(n-m)m}+|E|^{2}|\Theta| p^{-m})\delta p^{m}, \end{aligned} \end{equation*} and \[ |\Theta||E|(1-\delta)\leq 2\delta p^{(n-m)m+m}. \] Then \[ |\Theta| \leq 2 \left(\frac{\delta}{1-\delta}\right)p^{(n-m)m+m}|E|^{-1}. \] Thus we complete the proof. \end{proof} \section{Proofs of Theorems \ref{thm:small} and \ref{thm:large}}\label{sec:random} \subsection{Percolation on $\mathbb{F}_{p}^{n}$ } The random model we use here is related to many other well known models, for example the Erd\H{o}s-R\'enyi-Gilbert model in random graphs, percolation theory on graphs, and Mandelbrot percolation in fractal geometry. We describe this model on $\mathbb{F}_{p}^{n}$ in the following. For an application of this model to finding sets with small Fourier coefficients, see \cite[Theorem 5.2]{Babai}, \cite{Chen2017}. Let $0<\delta<1$. We choose each point of $\mathbb{F}_{p}^{n}$ with probability $\delta$ and remove it with probability $1-\delta$, all choices being independent of each other. Let $E=E^{\omega}$ be the collection of these chosen points. Let $\Omega=\Omega (\mathbb{F}_{p}^{n}, \delta)$ be our probability space, which consists of all the possible sets $E^{\omega}$. 
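A minimal simulation of this percolation model in the plane $\mathbb{F}_{p}^{2}$ can be written in a few lines of Python. The parameters below are our own choices for illustration (not part of the paper's argument), and the asserted bounds are only the deterministic ones $\lceil |E|/p\rceil\leq|\pi^{W}(E)|\leq\min\{|E|,p\}$, since every coset of a line has exactly $p$ points:

```python
import math
import random

random.seed(7)
p, s = 31, 1
delta = float(p) ** (s - 2)   # delta = p^{s-n} with n = 2
E = [(x, y) for x in range(p) for y in range(p) if random.random() < delta]

def proj_size(E, v, p):
    """|pi^W(E)| for W = span{v} in F_p^2: the cosets of W are the level sets
    of a linear functional l with l(v) = 0; here l(x, y) = v2*x - v1*y."""
    v1, v2 = v
    return len({(v2 * x - v1 * y) % p for (x, y) in E})

directions = [(1, c) for c in range(p)] + [(0, 1)]   # all of G(2, 1)
sizes = [proj_size(E, v, p) for v in directions]
# deterministic bounds: each of the |pi^W(E)| cosets holds at most p points
for t in sizes:
    assert math.ceil(len(E) / p) <= t <= min(len(E), p)
```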
We prove Theorems \ref{thm:small} and \ref{thm:large} by choosing $\delta=p^{s-n}$ in the above model, and showing that the random set $E$ has the desired properties with high probability when $p$ is large enough. For convenience, we formulate a special large deviations estimate in the following. For more background and details on large deviations estimates, see Alon and Spencer \cite[Appendix A]{Alon}. \begin{lemma}[Chernoff bound]\label{lem:largedeviations} Let $\{X_j\}_{j=1}^N$ be a sequence of independent Bernoulli random variables with success probability $\delta'$. Let $\mu=N\delta'$. Then \[ \mathbb{P}\left(\sum^N_{j=1} X_j < \mu /2 \right)\leq e^{-\mu /16}. \] \end{lemma} \subsection{Projections of random sets} \begin{proof}[Proof of Theorem \ref{thm:small}] Let $\delta=p^{s-n}$. We consider the random model $\Omega(\mathbb{F}_{p}^{n}, \delta).$ Let $W\in G(n,n-m)$. Observe that $|\pi^{W}(E)|$ has a binomial distribution with parameters $p^{m}$ and $\delta'$, where \[ \delta'=1-(1-\delta)^{p^{n-m}}. \] Let $\mu=p^{m}\delta'$. Since $1+x\leq e^{x}$ holds for all $x$, $e^{x} \leq 1+x +5x^{2}/6$ holds for $|x|\leq 1$, and $s\leq m$, we obtain \begin{equation}\label{eq:mu} \begin{aligned} \mu= p^{m}\delta'&\geq p^{m}(1-e^{-\delta p^{n-m}}) \\ &=p^{m}(1-e^{-p^{s-m}})\geq p^{m}(p^{s-m}-5p^{2(s-m)}/6)\\ &\geq p^{s}/6. \end{aligned} \end{equation} Applying Lemma \ref{lem:largedeviations} to $|\pi^{W}(E)|$, we have \[ \mathbb{P}\left(|\pi^{W}(E)| < \mu /2 \right)\leq e^{-\mu /16}. \] Since there are ${ n \choose m}_{p}$ elements of $G(n, n-m)$, we have \begin{equation}\label{eq:p} \begin{aligned} \mathbb{P} ( \text{there exists } W \in G(n,n-m) &\text{ such that } |\pi^{W}(E)|\leq \mu/2 )\\ & \leq { n \choose m}_{p} e^{-\mu/16}\\ &\leq 2p^{m(n-m)}e^{-p^{s} /96}\\&\rightarrow 0 \text{ as } p\rightarrow \infty. \end{aligned} \end{equation} Note that $p^{s}/2 \leq |E| \leq 2 p^{s}$ with high probability ($>1/2$). 
This follows by applying Chebyshev's inequality, which gives \begin{equation*} \begin{aligned} \mathbb{P}(||E| - p^{n}\delta|> \frac{1}{2}p^{n}\delta)&\leq \frac{4p^{n}\delta(1-\delta)}{(p^{n}\delta)^{2}}\\ &\leq \frac{4}{p^{s}}\rightarrow 0 \text{ as } p \rightarrow \infty. \end{aligned} \end{equation*} Together with the estimates \eqref{eq:p} and \eqref{eq:mu}, we conclude that there exists $E\in \Omega(\mathbb{F}_{p}^{n}, \delta)$ with \[ p^{s}/2\leq |E|\leq 2p^{s} \text{ and }|\pi^{W}(E)|> \mu /2\geq p^{s}/12 \text{ for all $W\in G(n,n-m)$ } \] when $p$ is large enough. Thus we complete the proof. \end{proof} \begin{remark} Let $s\leq m$ and $\delta=p^{s-n}$. Then the above proof implies that for ``almost every" $E\in \Omega(\mathbb{F}_{p}^{n}, \delta)$ we have \[ p^{s}/2\leq |E|\leq 2p^{s} \text{ and }|\pi^{W}(E)|\geq |E|/24 \text{ for all $W\in G(n,n-m)$}. \] Roughly speaking, there are no exceptional projections for almost every $E\in \Omega(\mathbb{F}_{p}^{n}, \delta)$. \end{remark} \begin{proof}[Proof of Theorem \ref{thm:large}] Let $\delta=p^{s-n}$. We again consider the random model $\Omega(\mathbb{F}_{p}^{n}, \delta)$. Let $V\in A(n,n-m)$. Note that $|E\cap V|$ has a binomial distribution with parameters $p^{n-m}$ and $\delta$. Thus \begin{equation*} \begin{aligned} \mathbb{P}(E \cap V =\emptyset)&=(1-\delta)^{p^{n-m}}\\ &\leq e^{-\delta p^{n-m}}=e^{-p^{s-m}}. \end{aligned} \end{equation*} Observe that \begin{equation*} |A(n,n-m)|=p^{m} {n \choose n-m}_{p}\leq 2p^{m(n-m+1)}. \end{equation*} Now we have (recalling $s>m$) \begin{equation*} \begin{aligned} \mathbb{P}( \text{there exists } V &\in A(n,n-m) \text{ such that } E \cap V = \emptyset)\\ & \leq 2p^{m(n-m+1)}e^{-p^{s-m}}\rightarrow 0 \text{ as } p\rightarrow \infty. \end{aligned} \end{equation*} It follows that \begin{equation*} \begin{aligned} \mathbb{P}(\text{there exists } W\in G(n,n-m) &\text{ such that } |\pi^{W}(E)| <p^{m}) \\ &\rightarrow 0 \text{ as } p\rightarrow \infty. 
\end{aligned} \end{equation*} Again by Chebyshev's inequality, we have $p^{s}/2 \leq |E| \leq 2 p^{s}$ with high probability ($>1/2$) provided $p$ is large enough. Thus we complete the proof. \end{proof} \begin{remark} Let $s> m$ and $\delta=p^{s-n}$. Then the above proof implies that for ``almost every" $E\in \Omega(\mathbb{F}_{p}^{n}, \delta)$ we have \[ p^{s}/2\leq |E|\leq 2p^{s} \text{ and }|\pi^{W}(E)|= p^{m} \text{ for all $W\in G(n,n-m)$}. \] Roughly speaking, there are no exceptional projections for almost every $E\in \Omega(\mathbb{F}_{p}^{n}, \delta)$. \end{remark} \section{Proof of Proposition \ref{pro:Fourier}}\label{sec:Fourier} \begin{proof}[Proof of Proposition \ref{pro:Fourier}] First we use the same argument as in the outline of the method. Let $W\in G(n,n-m)$, and let $x_{W,j}+W, 1\leq j\leq p^{m}$, be the distinct cosets of $W$. Then the Cauchy-Schwarz inequality implies \begin{equation*} |E|^{2}\leq |\pi^{W}(E)|\sum_{j=1}^{p^{m}}|E\cap (x_{W,j}+W)|^{2}. \end{equation*} Applying Lemma \ref{lem:fff} and the Fourier decay of $E$, we obtain \begin{equation*} \begin{aligned} \sum_{j=1}^{p^{m}}|E\cap (x_{W,j}+W)|^{2}&=p^{-m}\sum_{\xi \in Per(W)}|\widehat{E}(\xi)|^{2}\\ &\leq p^{-m}(|E|^{2}+p^{m}C^{2}|E|^{2\alpha}). \end{aligned} \end{equation*} Then \begin{equation*}\label{eq:key} |E|^{2}\leq |\pi^{W}(E)|(p^{-m}|E|^{2}+C^{2}|E|^{2\alpha}) . \end{equation*} It follows that if $p^{-m}|E|^{2}\leq C^{2}|E|^{2\alpha}$ then \[ |\pi^{W}(E)| \geq |E|^{2-2\alpha}/2C^{2}. \] On the other hand, if $p^{-m}|E|^{2}> C^{2}|E|^{2\alpha}$ then \[ |\pi^{W}(E)|\geq p^{m}/2. \] Then $(a)$ and $(b)$ hold. Now we turn to $(c)$. Note that if $|\pi^{W}(E)|>p^{m}-1$ then $|\pi^{W}(E)|=p^{m}.$ Thus it is sufficient to show that \begin{equation}\label{eq:last} \frac{|E|^{2}}{p^{-m}|E|^{2}+C^{2}|E|^{2\alpha}} >p^{m}-1. \end{equation} By calculation, if \[ |E|> p^{\frac{m}{1-\alpha}}(2C^{2})^{\frac{1}{2-2\alpha}}, \] then the estimate \eqref{eq:last} holds. 
Thus we complete the proof. \end{proof} \section{Further results } In the following, we extend an identity of Murphy and Petridis \cite{MurphyPetridis} to general $1\leq m\leq n$. They proved the special case $m=n-1$ of the following identity \eqref{eq:mur}. We prove it in two different ways. The first proof essentially comes from \cite{MurphyPetridis}. The second proof depends on the discrete Fourier transform. \begin{proposition}\label{pro:i} Let $E\subset \mathbb{F}_{p}^{n}$. Then \begin{equation}\label{eq:mur} \mathcal{E}(E, A(n,m))= |E|p^{m}{n-1 \choose m}_p +|E|^{2}{n-1 \choose m-1}_{p}. \end{equation} \end{proposition} \begin{proof} First proof. Recall that we denote by $F(x)$ the characteristic function of a subset $F\subset \mathbb{F}_{p}^{n}$. Applying the identities in Lemma \ref{lem:iidentity} (Gaussian coefficients) and Lemma \ref{lem:subspace}, we have \begin{equation*} \begin{aligned} \mathcal{E}(E, A(n,m))&= \sum_{W\in A(n,m) }|E \cap W|^{2}\\ &=\sum_{W \in A(n,m)} \left(\sum_{x\in E}W(x) \right)^{2}\\ &=\sum_{W \in A(n,m)} \left(\sum_{x\in E}W(x)+\sum_{x\neq y \in E} W(x)W(y) \right)\\ &=|E|{n \choose m}_{p}+|E|(|E|-1){n-1 \choose m-1}_{p}\\ &=|E|p^{m}{n-1 \choose m}_{p}+|E|^{2}{n-1 \choose m-1}_{p}. \end{aligned} \end{equation*} Now we use a Fourier approach to give a different proof. Applying Lemma \ref{lem:fff}, Lemmas \ref{lem:iidentity}-\ref{lem:subspace}, and the Plancherel identity for the subset $E\subset \mathbb{F}_{p}^{n}$, we obtain \begin{equation*} \begin{aligned} \mathcal{E}(E, A(n,m)) &=\sum_{W\in G(n,m)}\sum_{j=1}^{p^{n-m}}|E\cap (x_{W,j}+W)|^{2}\\ & =p^{m-n}\sum_{W\in G(n,m)} \sum_{\xi\in Per(W)}|\widehat{E}(\xi)|^{2}\\ &=p^{m-n} \left({n-1 \choose m}_{p}\sum_{\xi \neq 0}|\widehat{E}(\xi)|^{2} +{n \choose m}_{p}|\widehat{E}(0)|^{2} \right)\\ &=|E|p^{m}{n-1 \choose m}_{p}+|E|^{2}{n-1 \choose m-1}_{p}. \end{aligned} \end{equation*} Thus we complete the proof. 
\end{proof} I thank the anonymous referee for suggesting the following definition of projections in $\mathbb{F}_{p}^{n}$. Let $E\subset \mathbb{F}_{p}^{n}$ and $V\in G(n,m)$. Then the projection of $E$ to $V$ is defined as \[ P_{V}(E):=\{x+Per(V): E\cap(x+Per(V))\neq \emptyset, x\in \mathbb{F}_{p}^{n}\}. \] We intend to show that our results also hold under this definition. In our notation we have \[ P_{V}(E)=\pi^{Per(V)}(E). \] Note that the rank-nullity theorem implies $\dim V+ \dim Per(V)=n$. Now we show that for $W\in G(n,n-m)$ and $E\subset \mathbb{F}_{p}^{n}$, \[ P_{Per(W)}(E)=\pi^{W}(E). \] This follows from the fact that $Per(Per(W))=W$ for any subspace $W\subset \mathbb{F}_{p}^{n}$. First note that the definition of $Per(W)$ implies $W\subset Per(Per(W))$. By applying the rank-nullity theorem, we obtain \[ \dim Per(Per(W))=n-\dim Per(W)=\dim W, \] and hence $Per(Per(W))=W$. Therefore, for $E\subset \mathbb{F}_{p}^{n}$ and $N<p^{m}$, we obtain \[ |\{V\in G(n,m): |P_{V}(E)|\leq N\}|=|\{W\in G(n,n-m): |\pi^{W}(E)|\leq N\}|. \] Thus we conclude that our results hold under the above definition. \medskip \textbf{Acknowledgements.} I would like to thank Tuomas Orponen for sharing his idea on the projections of finite point sets in the plane. I thank the anonymous referee for carefully reading the manuscript and giving excellent comments, and thus improving the quality of this article. I am grateful for being supported by the Vilho, Yrj\"o, and Kalle V\"ais\"al\"a foundation.
\section{Introduction} \label{sec:intro} Data involving ensembles of networks - that is, multiple independent networks - arise in various scientific fields, including sociology \citep{slaughter2016multilevel, stewart2019multilevel}, neuroscience \citep{simpson2011exponential, obando2017statistical}, molecular biology \citep{unhelkar2017structure,grazioli2019comparative}, and political science \citep{moody2013portrait}, among others. Typically, ensembles of networks represent the action of multiple generative processes, with different processes being prominent in different settings. A reasonable starting point for the analysis of such data is to posit that this variation can be represented in terms of a discrete set of subpopulations, such that the networks drawn from any given subpopulation tend to be produced by similar generative processes. Given a set of potential generative models, one would then like to identify the subsets of networks drawn from a particular subpopulation, or a probabilistic mixture of multiple subpopulations. It is natural to view this as a hierarchical finite mixture problem, with the base distributions being parametric distributions on graphs. As a plausible approximation to the underlying data generating process, the hierarchical finite mixture framework also provides a flexible approach for predictive modeling of ensembles of networks. If one seeks to predict graph structures drawn from a heterogeneous (super)population learned from observed data, one needs to average over the possible generative processes that might end up producing the observation that one wants to predict. Such a view is similar in spirit to model averaging techniques \citep{hoeting1999bayesian,hjort2003frequentist}, especially if interpreted in terms of a hierarchical problem in which we seek to predict an outcome of interest (e.g., co-voting prevalence among U.S. senators) by first predicting network structure and then predicting the behavior of a process on that network. 
In that setting, if it turned out that there were $k$ types of possible network formation processes and we did not know which one ours happened to be, we would certainly want to average across the types. There is a growing body of literature on the analysis of ensembles of networks. This includes work on discriminative analysis of networks via distance or similarity measures \citep[e.g.][]{banks.carley:joc:1994,butts.carley:cmot:2005,fitzhugh.et.al:alcr:2015}, which can be broadly viewed as mapping the ensemble of interest into some high-dimensional space (e.g., the Hamming space of graphs), and then employing standard multivariate analysis techniques (e.g., hierarchical clustering, multidimensional scaling) to seek an informative low-dimensional approximation. Other approaches work with user-selected graph statistics, either directly \citep[e.g.][]{przulj:b:2007,sweet2019clustering} or by e.g., modeling quantiles of the observed statistics relative to a reference distribution to control for size and density effects \citep{butts2011bayesian}. As such, these approaches do not attempt to provide generative models for the networks within the ensemble, though they may in some cases provide generative models for summary statistics (e.g., predicting the conditional uniform graph quantile for the transitivity of a new graph drawn from the same ensemble). In the category of generative models for complex networks, a common approach is to employ multilevel models with exponential random graph models (ERGMs, a general family of parametric models for networks \citep[see, e.g.][for a review]{robins2007recent}), as base distributions. \citet{faust2002comparing} introduced both multivariate meta-analysis of ERGM parameters from a common model family (fit to an ensemble of graphs) and predicted conditional edge probabilities from the generative base models as tools for leveraging ERGMs to compare networks. 
More elaborate meta-analytic procedures and hierarchical models for single populations of networks were subsequently developed by, among others, \citet{zijlstra2006multilevel, slaughter2016multilevel, mcfarland.et.al:asr:2014, butts2017baseline}, and \citet{stewart2019multilevel}. Nonparametric models (e.g., latent space or block models) have also been employed for studying sets of networks, e.g. hierarchical mixed membership stochastic blockmodels for multiple networks \citep{sweet2014hierarchical}. In general, those methods have either not posited a generative model for the parameters of the base distribution (as in descriptive meta-analytic approaches), have not attempted to jointly estimate population-level and network-level parameters (as in conventional meta-analysis), or have assumed a simple hierarchical form in which coefficients are taken to be drawn from a simple population distribution (often Gaussian) with common mean and variance. The latter work well for homogeneous (super)populations; but when the network ensemble reflects higher levels of heterogeneity, more structure is required. In contrast, work such as that of \citet{durante2018bayesian,lehmann2019inferring} explicitly considers heterogeneity within graph subpopulations, but assumes that the subpopulation labels are observed. Joint modeling of population-level and network-level parameters where subpopulation memberships are unknown, or where the true generative process otherwise involves a mixture of graph distributions, has remained an open problem to date in the ERGM context. In this paper, we propose using a mixture of ERGMs to model the generative process of ensembles of networks in which the group labels are not available, under the general framework of finite mixture models \citep{mclachlan1988mixture, fraley2002model, bouveyron2019model}. 
Such a formulation provides a useful probabilistic interpretation of the results and allows for convenient statistical inference; we note that related approaches have proven to be efficacious for modeling structure \emph{within} networks \citep[e.g.][]{salter2015role,schweinberger2015local,snijders1997estimation}. Recent work on using mixtures of network models with the dyadic independence property (e.g., a priori stochastic blockmodel, $p_1$ model) for modeling multiple network observations \citep{signorelli2019model} can encounter difficulties when the observed networks exhibit strong dyadic dependence, which is often the case for real-world networks. We develop a Metropolis-within-Gibbs algorithm to perform Bayesian inference for the proposed model, with both the subpopulation assignments and the ERGM parameters in the subpopulations being estimated simultaneously. Given that our primary focus is to develop a practical procedure that can obtain meaningful subpopulations, we employ a pseudo-likelihood approximation to the ERGM likelihood for efficient computation; while we show here that this approach can work well, more advanced MCMC techniques can also be deployed to obtain more accurate estimates when the interest lies mainly in the inference of subpopulation-specific parameters. (It is also possible to use the pseudo-likelihood when updating subpopulation assignment parameters and then use high-accuracy MCMC-based likelihood calculations to update subpopulation-specific parameters, offering additional options for speed/accuracy tradeoffs.) We approach the problem of choosing the number of subpopulations from a model selection perspective, using a version of the deviance information criterion. The remainder of this paper is structured as follows. In section \ref{sec:ERGMs} we briefly introduce the exponential-family random graph models (ERGMs) and common estimation techniques. 
Section \ref{sec:Mixture_of_ERGMs} describes the idea of mixtures of ERGMs, along with our estimation algorithms and our proposed method for selecting the number of subpopulations. Section \ref{sec:Simulation} presents simulation studies showing that the proposed method can accurately recover the true subpopulation assignment and model parameters. Section \ref{sec:Case_study} shows the results of our method applied to a political co-voting data analysis. Section \ref{sec:Conclusion} concludes with a discussion. \section{Exponential-family Random Graph Models (ERGMs)} \label{sec:ERGMs} In recent years, ERGMs have found applications in empirical research in a wide range of scientific fields. Recent examples include the study of large friendship networks \citep{goodreau2007advances}, genetic and metabolic networks \citep{saul2007exploring}, disease transmission networks \citep{groendyke2012network}, conflict networks in the international system \citep{cranmer2011inferential}, the structure of ancient networks in various archaeological settings \citep{amati2019framework}, the structural comparison of protein structure networks \citep{grazioli2019comparative}, the effects of functional integration and functional segregation in brain functional connectivity networks \citep{simpson2011exponential,sinke2016bayesian,obando2017statistical}, and the impact of endogenous network effects on the formation of interhospital patient referral networks \citep{caimo2017bayesian}. While addressing very different problems in different empirical settings, what these studies have in common is a clear methodological commitment to modeling network mechanisms directly via parametric effects, rather than just attempting to ``control for'' unspecified dependence among the observations (e.g., via latent structure). 
The ability to provide generative and interpretable models of complex network structure is an important asset of this approach, which we leverage here in the context of graph ensembles. \subsection{Definition and Estimation} \label{subsec:ERGM_def} Exponential-family random graph models (ERGMs) \citep{holland1981exponential, frank1986markov, snijders2006new, hunter2006inference}, also known as $p$-star models \citep{wasserman1996logit}, are a family of parametric statistical models developed for explicitly modeling the complex stochastic processes that govern the formation of edges among pairs of nodes in a network. We introduce them first in the single-network case. Consider the set of nodes in the network of interest, $\vec{V}$, and let $|\vec{V}| = n$ be its cardinality, i.e. the number of nodes in the network. We represent the network's structure via an order-$n$ random adjacency matrix $\vec{Y}$, in which each element takes the value $1$ or $0$, representing the presence or absence of a tie between incident nodes. Letting $\mathcal{Y}_{n}$ be the set of all possible network configurations on $n$ nodes, we write the probability mass function (pmf) of $\vec{Y}$ taking a particular configuration $\vec{y}$ in the form of a discrete exponential family as \begin{equation} \label{eq:ERGM} \mathbb{P}_{\bm{\eta}}( \vec{Y} = \vec{y}|\vec{X}; {\bm{\theta}}) = \exp \bigg( \bm{\eta}( \bm{\theta})^{\intercal} \vec{g}(\vec{y};\vec{X}) - \psi_{\vec{g}, \bm{\eta} ,\vec{X}, \mathcal{Y}_{n}}( \bm{\theta} ) \bigg) h(\vec{y}), \quad \vec{y} \in \mathcal{Y}_{n}, \end{equation} where $\bm{\theta} = (\theta_{1}, \cdots, \theta_{q}) \in \mathbb{R}^{q}$ is a vector of (curved) model parameters, mapped to the natural parameters by $ \bm{\eta}(\bm{\theta}) = ( \eta_{1}(\bm{\theta}), \cdots, \eta_{p}(\bm{\theta}) ) \in\mathbb{R}^{p}$. The natural parameters $\bm{\eta}$ may depend on the sizes of the networks and may be non-linear functions of the parameter vector $\bm{\theta}$. 
The user-defined sufficient statistics $\vec{g} : \mathcal{Y}_{n} \rightarrow \mathbb{R}^{p}$ may incorporate fixed and known covariates $\vec{X}$ that are measured on the nodes or dyads. The sufficient statistics incorporate network features of interest that are believed to be crucial to the social process that gave rise to the network \citep[see, e.g.,][]{morris2008specification}. Here $h$ defines the reference measure for the model family; while it is often chosen to be the counting measure on $\mathcal{Y}_n$ for unvalued graphs with fixed $n$, other reference measures can make more sense in different settings. As discussed below, we employ a sparse graph reference that leads to a mean degree that is asymptotically constant in $n$. Finally, the normalizing factor $\psi_{\vec{g}, \bm{\eta}, \vec{X}, \mathcal{Y}_{n}}( \bm{\theta} ) = \log \sum_{\vec{y'} \in \mathcal{Y}_{n} } \exp\left\{ \bm{\eta}( \bm{\theta})^\intercal \vec{g}(\vec{y'};\vec{X}) \right\} h(\vec{y'})$ ensures that \eqref{eq:ERGM} sums to 1 over the support $\mathcal{Y}_{n}$. To simplify notation, we also assume that $\vec{V}$ is implicitly absorbed into $\vec{X}$. Exact evaluation of the normalizing factor involves integrating an extremely rough function over all possible network configurations ($2^{n \choose 2}$ non-negative terms for an undirected network of size $n$). This cannot be done by brute force except for trivially small graphs, and the roughness of the underlying function precludes simple Monte Carlo strategies; thus, alternative approaches that approximate or avoid this calculation are of substantial interest \citep[see][for a review]{hunter2012computational}. 
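For a trivially small graph, however, the normalizing factor can be computed by direct enumeration, which provides a useful sanity check for ERGM implementations. The following sketch (with hypothetical parameter values; the natural parameterization $\bm{\eta}(\bm{\theta}) = \bm{\theta}$ and the counting reference measure are assumed) evaluates \eqref{eq:ERGM} exactly for a 3-node undirected graph with edge and triangle statistics:

```python
import itertools
import math

n = 3                        # trivially small: 3 dyads, 2^3 = 8 configurations
dyads = list(itertools.combinations(range(n), 2))
theta = [-1.0, 0.5]          # hypothetical parameters for (edges, triangles)

def stats(y):
    """Sufficient statistics g(y): edge count and triangle count."""
    edges = sum(y.values())
    triangles = sum(y[(i, j)] * y[(i, k)] * y[(j, k)]
                    for i, j, k in itertools.combinations(range(n), 3))
    return [edges, triangles]

configs = [dict(zip(dyads, bits))
           for bits in itertools.product([0, 1], repeat=len(dyads))]

# Normalizing factor psi(theta), computed by brute-force enumeration.
psi = math.log(sum(math.exp(sum(t * g for t, g in zip(theta, stats(y))))
                   for y in configs))

def pmf(y):
    """Exact ERGM probability of configuration y under the counting measure."""
    return math.exp(sum(t * g for t, g in zip(theta, stats(y))) - psi)

assert abs(sum(pmf(y) for y in configs) - 1.0) < 1e-12
```

Since all $2^{3}=8$ configurations are enumerated, the computed probabilities sum to one exactly, as the final assertion checks; the same enumeration becomes infeasible almost immediately as $n$ grows.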
To date, the most frequently used approaches include: maximum pseudo-likelihood estimation (MPLE; \citet{besag1974spatial}) adapted by \citet{strauss1990pseudolikelihood}; Markov Chain Monte Carlo MLE (MCMC MLE; \citet{geyer1992constrained}) by \citet{handcock2003assessing, hunter2006inference}; Stochastic approximation (SA; \citet{robbins1951stochastic,pflug1996optimization}) by \citet{snijders2002markov}; and fully Bayesian inference based on approximate exchange algorithm \citep{caimo2011bayesian}. Recent developments on ERGM estimation have concentrated on: (1) finding better initial values for simulation-based MLE, including the \emph{partial stepping} technique \citep{hummel2012improving} and \emph{contrastive divergence} (CD,\citet{hinton2002training})-based techniques adapted to ERGMs by \citet{krivitsky2017using}; and (2) more accurate tractable approximations to ERGM likelihood than pseudo-likelihood, such as the adjusted pseudo-likelihood \citep{bouranis2017efficient,bouranis2018bayesian} for fast Bayesian inference. Despite the computational challenges, these and related strategies have made ERGM inference practical for well-posed model families (e.g., see \citep{schweinberger2019exponential} for a recent review). \subsection{Size-adjusted parameterizations} It is worth noting that the behavior of Eq.~\eqref{eq:ERGM} across $n$ is highly dependent on the choice of reference measure, $h$. In particular, the counting measure - while a mathematically convenient choice - implicitly sets the base distribution of the network to be the uniform distribution on $\mathcal{Y}_n$, and has the side effect of generating graphs whose densities are \emph{ceteris paribus} constant in $n$. When network size varies, this is not always realistic: in many networks, mean degree is approximately constant in $n$, implying that density must scale as $n^{-1}$. 
To correct for this, \citet{krivitsky2011adjusting} propose the reference measure $h(\vec{y})=n^{-M(\vec{y})}$, where $M$ is the edge count. This is equivalent to adding a size-dependent offset of $-\log n$ to the natural parameter associated with the edge count, i.e. \begin{equation} \label{eq:krivitsky_offset} \eta_{1}(\bm{\theta}) = \theta_{1} - \log n, \end{equation} \noindent where $\theta_{1} \in \mathbb{R}$ is a parameter that does not depend on the network size. In the present work, we employ the \emph{Krivitsky reference measure} as above, although other size-adjusted parameterizations are also possible \citep[e.g., ][]{butts2015flexible, kolaczyk2015question}. \section{Finite mixtures of ERGMs} \label{sec:Mixture_of_ERGMs} We assume a population of networks $(\vec{Y}^{(1)},\vec{V}^{(1)},\vec{X}^{(1)}),\ldots,(\vec{Y}^{(m)},\vec{V}^{(m)},\vec{X}^{(m)})$, where $\vec{Y}^{(i)}$ is a graph structure on vertex set $\vec{V}^{(i)}$ with covariate set $\vec{X}^{(i)}$. Our interest is in modeling $\vec{Y}^{(1)},\cdots, \vec{Y}^{(m)}$ given $(\vec{V}^{(1)},\vec{X}^{(1)}),\cdots,(\vec{V}^{(m)},\vec{X}^{(m)})$, where it will be assumed that the respective graph structures are conditionally independent given the generative process, vertex sets, and covariates. \subsection{Model} We model the generative process for the network ensemble as a finite mixture, with each mixture component (equivalently, subpopulation, or ``cluster'') being an ERGM distribution with cluster-specific parameters. (See Figure~\ref{fig:mixture_model_graph}.) 
Given $K$ clusters, the {\it a priori} probability for a network to belong to cluster $k$ is $\tau_{k}$ for $k=1,2,\cdots,K$, and the probability law governing the formation of the network in group $k$ is parameterized by Eq.~\eqref{eq:ERGM} with cluster-specific parameter vector $\bm{\theta}_{k} \in \mathbb{R}^{q_{k}}$ and cluster-specific mapping to the natural parameters $\bm{\eta}_{k}(\bm{\theta}_{k}) = ( \eta_{k,1}(\bm{\theta}_{k}), \cdots, \eta_{k,p_{k}}(\bm{\theta}_{k})) \in \mathbb{R}^{p_{k}}$. For notational simplicity, we omit the subscripts of the $\bm{\eta}_{k}$'s for the remainder of the paper. More specifically, the marginal likelihood for network $\vec{Y}^{(i)}$, with $|\vec{V}^{(i)}| \equiv n_{i}$, takes the following form \begin{equation} \label{eq:mixture_ERGM} \mathbb{P}(\vec{Y}^{(i)} = \vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\tau}, \underline{\bm{\theta}}) = \sum_{k=1}^{K} \tau_{k} \exp \bigg( \bm{\eta}_{k}(\bm{\theta}_{k})^{\intercal} \vec{g}_{k}(\vec{y}^{(i)};\vec{X}^{(i)}) - \psi_{\vec{g_{k}},\bm{\eta}_{k}, \vec{X}^{(i)},\mathcal{Y}_{n_{i}}}(\bm{\theta}_{k}) \bigg) h_i(\vec{y}^{(i)}), \quad \vec{y}^{(i)} \in \mathcal{Y}_{n_{i}}, \end{equation} where $\bm{\tau} = (\tau_{1}, \cdots, \tau_{K})$ and $\underline{\bm{\theta}} = (\bm{\theta}_{1},\cdots,\bm{\theta}_{K})$ are the model parameters, and the former satisfies the constraint $\sum_{k=1}^{K} \tau_{k} = 1, \tau_k \geq 0$ for $k=1,\ldots,K$. 
The ensemble of networks consists of $m$ independent observations $ \underline{\vec{y}} = (\vec{y}^{(1)},\cdots,\vec{y}^{(m)})$ with fixed covariate set $\underline{\vec{X}} = (\vec{X}^{(1)},\cdots,\vec{X}^{(m)})$ and fixed vertex set $\underline{\vec{V}} = (\vec{V}^{(1)},\cdots,\vec{V}^{(m)})$, and hence the joint likelihood is \begin{equation} \label{eq:mixture_ERGM_joint} \mathbb{P}(\underline{\vec{Y}} = \underline{\vec{y}} | \underline{\vec{X}}; \bm{\tau}, \underline{\bm{\theta}}) = \prod_{i=1}^{m} \bigg[ \sum_{k=1}^{K} \tau_{k} \exp \bigg( \bm{\eta}_{k}(\bm{\theta}_{k})^{\intercal} \vec{g}_{k}(\vec{y}^{(i)};\vec{X}^{(i)}) - \psi_{\vec{g_{k}},\bm{\eta}_{k}, \vec{X}^{(i)}, \mathcal{Y}_{n_{i}}}(\bm{\theta}_{k}) \bigg) h_i(\vec{y}^{(i)}) \bigg], \end{equation} where we have absorbed the support constraint into the reference measure. To facilitate statistical inference, we consider the representation of \eqref{eq:mixture_ERGM_joint} from a latent variable perspective. Let $Z_{i}, i=1,\cdots,m$ be latent variables following a categorical distribution with $K$ values and probability parameter $\bm{\tau}$, such that $Z_{i} = k$ if $\vec{Y}^{(i)}$ belongs to cluster $k$. We may then treat $\vec{Y}^{(i)}$ as arising from a process in which $Z_i$ is first drawn from $\mathrm{Categorical}(\bm{\tau})$, and $\vec{Y}^{(i)}$ is then drawn from the ERGM distribution corresponding to cluster $Z_i$. While one could allow the reference measure to also vary by cluster, we focus on the case of ERGMs specified relative to the Krivitsky reference measure when the sizes of the networks vary. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{plate_diagram.pdf} \caption{Structure of the graph mixture model. 
Random quantities are depicted within circles, fixed quantities within rectangles; observables are shaded.\label{fig:mixture_model_graph}} \end{center} \end{figure} \subsection{Bayesian estimation} Bayesian estimation is a natural choice for parameter inference here, since (1) it is more robust to initialization and less prone to converge to local minima than maximum likelihood; (2) interval estimation is straightforward and does not rely on the assumption of approximate normality; and (3) it provides principled answers in fixed-$n,m$ settings. Our strategy is to employ Metropolis-within-Gibbs sampling to obtain MCMC samples from the joint posterior distribution of $\underline{\bm{\theta}}$ and $\bm{\tau}$. We specify prior distributions for the parameters as follows, $$ \bm{\tau} \sim \text{Dirichlet}(\bm{\alpha}), $$ $$ \bm{\theta}_{k} \overset{i.i.d.}{\sim} \text{MVN}_{p}(\bm{\mu}, \Psi ), \ \ \ k=1,\cdots,K, $$ where $ \bm{\alpha} = (\alpha_{1}, \cdots, \alpha_{K})$, $\bm{\mu}$ and $\Psi$ are hyper-parameters to be specified by the user. For typical use cases, a reasonable choice of hyperparameters is $\alpha_{1} = \ldots = \alpha_K = 3$, which puts low probability on any group being extremely small, and $\Psi = 25 I_{p}$, which is fairly flat over the typical range of variation for common parameterizations. A convenient choice of $\bm{\mu}$ is $\vec{0}$, but this can be problematic because it will rarely be true that we want to shrink the edge parameter (which governs density) towards 0. It can hence be important to incorporate empirical knowledge into the specification of $\bm{\mu}$; in particular, one should set the hyperparameter associated with the edge term to be negative (e.g., $-\log n$) when modeling social networks under the counting measure, as most social networks are sparse. Under the Krivitsky reference measure, using the log of the \emph{a priori} expected degree (based either on theory or analysis of similar data sets) is an appropriate choice. 
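Putting the prior specification together with the latent-variable formulation, one can simulate an ensemble from the full hierarchical model. The sketch below is a minimal illustration rather than the model used in our studies: it assumes an edges-only cluster model under the Krivitsky reference measure (so each graph reduces to a Bernoulli graph), and all numerical values are hypothetical:

```python
import math
import random

rng = random.Random(0)
K, m, n = 2, 2000, 30            # clusters, networks, nodes (all hypothetical)
alpha = [3.0] * K                # Dirichlet hyperparameters
mu, sd = math.log(3.0), 5.0      # prior mean / sd of the edge parameter

# tau ~ Dirichlet(alpha), drawn via normalized Gamma variates.
g = [rng.gammavariate(a, 1.0) for a in alpha]
tau = [v / sum(g) for v in g]

# Cluster-specific edge parameters theta_k ~ N(mu, sd^2).
theta = [rng.gauss(mu, sd) for _ in range(K)]

def draw_network(n_nodes):
    """Stage 1: Z ~ Categorical(tau); stage 2: graph | Z from cluster Z's
    edges-only ERGM with the Krivitsky offset, i.e. a Bernoulli graph."""
    z = rng.choices(range(K), weights=tau)[0]
    p_edge = 1.0 / (1.0 + math.exp(-(theta[z] - math.log(n_nodes))))
    edges = {(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
             if rng.random() < p_edge}
    return z, edges

draws = [draw_network(n) for _ in range(m)]
share_cluster0 = sum(1 for z, _ in draws if z == 0) / m
```

With many simulated networks, the empirical share of cluster-0 graphs approximates the drawn mixing weight $\tau_1$, as expected under the two-stage process.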
As noted, we perform posterior inference via MCMC. Our algorithm iterates over the model parameters $(\underline{\bm{\theta}}, \bm{\tau})$ with the priors given above, and the latent variables $\vec{Z} = (Z_{1}, \cdots, Z_{m})$. Where possible we sample from the full conditional posterior distributions; otherwise we use Metropolis-Hastings steps. \begin{algorithm} \caption{Metropolis-within-Gibbs sampler for the ERGM mixture model \label{alg:alg1}} \begin{algorithmic}[1] \STATE \textbf{Initialization}: Set $\bm{\tau}^{0}$, $\underline{\bm{\theta}}^{0}$ and $\bm{Z}^{0}$ to initial values (e.g., prior means). \FOR{$t = 1,2,\cdots,T $} \STATE Generate $Z_{i}^{t}$ ($i=1,\cdots,m$, $k=1,\cdots,K$) from \newline \indent $\mathbb{P}( Z_{i}^{t} = k | \tau_{k}^{t-1}, \bm{\theta}_{k}^{t-1}, \vec{y}^{(i)} ) \propto \tau_{k}^{t-1} \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta}_{k}^{t-1} )$ \STATE Compute $\nu_{k}^{t} = \sum_{i=1}^{m} \mathbbm{1}_{Z_{i}^{t} = k}$; $k=1,\cdots,K$ \STATE Generate $\bm{\tau}^{t}$ from $\text{Dirichlet}(\alpha_{1} + \nu_{1}^{t},\cdots, \alpha_{K} + \nu_{K}^{t} )$ \FOR{$k=1,\cdots,K$} \STATE Propose $\bm{\theta}_{k}^{'} \sim q(\cdot | \bm{\theta}_{k}^{t-1})$ \STATE Accept $\bm{\theta}_{k}^{'}$ with probability equal to \newline \indent $\frac{ \pi( \bm{\theta}_{k}^{'} ) \prod_{ Z_{i}^{t}=k } \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta}_{k}^{'}) q( \bm{\theta}_{k}^{t-1} | \bm{\theta}_{k}^{'}) }{ \pi( \bm{\theta}_{k}^{t-1} ) \prod_{ Z_{i}^{t}=k } \mathbb{P}(\vec{y}^{(i)} |\vec{X}^{(i)}; \bm{\theta}_{k}^{t-1}) q( \bm{\theta}_{k}^{'}|\bm{\theta}_{k}^{t-1}) } $ \label{eq:MH_ratio} \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} The proposal distribution $q(\cdot | \bm{\theta})$ in the Metropolis step is set by the user to achieve good performance of the algorithm. On the basis of some experimentation, we use the symmetric proposal $\mathcal{N}(\bm{\theta}, \sigma^{2} I_{q})$, where $\sigma = 0.05$. 
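Numerically, the per-network likelihood terms in step 3 of Algorithm \ref{alg:alg1} underflow if computed directly, so the categorical probabilities should be assembled on the log scale. A minimal sketch of this step (with hypothetical log-likelihood values) follows:

```python
import math
import random

def sample_assignment(log_lik_by_cluster, tau, rng):
    """One draw of Z_i given per-cluster log-likelihoods log P(y_i | theta_k)
    and mixing weights tau, with probabilities formed via log-sum-exp."""
    logw = [math.log(t) + ll for t, ll in zip(tau, log_lik_by_cluster)]
    mx = max(logw)
    w = [math.exp(v - mx) for v in logw]   # rescaled to avoid underflow
    total = sum(w)
    probs = [v / total for v in w]
    return rng.choices(range(len(tau)), weights=probs)[0], probs

rng = random.Random(1)
# Hypothetical log pseudo-likelihoods of one network under K = 2 clusters.
z, probs = sample_assignment([-1050.0, -1043.0], [0.5, 0.5], rng)
```

Note that a naive implementation would evaluate $\exp(-1050)$, which underflows to zero in double precision; subtracting the maximum log-weight first keeps the ratio exact.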
At each MCMC iteration, we permute the labels to impose ordering constraints on the first common element of the parameter vectors (e.g., total number of edges), $\theta_{11} < \theta_{21} < \cdots < \theta_{K1} $ for model identifiability purposes. Simulation studies and case studies show that the ordering constraints can work well, though other post-processing techniques (e.g., Kullback-Leibler relabeling algorithm \citep{stephens2000dealing} and Pivotal Reordering algorithm \citep{marin2005bayesian}, etc.), can be used depending on practitioners' preference. To deal with the intractability of $\mathbb{P}( \vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta})$, there are at least three possible solutions in ERGM literature: \begin{itemize} \item Work with a tractable approximation in the place of ERGM likelihood, e.g. pseudo-likelihood \citep{strauss1990pseudolikelihood}, fully adjusted pseudo-likelihood \citep{bouranis2018bayesian}, or other composite likelihoods \citep{austad2010deterministic,asuncion2010learning}. \item Use importance sampling to approximate the ERGM likelihood \citep{koskinen2004bayesian, koskinen2008linked}. \item Use auxiliary-variable based MCMC algorithms to eliminate the intractable normalizing factor in ERGM likelihood \citep{caimo2011bayesian}. \end{itemize} In fact, updating $\bm{\theta}_{k}$'s using the Metropolis-Hastings ratio in \eqref{eq:MH_ratio} is a \emph{doubly-intractable} problem, which can be approached using various advanced MCMC techniques \citep[see][for a review]{park2018bayesian}. However, these advanced techniques all require simulating networks from ERGMs at each MCMC iteration to approximate the true likelihood Eq.~\eqref{eq:ERGM}, which can be expensive for large networks. 
When the major goal is clustering instead of estimation of cluster-specific parameters, we propose to work with the most common form of tractable approximation, the pseudo-likelihood, in which the full likelihood of each network is approximated by a product of the full conditional distributions of the edge variables $y_{ij}$ in $\vec{y}$, \begin{equation} \label{eq:ERGM_PL} f_{PL}(\vec{y} | \vec{X}; \bm{\theta}) = \prod_{(i,j) \in \mathcal{D}} \mathbb{P}(y_{ij} | y_{-ij}; \vec{X}; \bm{\theta}) = \prod_{(i,j) \in \mathcal{D}} \frac{\exp\left\{ y_{ij}\, \bm{\eta}(\bm{\theta})^{\intercal} \Delta_{i,j} \vec{g}(\vec{y};\vec{X}) \right\}}{1 + \exp\left\{\bm{\eta}(\bm{\theta})^{\intercal} \Delta_{i,j} \vec{g}(\vec{y};\vec{X}) \right\}}, \end{equation} where $\Delta_{i,j} \vec{g}(\vec{y}; \vec{X}) = \vec{g}(y_{ij}^{+};\vec{X}) - \vec{g}(y_{ij}^{-};\vec{X})$ are the so-called \emph{change statistics} associated with the dyad $(i,j)$, representing the change in sufficient statistics when $y_{ij}$ is toggled from 0 ($y_{ij}^{-}$) to 1 ($y_{ij}^{+}$) with the rest of the network remaining unchanged; $\mathcal{D}$ denotes the set of all dyads. For directed networks, $\mathcal{D} = \{ (i,j) | i,j \in \mathcal{N}, i \neq j \}$, while for undirected networks, $\mathcal{D} = \{ (i,j) | i,j \in \mathcal{N}, i < j \}$. In the frequentist paradigm, maximizing \eqref{eq:ERGM_PL} gives the so-called MPLE, which is relatively fast, algorithmically convenient, and able to provide approximate parameter estimates for even badly-specified models. While empirical observations show that the MPLE can be biased and underestimate standard errors \citep{van2009framework} (especially for models with strong dyadic dependence), it has been the default choice for initialization of MCMC-MLE algorithms. There is also promising work on using bootstrapped MPLE to construct confidence intervals \citep{schmid2017exponential} for large and sparse networks, as the MPLE is usually close to the MLE in such cases \citep{desmarais2010consistent}. 
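As an illustration, the sketch below evaluates the log pseudo-likelihood for an edges-only model, for which the change statistic of every dyad equals one. Since that model has dyadic independence, the pseudo-likelihood coincides with the true likelihood, and the code checks this against the Bernoulli log-likelihood directly (all parameter values are hypothetical):

```python
import itertools
import math

def log_pseudo_likelihood(y_edges, n, eta_edge):
    """Log pseudo-likelihood of an undirected graph under an edges-only model;
    the change statistic of the edge count is 1 for every dyad, so each factor
    is exp(y_ij * eta) / (1 + exp(eta))."""
    total = 0.0
    for i, j in itertools.combinations(range(n), 2):
        y_ij = 1 if (i, j) in y_edges else 0
        delta_g = 1.0                    # change statistic of the edge term
        total += y_ij * eta_edge * delta_g - math.log1p(math.exp(eta_edge * delta_g))
    return total

# Dyadic independence: the pseudo-likelihood equals the Bernoulli likelihood.
n, eta = 4, -1.0                         # hypothetical size and parameter
y = {(0, 1), (2, 3)}
p_edge = 1.0 / (1.0 + math.exp(-eta))
n_edges, n_dyads = len(y), n * (n - 1) // 2
bernoulli = n_edges * math.log(p_edge) + (n_dyads - n_edges) * math.log(1.0 - p_edge)
assert abs(log_pseudo_likelihood(y, n, eta) - bernoulli) < 1e-12
```

For models with dependence terms, the change statistics are no longer constant and the product is only an approximation, which is precisely where the MPLE caveats above apply.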
Similar logic has motivated the use of Bayesian bootstrap estimation based on ``pseudo-MAP'' estimates using the PL approximation to the likelihood \citep{grazioli2019comparative}. \subsection{Choosing the number of clusters} \label{subsec:Clusters} We recast the problem of choosing the number of clusters as a model selection problem, as different numbers of clusters result in distinct statistical models. Therefore, we use a version of the observed \textit{deviance information criterion} (DIC) introduced by \citet{celeux2006deviance}, which is an extension of the original DIC \citep{spiegelhalter2002bayesian} to models with latent variables. Given posterior draws $\bm{\tau}^{l}, \underline{\bm{\theta}}^{l} = ( \bm{\theta}_{1}^{l}, \cdots, \bm{\theta}_{K}^{l})$, $l=1,\cdots,L$, and the observed ensemble of networks $\underline{\vec{y}} = (\vec{y}^{(1)}, \cdots, \vec{y}^{(m)})$, the observed DIC is defined by \begin{equation} \label{eq:DIC3} DIC_{K} = -4 \mathbb{E}_{\underline{\bm{\theta}}}[ \log \mathbb{P}( \underline{\vec{y}} | \underline{\vec{X}}; \underline{\bm{\theta}}) | \underline{\vec{y}} ] + 2\log \hat{\mathbb{P}}( \underline{\vec{y}} | \underline{\vec{X}}; \underline{\bm{\theta}}), \end{equation} where $$ \hat{\mathbb{P}}( \underline{\vec{y}} | \underline{\vec{X}}; \underline{\bm{\theta}}) = \prod_{i=1}^{m} \hat{\mathbb{P}}(\vec{y}^{(i)} | \vec{X}^{(i)}; \underline{\bm{\theta}}) = \prod_{i=1}^{m} \bigg( \frac{1}{L} \sum_{l=1}^{L} \sum_{k=1}^{K} \tau_{k}^{l} \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta}_{k}^{l} ) \bigg), $$ and $$ \mathbb{E}_{\underline{\bm{\theta}}}[ \log \mathbb{P}( \underline{\vec{y}} | \underline{\vec{X}}; \underline{\bm{\theta}}) | \underline{\vec{y}} ] = \frac{1}{L} \sum_{l=1}^{L} \sum_{i=1}^{m} \log \left\{ \sum_{k=1}^{K} \tau_{k}^{l} \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta}_{k}^{l}) \right\}. 
$$ As practitioners often seek parsimonious models to represent the clusters, we present a rule-of-thumb to identify the point where there are diminishing returns from further increasing the number of clusters, and hence to avoid potential over-fitting. Define the relative difference (RD) in DIC as $$RD(k) = \frac{DIC_{k} - DIC_{k-1}}{DIC_{k-1}}, \quad k=2,3,\cdots. $$ We define the optimal number of clusters given by a pre-specified cut-off value $\epsilon$ as $k_{opt}(\epsilon) = \min_{k} \left\{k | RD(k) \geqslant \epsilon \right\}$, based on the reasoning that the optimal number of clusters should be the first $k$ resulting in limited relative improvement in terms of DIC. Simulation studies in section \ref{sec:Simulation} provide empirical evidence that $\epsilon = -0.005$ can be a reasonable rule-of-thumb for selecting the number of clusters. We note that having an ensemble of networks makes it possible to assess the out-of-sample performance of mixtures of ERGMs using the traditional statistical principle of cross-validation (CV), and there is work on using CV to estimate the number of clusters for observations with continuous values \citep{fu2019estimating}. In particular, to reduce the possibility of accidentally dropping all graphs in a single cluster by holding out too many graphs simultaneously, leave-one-out CV should be favored. The loss function for the cross-validation procedure can be the negative log-likelihood evaluated on the held-out data, as well as the prediction error with respect to any structural properties of interest (obtained by simulating from the estimated model using training data). Though CV is not Bayesian and violates the likelihood principle, it is easy to implement and obviates the need to choose a threshold for when to stop adding clusters based on the predictive power of the model. 
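The stopping rule based on $RD(k)$ can be applied mechanically to a sequence of DIC values; a small sketch (with hypothetical DIC values, following the definition of $k_{opt}(\epsilon)$ literally) is:

```python
def k_opt(dics, eps=-0.005):
    """First k whose relative DIC change RD(k) = (DIC_k - DIC_{k-1}) / DIC_{k-1}
    is at least eps; dics[0] holds the DIC of the one-cluster model."""
    for k in range(1, len(dics)):
        rd = (dics[k] - dics[k - 1]) / dics[k - 1]
        if rd >= eps:
            return k + 1        # convert the 0-based index to a cluster count
    return len(dics)

# Hypothetical DIC path: large drops up to 3 clusters, then a near-plateau.
assert k_opt([12000.0, 10000.0, 9000.0, 8995.0]) == 4
```

In the hypothetical path above, moving from 3 to 4 clusters improves DIC by well under half a percent, so $k_{opt}$ is the first $k$ at which the relative improvement has leveled off.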
\subsection{Posterior probability of cluster membership} An appealing aspect of mixture modeling is that the posterior probability of each graph belonging to each cluster (alternately: of having been generated by a particular process) can be conveniently obtained as \begin{equation} \label{eq:post_prob} \mathbb{P}( Z_{i} = k | \vec{y}^{(i)} ) = \int \frac{ \tau_{k} \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)} ;\bm{\theta}_{k}) }{ \sum_{k'=1}^{K} \tau_{k'} \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta}_{k'}) } \pi( \underline{\bm{\theta}}, \bm{\tau} | \underline{\vec{y}}) d\underline{\bm{\theta}}d\bm{\tau}, \end{equation} where $\pi( \underline{\bm{\theta}}, \bm{\tau} | \underline{\vec{y}})$ is the posterior distribution of $ \underline{\bm{\theta}}, \bm{\tau}$. The integral in \eqref{eq:post_prob} is computationally intractable; hence we use posterior samples $\underline{\bm{\theta}}^{1}, \cdots, \underline{\bm{\theta}}^{L}$ and $\bm{\tau}^{1}, \cdots, \bm{\tau}^{L}$ to obtain its Monte-Carlo approximation, \begin{equation} \hat{\mathbb{P}}( Z_{i} = k | \vec{y}^{(i)} ) = \frac{1}{L} \sum_{l=1}^{L} \frac{ \tau_{k}^{l} \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta}_{k}^{l} ) }{ \sum_{k'=1}^{K} \tau_{k'}^{l} \mathbb{P}(\vec{y}^{(i)} |\vec{X}^{(i)}; \bm{\theta}_{k'}^{l} ) }. \end{equation} The posterior mode, i.e., $ \hat{Z}_{i} = \argmax_{k} \hat{\mathbb{P}}( Z_{i} = k | \vec{y}^{(i)} )$, can be used as the output of the cluster analysis, provided that the goal is to obtain a deterministic cluster assignment. \section{Simulation studies} \label{sec:Simulation} We conduct extensive simulation studies to show that the proposed approach is capable of selecting the true number of clusters, recovering the true cluster memberships, and recovering the true model parameters.
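Before turning to the experiments, note that the Monte-Carlo membership estimator of the previous subsection amounts to averaging per-draw responsibilities. A minimal sketch (helper name and inputs hypothetical, assuming the per-draw log-likelihoods $\log \mathbb{P}(\vec{y}^{(i)} | \vec{X}^{(i)}; \bm{\theta}_{k}^{l})$ have already been computed; the paper's implementation is in R):

```python
import math

def membership_probs(tau_draws, loglik_draws):
    """Estimate P(Z_i = k | y^(i)) for one graph by averaging the
    per-draw posterior responsibilities over L posterior samples.

    tau_draws:    L x K mixture weights tau^l.
    loglik_draws: L x K values of log P(y^(i) | X^(i); theta_k^l).
    """
    K = len(tau_draws[0])
    L = len(tau_draws)
    probs = [0.0] * K
    for tau, ll in zip(tau_draws, loglik_draws):
        w = [t * math.exp(v) for t, v in zip(tau, ll)]
        s = sum(w)
        for k in range(K):
            probs[k] += w[k] / (s * L)
    return probs
```

Because each draw's responsibilities are normalized before averaging, the estimates sum to one for every graph.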
\subsection{Experimental settings} The ground truth is available for the synthetic data, as we simulate networks from mixtures of ERGM distributions defined on three of the most commonly used network sufficient statistics, but with distinct parameters: \begin{itemize} \item $ g_{1}(\vec{y}) = \sum_{i<j} y_{ij}$, the total number of edges. \item $ g_{2}(\vec{y}) = e^{\phi} \sum_{k=1}^{n-2} \left\{ 1 - (1-e^{-\phi})^{k} \right\} EP_{k}(\vec{y})$, the geometrically weighted edgewise shared partners (GWESP) statistic. Here $EP_{k}(\vec{y})$ is the number of connected pairs that have exactly $k$ common neighbors, which measures local clustering in a network. The decay parameter $\phi$ controls the relative contribution of $EP_{k}(\vec{y})$ to the GWESP statistic, and is fixed at $0.25$ here. \item $ g_{3}(\vec{y}; \vec{X}) = \sum_{i < j} y_{ij}\mathbbm{1}_{ \{\vec{X}_{i} = \vec{X}_{j}\} } $, the total number of edges whose endpoints share the same value of the nodal covariate $\vec{X}$, often known as the nodematch term. \end{itemize} We fix the nodal covariate $\vec{X}$ to be binary, with one half of the nodes taking value $0$ and the other half value $1$. To examine the performance of the proposed approach across a range of conditions, we run a full-factorial experiment on the following three factors: \begin{itemize} \item Network size: 40, 100, 250. \item Number of clusters: 2, 3. \item Cluster size (networks per cluster): 10, 20, 50. \end{itemize} We thus have a total of 18 experimental conditions, each of which is run for 50 replicates.
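To make these statistics concrete, a pure-Python sketch computing the edge count, GWESP with decay $\phi$, and the nodematch count from an undirected edge list (hypothetical helper; the actual simulations use \texttt{statnet}'s term implementations in R):

```python
import math

def ergm_stats(n, edges, x, phi=0.25):
    """Edge count, GWESP with decay phi, and nodematch count on x,
    computed from an undirected edge list on nodes 0..n-1."""
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    # GWESP: each edge contributes according to its number of shared partners
    gwesp = 0.0
    for i, j in edges:
        sp = len(adj[i] & adj[j])  # edgewise shared-partner count
        if sp > 0:
            gwesp += math.exp(phi) * (1.0 - (1.0 - math.exp(-phi)) ** sp)
    nodematch = sum(1 for i, j in edges if x[i] == x[j])
    return len(edges), gwesp, nodematch
```

For a single triangle, each edge has exactly one shared partner, so its GWESP value is $3\, e^{\phi}(1 - (1 - e^{-\phi})) = 3$ for any decay $\phi$.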
The true cluster-specific parameters are specified as $$ \bm{\theta}^{40}_{true} = \begin{pmatrix} -1.15 & 0 & 0 \\ -2.85 & 0.25 & 2.25 \\ -4.95 & 2.5 & 0.25 \\ \end{pmatrix}, \ \ \bm{\theta}^{100}_{true} = \begin{pmatrix} -2.20 & 0 & 0 \\ -4.15 & 0.25 & 2.25 \\ -5.85 & 2.5 & 0.25 \\ \end{pmatrix}, \ \ \bm{\theta}^{250}_{true} = \begin{pmatrix} -3.20 & 0 & 0 \\ -4.95 & 0.25 & 2.25 \\ -6.42 & 2.5 & 0.25 \\ \end{pmatrix}$$ to ensure that the simulated networks (i) have similar mean degree ($\sim 9.9$; that is, networks of size 100 have density $\sim 0.10$) across different clusters and network sizes; and (ii) represent three common yet intuitive patterns in real-world networks: the first row corresponds to the case in which ties are independent Bernoulli draws, the second row to the case in which there is a strong homophily effect but a weak triadic closure effect, and the third row to the case in which there is a strong triadic closure effect but a weak homophily effect. To maintain this pattern, we fix the values of the coefficients associated with the GWESP and nodematch terms across settings with different network sizes, and only modify the coefficient of the edges term to keep the mean degree as desired. We simulate networks using the first two rows of the parameter matrices when the number of clusters is 2. Identifying subpopulations from ensembles of networks produced by this model is by no means a trivial task, especially as the cluster-specific parameters are chosen to produce networks of similar mean degree ($\approx 9.9$), as shown in Figure \ref{fig:sim_networks}. While these networks appear superficially similar, we can recover the distinct processes that generated them. \begin{figure} \centering \includegraphics[width=13.5cm]{sim_100_3.png} \caption{Representative networks from clusters 1 (left), 2 (middle), and 3 (right).
Network size: 100. Color indicates nodal covariate value: 0 (black), 1 (red). Despite the apparent similarity of the networks produced by the three generative processes, we are able to infer the latter from the observed ensemble. \label{fig:sim_networks}} \end{figure} We apply the proposed Algorithm \ref{alg:alg1} to analyze the synthetic data sets, allowing the candidate values for the number of clusters to range from 1 to one greater than the true number of clusters (i.e., to 4 if the true number of clusters is 3, and to 3 if the true number of clusters is 2). We assign random initial values to the latent membership indicators $\bm{Z}_{i}^{0}$, draw the weight parameters $\bm{\tau}^{0}$ from the prior, and set the parameters associated with the edges term to $-2$ (i.e., $\theta_{11} = \cdots = \theta_{K1} = -2$), while all other elements of $\underline{\bm{\theta}}$ are drawn independently from a uniform distribution $\mathcal{U}(-0.1,0.1)$. It is worth noting that our experiments suggest that better initial values can result in faster convergence and more stable performance for large networks. One effective way to initialize Algorithm \ref{alg:alg1} is to first find the MPLE for each network in the ensemble separately, cluster these MPLE estimates with the K-means algorithm to initialize $\bm{Z}_{i}^{0}$, and use the intra-cluster mean MPLE estimates as the starting values of the cluster-specific model parameters. Table \ref{tb:sim_mcmc_setting} presents the MCMC settings, prior, and proposal distributions for the experiments. The thinning interval is chosen as $50$ for all MCMC chains to obtain high-quality, weakly correlated draws from the posterior. All computations in this paper are implemented in \textbf{R} \citep{R2018}, and we use the software suite \texttt{statnet} \citep{handcock2008statnet} to generate networks from ERGMs.
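The MPLE-plus-K-means initialization can be sketched as below; a minimal illustrative version in Python of Lloyd's algorithm over per-network MPLE vectors, which are assumed to be given (the paper's implementation is in R):

```python
import random

def kmeans(points, K, iters=50, seed=0):
    """Lloyd's algorithm on per-network MPLE vectors: returns labels
    (used to initialize Z_i^0) and centroids (the intra-cluster mean
    MPLEs used as starting values for the cluster parameters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, K)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean)
        for i, p in enumerate(points):
            labels[i] = min(range(K),
                            key=lambda k: sum((a - b) ** 2
                                              for a, b in zip(p, centers[k])))
        # recompute each center as the mean of its members
        for k in range(K):
            members = [p for i, p in enumerate(points) if labels[i] == k]
            if members:
                centers[k] = [sum(c) / len(members) for c in zip(*members)]
    return labels, centers
```

In practice one would run this on the stacked MPLE estimates, use the labels as $\bm{Z}_{i}^{0}$, and the centroids as the initial cluster-specific parameter vectors.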
\begin{table}[ht] \centering \caption{Total number of iterations, burn-in size, initialization method, prior hyper-parameters and covariance matrix for random-walk Metropolis-Hastings update of $\underline{\bm{\theta}}$ in simulation studies \label{tb:sim_mcmc_setting}} \begin{tabular}{lllllll} \hline Size, $K$ & Total iterations & Burn-in & Initialization & $\bm{\mu}$ & $\Psi$ & Prop. Cov \\ \hline 40, 2 & 17500 & 7500 & Random & (-1,0,0) & $25 I_{3}$ & $ 0.0025 I_{3}$ \\ 40, 3 & 20000 & 10000 & Random & (-1,0,0) & $25 I_{3}$ & $ 0.0025 I_{3}$ \\ 100, 2 & 17500 & 7500 & Random & (-1,0,0) & $25 I_{3}$ & $ 0.0025 I_{3}$\\ 100, 3 & 20000 & 10000 & MPLE, K-means & (-1,0,0) & $25 I_{3}$ & $ 0.0025 I_{3}$\\ 250, 2 & 22500 & 12500 & MPLE, K-means & (-1,0,0) & $25 I_{3}$ & $ 0.0016 I_{3}$ \\ 250, 3 & 25000 & 15000 & MPLE, K-means & (-1,0,0) & $25 I_{3}$ & $ 0.0016 I_{3}$ \\ \hline \end{tabular} \end{table} \subsection{Recovery of true number of clusters and cluster membership} We analyze the performance of the proposed method in terms of its ability to identify the true number of clusters and cluster memberships. Figures \ref{fig:RF_Khat_2} and \ref{fig:RF_Khat_3} show that selecting the number of clusters according to the point beyond which there are diminishing returns ($\epsilon = -0.005$) is consistently superior to the minimum-DIC criterion ($\epsilon = 0$), as the latter tends to favor more complex models (i.e., with more clusters) than is optimal. Under the DIC criterion with $\epsilon = -0.005$, one has a $90\%$ or higher chance of identifying the true number of clusters when the true number is 2, and about an $80\%$ chance when the true number of clusters is 3.
Compared to identifying the true number of clusters, recovering cluster memberships can be a more meaningful task in real-world applications. We evaluate it using the adjusted Rand index (ARI) \citep{hubert1985comparing}, a corrected-for-chance measure of the similarity between two clustering assignments, which yields a value of 1 for a perfect cluster assignment and has an expected value of 0 for a completely random cluster assignment. ARI is employed as an accuracy measure for cluster assignments here because the ground truth is available in the simulation study. Table \ref{tb:ARI} gives the mean ARI calculated across 50 replicates within each experimental setting; it shows that the proposed method performs well on the task of cluster assignment, as all the mean ARI values are at least 0.90 when the true number of clusters is $2$ and at least 0.85 when the true number of clusters is $3$ (a rule-of-thumb threshold for ``good clustering'' is 0.80). We note that the mean ARI scores in Table \ref{tb:ARI} include those calculated on the runs in which the true number of clusters is incorrectly identified, indicating that the proposed method is robust. In other words, the method fails gracefully: when it errs, it tends to completely combine two clusters or split one entire cluster into two, rather than mixing two clusters. \begin{figure} \centering \includegraphics[width=13.0cm, height=8cm]{RF_Khat_2.jpeg} \caption{Relative frequency of $\hat{K}$ selected by DIC criterion with $\epsilon = 0$ and $\epsilon = -0.005$. True number of clusters ($K$) = 2. \label{fig:RF_Khat_2}} \end{figure} \begin{figure} \centering \includegraphics[width=13.0cm, height=8cm]{RF_Khat_3.jpeg} \caption{Relative frequency of $\hat{K}$ selected by DIC criterion with $\epsilon = 0$ and $\epsilon = -0.005$. True number of clusters ($K$) = 3. \label{fig:RF_Khat_3}} \end{figure} \begin{table}[ht] \centering \caption{Mean ARI calculated across 50 replicates within each experimental setting.
The true number of clusters is denoted as $K$. \label{tb:ARI}} \begin{tabular}{l|lll|lll} \hline & & K=2 & & & K=3 & \\ \hline Size & 10 & 20 & 50 & 10 & 20 & 50 \\ \hline 40 & 0.940 & 0.980 & 0.900 & 0.902 & 0.942 & 0.924 \\ 100 & 0.980 & 0.996 & 0.980 & 0.902 & 0.869 & 0.905 \\ 250 & 1.000 & 1.000 & 1.000 & 0.884 & 0.939 & 0.905 \\ \hline \end{tabular} \end{table} \subsection{Estimation accuracy} Given a correctly identified number of clusters, one natural question to ask is whether the proposed algorithm can accurately estimate the cluster-specific parameters. Specifically, we evaluate the estimation accuracy by examining the bias of the posterior means. Table \ref{tb:bias_summary} summarizes the bias for cluster-specific model parameters under all experimental settings. We notice that the bias is in general small, especially for large networks, though there is slightly higher bias when the true number of clusters is 3. Larger bias is mostly seen in the clusters in which there is strong dyadic dependence among edge variables (i.e., large coefficients associated with the GWESP term), as expected. However, such bias becomes smaller and also less variable as sample size increases, indicating that a larger sample size can mitigate the bias induced by the adoption of the pseudo-likelihood. These findings suggest that estimated parameters are more reliable when a large sample of networks is available or when the networks of interest are large. \begingroup \setlength{\tabcolsep}{11pt} \renewcommand{\arraystretch}{2.25} \begin{sidewaystable} \centering \caption{Mean (standard deviation) of bias across replicates in which the true number of clusters is correctly identified by the DIC criterion ($\epsilon = -0.005$) within each experimental setting.
\label{tb:bias_summary} } \resizebox{\textwidth}{!}{ \begin{tabular}{l|llllll|lllllllll} \toprule & & & K=2 & & & & & & & & K=3 & & & & \\ \hline size & edges & gwesp & nodematch & edges & gwesp & nodematch & edges & gwesp & nodematch & edges & gwesp & nodematch & edges & gwesp & nodematch \\ 40 & -2.85 & 0.25 & 2.25 & -1.15 & 0 & 0 & -4.95 & 2.5 & 0.25 & -2.85 & 0.25 & 2.25 & -1.15 & 0 & 0 \\ \midrule \quad 10 & -0.041 (0.132) & 0.025 (0.087) & 0.01 (0.073) & 0.002 (0.095) & 0 (0.052) & -0.004 (0.047) & -0.06 (0.358) & 0.051 (0.25) & -0.014 (0.049) & 0.009 (0.153) & 0.007 (0.107) & -0.017 (0.07) & 0.02 (0.114) & -0.009 (0.058) & -0.005 (0.058) \\ \quad 20 & -0.014 (0.128) & 0.002 (0.078) & 0.011 (0.05) & -0.001 (0.069) & 0.002 (0.034) & -0.001 (0.034) & -0.021 (0.256) & 0.02 (0.174) & -0.014 (0.034) & 0.016 (0.121) & -0.004 (0.078) & -0.009 (0.051) & 0.002 (0.077) & 0.005 (0.042) & -0.008 (0.042) \\ \quad 50 & -0.003 (0.077) & -0.001 (0.049) & 0.006 (0.039) & 0.005 (0.046) & -0.003 (0.025) & 0.002 (0.02) & -0.046 (0.154) & 0.032 (0.109) & -0.006 (0.022) & 0.003 (0.069) & 0 (0.043) & -0.003 (0.033) & 0.006 (0.053) & -0.003 (0.029) & 0.001 (0.028) \\ \midrule 100 & -4.15 & 0.25 & 2.25 & -2.20 & 0 & 0 & -5.85 & 2.5 & 0.25 & -4.15 & 0.25 & 2.25 & -2.20 & 0 & 0 \\ \midrule \quad 10 & -0.003 (0.048) & 0.003 (0.027) & -0.001 (0.051) & 0.01 (0.033) & -0.003 (0.018) & 0 (0.03) & 0.001 (0.107) & 0.002 (0.075) & 0.002 (0.035) & -0.007 (0.049) & 0 (0.029) & 0.006 (0.057) & 0.002 (0.044) & -0.001 (0.021) & 0.003 (0.026) \\ \quad 20 & 0.004 (0.034) & -0.002 (0.021) & -0.002 (0.036) & 0.008 (0.026) & -0.003 (0.013) & -0.001 (0.021) & -0.005 (0.081) & 0.006 (0.056) & -0.004 (0.029) & -0.005 (0.042) & 0.002 (0.023) & 0 (0.037) & -0.006 (0.03) & 0.003 (0.015) & 0.001 (0.022) \\ \quad 50 & 0.005 (0.02) & -0.004 (0.015) & 0.001 (0.027) & 0.002 (0.017) & 0 (0.007) & -0.001 (0.015) & -0.019 (0.055) & 0.013 (0.037) & 0 (0.017) & 0.003 (0.022) & -0.001 (0.014) & -0.002 (0.02) & -0.004 
(0.015) & 0.001 (0.007) & 0 (0.016) \\ \midrule 250 & -4.95 & 0.25 & 2.25 & -3.20 & 0 & 0 & -6.42 & 2.5 & 0.25 & -4.95 & 0.25 & 2.25 & -3.20 & 0 & 0 \\ \midrule \quad 10 & 0.004 (0.027) & -0.002 (0.011) & -0.001 (0.027) & -0.002 (0.015) & 0 (0.012) & 0 (0.019) & -0.012 (0.056) & 0.009 (0.042) & -0.001 (0.026) & 0.004 (0.025) & -0.001 (0.012) & -0.001 (0.028) & 0.001 (0.013) & 0.001 (0.012) & -0.006 (0.022) \\ \quad 20 & -0.003 (0.019) & 0 (0.008) & 0.003 (0.021) & 0.001 (0.01) & -0.001 (0.009) & 0 (0.011) & -0.009 (0.046) & 0.006 (0.032) & 0.001 (0.017) & -0.001 (0.018) & 0.001 (0.009) & -0.001 (0.021) & -0.001 (0.012) & 0.001 (0.007) & -0.002 (0.013) \\ \quad 50 & 0.001 (0.012) & 0 (0.006) & 0 (0.013) & 0 (0.007) & 0 (0.006) & -0.001 (0.007) & -0.003 (0.028) & 0.003 (0.019) & -0.002 (0.009) & 0 (0.014) & 0 (0.005) & 0.001 (0.015) & -0.001 (0.007) & 0 (0.005) & -0.002 (0.008) \\ \bottomrule \end{tabular}} \end{sidewaystable} \endgroup \subsection{Posterior predictive assessments} One of the most appealing aspects of the mixture modeling framework is that one can use simple probability distributions as building blocks to approximate complex probability distributions (e.g., mixtures of Gaussians are often used to approximate multimodal distributions). It is thus of substantial interest to see whether mixtures of ERGMs can provide an adequate fit to complex graph distributions.
Although the selection of metrics should be guided by the particular properties of interest in practice, we consider four widely used metrics that characterize different aspects of graph structure: \begin{itemize} \item Mean eigenvector centrality: the eigenvector centrality (EC) is a node-level metric that measures the degree of membership of a given node in the largest core/periphery structure in the graph, and we take the mean eigenvector centrality over all nodes to obtain a graph-level metric.\footnote{Except in very rare cases for which the graph adjacency matrix lacks a principal eigenvalue. In such circumstances, eigenvector centrality is a signed indicator of membership in the two largest core/periphery structures (positive versus negative).} The eigenvector centrality is also the best one-dimensional approximation of the graph structure (in a least-squares sense), and accuracy in reproducing it indicates the extent to which the model is able to recover the broadest structural features of the graph. \item Transitivity: a standard measure of triadic closure in network analysis \citep{wasserman1994social}, defined as the ratio of complete triangles to all potentially complete triangles. \item Standard deviation of the degree distribution: a measure of the level of heterogeneity in the degree distribution. \item Mean of inverse geodesic distances: a measure of the overall closeness between nodes in a graph. \end{itemize} In this section we focus on the experimental settings with the most observations (3 clusters, 50 networks per cluster). As each ensemble of networks in the synthetic data sets contains a total of 150 graphs, we also generate 150 networks using posterior samples, following the data generating mechanism described in Figure \ref{fig:mixture_model_graph}.
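To make these quantities concrete, a self-contained sketch of three of the four graph-level metrics (mean eigenvector centrality omitted for brevity), together with an empirical Hellinger distance between two samples of metric values. The helpers are hypothetical pure-Python stand-ins; the paper's computations use R:

```python
import math
from collections import deque

def graph_metrics(n, edges):
    """Transitivity, SD of the degree distribution, and mean inverse
    geodesic distance (1/d averaged over ordered pairs, 1/inf = 0)."""
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    # transitivity = 3 * triangles / number of connected triples
    triangles = sum(1 for i in range(n) for j in adj[i] for k in adj[j]
                    if i < j < k and k in adj[i])
    triples = sum(d * (d - 1) // 2 for d in (len(adj[i]) for i in range(n)))
    transitivity = 3 * triangles / triples if triples else 0.0
    degs = [len(adj[i]) for i in range(n)]
    mu = sum(degs) / n
    sd_deg = (sum((d - mu) ** 2 for d in degs) / n) ** 0.5
    inv_sum = 0.0
    for s in range(n):  # BFS from each node for geodesic distances
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        inv_sum += sum(1.0 / d for d in dist.values() if d > 0)
    return transitivity, sd_deg, inv_sum / (n * (n - 1))

def hellinger(xs, ys, bins=20):
    """Empirical Hellinger distance between two samples, using relative
    frequencies over a shared grid of equal-width bins."""
    lo, hi = min(min(xs), min(ys)), max(max(xs), max(ys))
    width = (hi - lo) / bins or 1.0
    def freqs(vals):
        counts = [0] * bins
        for v in vals:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        return [c / len(vals) for c in counts]
    p, q = freqs(xs), freqs(ys)
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))
```

The Hellinger distance is 0 for identical samples and 1 for samples with disjoint support, matching the scale of the discrepancies reported below.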
The simulated networks based on posterior samples and the synthetic networks are summarized by the four graph-level metrics, and their discrepancies are quantified in terms of the Hellinger distance, a commonly used metric for quantifying the distance between two probability distributions. We use the function \texttt{CalcHellingerDist} in the package \texttt{textmineR} \citep{textmineR} to calculate the empirical Hellinger distance between two sample vectors. Table \ref{tb:hd_summary} summarizes the mean and standard deviation of the Hellinger distance evaluated across all replicates under the experimental settings of interest (i.e., true number of clusters is 3), regardless of whether the number of clusters selected by the DIC criterion ($\epsilon = -0.005$) is correct. The discrepancy between posterior predictive samples and synthetic data sets increases as the model selection accuracy decreases, from network size 40 to 100 and then to 250. To better understand the connection between Hellinger distance values and the underlying visual differences between distributions, we consider two representative replicates with network size 250. Figure \ref{fig:post_check_1} corresponds to a case in which the true number of clusters is selected and whose Hellinger distances are close to the average -- it is clear that the posterior predictive distribution of the metrics of interest is very close to that of the synthetic data. Figure \ref{fig:post_check_2} corresponds to a representative case in which the number of clusters is underestimated to be 2 -- the resulting mixture model still captures the bimodal feature of mean eigenvector centrality and the left-skewed shape of the mean inverse geodesic distance distribution, and also identifies two of the three modes for the standard deviation of the degree distribution and for transitivity.
Although this result is not ideal, a key observation is that the resulting mixture model converges to the ``middle ground'' between two clusters, indicating that the likely reason for the model to choose two clusters over three is that the algorithm gets stuck at a local optimum; this might be mitigated by running the MCMC chains longer or by using a more efficient proposal distribution for the Metropolis-Hastings step of Algorithm \ref{alg:alg1}. At a higher level, these results suggest the potential of mixtures of ERGMs as a tool for approximating complex graph distributions, with individual ERGMs playing a role analogous to that of kernels in density estimation. \begin{table}[ht] \centering \caption{Mean (standard deviation) of Hellinger distance \label{tb:hd_summary}} \begin{tabular}{lllll} \hline & Mean EC & Transitivity & SD of deg. dist. & Mean of inverse geodesic distance \\ \hline 40, 3 & 0.045 (0.002) & 0.086 (0.007) & 0.083 (0.006) & 0.011 (0.001) \\ 100, 3 & 0.076 (0.005) & 0.154 (0.009) & 0.123 (0.006) & 0.025 (0.003) \\ 250, 3 & 0.145 (0.015) & 0.271 (0.016) & 0.137 (0.010) & 0.074 (0.016) \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=12.0cm, height = 6.5cm]{post_check_1.jpeg} \caption{Distribution of metrics of interest for posterior predictive samples and synthetic data, with corresponding Hellinger distance values: 0.150 (upper left), 0.283 (upper right), 0.141 (lower left), 0.076 (lower right). \label{fig:post_check_1}} \end{figure} \begin{figure} \centering \includegraphics[width=12.0cm, height=6.5cm]{post_check_2.jpeg} \caption{Distribution of metrics of interest for posterior predictive samples and synthetic data, with corresponding Hellinger distance values: 0.173 (upper left), 0.270 (upper right), 0.125 (lower left), 0.105 (lower right). \label{fig:post_check_2}} \end{figure} \section{Case study} \label{sec:Case_study} In this section, we apply the proposed method to cluster the co-voting patterns among U.S.
Senators from 1867 (start year of Congress 40) to 2014 (end year of Congress 113), a subset of the data first analyzed by \citet{moody2013portrait} using modularity and role-based blockmodels. The co-voting tendencies are represented by networks based on the roll call voting data from \url{http://voteview.com}, which contains the voting decision of each Senator (yea, nay, or abstain) for every bill brought to Congress.\footnote{The data is available online in the R package \texttt{VCERGM}, \url{https://github.com/jihuilee/VCERGM}.} The nodes in the co-voting network represent Senators, and an edge is placed between two nodes if the corresponding Senators vote concurrently (both yea or both nay) on at least $75\%$ of the bills for which they were both present. We aim to identify subgroups of networks that appear to have similar generating characteristics within each group but different characteristics across groups. \subsection{Model specification and estimation} Figure \ref{fig:congress} shows that the co-voting networks vary in structure across years, and party affiliation appears to be a key factor shaping the co-voting patterns among Senators. We therefore consider an ERGM with the following sufficient statistics $$ g_{1}(\vec{y}) = \sum_{i<j}y_{ij}, \ \text{total number of edges}; $$ $$ g_{2}(\vec{y}; \vec{X}) = \sum_{i<j}y_{ij} \mathbbm{1}_{ \{\vec{X}_{i} = \vec{X}_{j} = D\} }, \ \ \text{total number of edges between Democrats}; $$ $$ g_{3}(\vec{y}; \vec{X}) = \sum_{i<j}y_{ij} \mathbbm{1}_{ \{\vec{X}_{i} = R, \vec{X}_{j} = D\} }, \ \ \text{total number of edges between Democrats and Republicans};$$ $$ g_{4}(\vec{y}) = e^{\phi} \sum_{k=1}^{n-2} \left\{ 1 - (1-e^{-\phi})^{k} \right\} EP_{k}(\vec{y}), \ \text{GWESP statistic}. $$ \begin{figure} \centering \includegraphics[width=13.5cm]{congress2.png} \caption{Co-voting networks of the 61st, 89th and 111th Congress, formed in 1909, 1965 and 2009, respectively.
Colors indicate Senators' party affiliations: blue = Democrat (D), red = Republican (R). \label{fig:congress}} \end{figure} The decay parameter of the GWESP term is fixed at $\phi=0.25$, a common choice in the ERGM literature. We note that these networks vary in size (range: $69-112$) and thus include an offset term \eqref{eq:krivitsky_offset} to adjust for network size. (This is equivalent to using the Krivitsky reference measure, which provides a parameterization with constant baseline expected degree.) We use the prior specification in Section \ref{sec:Mixture_of_ERGMs}, and run long MCMC chains (total iterations = 80000, burn-in = 30000, thinning interval = 50) with random initial values. \begin{figure} \centering \includegraphics[height=5cm, width=6.5cm]{case_study_DIC.jpeg} \caption{DIC vs number of clusters, Congress co-voting networks \label{fig:case_study_DIC}} \end{figure} Figure \ref{fig:case_study_DIC} indicates that the DIC reaches its minimum at $K=3$, and hence $K=3$ appears to be a plausible choice for the number of clusters. Under $K=3$, visual inspection of the traceplots suggests that the chains reach the high-density region quickly and mix well (see Figure \ref{fig:edges_trace} for traceplots of the edges parameters; other traceplots show similar patterns, but are omitted in the interest of space). The posterior mean estimates of the cluster-specific parameters are \begin{figure} \centering \includegraphics[width=13.5cm]{covote_edges_trace.jpeg} \caption{Traceplots for parameters associated with the edges term for 3 clusters.
\label{fig:edges_trace}} \end{figure} $$ \hat{\bm{\tau}} = \begin{pmatrix} 0.36 \\ 0.47 \\ 0.17 \\ \end{pmatrix}, \ \ \underline{\hat{\bm{\theta}}} = \begin{pmatrix} 1.69 & 0.01 & -2.49 & 1.42 \\ 2.04 & -0.12 & -3.09 & 2.14 \\ 2.47 & 0.92 & -4.47 & 2.63 \\ \end{pmatrix}. $$ We note that the size-invariant parameters for the edges term (first column) can be interpreted as the log of the baseline mean degree (rather than the logit of the baseline density, as in the case of the counting measure), suggesting baseline expected degrees varying from approximately 5.4 to 11.8 across clusters prior to consideration of other effects. Based on these estimates, we have the following observations regarding the co-voting patterns. Across all clusters, we see both inhibition of cross-party ties (third column) and strong triadic closure (fourth column). Clusters do differ, however. Cluster 1 shows essentially symmetric behavior by party (second column), with lower levels of cross-group inhibition and triadic closure bias than the other clusters; overall, cluster 1 suggests a relatively low level of polarization by party, with voting only loosely restricted by party lines. By contrast, cluster 2 reflects a much more polarized regime, with more activity overall and co-voting more concentrated within party. Like cluster 1, however, cluster 2 shows little party asymmetry (apart from a fairly weak tendency towards lower levels of co-voting among Democrats). Such asymmetry is much more strongly pronounced within cluster 3, with intraparty Democratic ties being approximately 2.5 times as likely (ceteris paribus) as ties within the GOP. This cluster also reflects extremely high levels of polarization, with cross-party co-voting strongly inhibited and high levels of triadic closure. Over the period studied here, the most common pattern (probability 0.47) is the symmetric polarization of cluster 2, with the loose, low-polarization pattern of cluster 1 also being fairly common (probability 0.36).
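As a quick check on the mean-degree interpretation, exponentiating the first-column posterior means gives the baseline expected degree for each cluster; a small illustrative computation (the variable names are ours):

```python
import math

# Posterior-mean edges coefficients for clusters 1-3 (first column of
# the estimate above); under the size-adjusted parameterization,
# exp(theta_edges) is the baseline mean degree.
edge_coefs = [1.69, 2.04, 2.47]
base_degrees = [round(math.exp(t), 1) for t in edge_coefs]
# base_degrees == [5.4, 7.7, 11.8]
```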
The asymmetric, highly polarized regime of cluster 3 is less common, but is still estimated to account for approximately $17\%$ of the observed cases. Interestingly, we do not see a corresponding asymmetric pattern in which the GOP shows high intraparty vote density, as might be anticipated; thus, there appear to be latent differences in how the two parties behave during the period that, while not manifest in every Congress, always have the potential to arise. One advantage of working with a fully generative model is the ability to perform ``what-if'' analyses that separate effects due to observed covariates from differences in structure arising from differences in generative processes. To probe the impact of the three behavioral regimes inferred from the co-voting data, we consider how the entire ensemble of Congressional networks would be expected to have been different \emph{if} each respective regime had governed the U.S. Congress for the entire study period. To perform such an analysis, we first simulate a set of posterior predictive networks for each Congress during the study period, with parameters drawn from the posterior distribution of each respective cluster. Each collection of networks can be thought of as a simulated ``alternate history,'' in which the size and composition of each Congress were held to their real-world values but the behavioral tendencies that shaped the co-voting networks throughout the period were reflective of only one of the three clusters. Systematic differences in network structure across sets thus provide insight into the potential impact of behavioral regime, controlling for size and composition. One important property that can be probed in this way is the expected incidence of voting coalitions, which play an important role in party politics. Here, we focus on minimal coalitions, defined as sets of three legislators who consistently vote together (i.e., triangles).
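The coalition metrics used below can be sketched as follows: counts of all-Democratic, all-Republican, and mixed triangles, each scaled by its maximum possible value given the party composition (hypothetical pure-Python helper; the paper's computations use R):

```python
from itertools import combinations

def coalition_proportions(parties, edges):
    """Proportion of realized minimal coalitions (triangles) that are
    all-Democratic, all-Republican, or cross-party, each scaled by its
    maximum possible count given the party composition.

    parties: dict node -> "D" or "R"; edges: undirected edge list."""
    adj = {}
    for i, j in edges:
        adj.setdefault(i, set()).add(j)
        adj.setdefault(j, set()).add(i)
    counts = {"D": 0, "R": 0, "mixed": 0}
    for a, b, c in combinations(sorted(parties), 3):
        if b in adj.get(a, ()) and c in adj.get(a, ()) and c in adj.get(b, ()):
            tri = {parties[a], parties[b], parties[c]}
            counts["D" if tri == {"D"} else "R" if tri == {"R"} else "mixed"] += 1
    def choose3(m):  # number of possible triangles among m nodes
        return m * (m - 1) * (m - 2) // 6
    nD = sum(1 for p in parties.values() if p == "D")
    nR = len(parties) - nD
    maxes = {"D": choose3(nD), "R": choose3(nR),
             "mixed": choose3(len(parties)) - choose3(nD) - choose3(nR)}
    return {k: counts[k] / maxes[k] if maxes[k] else 0.0 for k in counts}
```

A value of 1.0 for a party means every possible three-member coalition within that party is realized, i.e., the party votes as a perfect bloc.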
Within-party coalitions can be sources of party cohesion, although they also act as blocs that can sometimes resist (and must be negotiated with by) party leaders; cross-party coalitions, by contrast, pose significant challenges to party cohesion, but can also serve as foci for the sponsorship and promotion of bipartisan legislation. Both are hence significant, with distinct implications for the political landscape. To examine the coalition structures that would have been expected to occur under our three behavioral regimes, we simulate 10 ``alternate histories'' from the posterior distributions of each cluster, calculating the realized proportions of intra-Democratic, intra-Republican, and inter-party triangles. (That is, the counts of fully connected triads with all three members Democrats, all three members Republicans, or members from both parties, scaled by their maximum possible values.) Using proportions rather than raw counts ensures these metrics are normalized for network size and the distribution of party affiliations in each Congress; substantively, this choice of scaling tells us how close each party (or the cross-party cut) is to forming a perfect coalition, in which all members vote in concert. Figure \ref{fig:triangles_intra_party} shows the realized proportion of intra-party triangles in the simulated networks, and Figure \ref{fig:triangles_inter_party} shows the realized proportion of inter-party triangles. Both figures show substantial differences in coalition structure, implying that the behavioral regimes associated with the three inferred clusters would be expected to have a meaningful impact on the political process. Specifically, we note the following: \begin{itemize} \item The regime of cluster 1 is marked by the formation of very few voting coalitions, either within or between parties.
As suggested by the parameter values, we see little difference in coalition formation between the two parties, both having little cohesion. \item By contrast, the regime of cluster 2 shows a much higher incidence of intra-party coalition formation, with roughly 10-20\% of the potential intra-party coalitions being present. Coalition incidence differs little by party, with at best a small average increment in the rate of coalition incidence for Republicans versus Democrats. Interestingly, this regime also shows the highest rate of cross-party coalition formation; while the rate is very low overall, it is considerably higher than that observed under cluster 1. \item Finally, the regime of cluster 3 favors extremely high levels of intra-party cohesion, with rates approaching 50\% of the maximum possible for Republicans and 75\% for Democrats. As this implies, the resulting networks are also highly asymmetric, with the Democratic party expected to generate a much more cohesive coalition structure than the GOP. Interestingly, this strong intra-party coalition formation does not exist entirely at the expense of cross-party coalitions: we find an expected rate of cross-party coalition formation that is only slightly less than that expected for networks arising under cluster 2. That said, the much higher incidence of intra-party coalition formation under cluster 3 leads inter-party coalitions to be a smaller fraction of the total coalition set than under cluster 2, potentially making them less critical to the legislative process. \end{itemize} Taken together, these observations suggest that the cluster 1 regime tends to generate \emph{uniformly loose} voting networks with very few coalitions of any kind. These networks may resist polarization, but their high level of fragmentation may make it more difficult to assemble the sorts of alliances needed to push through controversial legislation. 
By contrast, the regime of cluster 2 tends to produce \emph{uniformly clustered} networks with moderately high levels of coalition formation in both parties coupled with relatively high numbers of cross-party coalitions. These networks may pose particular challenges for party leaders, as they contain a mix of multiple local coalitions that must be courted for votes, ``lone wolves'' outside of coalitions who must be approached individually, and likely defectors whose cross-party coalitions provide a bulwark against within-party influence. Finally, the regime of cluster 3 tends to produce \emph{party-cohesive} networks dominated by dense intra-party coalitions on both sides of the aisle (but with substantially higher levels of cohesion among Democratic legislators). This regime offers party leaders the greatest chance of being able to mobilize members in support of legislation, at the cost of potential legislative deadlock during periods of high inter-party conflict. \begin{figure} \centering \includegraphics[width=13.5cm]{triangles_intra_party.jpeg} \caption{Proportion of realized intra-party triangles in simulated networks. Colors indicate the party affiliation (blue = Democratic (D), red = Republican (R)). \label{fig:triangles_intra_party}} \end{figure} \begin{figure} \centering \includegraphics[width=13.5cm]{triangles_inter_party.jpeg} \caption{Proportion of realized inter-party triangles in simulated networks. \label{fig:triangles_inter_party}} \end{figure} \begin{figure} \centering \includegraphics[width=13.5cm]{cluster_label_year.jpeg} \caption{Maximum probability cluster assignments over study period. Colors indicate the majority party in the corresponding Congress (blue = Democratic (D), red = Republican (R)). Regimes of voting behavior are visibly correlated over time.
\label{fig:cluster_label_year}} \end{figure} In addition to examining the potential impact of different behavioral regimes on voting networks, our model also provides insight into the incidence of these regimes over time. For instance, Figure \ref{fig:cluster_label_year} shows maximum probability cluster assignments over the study period. We see that the relatively symmetric cultures represented by cluster 1 and cluster 2 alternate in the nineteenth and twentieth centuries, while the culture of asymmetric polarization represented by cluster 3 becomes dominant after the late 1990s. This finding is in line with the documented trend of political party polarization \citep{moody2013portrait}. Table \ref{tb:cluster_party} shows the breakdown of congresses into $3 \times 2$ sub-categories according to the estimated co-voting pattern and the observed majority party. We examine the independence of co-voting pattern assignment and the majority party using Pearson's $\chi^2$ test, and we fail to reject the null hypothesis that the majority party is independent of the co-voting patterns ($\chi_{2}^{2} = 1.07$, p-value = $0.58$). Thus, while the regimes of party behavior are quite visibly autocorrelated, this pattern does not seem to be related to which party has control of Congress at any given time. \begin{table}[ht] \centering \caption{Tabulation of co-voting pattern by majority party (from Congress 40 to Congress 113). Majority party is not significantly related to voting regime. \label{tb:cluster_party}} \begin{tabular}{l|ll} \hline Co-voting Pattern & Democratic & Republican \\ \hline 1 & 16 & 11 \\ 2 & 17 & 19 \\ 3 & 5 & 6 \\ \hline \end{tabular} \end{table} \subsection{Model assessment} To assess the adequacy of the resulting model, we consider the simulation-based method of \citet{hunter2008goodness}, whose basic insight is that a fitted ERGM should be able to reproduce in simulation structural properties similar to those of the observed networks.
Instead of simulating from a single point estimate, we simulate networks from the estimated posterior distribution, following the practice of posterior predictive assessment in the Bayesian literature \citep{gelman1996posterior}. The structural property of interest here is the modularity score \citep{newman2006modularity} (assessed by party), which can be interpreted as a measure of the polarization of networks with respect to party structure. By definition, the modularity score ranges from $-1$ to $1$, with larger values indicating higher levels of polarization. We replicate the following evaluation procedure $100$ times: \begin{enumerate} \item For each vertex set, we first randomly draw a latent membership indicator using posterior samples of $\bm{\tau}$, then simulate a network from the corresponding component using posterior samples of $\bm{\theta}$. \item Compute the modularity score of the observed ensemble of networks and the simulated networks. \end{enumerate} We compare the distribution of modularity scores of simulated networks to that of observed networks using the Hellinger distance. Across replicates, the Hellinger distance has mean $0.095$ and standard deviation $0.002$. \begin{figure} \centering \includegraphics[width=9cm, height=6cm]{post_check_modularity.jpeg} \caption{Modularity scores of simulated and observed ensemble of networks. Hellinger distance: 0.096.\label{fig:case_study_gof}} \end{figure} Figure \ref{fig:case_study_gof} shows the distribution of modularity scores for a replicate that has average-case performance (Hellinger distance: 0.096). We see that the resulting mixture model can capture not only the left-skewed shape of the modularity scores in the observed data but also, to a large extent, the variation in the observed modularity scores.
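For reference, the Hellinger distance between two empirical distributions of modularity scores can be approximated by discretizing both samples onto a common grid. A minimal sketch follows; the binning below is illustrative, not the discretization used in our analysis:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q
    (sequences of probabilities over the same bins)."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

def to_histogram(samples, lo=-1.0, hi=1.0, bins=20):
    """Bin samples (e.g. modularity scores in [-1, 1]) into a normalized histogram."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for s in samples:
        idx = min(int((s - lo) / width), bins - 1)
        counts[idx] += 1
    total = len(samples)
    return [c / total for c in counts]

# Identical distributions have distance 0; disjoint ones have distance 1.
p = to_histogram([0.4, 0.5, 0.6])
```

The distance is bounded in $[0, 1]$, so the observed mean value of $0.095$ indicates close agreement between simulated and observed modularity distributions.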
The remaining discrepancy between observed modularity scores and those of simulated networks might be mitigated by more accurate estimation algorithms for cluster-specific parameters (e.g., using importance sampling to approximate the ERGM likelihood rather than the pseudo-likelihood), at higher computational expense. \section{Conclusion} \label{sec:Conclusion} In this paper, we proposed a mixture of ERGMs approach for modeling the generative process leading to heterogeneous network ensembles. We developed a Metropolis-within-Gibbs algorithm to fit ERGM mixtures and obtained Bayesian estimates of clustering assignment probabilities and the cluster-specific ERGM parameters. To account for the difference in the size of the observed networks, we used a size-adjusted parameterization for ERGMs. We also tailored a version of observed DIC and defined an empirical rule to select the number of clusters, which proved effective in our simulation studies. The simulation studies also showed that the proposed approach can accurately recover the cluster membership and cluster-specific parameters without requiring careful initialization. We applied the proposed approach to study the political co-voting networks among U.S. Senators, and identified three clusters that represent vastly different co-voting patterns. After matching the clusters with temporal information, we observed that one symmetric co-voting pattern and another, mildly asymmetric, pattern alternated in the nineteenth and twentieth centuries, and that there was an abrupt shift in the co-voting pattern towards political party polarization in the last two decades. Compared to other methods in the literature, our proposed method allows straightforward statistical inference for the generative processes of heterogeneous ensembles of networks with edgewise dependence, and is conveniently interpretable.
We believe that the proposed method can prove to be a highly effective tool for both exploratory and inferential analysis of ensembles of networks. In closing, we comment on three important directions of future research that could prove beneficial to the modeling of ensembles of networks: the development of more sophisticated size-adjusted parameterizations, more accurate tractable approximations of the ERGM likelihood, and Dirichlet Process mixtures of ERGMs. It is worth mentioning that the sizes of the US congresses between 1867 and 2014 range from 69 to 112, non-identical but broadly similar. More importantly, these size changes occur within a social system whose basic structure remains fairly similar throughout the time period. In other cases, however, large size differences may be accompanied by increasingly complex internal barriers to interaction or other additional exogenous structure that must be accounted for to obtain realistic predictions. When this additional structure is not available in the form of covariates, more sophisticated size-adjusted parameterizations may be required; reference measures or other tools providing ``automatic'' correction of such effects would facilitate mixture modeling in such scenarios. With respect to likelihood calculation, it is encouraging that we obtain favorable results in our simulation study using the easily computed pseudo-likelihood approximation. In particular, the main deficiency of the pseudo-likelihood is excessive sharpness near the mode, which could in principle encourage the over-production of mixture components. While we do not see this effect here, more accurate likelihood approximations that are inexpensive enough to perform at each MCMC step for large models would be desirable. As such improved approximations become available, they can be easily integrated into the posterior simulation framework described here.
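To make the pseudo-likelihood approximation concrete, here is a minimal sketch for a toy undirected ERGM with edge and triangle statistics; the model and parameter values are illustrative, not those fitted in our study. Each dyad contributes a logistic term whose linear predictor is the dot product of the parameters with the change statistics obtained by toggling that edge:

```python
import math
from itertools import combinations

def log_pseudolikelihood(A, theta_edge, theta_tri):
    """Log pseudo-likelihood of an undirected graph under an edge + triangle ERGM.
    A is a symmetric 0/1 adjacency matrix (list of lists)."""
    n = len(A)
    logpl = 0.0
    for i, j in combinations(range(n), 2):
        # Change statistics for toggling the (i, j) edge on:
        d_edge = 1
        d_tri = sum(A[i][k] and A[j][k] for k in range(n))  # common neighbours
        s = theta_edge * d_edge + theta_tri * d_tri
        # Logistic log-likelihood of the observed dyad state A[i][j]:
        logpl += A[i][j] * s - math.log1p(math.exp(s))
    return logpl

# A single triangle: with theta = (0, 0), every dyad contributes -log 2.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
lp = log_pseudolikelihood(A, 0.0, 0.0)
```

Because each dyad is treated as conditionally independent given the rest of the graph, this objective is cheap to evaluate but, as noted above, tends to be overly sharp near its mode.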
Last but not least, a natural further extension of the finite mixture modeling framework could be Dirichlet Process mixtures of ERGMs where the number of mixture components can vary depending on the incoming data size. Although computationally challenging, such an extension can provide a highly flexible-yet-interpretable density estimation framework for complex graph distributions. \bigskip \bibliographystyle{abbrvnat}
\section{Introduction} \label{sec:intro} Among the sources of gravitational waves (GWs), inspiralling binary systems of compact objects, neutron stars (NSs) and/or black holes (BHs) in the mass range $\sim 1\,M_{\odot} - 100\,M_{\odot}$ stand out as likely to be detected and relatively easy to model. For ground-based laser interferometers currently in operation \cite{2002gr.qc.....4090C}, LIGO \cite{2009NJPh...11g3032A}, Virgo \cite{2008CQGra..25r4001A} and GEO-600 \cite{2004CQGra..21S.417W}, the current detection-rate estimates for BH-NS binaries range from $2\times10^{-4}$ to $0.2$\,yr$^{-1}$ for first-generation instruments \citeaffixed{2008ApJ...672..479O,2010arXiv1003.2480L}{\emph{e.g.}}. Although the estimates are quite uncertain, detection rates are expected to increase with the upgrade to Enhanced LIGO/Virgo, up to $\sim 40$\,yr$^{-1}$ with Advanced LIGO/Virgo. The detection of a gravitational-wave event is challenging and will be a rewarding achievement by itself. After such a detection, measurement of source properties holds major promise for improving our astrophysical understanding and requires reliable methods for parameter estimation. This is a complicated problem, because of the large number of parameters ($15$ for spinning compact objects in a quasi-circular orbit) and the degeneracies between them \cite{2009CQGra..26k4007R}, the significant amount of structure in the parameter space, and the particularities of the detector noise. In this paper we use an example to illustrate the capabilities of our Markov-chain Monte-Carlo (MCMC) algorithm \textsc{SPINspiral} \cite{2008CQGra..25r4011V} for parameter estimation of binary inspirals with two spinning components, using ground-based GW interferometers. In these proceedings we focus on the effects of using LIGO detector data versus synthetic Gaussian noise. 
Earlier studies \citeaffixed{1994PhRvD..49.1723J,1994PhRvD..49.2658C,1995PhRvD..52..848P,2007CQGra..24.1089V}{\emph{e.g.}} computed the potential accuracy of parameter estimation (\emph{e.g.}\ by using the Fisher matrix), but without performing a parameter estimation in practice. Also, \citename{2006CQGra..23.4895R} \citeyear{2006CQGra..23.4895R,2007PhRvD..75f2004R}, \citename{2009arXiv0911.3820V} \citeyear{2008PhRvD..78b2001V,2008CQGra..25r4010V,2009arXiv0911.3820V} explored parameter estimation for binaries without spins, described by nine parameters. We present the gravitational-wave template used for this study in section~\,\ref{sec:GW}, and the Bayesian framework we employ here in section~\,\ref{sec:methods}. In section~\ref{sec:data} we describe the three data sets that we analyse in this study: a simulated GW signal injected into synthetic Gaussian noise, a GW signal injected into LIGO detector data and a raw LIGO data set containing a known artefact of terrestrial origin (``glitch''). We describe the details of the MCMC simulations in section~\ref{sec:runs}. The analyses of the first two data sets are compared in section~\ref{sec:subresults}, and we present our results on the glitch in section~\,\ref{sec:glitch}. \section{Gravitational-wave signal and observables} \label{sec:GW} We analyse the signal produced during the inspiral phase of two compact objects of masses $M_{1,2}$ in quasi-circular orbit. We focus on a BH--NS binary system with $M_1 = 10\,M_{\odot}$ and $M_2 = 1.4\,M_{\odot}$, where, unlike in some of our previous studies \citeaffixed{2008ApJ...688L..61V}{\emph{e.g.}}, we do not ignore the second spin; this allows us to test the validity of the single-spin approximation. During the orbital inspiral, the general-relativistic spin-orbit and spin-spin coupling (dragging of inertial frames) cause the binary's orbital plane to precess and introduce amplitude and phase modulations of the observed gravitational-wave signal \cite{1994PhRvD..49.6274A}.
A circular binary inspiral with both compact objects spinning is described by a 15-dimensional parameter vector $\vec{\lambda} \in \Lambda$. Our choice of independent parameters with respect to a fixed geocentric coordinate system is: \begin{eqnarray} \vec{\lambda} =& \{{\cal M},\eta,\log{d_\mathrm{L}},t_\mathrm{c},\phi_\mathrm{c},\alpha,\cos\delta,\sin{\iota},\psi, \nonumber \\ &a_\mathrm{spin1},\cos\theta_\mathrm{spin1},\phi_\mathrm{spin1},a_\mathrm{spin2},\cos\theta_\mathrm{spin2},\phi_\mathrm{spin2}\}, \label{e:lambda} \end{eqnarray} where ${\cal M} = \frac{(M_1 M_2)^{3/5}}{(M_1 + M_2)^{1/5}}$ and $\eta = \frac{M_1 M_2}{(M_1 + M_2)^2}$ are the chirp mass and symmetric mass ratio, respectively; $d_\mathrm{L}$ is the luminosity distance to the source; $\phi_\mathrm{c}$ is an integration constant that specifies the GW phase at the time of coalescence $t_\mathrm{c}$, defined with respect to the centre of the Earth; $\alpha$ (right ascension) and $\delta$ (declination) identify the source position in the sky; $\iota$ defines the inclination of the binary with respect to the line of sight; and $\psi$ is the polarisation angle of the waveform. The spins are specified by $0 \le a_\mathrm{spin_{1,2}} \equiv S_{1,2}/M_{1,2}^2 \le 1$ as the dimensionless spin magnitude, and the angles $\theta_\mathrm{spin1,2}$,$\phi_\mathrm{spin1,2}$ for their orientations. Given a network comprising $n_\mathrm{det}$ detectors, the data collected at the $a-$th instrument ($a = 1,\dots, n_\mathrm{det}$) is given by $x_a(t) = n_a(t) + h_a(t;\vec{\lambda})$, where $h_a(t;\vec{\lambda}) = F_{a,+}(t,\alpha,\delta,\psi)\,h_{a,+}(t;\vec{\lambda}) + F_{a,\times}(t,\alpha,\delta,\psi)\,h_{a,\times}(t;\vec{\lambda})$ is the GW strain at the detector \citeaffixed{1994PhRvD..49.6274A}{see Eqs.\,2--5 in} and $n_a(t)$ is the detector noise. 
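As a quick numerical check, the chirp mass and symmetric mass ratio of the $10\,M_{\odot}$--$1.4\,M_{\odot}$ system considered here follow directly from the definitions above (a minimal sketch; masses in solar units):

```python
def chirp_mass(m1, m2):
    """Chirp mass: (m1*m2)^(3/5) / (m1+m2)^(1/5)."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def symmetric_mass_ratio(m1, m2):
    """Symmetric mass ratio: m1*m2 / (m1+m2)^2."""
    return m1 * m2 / (m1 + m2) ** 2

m1, m2 = 10.0, 1.4                   # BH and NS masses in solar masses
mc = chirp_mass(m1, m2)              # ~2.99 M_sun
eta = symmetric_mass_ratio(m1, m2)   # ~0.107
```

These values match the injection parameters ${\cal M}=2.99\,M_\odot$ and $\eta=0.107$ quoted later in the paper.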
The astrophysical signal is given by the linear combination of the two independent polarisations $h_{a,+}(t;\vec{\lambda})$ and $h_{a,\times}(t;\vec{\lambda})$ weighted by the antenna beam patterns $F_{a,+}(t,\alpha,\delta,\psi)$ and $F_{a,\times}(t,\alpha,\delta,\psi)$. The waveform we use includes terms up to ${3.5}$-post-Newtonian (pN) order in phase and uses Newtonian amplitudes, with spin effects up to ${2.5}$-pN in phase. We generate the waveform templates using the routine \texttt{LALGenerateInspiral()} with the approximant \texttt{SpinTaylor} from the injection package in the LSC Algorithm Library (LAL) \cite{LAL}, which closely follows the first section of \citename{2003PhRvD..67j4025B} \citeyear{2003PhRvD..67j4025B}. \section{Parameter estimation: Methods} \label{sec:methods} In our Bayesian analysis we use MCMC methods to determine the multi-dimensional \emph{posterior} probability-density function (PDF) of the unknown parameter vector $\vec{\lambda}$ in equation~\ref{e:lambda}, given the data sets $x_a$ collected by a network of $n_\mathrm{det}$ detectors, a model $M$ of the waveform and the \emph{prior} $p(\vec{\lambda})$ on the parameters. Our priors are uniform in the parameters of Eq.\,\ref{e:lambda} (see \citeasnoun{2008CQGra..25r4011V} for details). One can compute the probability density via Bayes' theorem \begin{equation} p(\vec{\lambda}|x_a,M) = \frac{p(\vec{\lambda}|M) \, p(x_a|\vec{\lambda},M)}{p(x_a|M)}\,, \label{e:jointPDF} \end{equation} where \begin{equation} \mathcal{L} \equiv p(x_a|\vec{\lambda},M) \propto \exp\left( <x_a|h_a(\vec{\lambda})>-\frac{1}{2}<h_a(\vec{\lambda})|h_a(\vec{\lambda})> \right) \label{e:La} \end{equation} is the \emph{likelihood function}, which measures how well the data fits the model $M$ for the parameter vector $\vec{\lambda}$. The term $p(x_a|M)$ is the \emph{marginal likelihood} or \emph{evidence}. 
In the previous equation \begin{equation} <x|y>=4\,\mathrm{Re}\left( \int_{f_{\rm low}}^{f_{\rm high}}\frac{\tilde{x}(f)\tilde{y}^{*}(f)}{S_a(f)}\,\mathrm{d}f \right) \label{e:prod} \end{equation} is the \emph{overlap} of signals $x$ and $y$, $\tilde x(f)$ is the Fourier transform of $x(t)$, and $S_a(f)$ is the noise power-spectral density in detector $a$. The likelihood computed for the injection parameters $\mathcal{L}_\mathrm{inj}=p(x_a|\vec{\lambda}_\mathrm{inj},M)$ is then a random variable that depends on the particular noise realisation $n_a$ in the data $x_a=h(\vec{\lambda}_\mathrm{inj})+n_a$. The injection parameters are the parameters of the waveform template added to the noise. We define the signal-to-noise ratio (SNR) of the injection to be: \begin{equation} \mathrm{SNR} = \frac{<x|h(\vec{\lambda}_{\rm inj})>}{\sqrt{<h(\vec{\lambda}_{\rm inj})|h(\vec{\lambda}_{\rm inj})>}}. \end{equation} From here on, we use the expected value of the SNR, which is equal to the square root of twice the expectation value of $\log {\cal L}_\mathrm{inj}$: \begin{equation} \mathrm{SNR} = \sqrt{<h(\vec{\lambda}_\mathrm{inj})|h(\vec{\lambda}_\mathrm{inj})>}. \label{e:SNR} \end{equation} To combine observations from a network of detectors with uncorrelated noise realisations (this is the case in this paper as we use two non-co-located detectors) we have the likelihood $p(\vec{x}|\vec{\lambda},M) = \prod_{a=1}^{n_\mathrm{det}}\, p(x_a|\vec{\lambda},M)\,$, for $\vec{x} \equiv \{x_a: a = 1,\dots,n_\mathrm{det}\}$ and \begin{equation} p(\vec{\lambda}|\vec{x},M) = \frac{p(\vec{\lambda}|M)\, p(\vec{x}|\vec{\lambda},M)}{p(\vec{x}|M)}. \label{e:Bayes} \end{equation} The numerical computation of the PDF involves the evaluation of a large multi-modal, multi-dimensional integral. Markov-chain Monte-Carlo (MCMC) methods \citeaffixed[and references therein]{gilks_etal_1996,gelman_etal_1997}{\emph{e.g.}} have proved to be especially effective in tackling this numerical problem.
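For discretely sampled data, the overlap of Eq.~\ref{e:prod} and the SNR of Eq.~\ref{e:SNR} reduce to a noise-weighted sum over frequency bins. The following sketch illustrates this with a toy one-sided spectrum (not actual detector data):

```python
def overlap(x_tilde, y_tilde, psd, df):
    """Discrete analogue of <x|y> = 4 Re[ integral of x~(f) y~*(f) / S(f) df ],
    with frequency-domain data given as lists of complex samples."""
    return 4.0 * sum((x * y.conjugate()).real / s
                     for x, y, s in zip(x_tilde, y_tilde, psd)) * df

def snr(h_tilde, psd, df):
    """Expected SNR: sqrt(<h|h>)."""
    return overlap(h_tilde, h_tilde, psd, df) ** 0.5

# Toy check: a template with unit amplitude in a single bin and unit PSD,
# so <h|h> = 4 and the SNR is 2.
h = [0j, 1 + 0j, 0j]
print(snr(h, [1.0, 1.0, 1.0], df=1.0))  # 2.0
```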
We developed an adaptive \citeaffixed{figueiredo_jain_2002,atchade_rosenthal_2005}{see} MCMC algorithm to explore the parameter space $\Lambda$ efficiently while requiring the least amount of tuning for the specific signal analysed; the code is an extension of the one developed by some of the authors to explore MCMC methods for binaries without spin \cite{2006CQGra..23.4895R,2007PhRvD..75f2004R}. We implemented parallel tempering \cite{1996JPSJ...65.1604H,1997CPL...281..140H,2007PhD..Auck} to improve the sampling. It consists of running several MCMC chains in parallel, each with a different ``temperature'', which can swap parameters under certain conditions. Only the $T=1$ chain is currently used for post-processing. In Eq.\,\ref{e:Bayes} we applied Bayes' theorem to obtain the probability of a specific parameter vector value ($\vec{\lambda}$) given the observed data $\vec{x}$ and the model $M$. The theorem can also be applied to compute the probability of a specific \emph{model} $M_i$ given the observed data: \begin{equation} p(M_i|\vec{x}) = \frac{p(M_i)\, p(\vec{x}|M_i)}{p(\vec{x})}. \label{e:BayesModel} \end{equation} We compare the two models $M_i$ and $M_j$ by computing the \emph{odds ratio}: \begin{equation} O_{i,j}= \frac{p(M_i|\vec{x})}{p(M_j|\vec{x})} = \frac{p(M_i)\, p(\vec{x}|M_i)}{p(M_j)\, p(\vec{x}|M_j)} = \frac{p(M_i)}{p(M_j)} B_{i,j}, \label{e:Odds} \end{equation} where \begin{equation} B_{i,j} = \frac{p(\vec{x}|M_i)}{p(\vec{x}|M_j)} \label{e:BayesRatio} \end{equation} is the \emph{Bayes factor} of the two models, and we recognise the evidence $p(\vec{x}|M_i)$ from Eq.\,\ref{e:Bayes}. The evidence must be marginalised over the parameters of the model in order to compute the Bayes factor: \begin{equation} p(\vec{x}|M_i) = \int_{\Lambda} p(\vec{\lambda}|M_i) \, p(\vec{x}|\vec{\lambda},M_i) \,\mathrm{d}\vec{\lambda}. \label{e:evidence} \end{equation} There are existing algorithms dedicated to the computation of this integral, and of the Bayes factor. 
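The swap step of parallel tempering mentioned above is conventionally a Metropolis acceptance on the tempered likelihoods. A minimal sketch follows; this is the standard rule, not necessarily the exact condition implemented in \textsc{SPINspiral}:

```python
import math
import random

def swap_accepted(logL_i, logL_j, T_i, T_j, rng=random.random):
    """Metropolis acceptance for exchanging states between chains at
    temperatures T_i < T_j with current log-likelihoods logL_i, logL_j."""
    log_alpha = (1.0 / T_i - 1.0 / T_j) * (logL_j - logL_i)
    # Accept with probability min(1, exp(log_alpha)).
    return log_alpha >= 0 or math.log(rng()) < log_alpha

# Moving the higher-likelihood state to the colder chain is always accepted:
assert swap_accepted(logL_i=-10.0, logL_j=-5.0, T_i=1.0, T_j=2.0)
```

Swaps let hot chains feed well-separated modes to the $T=1$ chain, improving mixing in multi-modal posteriors such as the one considered here.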
For instance, \emph{nested sampling} \cite{MR2282208} has been shown to be very efficient in the case of non-spinning gravitational-wave sources \cite{2009arXiv0911.3820V}, and can in addition be used to produce PDFs of the parameters. As a by-product of the exploration of the parameter space with MCMC, it is possible to compute the evidences of the models used. We have implemented the harmonic-mean method \cite{1994JSTOR..N}, in which the evidence is approximated by: \begin{equation} p(\vec{x}|M_i)\,\approx\,\sum_{k=1}^N p(\vec{\lambda}_k|M_i)\,p(\vec{x}|\vec{\lambda}_k,M_i) \,V_{\vec{\lambda}_k}, \end{equation} where $\{\vec{\lambda}_k: k = 1,\dots,N\}$ is the set of $N$ points sampled by the MCMC, and $V_{\vec{\lambda}_k}$ is the volume of parameter space associated with the point $\vec{\lambda}_k$. Since the MCMC algorithm samples according to the posterior (and, up to a proportionality constant, converges towards posterior PDF), the density of points in the chain at a certain location $\vec{\lambda}_k$ in the parameter space $\Lambda$ will become proportional to the posterior for large $N$. It follows that \begin{equation} \lim_{N \to \infty}\,V_{\vec{\lambda}_k} = \frac{\alpha_i}{p(\vec{\lambda}_k|M_i)\,p(\vec{x}|\vec{\lambda}_k,M_i)}, \end{equation} with $\alpha_i$ a proportionality constant. We then have $p(\vec{x}|M_i)\,\approx\,\sum_{k=1}^N \alpha_i = N\,\alpha_i$, and obtain the estimate for $\alpha_i$ by considering the whole parameter space volume $V_t$: \begin{equation} V_t\,\approx\,\sum_{k=1}^N V_{\vec{\lambda}_k} = \sum_{k=1}^N \frac{\alpha_i}{p(\vec{\lambda}_k|M_i)\,p(\vec{x}|\vec{\lambda}_k,M_i)}. \end{equation} Finally, \begin{equation} p(\vec{x}|M_i)\,\approx\,N\,V_t\,\left[ \sum_{k=1}^N \frac{1}{p(\vec{\lambda}_k|M_i)\,p(\vec{x}|\vec{\lambda}_k,M_i)} \right]^{-1}, \end{equation} which is the harmonic mean of the posterior values sampled by the MCMC. 
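In practice the harmonic-mean estimate above is best evaluated in log space to avoid numerical underflow. A minimal sketch follows; the sampled log-posterior values and the total volume $V_t$ are placeholders:

```python
import math

def log_harmonic_mean_evidence(log_posts, log_Vt):
    """log p(x|M) ~ log N + log V_t - logsumexp(-log_posts), where log_posts
    are the (unnormalized) log prior*likelihood values at the N sampled points."""
    neg = [-lp for lp in log_posts]
    m = max(neg)
    # Log-sum-exp trick for numerical stability.
    log_sum = m + math.log(sum(math.exp(v - m) for v in neg))
    return math.log(len(log_posts)) + log_Vt - log_sum

# Sanity check: a constant posterior density rho over a volume V_t should give
# evidence rho * V_t (here 0.5 * 2 = 1, i.e. log evidence 0).
lp = [math.log(0.5)] * 100
est = log_harmonic_mean_evidence(lp, log_Vt=math.log(2.0))
```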
The issue with this method is that it gives too much weight to low-posterior points, which lie in a part of the parameter space that is badly sampled, by design, by the MCMC. The estimate of the evidence is then very sensitive to the quality of the sampling of a particular run. We are looking into other algorithms in order to remedy this problem, \emph{e.g.}\ by using the higher-temperature chains produced by parallel tempering \cite{earl-2005} (we currently use the $T=1$ chain only), or by using a well-sampled subset of points \cite{vanhaasteren-2009} to estimate the proportionality constant $\alpha_i$. A summary of the methods used in our MCMC code was published in \citeasnoun{2008CQGra..25r4011V}; a more complete technical description of the \textsc{SPINspiral} code will be available in \cite{vandersluysprep}. \section{Parameter estimation: Results} \label{sec:results} \subsection{Data sets} \label{sec:data} For these proceedings, we analyse three different data sets, each containing the data for the 4-km LIGO detectors at Hanford (H1) and Livingston (L1): \begin{description} \item[DS1:] a coherent software injection with a total SNR of 11.3 into synthetic Gaussian, stationary noise, simulated for the H1 and L1 detectors; \item[DS2:] a coherent software injection of the same signal, with a total SNR of 11.3, into ``quiet'' LIGO detector data from H1 and L1; \item[DS3:] raw LIGO data from H1 and L1, containing a known, coincident glitch of seismic origin, with a total SNR of 11.3. \end{description} For the data sets DS1 and DS2, the injected signal is that of a $10\,M_\odot$ spinning BH and a $1.4\,M_\odot$ spinning NS in an inspiralling binary system. A low-mass Compact Binary Coalescence Group search \cite{2009PhRvD..79l2001A} does not produce a GW trigger for the data segment DS2; hence we designate it ``quiet''.
The distance of each of the injections is scaled to obtain an SNR of 11.3, equal to that of the glitch in DS3, but computed with different waveforms: a SpinTaylor waveform (see section\,\ref{sec:GW}) for DS1 and DS2, and a non-spinning, ${2}$-pN waveform (see section\,\ref{sec:glitch}) for DS3. The other parameters of the injection are: \begin{eqnarray} \vec{\lambda} =& \{{\cal M}=2.99\,M_\odot,\eta=0.107,d_\mathrm{L},t_\mathrm{c},\phi_\mathrm{c}=85.9^\circ,\alpha=17.4\,h,\delta=61.6^\circ, \nonumber \\ &i=52.8^\circ,\psi=11.6^\circ,a_\mathrm{spin1}=0.6,\theta_\mathrm{spin1}=78.5^\circ,\phi_\mathrm{spin1}=63.0^\circ, \nonumber \\ &a_\mathrm{spin2}=0.4,\theta_\mathrm{spin2}=120.0^\circ,\phi_\mathrm{spin2}=315.1^\circ\}, \label{e:parameters} \end{eqnarray} where we assigned a spin of 0.4 to the neutron star, which is higher than astrophysically plausible, for testing purposes only. In DS3, no signal is injected. For our analyses, we use the data of both 4-km LIGO detectors H1 and L1. \subsection{MCMC simulations} \label{sec:runs} The MCMC analysis that we carry out on each data set consists of 10 independent Markov chains, each with a length of about a million iterations and composed of 5 chains at different temperatures for parallel tempering. From now on, we will refer to the $T=1$ chain as \emph{the chain}, since the hotter chains were not used in the post-processing. The part of the chains that is analysed is that after the \emph{burn-in} period \citeaffixed{gilks_etal_1996}{see \emph{e.g.}}, the length of which is determined automatically as follows: we determine the absolute maximum likelihood $\log({\cal L}_\mathrm{max})$, defined as the highest value for $\log[p(\vec{x}|\vec{\lambda},M)]$ obtained over the ensemble of parameter sets $\vec{\lambda}$ in any of our individual Markov chains. Then for each chain we include all the iterations \emph{after} the chain reaches a likelihood value of $\log({\cal L}_\mathrm{max})-2$ for the first time. 
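The burn-in rule just described can be sketched as follows (chains are represented simply as lists of log-likelihood values; here the iteration at which the threshold is first reached is kept as well):

```python
def post_burnin(chains, delta=2.0):
    """Apply the burn-in rule: find the global maximum log-likelihood over all
    chains, then keep, per chain, all iterations from the point where it first
    reaches logL_max - delta. Chains that never reach the threshold yield
    nothing (and count against convergence)."""
    logL_max = max(max(c) for c in chains)
    threshold = logL_max - delta
    kept = []
    for chain in chains:
        start = next((i for i, L in enumerate(chain) if L >= threshold), None)
        kept.append(chain[start:] if start is not None else [])
    return kept

# Two toy chains: the first converges, the second never reaches the threshold.
chains = [[-50.0, -10.0, -1.0, -0.5], [-60.0, -40.0, -30.0, -20.0]]
kept = post_burnin(chains)
```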
This results in a convergence test as well, since some of the independent chains may not reach this threshold value. Typically, we demand that more than 50\% of our chains meet this condition before we consider the MCMC run as \emph{converged}, although we consider results as \emph{robust} if they have a convergence rate of 80\% or more. This convergence test is a measure of the quality of our sampling in a given number of iterations. All our Markov chains start at values that are randomly offset from the injection values. The starting values for ${\cal M}$ and $t_\mathrm{c}$ are drawn from a Gaussian distribution centred on the injection value, with a standard deviation of $0.025\,M_\odot$ and 10\,ms respectively. In a real analysis, the two Gaussian distributions are centred on the values from the template-bank-based search of the Compact Binary Coalescence group \cite{2009PhRvD..79l2001A} that triggered the MCMC follow-up. The other thirteen parameters are drawn uniformly from their allowed ranges. \textsc{SPINspiral} typically needs to run for a few days to produce first results and a week or two to accumulate a sufficient number of iterations for good statistics, each chain using a single 2.8\,GHz CPU. \subsection{Analysis of data sets DS1 and DS2} \label{sec:subresults} We analysed the data sets DS1 and DS2 as described in section~\ref{sec:data} and the results of both analyses passed the convergence test described in section~\ref{sec:runs} with convergence rates of 70\% and 80\%, respectively. The resulting one-dimensional marginalised PDFs from both analyses are shown in figure~\,\ref{fig:PDFs}. \begin{figure}% \centering \includegraphics[width=.7\textwidth,angle=270]{comp_pdfs/comp_pdfs__example__1d.eps} \caption{ One-dimensional marginalised PDFs for all 15 parameters from our analysis of data sets DS1 (hatched upward; red in the online colour version) and DS2 (hatched downward; blue in the online colour version).
The vertical dashed lines mark the injection values. } \label{fig:PDFs} \end{figure} Table~\,\ref{table:numbers} shows the median and the width of the 95\%-probability ranges for each parameter. The differences we find between the results for DS1 and DS2 may be attributed to the particular noise realisations in this example, and most parameters yield similar PDFs and accuracies. \begin{table} \caption{ Median and width of the 95\%-probability ranges for each parameter of the analyses of data sets DS1 and DS2. The column \emph{recovered} indicates whether or not the 95\% range includes the injection value. \label{table:numbers} } \begin{indented} \item[] \begin{tabular}{l | l | lll | lll} \br & & \multicolumn{3}{c|}{DS1 (synthetic noise)} & \multicolumn{3}{c}{DS2 (detector noise)} \\ & injection & median & 95\% width & recovered & median & 95\% width & recovered \\ \mr ${\cal M}\,(M_\odot)$ & 2.99 & 3.006 & 0.294 & yes & 3.041 & 0.122 & yes \\ $\eta$ & 0.107 & 0.133 & 0.145 & yes & 0.183 & 0.144 & yes \\ $d_\mathrm{L}$\,(Mpc) & 28.615 & 21.240 & 20.764 & yes & 24.144 & 17.238 & yes \\ $t_\mathrm{c}$\,(s) & 0.000 & -0.013 & 0.024 & yes & 0.006 & 0.019 & yes \\ $\phi_\mathrm{c}\,(^\circ)$ & 85.944 & 189.745 & 342.398 & yes & 185.482 & 343.175 & yes \\ $\alpha$\,(h) & 17.380 & 11.684 & 5.349 & \textbf{no} & 17.786 & 6.320 & yes \\ $\delta\,(^\circ)$ & 61.642 & 49.326 & 64.346 & yes & 58.390 & 39.796 & yes \\ $i\,(^\circ)$ & 52.753 & 67.056 & 110.735 & yes & 46.850 & 122.787 & yes \\ $\psi\,(^\circ)$ & 11.459 & 93.162 & 176.358 & yes & 88.706 & 173.869 & yes \\ $a_\mathrm{spin1}$ & 0.600 & 0.658 & 0.594 & yes & 0.804 & 0.478 & yes \\ $\theta_\mathrm{spin1}\,(^\circ)$ & 78.463 & 85.490 & 83.110 & yes & 89.225 & 85.787 & yes \\ $\phi_\mathrm{spin1}\,(^\circ)$ & 63.025 & 57.171 & 335.592 & yes & 263.014 & 345.700 & yes \\ $a_\mathrm{spin2}$ & 0.400 & 0.532 & 0.945 & yes & 0.475 & 0.940 & yes \\ $\theta_\mathrm{spin2}\,(^\circ)$ & 120.000 & 94.687 & 150.544 & yes & 89.406 & 
146.101 & yes \\ $\phi_\mathrm{spin2}\,(^\circ)$ & 315.127 & 181.959 & 327.603 & yes & 184.681 & 339.071 & yes \\ $M_1\,(M_\odot)$ & 10.002 & 8.533 & 8.849 & yes & 6.421 & 6.536 & yes \\ $M_2\,(M_\odot)$ & 1.400 & 1.598 & 1.277 & yes & 2.036 & 1.564 & yes \\ \br \end{tabular} \end{indented} \end{table} The PDFs of the parameters that describe the spin of the NS follow the prior distributions in both runs. This justifies ignoring the NS spin (by fixing $a_\mathrm{spin2}$ to 0.0 in the recovery template) for this mass ratio \cite{2008ApJ...688L..61V}. For each of the two data sets, DS1 and DS2, we computed the Bayes factor to compare the evidence for the following two models: $M_1$: a ${3.5}$-pN inspiral waveform embedded in Gaussian noise, and $M_2$: Gaussian noise only. The values are listed in table~\,\ref{table:Bayes}. In both cases, the Bayes factor is large, providing strong evidence for a GW signal in the data. The difference in Bayes factor between DS1 and DS2 is attributed to an inherent spread due to different noise realisations, and the uncertainties of our method to estimate the Bayes factor (section~\ref{sec:methods}). The results in this section show an illustrative example, but cannot be used to draw firm conclusions. However, it is clear that they warrant a larger, systematic study of these phenomena with the methods described here. \begin{table} \caption{ Bayes factors $B_{1,2}$ between the models $M_1$: a ${3.5}$-pN inspiral waveform embedded in Gaussian noise, and $M_2$: Gaussian noise only (section~\ref{sec:subresults}) for data sets DS1 and DS2 (see section~\ref{sec:data}). 
\label{table:Bayes} } \begin{indented} \item[] \begin{tabular}{cccc} \br & DS1 (Gaussian noise) & DS2 (detector data) & DS3 (glitch)\\ \mr $\log_e B_{1,2}$ & 52.9 & 43.5 & 68.5 \\ \br \end{tabular} \end{indented} \end{table} \subsection{Analysis of data sets DS2 and DS3} \label{sec:glitch} On November 2nd 2006, seismic activity at Hanford and Livingston resulted in a coincident ``glitch'' in the data from the H1 and L1 LIGO detectors. These glitches were recovered by the Compact Binary Coalescence detection pipeline at an SNR of 11.3, using non-spinning, stationary-phase-approximation templates, Newtonian in amplitude and 2.0-pN in phase \cite{2009PhRvD..79l2001A}. We defined the corresponding data set as DS3 in section~\ref{sec:data} and analysed the data as if it had yielded a GW trigger. The convergence test from section~\ref{sec:runs} yields a 20\% convergence rate, which results in our rejection of the results as \emph{not converged}. However, when we nevertheless construct the marginalised one-dimensional PDFs from the data of the two converged chains (because of the small number of data points, the resulting PDFs may not be very accurate), they are similar in appearance to those from DS2 (see figure~\,\ref{fig:glitch}). The Bayes factors in table~\,\ref{table:Bayes} even suggest that the data set DS3 is more consistent with containing a GW signal than DS2 (with the caveat that the SNRs of DS2 and DS3 were not computed the same way). On the other hand, the low value for the median of $\eta$ (0.05) corresponds to a mass ratio of 18, which is near the limit of the regime where post-Newtonian expansions are valid. In particular, a small value for $\eta$ suggests a slow frequency evolution which may indicate a spike in the frequency spectrum that dominates the signal.
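As a numerical aside (a sketch of the standard mass conversions, not part of the original analysis), the chirp mass ${\cal M}=(M_1M_2)^{3/5}/(M_1+M_2)^{1/5}$ and symmetric mass ratio $\eta=M_1M_2/(M_1+M_2)^2$ quoted in the tables can be reproduced from the injected component masses, and the median $\eta=0.05$ found for DS3 can be inverted to the quoted mass ratio of about 18:

```python
import math

def chirp_mass_and_eta(m1, m2):
    """Chirp mass and symmetric mass ratio from component masses (solar masses)."""
    M = m1 + m2
    eta = m1 * m2 / M**2
    mchirp = M * eta**0.6          # M_chirp = M * eta^(3/5)
    return mchirp, eta

def mass_ratio_from_eta(eta):
    """Invert eta = q/(1+q)^2 for the mass ratio q >= 1."""
    b = 1.0 / eta - 2.0            # from q^2 - (1/eta - 2) q + 1 = 0
    return (b + math.sqrt(b * b - 4.0)) / 2.0

mchirp, eta = chirp_mass_and_eta(10.002, 1.400)  # injected masses from the table
print(mchirp, eta)                 # close to the quoted injection values 2.99 and 0.107
print(mass_ratio_from_eta(0.05))   # close to the quoted mass ratio of 18
```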
In addition, we find that the sky map for DS3 does not display the (parts of a) sky ring that is expected for an analysis using two non-co-located detectors \citeaffixed{2009CQGra..26k4007R}{see \textit{e.g.}}. These results indicate that we should thoroughly verify our tests, such as the convergence criterion described here, using a large number of different glitches. \begin{figure} \centering \includegraphics[width=.85\textwidth,angle=0]{glitch/GPS0846471912_H1L1__SpinTaylor15_3.5pN_2sp__pdfs.eps} \caption{ One-dimensional marginalised PDFs of a few selected parameters from our analysis of data set DS3. The vertical dashed lines indicate the median of each PDF. } \label{fig:glitch} \end{figure} \section{Conclusions} \label{sec:concl} We have developed the code \textsc{SPINspiral} which can do a complete parameter analysis of the gravitational-wave signals from quasi-circular compact-binary inspirals. We presented an example of the analysis of software injections into both simulated Gaussian noise (DS1) and LIGO-detector data (DS2). We also presented an analysis of a data set containing no injection, but a ``glitch'' coincident in two LIGO interferometers (DS3). These examples demonstrate a remarkable similarity between the results obtained from a GW signal injected in Gaussian noise and a similar signal in detector data. The Bayes factors are also similar, where we note that our present technique for computing the Bayes factor yields estimates with significant variance, and more precise estimates should be possible in the future. In addition, we find that although the Markov chains in the analysis of a coincident glitch in LIGO data do not converge, the resulting PDFs could look remarkably consistent with a simulated GW signal. 
We plan to run our code on a very large number of coincident triggers from the LIGO Compact Binary Coalescence search pipeline (noise events that are somehow being registered as resembling a binary inspiral) in order to get a good sense of how to distinguish them from actual inspirals. We conclude that further, detailed investigations are necessary to ensure we can rely on the robustness of our tests. \ack The authors acknowledge a CITA National Fellowship to the UoA for MvdS, the NSF astronomy and astrophysics postdoctoral fellowship under the award AST-0901985 to IM, a NSF Gravitational Physics grant (PHY-0854790) to NC and the Max-Planck-Society (CR). Computations were performed on the Fugu computer cluster funded by NSF MRI grant PHY-0619274 to VK. \section*{References} \bibliographystyle{jphysicsB}
\section{Introduction} In 1995 it was realized that open strings with Dirichlet boundary conditions can end on D-branes \cite{1}. Two D-branes, with an open string stretched between them, can interact. One of the best methods for extracting their properties is the calculation of the interaction amplitude. This amplitude was obtained from the one-loop open string diagram; however, this is equivalent to a tree-level diagram in the closed string exchange \cite{2}. A closed string is generated from the vacuum, propagates for a while and then annihilates again in the vacuum. The state which describes the creation (annihilation) of a closed string from (in) the vacuum is called a boundary state \cite{3}. The boundary state formalism is therefore a powerful tool for calculating the interaction amplitude of branes, $e.g.$ see \cite{4,5,6,7,8,9,10,11,12,13,14,15} and references therein. In the case of D$p$-branes with nonzero background and internal gauge fields, the boundary state formalism is an effective method for calculating their interaction amplitude. For a closed string emitted (absorbed) by a D$p$-brane in the presence of the background field $B_{\mu\nu}$ and the $U(1)$ gauge field $A_\alpha$ (which lives on the brane), there are mixed boundary conditions. Such a D$p$-brane is called an $mp$-brane \cite{12,13,14,15}. Previously we studied the interaction of two stationary $m1$-branes at an angle \cite{14}. In addition, we considered moving mixed branes \cite{15}. In both cases the spacetime is compact. In this article we study both cases simultaneously, i.e. a system of moving and angled $m1$-branes in a spacetime partially compactified on a torus. First, the boundary state associated with a moving $m1$-brane, which makes an angle with the $X^1$-direction, will be obtained. The brane is parallel to the $X^1X^2$-plane and carries an electric field along itself. Then the interaction amplitude of the system of $m1$ and $m1'$ branes will be obtained.
The angle between the branes is $\phi$. The branes move along the $X^3$-direction with velocities $V_1$ and $V_2$. Various properties of the interaction amplitude of this system will be analyzed. The large-distance behavior of the amplitude, which reveals the contribution of the closed string massless states to the interaction, will be obtained. This paper is organized as follows. In section 2, we obtain the boundary state corresponding to an oblique moving $m1$-brane. In section 3, we obtain the interaction amplitude via the overlap of two boundary states. In section 4, we suppose these $m1$-branes are located at large distance; thus, the contribution of the massless states to the interaction will be studied. Section 5 is devoted to the conclusions. \section{Boundary state of an oblique moving $m1$-brane} We suppose that an $m1$-brane with the electric field $E_2$ along it moves with velocity $V_2$ along the direction $X^3$, while making an angle $\theta_2$ with the $X^1$-direction. It is parallel to the $X^1X^2$-plane. In our notation the index 2 in $E_2$, $V_2$, $\cdot \cdot \cdot$, refers to the second $m1$-brane; similarly, $E_1$, $V_1$, $\cdot \cdot \cdot$, refer to the first $m1$-brane. Note that in this article the signature of the metric is $\eta_{\mu\nu}={\rm diag} (-1,1, \cdot \cdot \cdot,1)$. Previously we obtained the boundary state for a moving $mp$-brane \cite{15}. In the corresponding boundary state equations we set $p=1$, and then rotate the $m1$-brane to make an angle $\theta_2$ with the $X^1$-direction.
After this process the boundary state equations, associated with the moving-angled $m1$-brane, take the form \begin{equation} [\partial_{\tau}X^0-V_2\partial_{\tau}X^3-E_2\cos\theta_2\partial_{\sigma} X^1-E_2\sin\theta_2\partial_{\sigma}X^2]_{\tau_0}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [\cos\theta_{2}\partial_{\tau}X^1+\sin\theta_2\partial_{\tau}X^2 -E_2\partial_{\sigma}X^0]_{\tau_0}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [X^3-V_2X^0-{y^3}_{(2)}]_{\tau_0}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [-(X^1-y^1_{(2)})\sin\theta_2+(X^2-y^2_{(2)})\cos\theta_2]_{\tau_0} |{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} (X^j-y^j_{(2)})_{\tau_0}|B^2_x,\tau_0\rangle=0,\quad\quad\quad j\neq0,1,2,3. \end{equation} The mode expansion of $X^{\mu}(\sigma,\tau)$ is \begin{equation} X^{\mu}(\sigma,\tau)=x^{\mu}+2\alpha'p^{\mu}\tau+2L^{\mu}\sigma+ \frac{i}{2}\sqrt{2\alpha'}\sum_{m\neq0}\frac{1}{m} (\alpha_m^{\mu}e^{-2im(\tau-\sigma)} +\tilde{\alpha}_m^{\mu}e^{-2im(\tau+\sigma)}), \end{equation} where $L^\mu$ is zero for the non-compact directions. For a compact direction there are $L^\mu=N^\mu R^\mu$ and $p^\mu=\frac{M^\mu}{R^\mu}$, where $N^\mu$ and $M^\mu$ are winding number and momentum number of the emitted (absorbed) closed string from the brane, respectively. $R^\mu$ also is the radius of compactification of the compact direction $X^\mu$. After replacing the mode expansion of $X^\mu$ into the Eqs. (1)-(5) these equations will be written in terms of the oscillators. 
The zero mode part of the boundary state equations become \begin{equation} [p^0-V_2p^3-\frac{1}{\alpha'}E_2(L^2\sin\theta_2+ L^1\cos\theta_2)]_{op}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [p^1\cos\theta_2+p^2\sin\theta_2-\frac{1}{\alpha'}E_2L^0]_{op} |{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [-(x^1-y^1_{(2)}+2\alpha'\tau_0p^1)\sin\theta_2+ (x^2-y^2_{(2)}+2\alpha'\tau_0p^2)\cos\theta_2]_{op}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [L^2\cos\theta_2-L^1\sin\theta_2]_{op}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [x^3+2\alpha'\tau_0p^3-y^3_{(2)}- V_2(x^0+2\alpha'\tau_0p^0)]_{op}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} (L^3-V_2L^0)_{op}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [x^j+2\alpha'p^j\tau_0-y^j_{(2)}]_{op}|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} (L^j)_{op}|{B_x}^2,\tau_0\rangle=0. \end{equation} For the oscillating part, the equations of the boundary state are as in the following \begin{eqnarray} &~&[(\alpha^0_m-V_2\alpha^3_m+E_2(\alpha^2_m\sin\theta_2+ \alpha^1_m\cos\theta_2))e^{-2im\tau_0} \nonumber\\ &~&+(\tilde{\alpha}^0_{-m}-V_2\tilde{\alpha}^3_{-m}- E_2(\tilde{\alpha}^2_{-m}\sin\theta_2+\tilde{\alpha}^ 1_{-m}\cos\theta_2))e^{2im\tau_0}]|{B_x}^2,\tau_0\rangle=0, \end{eqnarray} \begin{eqnarray} &~&[(E_2\alpha^0_m+\alpha^1_m\cos\theta_2+ \alpha^2_m\sin\theta_2)e^{-2im\tau_0}+ \nonumber\\ &~&(-E_2\tilde{\alpha}^0_{-m}+\tilde{\alpha}^1_{-m}\cos\theta_2+ \tilde{\alpha}^2_{-m}\sin\theta_2)e^{2im\tau_0}]|{B_x}^2,\tau_0\rangle=0, \end{eqnarray} \begin{equation} [(-\alpha^1_m\sin\theta_2+\alpha^2_m\cos\theta_2)e^{-2im\tau_0} +(\tilde{\alpha}^1_{-m}\sin\theta_2- \tilde{\alpha}^2_{-m}\cos\theta_2) e^{2im\tau_0}]|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} [(-V_2\alpha^0_m+\alpha^3_m)e^{-2im\tau_0}+ (V_2\tilde{\alpha}^0_{-m}-\tilde{\alpha}^3_{-m}) e^{2im\tau_0}]|{B_x}^2,\tau_0\rangle=0, \end{equation} \begin{equation} 
[\alpha^j_me^{-2im\tau_0}-\tilde{\alpha}^j_{-m} e^{2im\tau_0}]|{B_x}^2,\tau_0\rangle=0\quad,\quad j\in\{4,\cdots,d-1\}. \end{equation} These equations can be collected in a single equation, i.e., \begin{equation} ({\alpha^\mu_m}e^{-2im\tau_0}+{{{S_{(2)}}^\mu}_\nu}\tilde{\alpha}^ \nu_{-m}e^{2im\tau_0})|{B_x}^2,\tau_0\rangle=0, \end{equation} where the matrix ${{S_{(2)}}^\mu}_\nu$ is defined by \begin{equation} {{S_{(2)}}^\mu}_\nu=\left(\begin{array}{l} \vspace{0.5cm} {\Omega_{(2)}}^p_{\;\;\;q}\quad\quad\quad\quad\quad 0 \\ \vspace{0.5cm} 0\quad\quad\quad -I_{(d-4)\times(d-4)} \end{array}\right)\quad\quad,\quad\quad p,q\in\{0,1,2,3\}. \end{equation} The matrix ${\Omega_{(2)}}^p_{\;\;\;q}$ has the definition \begin{equation} \scriptsize{\Omega_{(2)}}^p_{\;\;\;q} =\frac{1}{1-V^2_2-E^2_2} \left[ \begin{array}{llll} \vspace{0.5cm} 1+V^2_2+E^2_2&-2E_2\cos\theta_2&-2E_2\sin\theta_2&-2V_2\\ \vspace{0.5cm} -2E_2\cos\theta_2&(1-V^2_2)\cos2\theta_2+E^2_2&(1-V^2_2) \sin2\theta_2&2V_2E_2\cos\theta_2\\ \vspace{0.5cm} -2E_2\sin\theta_2&(1-V^2_2)\sin2\theta_2&-[(1-V^2_2) \cos2\theta_2-E^2_2]&2V_2E_2\sin\theta_2\\ \vspace{0.5cm} 2V_2&-2V_2E_2\cos\theta_2&-2V_2E_2\sin \theta_2&-(1-E^2_2+V^2_2) \end{array}\right]. \end{equation} According to $({\Omega_{(2)}}^T)^p_{\;\;\;q}=\eta^{pp}\eta_{qq}{\Omega_{(2)}}^q_{\;\;\;p}$ the matrix $\Omega_{(2)}$ is orthogonal, and hence $S_{(2)}$ is also an orthogonal matrix. By solving Eqs.
(7)-(14) and (20), the boundary state is obtained \begin{eqnarray} &~&|{B_x}^2,\tau_0\rangle=\frac{T}{2} \sqrt{1-V^2_2-E^2_2}\exp[i\alpha'\tau_0 (\gamma^2_2(p^3_{op}-V_2p^0_{op})^2 \nonumber\\ &~&+(-p^1_{op}\sin\theta_2+p^2_{op}\cos\theta_2)^2+ \sum_{j=4}^{d-1}(p^j_{op})^2)] \nonumber\\ &~&\times\delta[-(x^1-y^1_{(2)})\sin\theta_2+ (x^2-y^2_{(2)})\cos\theta_2]\delta(x^3-y^3_{(2)}-V_2x^0) \prod_{j=4}^{d-1}\delta(x^j-y^j_{(2)}) \nonumber\\ &~&\times\sum_{p^0}\sum_{p^1}\sum_{p^2} |p^0\rangle|p^1\rangle|p^2\rangle \prod_{j=4}^{d-1}|p^j_L=p^j_R=0\rangle |p^3_L=p^3_R=\frac{1}{2}V_2p^0\rangle \nonumber\\ &~&\times\exp[-\sum_{m=1}^\infty(\frac{1}{m}e^{4im\tau_0} \alpha^\mu_{-m}S^{(2)}_{\mu\nu}\tilde{\alpha}^\nu_{-m})]|0\rangle, \end{eqnarray} where $\gamma_2=1/\sqrt{1-V_2^2}$, and $T=\frac{\sqrt{\pi}}{2^{(d-10)/4}}(4\pi^2\alpha')^{(d-6)/4}$ is the tension of the $m1$-brane, which lives in the $d$-dimensional spacetime. The momentum components of the closed string that appear in (23) are given by \begin{equation} p^0=\frac{\gamma^2_2}{\alpha'}E_2(\ell^2\sin\theta_2+\ell^1\cos\theta_2), \end{equation} \begin{equation} p^1=\frac{E_2}{\alpha'}\ell^0\cos\theta_2, \end{equation} \begin{equation} p^2=\frac{E_2}{\alpha'}\ell^0\sin\theta_2, \end{equation} \begin{equation} p^3=\frac{\gamma^2_2V_2}{\alpha'}E_2(\ell^2\sin\theta_2+\ell^1\cos\theta_2), \end{equation} where $p^\mu=p^\mu_L+p^\mu_R$ and $\ell^\mu=\alpha'(p^\mu_L-p^\mu_R)=N^\mu R^\mu$. Equations (24)-(26) should be used when summing over $p^0,p^1$ and $p^2$ in (23); these summations therefore convert to sums over the winding numbers $N^0,N^1$ and $N^2$. Eq. (24) implies that the energy of the closed string is quantized and depends on its winding numbers around the $X^1$ and $X^2$ directions. Moreover, Eqs. (24)-(27) imply that the momentum numbers of the closed string, $M^0,M^1,M^2$ and $M^3$, are related to its winding numbers $N^0,N^1$ and $N^2$. The Eqs.
(10), (12) and (14) also lead to the relations \begin{equation} \ell^2\cos\theta_2=\ell^1\sin\theta_2, \end{equation} \begin{equation} \ell^3=V_2\ell^0, \end{equation} \begin{equation} \ell^j=0. \end{equation} We can write Eq. (28) in the form \begin{equation} N^2R^2\cos\theta_2=N^1R^1\sin\theta_2. \end{equation} This equation tells us that the closed string can wind around the $X^1$ and $X^2$ directions only when $\frac{R^1\sin\theta_2}{R^2\cos\theta_2}$ is rational; otherwise $N^1=N^2=0$ and the closed string has no winding around $X^1$ and $X^2$, in which case its energy also vanishes. In the same way, by Eq. (29), winding around $X^3$ and $X^0$ requires the quantity $\frac{V_2R^0}{R^3}$ to be rational. The ghost part of the boundary state is independent of the electric field $E_2$, the velocity $V_2$ and the angle $\theta_2$. It is given by \begin{equation} |B_{gh},\tau_0\rangle=\exp\bigg{[}\sum_{m=1}^\infty e^{4im\tau_0}(c_{-m}{\tilde{b}}_{-m}-b_{-m} \tilde{c}_{-m})\frac{c_0+\tilde{c}_0}{2}\bigg{]} |q=1\rangle|\tilde{q}=1\rangle. \end{equation} \section{Interaction between two $m1$-branes} Before calculating the interaction amplitude, let us introduce some notation for the positions of the two mixed branes. Like the $m1$-brane, the $m1'$-brane is parallel to the $X^1X^2$-plane; it makes an angle $\theta_1$ with the $X^1$-direction and moves with velocity $V_1$ along the $X^3$-direction. The electric field on it is $E_1$. The common direction of motion is $X^3$, and the other directions, perpendicular to the world-volumes of both branes, are $\{X^j| j \neq 0,1,2,3\}$. We use the set $\{X^{j_n}\}$ to denote the non-compact part of $\{X^j\}$, and $\{X^{j_c}\}$ for the compact part of $\{X^j\}$. Now we can calculate the overlap of the two boundary states to obtain the interaction amplitude of the branes. The complete boundary state for each brane is \be |B\rangle=|B_x\rangle|B_{gh}\rangle.
\ee These two mixed branes simply interact via exchange of closed strings so the amplitude is given by \be {\cal A}=~^{^{(1)}}\langle B,\tau_0=0|D|B,\tau_0=0\rangle^{(2)}, \ee where ``$D$'' is the closed string propagator. The calculation is straightforward but tedious. Here we only write the final result \begin{eqnarray} &~&{\cal A}=\frac{T^2\alpha' L}{4(2\pi)^{d-4}|\sin\phi| |V_1-V_2|}\sqrt{(1-V_1^2-E_1^2)(1-V_2^2-E_2^2)} \nonumber\\ &~&\times\int_0^\infty dt \bigg{\{}e^{(d-2)t/6}\bigg{(}\sqrt{\frac{\pi}{\alpha't}}\bigg{)}^{d_{j_n}} \prod_{j_n}\exp \bigg{(}-\frac{(y^{j_n}_{(1)}-y^{j_n}_{(2)}) ^2}{4\alpha't}\bigg{)}\prod_{j_c}\Theta_3 \bigg{(}\frac{y^{j_c}_ {(1)}-y^{j_c}_{(2)}}{2\pi R_{j_c}}\bigg{|}\frac{i\alpha't}{\pi(R_{j_c})^2}\bigg{)} \nonumber\\ &~&\times\sum_{N^0}\sum_{N^1}\sum_{N^2}\bigg{[}\exp[-\frac{t}{\alpha'} (\ell^0\ell^0+(\ell^1\cos\theta_1+\ell^2\sin\theta_1) (\ell^1\cos\theta_2+\ell^2\sin\theta_2) \nonumber\\ &~&+F^{(+)}F^{(-)})+\frac{i}{\alpha'} (\Phi(12)y^3_{(2)}-\Phi(21)y^3_{(1)})]\bigg{]} \Theta_3(\nu|\tau) \nonumber\\ &~&\times \prod_{n=1}^\infty[\det(1-\Omega_1 \Omega_2^Te^{-4nt})]^{-1}(1-e^{-4nt})^{6-d}\bigg{\}}, \end{eqnarray} where $L=2\pi R_0$, and $\Phi(12)$ and $F^{(\pm)}$ are defined by \begin{eqnarray} &~&\Phi(12)=\frac{1}{V_2-V_1}[\gamma^2_1E_1(V^2_1+1)(\ell^2\sin \theta_1+\ell^1\cos\theta_1) \nonumber\\ &~&-\gamma^2_2E_2(1+V_1V_2)(\ell^2\sin\theta_2+\ell^1\cos\theta_2)], \end{eqnarray} \begin{eqnarray} &~&F^{(\pm)}=\frac{1}{|V_1-V_2|}[\gamma^2_2(1\pm V_1)(1+V_2^2)E_2 (\ell^2\sin\theta_2+\ell^1\cos\theta_2) \nonumber\\ &~&-\gamma^2_1(1\pm V_2)(1+V^2_1)E_1(\ell^2\sin\theta_1+\ell^1\cos\theta_1)]. \end{eqnarray} We can obtain $\Phi(21)$ by exchanging $1\longleftrightarrow 2$ in (36). 
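As a quick numerical cross-check (with arbitrarily chosen illustrative parameter values, not tied to any physical configuration), the Lorentz orthogonality $\Omega^T\eta\,\Omega=\eta$ quoted below Eq. (22) — which in particular makes the combination $\Omega_1\Omega_2^T$ appearing in the determinant factors above itself a Lorentz transformation — can be verified for the matrix of Eq. (22), with its $(1,2)$ entry taken as $(1-V_2^2)\sin2\theta_2$, as the stated transpose relation requires:

```python
import numpy as np

def Omega(V, E, th):
    """Matrix Omega of Eq. (22) for velocity V, electric field E, angle th."""
    c, s = np.cos(th), np.sin(th)
    c2, s2 = np.cos(2 * th), np.sin(2 * th)
    pref = 1.0 / (1 - V**2 - E**2)
    return pref * np.array([
        [1 + V**2 + E**2, -2*E*c,             -2*E*s,                -2*V],
        [-2*E*c,          (1-V**2)*c2 + E**2, (1-V**2)*s2,           2*V*E*c],
        [-2*E*s,          (1-V**2)*s2,        -((1-V**2)*c2 - E**2), 2*V*E*s],
        [2*V,             -2*V*E*c,           -2*V*E*s,              -(1 - E**2 + V**2)],
    ])

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric signature (-,+,+,+)
Om1 = Omega(0.3, 0.4, 0.7)             # arbitrary illustrative values
Om2 = Omega(0.1, 0.2, 1.1)
assert np.allclose(Om1.T @ eta @ Om1, eta)   # Omega is Lorentz-orthogonal
# hence Omega_1 Omega_2^T is also Lorentz-orthogonal
M = Om1 @ Om2.T
assert np.allclose(M.T @ eta @ M, eta)
```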
In addition, $\phi=\theta_2-\theta_1$, while $\nu$ and $\tau$ are defined by \begin{eqnarray} &~&\nu=\frac{R_0}{2\pi\alpha'\sin\phi}[(E_2-E_1\cos\phi) \bar{y}^2_{(1)}+(E_1-E_2\cos\phi)\bar{y}^2_{(2)}], \nonumber\\ &~&\tau=\frac{itR_0^2}{\pi\alpha'}\bigg{(}\frac {E_1^2+E_2^2-2E_1E_2\cos\phi}{\sin^2\phi}-1\bigg{)}. \end{eqnarray} The set $\{\bar{y}_{(2)}^2,y_{(2)}^3,\cdots,y_{(2)}^{(d-1)}\}$ gives the position of the $m1$-brane, with $\bar{y}^2_{(2)}=-y^1_{(2)}\sin\theta_2+y^2_{(2)}\cos\theta_2$ and $y^1_{(2)}\cos\theta_2+y^2_{(2)}\sin\theta_2=0$; similarly for the $m1'$-brane. We observe that the interaction amplitude depends not only on the relative angle $\phi$ between the branes, but also on the configuration angles of the branes, i.e. $\theta_1$ and $\theta_2$. Because of the electric fields, this amplitude is not symmetric under the change $\phi\rightarrow \pi-\phi$. Therefore, for the angled mixed branes, $\phi$ and $\pi-\phi$ indicate two different configurations. From (38), we see that the electric fields and the compactification of the time direction cause $\bar{y}^2_{(2)}$ and $\bar{y}^2_{(1)}$ to appear in the interaction. Finally, the amplitude (35) is symmetric with respect to the $m1$- and $m1'$-branes, i.e., \begin{equation} {\cal A}(V_1,V_2;E_1,E_2;\theta_1,\theta_2;y_1,y_2)= {\cal A}^*(V_2,V_1;E_2,E_1;\theta_2,\theta_1;y_2,y_1). \end{equation} For the complex conjugation see (34). For non-compact spacetime, remove all factors $\Theta_3$ from (35). In addition, use $\ell^0=\ell^1=\ell^2=0$, and change $j_n\rightarrow j$, and hence $d_{j_n}\rightarrow d-4$.
So the interaction amplitude in the non-compact spacetime is as follows \begin{eqnarray} &~&{\cal A}_{\rm non-compact}=\frac{T^2\alpha' L}{4(2\pi)^{d-4} |\sin\phi||V_1-V_2|} \sqrt{(1-V^2_1-E^2_1)(1-V^2_2-E^2_2)} \nonumber\\ &~&\times\int_0^\infty dt\bigg{\{}e^{(d-2)t/6}\bigg{(}\sqrt{\frac{\pi} {\alpha't}}\bigg{)}^{d-4}\exp \bigg{(}-\sum_{j=4}^{d-1}\frac {(y^j_{(1)}-y^j_{(2)})^2}{4\alpha't}\bigg{)} \nonumber\\ &~&\times\prod_{n=1}^\infty[(\det(1-\Omega_1 \Omega^T_2e^{-4nt}))^{-1}(1-e^{-4nt})^{6-d}]\bigg{\}}. \end{eqnarray} This interaction depends on the square of the minimal distance between the branes, that is $\sum_{j=4}^{d-1}(y^j_{(1)}-y^j_{(2)})^2$. \section{Large distance branes} Now we extract the contribution of the massless states to the interaction. As the metric $G_{\mu\nu}$, anti-symmetric tensor $B_{\mu\nu}$ and dilaton $\Phi$ have zero winding and zero momentum numbers, only the term with $N^0=N^1=N^2=0$ corresponds to these massless states. By using the identity $\det M=e^{{\rm Tr}(\ln M)}$ for a matrix $M$, we obtain the following limit for $d=26$ \begin{eqnarray} &~&\lim_{q\rightarrow0}\frac{1}{q}\prod_{n=1}^\infty \bigg{(}[\det(1-\Omega_1\Omega_2^T q^n)]^{-1}(1-q^n)^{-20}\bigg{)} \nonumber\\ &~&=\lim_{q\rightarrow0}\frac{1}{q}+{\rm Tr}(\Omega_1\Omega_2^T)+20, \end{eqnarray} where $q=e^{-4t}$. Setting aside the tachyon divergence, the contribution of the massless states is given by \begin{eqnarray} &~&{\cal A}^{(0)}=\frac{T^2\alpha'L} {4(2\pi)^{22}|\sin\phi||V_1-V_2|} \sqrt{(1-V^2_1-E^2_1)(1-V_2^2-E_2^2)} [{\rm Tr}(\Omega_1\Omega_2^T)+20]G, \nonumber\\ &~&G\equiv\int_0^\infty dt\bigg{\{}\bigg{(}\sqrt{\frac{\pi}{\alpha't}}\bigg{)}^{d_{j_n}} \prod_{j_n}\exp \bigg{(}-\frac{(y^{j_n}_{(1)}-y^{j_n}_{(2)})^2}{4\alpha't} \bigg{)} \prod_{j_c}\Theta_3 \bigg{(}\frac{y^{j_c}_{(1)}-y^{j_c}_{(2)}} {2\pi R_{j_c}}\bigg{|}\frac{i\alpha't} {\pi(R_{j_c})^2}\bigg{)}\Theta_3(\nu|\tau)\bigg{\}}.
\end{eqnarray} For the non-compact spacetime this amplitude reduces to \begin{equation} {\cal A}^{(0)}_{\rm non-compact}=\frac{T^2 \alpha' L}{4(2\pi)^{22} |\sin\phi||V_1-V_2|} \sqrt{(1-V^2_1-E_1^2)(1-V_2^2-E_2^2)} [{\rm Tr}(\Omega_1\Omega_2^T)+20]G_{22}({\bar Y}^2), \end{equation} where $\bar{Y}^2=\sum_{j=4}^{25}(y^j_1-y^j_2)^2$ is the square of the impact parameter, and $G_{22}$ is the Green's function of the 22-dimensional space. \section{Conclusions} We obtained the boundary state associated with an oblique moving $m1$-brane, parallel to the $X^1X^2$-plane. This state reveals how the electric field, the velocity of the brane, the obliqueness of the brane, and the compact and non-compact parts of the spacetime affect the brane. For a closed string emitted (absorbed) by such a brane, some of the momentum numbers are related to the winding numbers. We determined the interaction amplitude of two moving-angled $m1$-branes, which live in a partially compact spacetime. This amplitude depends on the electric fields, the velocities of the branes, the obliqueness of the branes, and the compact and non-compact parts of the spacetime. In addition, this interaction contains the relative angle $\phi$ and the configuration angle of each brane, i.e. $\theta_1$ and $\theta_2$. The electric fields along the branes imply that the cases $\phi$ and $\pi-\phi$ are two different systems. We extracted the contribution of the massless states (i.e. the graviton, dilaton and Kalb-Ramond fields) to the interaction. For the non-compact spacetime, this contribution is proportional to the Green's function of the 22-dimensional space.
\section{Introduction} The covert channel is a well-known way to transmit messages by circumventing security mechanisms. The definition of covert channel was given by Lampson in 1973 to describe the leakage of data through the abuse of shared resources by processes at different privilege levels\cite{Lampson:1973:NCP:362375.362389}. With the development of communication technology, the scope of covert channels has been extended from a single host to networks. Many kinds of covert channel have been developed in the past twenty years. Zander et al surveyed network covert channels in different kinds of network protocols\cite{Zander2007-4317620}. In order to maintain security, physical isolation is applied in almost every top-secret organization to keep high-security networks separated from less secure and public networks. This type of isolation is termed \textit{air-gapped}. Are air-gapped networks safe enough, then? No. Many methods have been proposed to breach air-gapped networks in the last ten years. Generally speaking, there are four kinds of covert channel to bridge the air gap: \textit{Electromagnetic} covert channels, \textit{Acoustic} covert channels, \textit{Thermal} covert channels and \textit{Optical} covert channels. Kuhn and Anderson first proposed a method\cite{kuhn1998soft} to transmit information covertly using electromagnetic radiation in 1998. Guri et al introduced AirHopper\cite{guri2014airhopper}, a type of malware that leaks data between a mobile phone and a nearby computer using an FM radio module, in 2014. Guri et al introduced a malware named GSMem\cite{guri2015gsmem}, which leaks data via electromagnetic radiation generated by the computer's memory bus, in 2015. Guri et al proposed USBee\cite{guri2016usbee}, which can be used to leak data via electromagnetic radiation generated by a USB cable, in 2016. In 2016, Matyunin et al used the magnetic field sensor in mobile devices to build a covert channel.
In 2013, Hanspach and Goetz used the acoustic devices of a notebook computer, its speakers and microphones, to build a covert channel\cite{hanspach2014covert}. Malley et al\cite{Malley-o2014bridging} introduced covert communication over inaudible sounds in 2014. Lee et al\cite{lee2015various} used a loudspeaker as an acoustic input device to make a speaker-to-speaker covert channel in 2015. Guri et al introduced Fansmitter\cite{Guri-Fansmitter-2016arXiv160605915G} and DiskFiltration\cite{Guri2017DiskFiltration}, new methods to send acoustic signals without speakers, in 2016. In 2015, Guri et al introduced BitWhisper\cite{guri2015bitwhisper}, a unique bidirectional thermal covert channel based on the heat exchanged with an adjacent PC. In 2017, Mirsky et al proposed HVACKer\cite{Mirsky2017}, a one-way thermal covert channel from an air conditioning system to an air-gapped network. Thermal covert channels in multi-core CPUs have also been researched. Masti built a thermal covert channel in multi-core processors\cite{masti2015thermal} with a transmission rate of 12.5 bits per second in 2015. Bartolini studied the capacity of a thermal covert channel in multi-core processors\cite{bartolini2016capacity} in 2016. Selber proposed UnCovert3\cite{selber2017uncovert3}, a new thermal covert channel in multi-core processors with a transmission rate of 20 bits per second, in 2017. Optical covert channels are the most widely utilized. Shamir presented a covert channel to breach an air-gapped network \cite{shamir2014light} via a light-based printer in 2014. Lopes and Aranha proposed a malicious device\cite{lopes2017platform} that leaks data via its flickering infrared LEDs. In 2016, Guri introduced VisiSploit\cite{}, a prototype to leak data via an invisible QR code on an LCD screen. Loughry and Umphress studied exfiltration via LED indicators\cite{Loughry:2002:ILO:545186.545189} in 2002.
They divided LED indicators into three classes: \begin{description} \item[Class I] The unmodulated LEDs used to indicate some state of the device. \item[Class II] The time-modulated LEDs correlated with the activity level of the device. \item[Class III] The modulated LEDs that are strongly correlated with the content of the data being processed. \end{description} They found that the TD (transmit data) LED indicators on almost every modem of those years belong to Class III; even an LED indicator on a DES encryptor leaked plaintext data. They indicated that although the LEDs in Class II are not as dangerous as those in Class III, they can be modulated to carry a significant signal and can be used to build covert channels. Sepetnitsky proposed a covert channel prototype \cite{Sepetnitsky-2014-6975588} for leaking data to the camera of a smartphone via the monitor's power-status LED indicator in 2014. Guri presented LED-it-GO\cite{Guri2017LED}, to leak data via the hard drive LED indicator, in 2017. Guri also proposed xLED\cite{guri2017xled}, to leak data via the status LED indicators on routers, in 2017. In Guri's two methods, LED-it-GO and xLED, the LEDs used as the light source belong to Class II: they flicker naturally without arousing the user's suspicion. Sepetnitsky's prototype, however, copes less well with the behavioral covertness of a covert channel, because the LED indicator he used belongs to Class I. Unfortunately, the fastest flicker frequency of the monitor power LED is 25Hz, and it is hard to evade the human sense of sight if data is modulated with OOK at that frequency. In this paper, a novel approach is proposed to modulate the LEDs in Class I. We give a prototype, KLONC (the abbreviation of ``Keyboard's LED tO Network Camera''), to build an optical covert channel and leak data from an air-gapped network to an IP camera via the LED status indicator on the keyboard of a PC.
In 2002, Loughry et al presented an exfiltration via keyboard LED indicators in Appendix A of \cite{Loughry:2002:ILO:545186.545189}. The flicker frequency was up to 150Hz in Solaris OS. Unfortunately, because of the limitations of Windows 10, an ordinary user-level program can make the keyboard LED indicators flicker only at 33Hz, by simulating keystrokes. In our experiment, we noted that human vision can hardly distinguish two flickers with different frequencies on an LED that appears continuously lit. Therefore, we use B-FSK to modulate the data: two different flicker frequencies are utilized to encode logical '1' and '0'. The results of the experiment show that the desired covertness is achieved. Our approach can be used in optical covert channels via LED indicators in Class I at a low flicker frequency. In particular, Sepetnitsky's prototype\cite{Sepetnitsky-2014-6975588} can adopt our approach by replacing OOK with B-FSK in its modulation. Compared with our prototype, Sepetnitsky's prototype has some advantages, such as a shorter distance from the LED indicator to the camera and a higher camera frame rate of up to 60fps. The contributions of our research are as follows: \begin{enumerate} \item It is difficult to build a usable covert channel via an unmodulated LED status indicator. We propose a novel modulation approach and present a prototype to leak data from an air-gapped network to an IP camera via the keyboard LED indicator. \item A household IP camera with an ordinary configuration is utilized to receive the covert signal reliably in our experiment. \end{enumerate} The rest of the paper is organized as follows: background technology is given in Section \ref{BackgroundTechnology}. A prototype, KLONC, is proposed in Section \ref{AttackModel}. Section \ref{ResultandEvaluation} presents results and evaluations. Countermeasures are given in Section \ref{Countermeasures}, and we draw our conclusions in Section \ref{Conclusions}.
\section{Background Technology}\label{BackgroundTechnology} \subsection{LED}\label{LED} A light-emitting diode (LED) is a two-lead semiconductor light source. It is a p-n junction diode that emits light when activated. When a suitable voltage is applied to the leads, electrons are able to recombine with electron holes within the device, releasing energy in the form of photons. \cite{wikiLED} Most keyboards are equipped with three LED indicators: NumLock, CapsLock and ScrollLock, arranged horizontally in the upper right corner of the front panel. \subsection{IP camera} An IP camera\cite{wikiIPcamera}, also called a network surveillance camera, is a new type of camera which can access the Internet. The user can control it remotely by manipulating a client panel. With the development of optics, video coding and network technology, the configuration of IP cameras has been upgraded rapidly. MPEG-4 coding with the H.264 standard is applied to cope with high resolutions up to 720P (1280x720) or 1080P (1920x1080). Nowadays, IP cameras are widely used in everyday life. \section{Attack Model}\label{AttackModel} An attack model named KLONC is proposed in this section. In the model, we suppose that the IP camera is compromised by an attacker, and that the LED indicators on the keyboard of an air-gapped PC are in the camera's line of sight. We also suppose that a malware that controls the LEDs is preinstalled on the PC. As shown in Figure \ref{FlowdiagramofKLONC}, sensitive information, such as a credit card number, password or encryption key, exfiltrates via the LED indicator of the keyboard on the desk of an office cubicle. The optical signal is captured by an IP camera hung on the ceiling of the office. An optical covert channel is built between the LED indicator and the IP camera. Then the attacker accesses the IP camera via the Internet by manipulating the client panel of the IP camera with an ID and password obtained beforehand. A .mp4 video file is obtained by the attacker.
The YUV data of the LED indicator is obtained after decoding the video file, and by demodulating the brightness values the sensitive information is restored. \begin{figure} \includegraphics[width=0.5\textwidth]{KLONC-all-en.eps} \caption{Flow diagram of KLONC} \label{FlowdiagramofKLONC} \end{figure} \subsection{Modulation and Encoding} A straightforward method to leak messages is to turn the three LED indicators of a keyboard on and off; the optical signals can then be picked up by some type of acquisition equipment. We can control those LEDs with the \textit{keybd\underline{\hspace{0.5em}}event}() function\cite{APIkeybdEvent} of the Windows API. The function synthesizes a keystroke and needs a hardware scan code for the key: VK\underline{\hspace{0.5em}}NUMLOCK is the code for the NumLock key, and VK\underline{\hspace{0.5em}}CAPITAL or VK\underline{\hspace{0.5em}}SCROLL for CapsLock or ScrollLock. The \textit{GetKeyState}() function\cite{APIGetKeyState} can be used to query an LED's status: it returns 0 when the LED is off and 1 when it is on. This function can record the LEDs' initial statuses so that they can be restored after the covert signal has been transmitted. The advantage of this method is threefold: good compatibility across Windows versions; support for both PS/2 and USB interfaces; and no administrator privilege is required. The disadvantage is that the lock status of an LED indicator changes while it is being toggled, so interference occurs if the user is typing at the same time. Because this keystroke simulation sends data into the keyboard buffer, the reaction speed of the LED indicators can be increased by modifying Registry keys on Windows \cite{TechnetKeyboard}. On Linux, there are two ways to toggle the LED indicators. The command \textit{setleds} can turn them on and off without changing their lock statuses, but administrator privilege is required.
On the contrary, the commands \textit{xset} and \textit{numlockx} can toggle them without any administrator privilege, but they change the lock statuses of the LEDs. For modulation, the simplest common form is On-Off Keying (OOK): the presence of a signal (LED-ON) encodes a logical zero (0), and the absence of a signal (LED-OFF) encodes a logical one (1). \begin{center} \begin{tabular}{c|c} \hline Logical Bit & LED Status\\ \hline 0 & LED-ON\\ 1 & LED-OFF\\ \hline \end{tabular} \end{center} OOK is suitable for transmission at a high carrier frequency: when the frequency reaches 150Hz\cite{Loughry:2002:ILO:545186.545189}, people cannot perceive any flicker. But it is not viable at a low carrier frequency, and, worse, the frame rate of an IP camera is at most 15fps (frames per second). We therefore devised a new form of signal modulation that is better suited to transmitting an optical signal to a low-frequency acquisition device while remaining highly covert to human eyes. In our approach, we use Binary Frequency Shift Keying (B-FSK) to modulate the signal: one flicker frequency $f_0$ encodes a logical zero (0), and another flicker frequency $f_1$ encodes a logical one (1). \begin{center} \begin{tabular}{c|c} \hline Logical Bit & Flicker Frequency\\ \hline 0 & $f_0$\\ 1 & $f_1$\\ \hline \end{tabular} \end{center} Because the brightness of an LED indicator has only two discrete states, a novel method is proposed to simulate flicker frequencies for B-FSK. The method is illustrated in Figure \ref{FrequencySimulationsforBFSK}. \begin{figure} \includegraphics[width=0.5\textwidth]{modulations.eps} \caption{Frequency Simulations for B-FSK} \label{FrequencySimulationsforBFSK} \end{figure} In Condition (a) of Figure \ref{FrequencySimulationsforBFSK}, the LED is always on and no flicker exists. Suppose the change rate is 30 times per second.
Then the flicker frequency is 0 (no change happens), the brightness is $B=30$ (the number of turn-on blocks), and the flicker value $f$ (an index estimating how strongly humans perceive the flicker) is 0. We define the \textbf{flicker value} $f$ by the following formula to express \textit{the feeling of flicker}: \[f=\frac{D_{\text{off}}^2}{D_{\text{on}}}\] where $D_{\text{off}}$ is the average run length of turn-off blocks and $D_{\text{on}}$ is the average run length of turn-on blocks. Obviously, Condition (e) is bad for covertness, which is why OOK is not a suitable modulation form here: when a long run of 1s follows a long run of 0s, the flicker value of the LED becomes too high and the user would become aware of it. The optical signal emitted from the LED is received by an IP camera, and the video data are stored on a TF card inserted in the camera, encoded with the H.264 standard\cite{Team2013DraftH264}. \subsection{Decoding and Demodulation} The well-known free software \textit{FFmpeg} can convert the .mp4 video file into a .rgb video file with the following command: \texttt{ffmpeg -i input.mp4 -vcodec rawvideo -pix\underline{\hspace{0.5em}}fmt rgb24 -an output.rgb} But this is unwise, since the .rgb file would be too large. Instead, we process the .mp4 file in the following steps. First, the H.264-encoded video data is extracted from the .mp4 file: \texttt{ffmpeg -i input.mp4 -f h264 output.264} Second, the H.264 video is decoded into YUV-format data frame by frame; referring to Lei's code\cite{Xiaohua}, we implemented this step in C with FFmpeg's \textit{avcodec} library. Finally, the pixel values of the LED indicator are extracted from the YUV data at its fixed position (row and column) in the frame. There are three sampling modes of YUV data: YUV444, YUV422 and YUV420.
Take YUV420 as an example: every pixel has its own Y value, while four adjacent pixels share one U value and one V value, as shown in Table \ref{YUV420sample}. So when the width of the frame is $w$, pixel $(n,m)$'s offset in the Y sequence is $(m-1)\times w + n$, and its offsets in the U and V sequences are both $(\lfloor\frac{m+1}{2}\rfloor-1)\times\frac{w}{2}+\lfloor\frac{n+1}{2}\rfloor$. \begin{table} \centering \caption{YUV420 sample} \footnotesize \begin{tabular}{|c|c|c|c|c|} \hline $Y_{11}U_{11}V_{11}$ & $Y_{12}U_{11}V_{11}$ & $Y_{13}U_{12}V_{12}$ & $Y_{14}U_{12}V_{12}$ & $\cdots$ \\ \hline $Y_{21}U_{11}V_{11}$ & $Y_{22}U_{11}V_{11}$ & $Y_{23}U_{12}V_{12}$ & $Y_{24}U_{12}V_{12}$ & $\cdots$ \\ \hline $Y_{31}U_{21}V_{21}$ & $Y_{32}U_{21}V_{21}$ & $Y_{33}U_{22}V_{22}$ & $Y_{34}U_{22}V_{22}$ & $\cdots$ \\ \hline $Y_{41}U_{21}V_{21}$ & $Y_{42}U_{21}V_{21}$ & $Y_{43}U_{22}V_{22}$ & $Y_{44}U_{22}V_{22}$ & $\cdots$ \\ \hline $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ & $\cdots$ \\ \hline \end{tabular} \label{YUV420sample} \end{table} As mentioned above, B-FSK is used as the modulation form, so we can naturally demodulate the data by distinguishing the two frequencies. In addition, we can compute the mean and the variance of the data: since every condition in Figure \ref{FrequencySimulationsforBFSK} has its own brightness index $B$, the mean of the Y values can be used to distinguish two different $B$ values, and the variance of the Y values, which represents the dithering degree of the signal, can be used for demodulation as well. \subsection{Effective Distance} The effective distance is an essential index of a camera's ability to capture the optical signal of LED indicators. This ability is determined by the camera's frame resolution and the sensitivity of its electronics, so for a given camera there is an upper bound on the distance at which the message can be obtained reliably.
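The YUV420 offset bookkeeping described above can be sketched in a few lines; the following Python snippet is an illustration only (not part of the C decoder used in KLONC), and its assertions follow the 1-based indexing of Table \ref{YUV420sample}:

```python
def yuv420_offsets(n, m, w):
    """1-based offsets of pixel (n, m) (column n, row m) in the Y, U and V
    sequences of a YUV420 frame of width w (w assumed even)."""
    y_off = (m - 1) * w + n
    # floor((m+1)/2) and floor((n+1)/2) as in the text
    uv_off = ((m + 1) // 2 - 1) * (w // 2) + (n + 1) // 2
    return y_off, uv_off, uv_off

# Cross-check against Table "YUV420 sample" for a frame of width 4:
# pixels (1,1) and (2,2) share U_11/V_11, pixel (3,1) uses U_12/V_12,
# and pixel (1,3) uses U_21/V_21.
assert yuv420_offsets(1, 1, 4) == (1, 1, 1)
assert yuv420_offsets(2, 2, 4)[1] == 1
assert yuv420_offsets(3, 1, 4)[1] == 2
assert yuv420_offsets(1, 3, 4)[1] == 3
print("offsets consistent with the sample table")
```

In practice only the Y (luma) offset of the LED's fixed position is needed for demodulation, since the decision is made on brightness alone.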
Three factors influence the upper bound of the effective distance: \begin{enumerate} \item ambient brightness; \item the emitting angle of the LED indicator; \item the distance between the LED indicator and the camera. \end{enumerate} \subsubsection{Ambient Brightness} LED indicators are only meant to show the statuses of a keyboard, so their brightness is always weak. When the ambient brightness is too high, the brightness status of an LED can hardly be distinguished in the video; conversely, when the ambient brightness is low enough, the status is quite obvious. Nevertheless, in our experiments we found that when the camera is close to the keyboard, a certain amount of ambient brightness reduces the noise in the MPEG-4 video, so the channel capacity actually increases. \subsubsection{Relationship between Emitting Angle and Distance} The \textbf{emitting angle of an LED indicator} is defined here as \textit{the angle between the direction of the LED's emission and the direction of the camera}. IP cameras are usually hung on the ceiling, and the distance between the desktop and the ceiling is constant. Hence, the longer the distance from the LED indicator to the camera, the larger the emitting angle and the weaker the intensity of the received signal. The relationship between the emitting angle and the distance is described in Figure \ref{RelationshipbetweenEmittingAngleandDistance}, where $\angle \text{UZX}$ is the angle between the keyboard surface and the desktop. Depending on the status of the keyboard's feet, $\angle \text{UZX}$ takes one of two fixed values; for a Logitech K120, these are $1.1353328^{\circ}$ and $6.9474259^{\circ}$. In general, the LED's emitting direction is perpendicular to the surface of the keyboard, i.e., $\text{XB}\perp\text{XZ}$, and then $\angle \text{UZX} = \angle \text{AUB}$.
\begin{figure} \includegraphics[width=0.5\textwidth]{enviromentgeometry1.eps} \caption{Relationship between Emitting Angle and Distance} \label{RelationshipbetweenEmittingAngleandDistance} \end{figure} Suppose J is an arbitrary point on the line AF; then we obtain a relation between the emitting angle and the distance: \[\angle\text{JXB}=\arccos\left(\frac{|\text{XA}|}{|\text{XJ}|}\right)-\angle\text{UZX}\] where $\angle \text{UZX} = \angle \text{AUB}$ is known and $|\text{XA}|$ can be measured. We can also determine the LED's visual angle in the camera and the LED's effective shining area in the projection plane. \subsubsection{Relationship between Distance and Brightness} Because $\frac{|\text{OY}|}{|\text{ON}|}\approx 1000$, the angle $\angle\text{HOY}=\arccos\left(\frac{|\text{ON}|}{2|\text{OY}|}\right)$ is approximately $90^\circ$, so the simplified model described in Figure \ref{RelationshipbetweenEmittingAngleandEffectiveShineArea} applies. \begin{figure} \includegraphics[width=0.5\textwidth]{environmentgeometry3.eps} \caption{Relationship between Emitting Angle and Effective Shine Area} \label{RelationshipbetweenEmittingAngleandEffectiveShineArea} \end{figure} In the figure, point O is one side of the LED indicator and point N is the other side. As the emitting angle changes from $0^\circ$ to $90^\circ$, N moves along the arc GH. Point K is N's projection in the camera direction, so $|\text{KO}|$ represents the LED's effective shining area in the projection plane. We obtain the relation between the emitting angle and the effective shining area: \[|\text{KO}| = |\text{ON}|\cos(\angle\text{HON})\] where $\angle\text{HON}$ equals the emitting angle. Furthermore, the brightness in the video is related not only to the effective shining area but also to the distance between the LED indicator and the camera.
Then the relation between the brightness and its influence factors is: \[B=\beta\frac{|\text{ON}|\cos(\angle\text{HON})}{\left(\frac{|\text{OY}|}{y}\right)^2}\] where $\beta$ is a constant coefficient, $y$ is a non-zero initial reference distance, and $|\text{ON}|$ is the length of the LED indicator in the direction of change. \subsection{Channel Capacity} According to the Nyquist-Shannon sampling theorem, if the sampling frequency of the receiver is $f$, the maximum carrier frequency is $\frac{f}{2}$. The frame rate of most ordinary cameras currently on the market is 25fps (frames per second), which sets an upper bound on the transmission speed. The frame rate of some high-end cameras can be 60fps or higher, but frame rate and resolution trade off against each other; for example, the frame rate of most IP cameras is 25fps at 720P but only 15fps at 1080P. Since IP cameras are aimed at security surveillance, there is no trend toward increasing their frame rates. \subsection{Covertness} According to the persistence of vision\cite{wikiPersistenceofvision}, a single slight change within 50ms (milliseconds) is not noticeable to human vision. This feature helps us hide a 40ms turn-off behavior on an LED indicator that is otherwise always on. For humans, the maximal flicker fusion frequency can reach 60Hz at very high illumination intensities \cite{wikiFlickerfusionthreshold}. By conducting experiments, we find that a turn-off behavior within 20ms can hardly be observed even if the LED is stared at continuously. When the duration of the turn-off behavior is between 20ms and 50ms, a tiny dithering of the LED's brightness can be observed under careful observation. Moreover, the covertness of the three LED indicators on the keyboard differs. Normal computer users would suspect something is wrong with their computers when they notice an LED indicator turn on without any reason, or even when they spot a tiny flash in the brightness of an LED.
So our only choice is to use an LED indicator that is always on to leak data covertly. Among the three LED indicators, NumLock is always on after Windows boots on most computers, hence NumLock is the most suitable for leaking a covert message, except on computers in finance departments where the number pad is in constant use. ScrollLock is another suitable choice, since its function is obsolete in current OSes; if ScrollLock stays on from the moment Windows boots, it would not catch the user's attention. On the contrary, CapsLock is routinely used by every user to input text such as IDs, passwords, etc., so a lit CapsLock would make the user anxious. \section{Results and Evaluations}\label{ResultandEvaluation} \subsection{Experiment Setting} An open-plan office serves as the experimental environment; it is a common environment for most business companies, research organizations, etc. The keyboard that leaks data is located on the desk of an office cubicle, and the IP camera is hung on the ceiling of the office. A survey sheet of the experimental environment is shown in Figure \ref{ExperimentalEnvironment}. \begin{figure} \includegraphics[width=0.5\textwidth]{ExperimentalEnvironment.eps} \caption{Survey Sheet of Experimental Environment} \label{ExperimentalEnvironment} \end{figure} The configurations of the personal computer and the IP camera are listed in Table \ref{ConfigurationofPersonalComputer} and Table \ref{ConfigurationofIPCamera}.
\begin{table*} \caption{Configuration of Personal Computer} \small\centering \begin{tabular}{c|c} \hline Module & Configuration \\ \hline CPU & Intel Core i5-4590 CPU 3.30GHz\\ Motherboard & ASUS B85-PLUS R2.0\\ RAM & 8GB\\ Hard Disk & SEAGATE Desktop HDD 500G\\ Keyboard & Logitech K120 HID USB\\ OS &Windows 10 Chinese Simplified Version 64-bit (10.0, Build 14393)\\ \hline \end{tabular} \label{ConfigurationofPersonalComputer} \end{table*} \begin{table*} \caption{Configuration of IP Camera} \small\centering \begin{tabular}{c|c} \hline Module & Configuration\\ \hline Resolution & 1920x1080 and 640x352\\ Video Encoding & H.264 Main Profile, JPEG Snapshot\\ Wireless Network & IEEE 802.11b/g/n 2.4GHz\\ Focus & 5 times optical zoom, 3.6-12mm\\ Aperture value & F2.0\\ \hline \end{tabular} \label{ConfigurationofIPCamera} \end{table*} \subsection{Results} Several experiments were conducted at different distances and various ambient brightness levels. The obtained BERs (Bit Error Rates) of the covert channel KLONC are listed in Table \ref{BER}. The table shows that BERs increase with distance but have no linear relationship with ambient brightness. When the distance is 2.54m, most BERs are below 10\%; at 3.27m, most are below 25\%; but all BERs exceed 33\% once the distance reaches 5 meters. According to the capacity formula for a binary symmetric channel in information theory: \[C=1-H(p)=1+p\log_2(p)+(1-p)\log_2(1-p)\] we know that when $p=\frac{1}{3}$, the capacity is $C=0.081704166<\frac{1}{12}$. This means that more than 12 channel bits are needed to transmit 1 bit of information correctly, so it is impossible to build a reliable channel under such conditions. Hence, 5 meters can be considered an upper bound on the effective distance for building a covert channel with the current experimental devices.
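The capacity figure quoted above can be reproduced directly. The following Python sketch (illustrative only, not part of the KLONC toolchain) evaluates $C = 1 - H(p)$ for a binary symmetric channel and checks the $p = 1/3$ value:

```python
import math

def bsc_capacity(p):
    """Capacity C = 1 - H(p) of a binary symmetric channel with crossover
    probability p, where H(p) is the binary entropy in bits (base-2 log)."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p)

# At the observed BER of 1/3 (reached around 5 m), the capacity falls
# below 1/12, so more than 12 channel bits are needed per information bit.
c = bsc_capacity(1.0 / 3.0)
print(round(c, 9))      # 0.081704166
print(c < 1.0 / 12.0)   # True
```

This matches the value $C=0.081704166$ stated in the text and confirms that a BER of one third leaves less than $1/12$ bit of information per channel use.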
\begin{table*} \caption{Bit Error Rates(\%) with Different Distances and Various Ambient Brightness} \small \begin{tabular}{c|cccccccccccc} \hline Brightness(LUX) & 100 & 200 & 300 & 400 & 500 & 600 & 700 & 800 & 900 & 1000 & 1100 & 1200\\ \hline 2.54m & 0.39 & 15.63 & 0 & 0 & 0 & 10.16 & 4.30 & 0 & 16.02 & 5.47 & 1.17 & 8.98\\ 3.27m & 3.13 & 1.95 & 27.73 & 23.05 & 14.06 & 6.64 & 10.94 & 16.80 & 10.55 & 42.19 & 10.55 & 23.38\\ 4.02m & 26.17 & 35.55 & 30.08 & 26.17 & 37.89 & 33.20 & 28.13 & 24.61 & 30.08 & 30.86 & 39.84 & 39.84\\ 5.08m & 38.67 & 37.89 & 33.98 & 37.11 & 33.20 & 41.41 & 41.41 & 44.92 & 38.28 & 41.41 & 42.58 & 39.06\\ \hline \end{tabular} \label{BER} \end{table*} In our experiments, $\angle \text{UZX} = 6.9474259^\circ$ (the angle between the keyboard surface and the desktop) and $|\text{XA}|=1.77$m (the distance between the LED and the camera's altitude) in Figure \ref{RelationshipbetweenEmittingAngleandDistance}. A list of emitting angles and distances is given in Table \ref{EmittingAnglesandDistancesinExperiments}. We can see that the emitting angle grows noticeably as the distance increases from 2.54m to 5.08m, which means the camera receives a sharp drop in brightness as the emitting angle increases. \begin{table} \caption{Emitting Angles and Distances} \centering \small \begin{tabular}{c|cccc} \hline & Exp.1 & Exp.2 & Exp.3 & Exp.4\\ \hline Distance & 2.54m & 3.27m & 4.02m & 5.08m\\ Angle & \scriptsize$38.877^\circ$ & \scriptsize$50.2814^\circ$ & \scriptsize$56.9296^\circ$ & \scriptsize$62.6616^\circ$\\ \hline \end{tabular} \label{EmittingAnglesandDistancesinExperiments} \end{table} Then, a relational graph between distance, emitting angle and brightness is plotted in Figure \ref{RelationshipbetweenEmittingAngleandBrightness}, taking $y=1.77$m (the reference distance, equal to the distance between the LED and the camera's altitude), $\beta=1$ (the constant coefficient) and $|\text{ON}|=1$ (the length of the LED indicator in the direction of change).
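The angles in Table \ref{EmittingAnglesandDistancesinExperiments} and the brightness model can be cross-checked numerically. The Python sketch below is an illustration (not the code used in the experiments): it evaluates $\angle\text{JXB}=\arccos(|\text{XA}|/|\text{XJ}|)-\angle\text{UZX}$ and the brightness formula with the parameters just stated.

```python
import math

XA = 1.77        # |XA|: distance between the LED and the camera's altitude (m)
UZX = 6.9474259  # angle between keyboard surface and desktop (degrees)

def emitting_angle(d):
    """Emitting angle (degrees) at an LED-to-camera distance d (m)."""
    return math.degrees(math.acos(XA / d)) - UZX

def brightness(d, beta=1.0, on_len=1.0, y=XA):
    """Relative brightness B = beta * |ON| * cos(angle) / (d/y)^2."""
    return beta * on_len * math.cos(math.radians(emitting_angle(d))) / (d / y) ** 2

# Reproduce the emitting angles of the four experiments.
for d, expected in [(2.54, 38.877), (3.27, 50.2814), (4.02, 56.9296), (5.08, 62.6616)]:
    assert abs(emitting_angle(d) - expected) < 0.01, (d, emitting_angle(d))

# Brightness at ~4 m relative to the 1.77 m reference: roughly 10%.
print(round(brightness(4.02) / brightness(1.77), 3))
```

The printed ratio comes out at about 0.107, consistent with the roughly 10\% brightness drop at 4 meters discussed next.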
The figure shows that the brightness captured by the camera at a distance of 4 meters is only about 10\% of the brightness at 1.77 meters. \begin{figure} \includegraphics[width=0.5\textwidth]{RelationshipbetweenEmittingAngleandBrightness.eps} \caption{Relationship between Emitting Angle and Brightness} \label{RelationshipbetweenEmittingAngleandBrightness} \end{figure} \subsection{Comparison with OOK} When the flicker frequency is so low that the turn-off behavior can be noticed by human vision, it is natural to make that behavior less detectable. A common way is to insert a long turn-on duration before each turn-off behavior, i.e., to encode the message before modulation as follows: \begin{center} \begin{tabular}{c|c} \hline Plain Bit & Encoded Word\\ \hline 0 & 0\\ 1 & $0\cdots 01$\\ \hline \end{tabular} \end{center} Then the channel rate $R_\text{OOK}$ and the flicker value $f_\text{OOK}$ can be deduced as follows: \[R_\text{OOK}=\frac{2F}{|\text{Enc}(1)|+1},\quad f_\text{OOK}=\frac{1}{|\text{Enc}(1)|}\] where $F$ is the flicker frequency and $|\text{Enc}(1)|$ is the length of the encoded word for the bit one. Meanwhile, the channel rate $R_\text{B-FSK}$ and the flicker value $f_\text{B-FSK}$ with $f_0=0$ are: \[R_\text{B-FSK}=f_1,\quad f_\text{B-FSK}=\frac{1}{\frac{2F}{f_1}-0.5}\] A comparison of flicker values is given in Figure \ref{B-FSKvsOOK} with $F=25$, the same as in Sepetnitsky's prototype\cite{Sepetnitsky-2014-6975588}. The figure shows that the flicker value with B-FSK is always lower than that with OOK. \begin{figure} \includegraphics[width=0.5\textwidth]{B-FSKvsOOK.eps} \caption{Flicker Values Comparison between B-FSK and OOK} \label{B-FSKvsOOK} \end{figure} \section{Countermeasures}\label{Countermeasures} Countermeasures can be divided into two types: procedural countermeasures and technical countermeasures.
Procedural countermeasures include banning cameras from the office, covering the LEDs, cutting off the LEDs' feet and shielding windows. Any banning policy needs constant supervision to ensure there are no exceptions. Covering the LEDs or cutting off their feet is easy to implement, but it inconveniences users, who lose the indications. In addition, armored glass is used as walls in many office spaces, so a surveillance camera can also receive the optical signal through the glass of windows or walls; it is necessary to shield them effectively. Technical countermeasures include monitoring the LED statuses with software or optical methods, and confusing the LED statuses with software. Detecting the malware is a routine job for security software, and a watchdog on the statuses of the LEDs can find abuses of them; as a cost, CPU resources are occupied and the OS slows down. Detecting the abuse of LEDs with an external sensor is an ideal method that gives no information to the attacker, and it achieves a high success rate if the hardware meets the conditions; however, the existence of a covert channel is a low-probability event, so it is still difficult to detect. We also note that only one covert channel can be established at a time, so we can actively confuse the LED statuses to block the real risk. All countermeasures are summarized in Table \ref{CountermeasuresList}. \begin{table*} \caption{Cost and Effect of Countermeasures} \label{CountermeasuresList} \begin{tabular}{l|c|c|c|l} \hline Countermeasure & Type & Cost & Effect & Shortcomings\\ \hline Banning cameras from the office & Proc. & High & Good & Need for supervision \\ Covering the LEDs & Proc. & Low & Good & Inconvenience to user\\ Cutting off the LEDs' feet & Proc. & Low & Good & Inconvenience to user\\ Shielding windows & Proc. & High & Good & Change surrounding brightness\\ \hline Status monitoring with software & Tech.
& Low & Good & Occupy CPU resources\\ Status monitoring with optical methods & Tech. & High & Normal & Difficult to detect\\ Status confusing with software & Tech. & Low & Good & Occupy CPU resources\\ \hline \end{tabular} \end{table*} \section{Conclusions}\label{Conclusions} A novel form of signal modulation using a fixed-status LED indicator to build an optical covert channel was proposed in this paper. With this modulation form, a Class I LED indicator can leak a covert signal with good covertness against human vision. An attack model, KLONC, was given to build covert communication with an off-the-shelf, ordinarily configured IP camera, using C code to turn the LED on and off. Furthermore, the modulation form and the corresponding demodulation method were designed and optimized, and their efficiency and covertness were evaluated. The upper bound on the effective distance of KLONC was obtained through both theoretical calculation and experimental observation. Finally, countermeasures were given by considering the necessary conditions for the existence of this kind of covert channel. \bibliographystyle{plain}
\section{Introduction} The discovery of neutrino mass and mixing implies that the Standard Model (SM) must be extended somehow. An elegant possibility remains the original type Ia seesaw mechanism~\cite{Minkowski:1977sc, Yanagida:1979ss, Gell-Mann:1979ss, Glashow:1979ss, Mohapatra:1979ia,Schechter:1980gr, Schechter:1981cv} involving right-handed neutrinos, which, when integrated out, yield the Weinberg operators $HHL_iL_j$, where $H$ is the Higgs doublet of the SM and $L_i$ is a lepton doublet of the $i$th family. The minimal type Ia seesaw mechanism supplements the particle content of the SM by just two right-handed neutrinos (2RHN)~\cite{King:1999mb,King:2002nf}, and this approach will be followed in the present paper. However, to explain the observed approximate tri-bimaximal lepton mixing, one must go beyond the seesaw mechanism and consider a non-Abelian discrete family symmetry~\cite{King:2013eh,King:2017guk}. For example, $S_4$ has been used to account for trimaximal TM$_1$ lepton mixing \cite{Varzielas:2012pa, Luhn:2013vna}, enforced by a residual $Z^{SU}_2$ symmetry in the neutrino sector, and a residual $Z_3^T$ in the charged lepton sector~\footnote{We adopt the standard presentation of the $S_4$ generators $S,T,U$ where $S^2=T^3=U^2=(ST)^3=(SU)^2=(TU)^2=(STU)^4=I$~\cite{King:2013eh}.}. However such realistic models typically involve many flavons. The origin of such non-Abelian discrete family symmetry might be due to a continuous non-Abelian gauge symmetry \cite{deMedeirosVarzielas:2005qg, Koide:2007sr, Banks:2010zn, Luhn:2011ip, Merle:2011vy, Wu:2012ria, Rachlin:2017rvm, King:2018fke}. Alternatively, it could be due to extra dimensions \cite{Asaka:2001eh, Altarelli:2006kg, Kobayashi:2006wq, Altarelli:2008bg, Adulpravitchai:2009id, Burrows:2009pi, Adulpravitchai:2010na, Burrows:2010wz, deAnda:2018oik, Kobayashi:2018rad, deAnda:2018yfp, Baur:2019kwi}. 
With extra dimensions, it could either arise as an accidental symmetry of the orbifold fixed points (for recent discussion with two extra dimensions, see \cite{Kobayashi:2008ih,deAnda:2018oik,Olguin-Trejo:2018wpw, Mutter:2018sra}) or as a subgroup of the symmetry of the extra dimensional lattice, known as modular symmetry~\cite{Giveon:1988tt}, arising from superstring theory \cite{Ferrara:1989bc,Ferrara:1989qb} \footnote{The geometric connection between the origin of the family symmetry due to modular symmetry and the orbifolding method with two extra dimensions has recently been discussed, e.g., in \cite{deAnda:2018ecu, Kobayashi:2018bff}. On the other hand, massive states predicted in string theories may break the modular symmetries. This effect is naturally suppressed by the Planck scale, and thus can be safely ignored. }. Indeed, it has been suggested that a finite subgroup of the modular symmetry group, when interpreted as a family symmetry, might help to provide a possible explanation for the neutrino mass matrices \cite{Altarelli:2005yx, deAdelhartToorop:2011re}, and this will be the approach followed here. Recently it has been suggested that finite modular symmetry might be the origin of flavour mixing with neutrino masses as modular forms \cite{Feruglio:2017spp}, leading to constraints on the Yukawa couplings. This has led to a revival of the idea that modular symmetries are symmetries of the extra dimensional spacetime with Yukawa couplings determined by their modular weights \cite{Criado:2018thu}. The finite modular groups $\Gamma_2\simeq S_3$~\cite{Kobayashi:2018vbk,Kobayashi:2018wkl}, $\Gamma_3\simeq A_4$~\cite{Feruglio:2017spp,Criado:2018thu,Kobayashi:2018scp,Okada:2018yrn,Kobayashi:2018wkl,Novichkov:2018yse}, $\Gamma_4\simeq S_4$~\cite{Penedo:2018nmg,Novichkov:2018ovf} and $\Gamma_5\simeq A_5$~\cite{Novichkov:2018nkm,Ding:2019xna} have been considered, in which special Yukawa structures are consequences of the modular forms. 
Compared with traditional neutrino models of flavour symmetry, only a minimal set of flavon fields (or no flavons at all) need to be introduced in the new framework~\footnote{Extension to the quark flavour mixing is given in \cite{Kobayashi:2018wkl,Okada:2018yrn, Okada:2019uoy}.}, making such an approach very attractive. Within the framework of finite modular symmetry outlined above, only a single modulus field $\tau$ is usually considered, corresponding to a single finite modular symmetry $\Gamma_N$. It has been pointed out that particular modular forms, corresponding to special values of $\tau$, preserve a residual subgroup of the finite modular symmetry $\Gamma_N$. For example, such residual symmetries are considered in \cite{Novichkov:2018yse} as subgroups of the modular $A_4$ symmetry. Some of these specific values for $\tau$ have been shown to be obtained in extra dimensions through orbifolding \cite{deAnda:2018ecu}. With the help of two moduli with different residual symmetries, $Z_3$ in the charged lepton sector and $Z_2$ in the neutrino sector, it was shown how trimaximal TM$_2$ lepton mixing may be realised~\cite{Novichkov:2018yse}. A brief discussion of the residual symmetry after modular $S_4$ symmetry breaking is also given in \cite{Novichkov:2018ovf}. However, the formalism for having two or more moduli fields (as necessary for such a scheme) has not so far been developed, providing one of the main motivations for the present paper. In the present paper, we shall extend the formalism of finite modular symmetry to the case of multiple moduli fields $\tau_J$ ($J=1, \ldots, M$) associated with the finite modular symmetry $\Gamma_{N_1}^1\times \Gamma_{N_2}^2 \times \cdots \times \Gamma_{N_M}^M$. As an example, we shall then present the first consistent example of a flavour model of leptons with multiple modular $S_4$ symmetries interpreted as a family symmetry.
The considered model involves three finite modular symmetries $S_4^A$, $S_4^B$ and $S_4^C$, associated with two right-handed neutrinos and the charged lepton sector, respectively, broken by two bi-triplet scalars to their diagonal subgroup. The low energy effective theory consists of a single $S_4$ modular symmetry with three independent moduli fields $\tau_A$, $\tau_B$ and $\tau_C$, which preserve the residual modular subgroups $Z_3^A$, $Z_2^B$ and $Z_3^C$ in their respective sectors~\footnote{Having a separate residual symmetry associated with each of the two right-handed neutrinos and the charged lepton sector was also assumed in the tridirect CP approach~\cite{Ding:2018fyz,Ding:2018tuj}, although here we do not assume any (generalised) CP symmetry. An extension of modular symmetry to include general CP symmetries was given in \cite{Novichkov:2019sqv}.}, leading to trimaximal TM$_1$ lepton mixing, consistent with current data, without requiring any flavons. The remainder of the paper is organised as follows. In section~\ref{multiple} we show how the formalism of finite modular symmetry with a single modulus field can be extended to include multiple moduli and an extended finite modular group. In section~\ref{S4} we focus on the case of a single finite modular $S_4$ symmetry and analyse its stabilisers and the resulting remnant symmetries. In section~\ref{model} we propose a model based on three moduli fields associated with a high energy finite modular group $S_4^3$, which is broken to a single diagonal $S_4$ with three independent moduli fields at low energies, whose stabilisers lead to different remnant symmetries in the different sectors; these may be used to enforce trimaximal TM$_1$ mixing, leading to good numerical fits to the data once right-handed neutrino mixing is taken into account. Section~\ref{conclusion} concludes the paper.
\section{From single to multiple modular symmetries} \label{multiple} Modular invariant supersymmetric field theories have been analyzed in \cite{Ferrara:1989bc,Ferrara:1989qb}. Modular invariance arises in string compactifications, where realistic Yukawa couplings emerge from modular forms \cite{Ibanez:1986ka,Casas:1991ac,Lebedev:2001qg,Kobayashi:2003vi}. It has been invoked in addressing several aspects of the flavour problem in model building \cite{Brax:1994kv,Binetruy:1995nt,Dudas:1995eq,Dudas:1996aa,Leontaris:1997vw,Dent:2001cc,Dent:2001mn}. A direct application of modular symmetry to explain lepton flavour mixing was suggested in \cite{Feruglio:2017spp}. In the rest of this section, we give a short review of effective modular-invariant supersymmetry and then extend the formalism to include multiple moduli fields. \subsection{A single modular symmetry } The modular group $\overline{\Gamma}$ acts on the complex modulus $\tau$ (${\rm Im}(\tau)>0$) via linear fractional transformations: \begin{eqnarray} \label{eq:modular_transformation} \gamma: \tau \to \gamma \tau = \frac{a \tau + b}{c \tau + d}\,, \end{eqnarray} where $a, b, c, d$ are integers satisfying $ad-bc=1$. It is convenient to represent each element of $\overline{\Gamma}$ by a two by two matrix \footnote{Note that it need not be a unitary matrix.}. Then $\overline{\Gamma}$ is expressed as \begin{eqnarray} \overline{\Gamma} = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} / (\pm \mathbf{1})\,,~ a, b, c, d \in \mathbb{Z}, ~~ ad-bc=1 \right\} \,. \end{eqnarray} This group is isomorphic to the projective special linear group $PSL(2,\mathbb{Z}) = SL(2,\mathbb{Z})/\mathbb{Z}_2$. The modular group has two generators, $S_\tau$ and $T_\tau$, which satisfy $S_\tau^2 = (S_\tau T_\tau)^3 = \mathbf{1}$. They act on the modulus $\tau$ as \begin{eqnarray} S_\tau: \tau \to -\frac{1}{\tau} \,, \hspace{1cm} T_\tau: \tau \to \tau + 1\,, \end{eqnarray} respectively.
Representing them by two by two matrices, we obtain \begin{eqnarray} S_\tau=\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\,, \hspace{1cm} T_\tau=\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \,. \end{eqnarray} $\overline{\Gamma}$ is a discrete but infinite group. By requiring $a, d = 1~({\rm mod}~N)$ and $b, c = 0~({\rm mod}~N)$, $N=2, 3, 4, \cdots$, i.e., \begin{eqnarray} \label{eq:mode_N} a = k_a N+1\,,~ d = k_d N +1\,,~ b = k_b N\,, ~~~~ c = k_c N\,, \end{eqnarray} where $k_a$, $k_b$, $k_c$ and $k_d$ are integers, we obtain a subset of $\overline{\Gamma}$ which is also an infinite group and is labelled as \begin{eqnarray} \overline{\Gamma}(N) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in PSL(2,\mathbb{Z}), ~~ \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} ~~ ({\rm mod}~ N) \right\} \,. \end{eqnarray} The quotient group $\overline{\Gamma}/\overline{\Gamma}(N)$, labelled as $\Gamma_N$, is a finite group, also called the finite modular group. The finite modular group $\Gamma_N$ can also be obtained by imposing the additional condition $T_\tau^N = \mathbf{1}$, which is achieved by identifying $\tau=\tau+N$ in the upper complex plane \footnote{Note that once $\tau=\tau+N$ is imposed, the transformations $\tau \to -\frac{1}{\tau}$ and $\tau \to -\frac{1}{\tau+N}$ are automatically identified, since the corresponding matrices differ by $T_\tau^N = \mathbf{1}$. }. For small values of $N$, $\Gamma_N$ is isomorphic to a permutation group, in particular, $\Gamma_2 \simeq S_3$, $\Gamma_3 \simeq A_4$, $\Gamma_4 \simeq S_4$ and $\Gamma_5 \simeq A_5$ \cite{deAdelhartToorop:2011re}.
In a theory with $\Gamma_N$ modular symmetry, a chiral superfield $\phi_i$, which is a function of $\tau$ but need not be a modular form, transforms non-linearly as \cite{Ferrara:1989bc}, \begin{eqnarray} \phi_i(\tau) \to \phi_i(\gamma\tau) = (c\tau + d)^{-2k_i} \rho_{I_i}(\gamma) \phi_i(\tau)\,, \label{eq:field_transformation} \end{eqnarray} where $-2k_i$ with $k_i$ an integer is the modular weight of $\phi_i$, ${I_i}$ is the representation of $\phi_i$ and $\rho_{I_i}(\gamma)$ denotes a unitary representation matrix of $\gamma$ with $\gamma$ an element of $\Gamma_N$. Considering an $\mathcal{N}=1$ supersymmetric model with the finite modular symmetry, the action in general takes the form \cite{Ferrara:1989bc, Ferrara:1989qb} \begin{eqnarray} \mathcal{S} = \int d^4x d^2\theta d^2\overline{\theta} K(\phi_i, \overline{\phi}_i;\tau,\overline{\tau}) + \left[ \int d^4x d^2\theta W(\phi_i;\tau)+ {\rm h.c.} \right]\,, \end{eqnarray} where $K$ is the K\"ahler potential and $W$ is the superpotential. The K\"ahler potential $K$ can change at most by a K\"ahler transformation under $\Gamma_N$, and the superpotential $W$ is required to be invariant, i.e., \begin{eqnarray} K(\phi_i, \overline{\phi}_i;\tau,\overline{\tau}) &\to& K(\phi_i, \overline{\phi}_i;\tau,\overline{\tau}) + f(\phi_i, \tau) + \overline{f}(\overline{\phi}_i, \overline{\tau}) \,,\nonumber\\ W(\phi_i;\tau) &\to& W(\phi_i;\tau) \,. \end{eqnarray} An example of a K\"ahler potential satisfying the K\"ahler transformation takes the following form \footnote{The effects of taking a different form for the K\"ahler potential are expected to be subdominant, analogously to the results shown by studies of K\"ahler corrections, e.g. \cite{King:2004tx}. Corrections to the K\"ahler potential may further lead to the stabilisation of the moduli vacua (see, e.g., the reviews \cite{Balasubramanian:2004uy,Silverstein:2004id}).
We, following other papers on modular symmetries, avoid this problem by fixing the moduli VEVs at typical values.}, \begin{eqnarray} K(\phi_i, \overline{\phi}_i;\tau,\overline{\tau}) = - h \log(-i\tau + i \overline{\tau}) + \sum_{i} \frac{\overline{\phi}_i \phi_i}{(-i \tau + i \overline{\tau})^{2k_i}} \,, \end{eqnarray} where $h$ is a positive constant. After $\tau$ gets a vacuum expectation value (VEV), the K\"ahler potential yields kinetic terms for the scalar components of the supermultiplets $\phi_i$ and the modulus field as \footnote{The scalar component of $\phi_i$ may gain a non-zero VEV, and this VEV also contributes to the kinetic term of $\tau$. We ignore such a contribution by assuming $v_{\phi_i} \ll \sqrt{h}$. } \begin{eqnarray} \frac{h}{\langle -i\tau + i \overline{\tau} \rangle^2} \partial_\mu \overline{\tau} \partial^\mu \tau + \sum_i \frac{\partial_\mu \overline{\phi}_i \partial^\mu \phi_i}{\langle -i\tau + i \overline{\tau} \rangle^{2k_i}} \,. \end{eqnarray} The superpotential $W(\phi_i;\tau)$ is in general a function of the modulus $\tau$ and the superfields $\phi_i$, and is required to be invariant under the modular transformation \cite{Ferrara:1989bc}. Expanding the superpotential $W(\phi_i;\tau)$ in powers of $\phi_i$, we obtain \begin{eqnarray} W(\phi_i;\tau) = \sum_n \sum_{\{i_1, \cdots, i_n\}} \sum_{I_Y} \left( Y_{I_Y} \phi_{i_1} \cdots \phi_{i_n} \right)_{\mathbf{1}} \,. \end{eqnarray} Here, $Y_{I_Y}$ represents a collection of coefficients of the relevant couplings. It transforms as a multiplet modular form of weight $2k_Y$ and representation $I_Y$, \begin{eqnarray} \label{eq:form_transformation} Y_{I_Y}(\tau) \to Y_{I_Y}(\gamma \tau) = (c\tau + d)^{2k_Y} \rho_{I_Y}(\gamma) Y_{I_Y}(\tau) \,, \end{eqnarray} where $k_Y = k_{i_1} + \cdots + k_{i_n}$ is required to be a non-negative integer. Its representation and weight are fixed by requiring the invariance of the operator under the $\Gamma_N$ modular transformation.
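To make the invariance condition explicit, the following illustration (ours, using the notation above) tracks the automorphy factors for a term with two chiral superfields of weights $-2k_1$ and $-2k_2$ in representations $I_1$ and $I_2$:

```latex
% Under gamma, Y_{I_Y} picks up (c tau + d)^{2 k_Y} and each phi_a picks up
% (c tau + d)^{-2 k_a}, so the product transforms as
\begin{equation*}
\left(Y_{I_Y}\,\phi_1\,\phi_2\right)_{\mathbf{1}}
\;\to\;
(c\tau+d)^{2(k_Y-k_1-k_2)}
\left(\rho_{I_Y}(\gamma)Y_{I_Y}\;\rho_{I_1}(\gamma)\phi_1\;\rho_{I_2}(\gamma)\phi_2\right)_{\mathbf{1}}\,,
\end{equation*}
% which is invariant precisely when k_Y = k_1 + k_2 and the contraction
% ( . )_1 projects I_Y (x) I_1 (x) I_2 onto the trivial singlet.
```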
\subsection{Multiple modular symmetries \label{sec:multi_modular}} All lepton flavour models based on finite modular symmetries in the literature so far have been limited to the case of a single modulus field. No theoretical approach or model has so far managed to include more than one modulus field in a self-consistent way, although the latter case has been briefly mentioned in some references, e.g.~\cite{Novichkov:2018ovf}. In this subsection, we discuss how to include multiple moduli fields consistently. We start by considering a series of modular groups $\overline{\Gamma}^{1}$, $\overline{\Gamma}^{2}$, ..., $\overline{\Gamma}^{M}$, where the modulus field for each modular symmetry $\overline{\Gamma}^{J}$ for $J=1,..., M$ is denoted as $\tau_J$. Following Eq.~\eqref{eq:modular_transformation}, any modular transformation $\gamma_J$ in $\overline{\Gamma}^J$ takes the form \begin{eqnarray} &&\gamma_J: \tau_J \to \gamma_J \tau_J = \frac{a_J \tau_J + b_J}{c_J \tau_J + d_J} \,. \end{eqnarray} A series of finite modular groups $\Gamma_{N_J}^J$ for $J = 1,2,...,M$ can be obtained by modding out by an integer $N_J$, following the discussion in the previous section. Note that $N_J$ does not need to be identical to $N_{J'}$ for $J\neq J'$.
For any finite modular transformations $\{\gamma_1, ..., \gamma_M\}$ in $\Gamma_{N_1}^1\times \Gamma_{N_2}^2 \times \cdots \times \Gamma_{N_M}^M$, the chiral superfield $\phi_i$, as a function of $\tau_1$, ..., $\tau_M$, now transforms as \begin{eqnarray} \phi_i(\tau_1, ...,\tau_M) &\to& \phi_i(\gamma_1\tau_1, ..., \gamma_M \tau_M) \nonumber\\ &&= \prod_{J=1,...,M} (c_J\tau_J + d_J)^{-2k_{i,J}} \bigotimes_{J=1,...,M} \rho_{I_{i,J}}(\gamma_J) \phi_i(\tau_1, \tau_2, ...,\tau_M)\,, \label{eq:field_transformation2} \end{eqnarray} where $k_{i,J}$ and $I_{i,J}$ are the weight and representation of $\phi_i$ in $\Gamma_{N_J}^{J}$, respectively, and $\bigotimes$ represents the outer product of the representation matrices $\rho_{I_{i,1}}$, $\rho_{I_{i,2}}$, ..., $\rho_{I_{i,M}}$. For an $\mathcal{N}=1$ supersymmetric model with a series of modular symmetries, the action is extended to the form \begin{eqnarray} \mathcal{S} = \int d^4x d^2\theta d^2\overline{\theta} K(\phi_i, \overline{\phi}_i;\tau_1,...,\tau_M,\overline{\tau}_1,...,\overline{\tau}_M) + \left[ \int d^4x d^2\theta W(\phi_i;\tau_1,..., \tau_M)+ {\rm h.c.} \right]\,. \end{eqnarray} The superpotential $W$ is required to be invariant under all the modular transformations, while the K\"ahler potential $K$ can change at most by K\"ahler transformations. Including multiple modulus fields, the K\"ahler potential can be written as \begin{eqnarray} \hspace{-5mm} K(\phi_i, \overline{\phi}_i;\tau_1,...,\tau_M,\overline{\tau}_1,...,\overline{\tau}_M) &=& - \sum_{J=1,...,M} h_J \log(-i\tau_J + i \overline{\tau}_J) \nonumber\\ &+& \sum_{i} \, \frac{\overline{\phi}_i\phi_i}{\displaystyle \prod_{J=1,...,M} (-i \tau_J + i \overline{\tau}_J)^{2k_{i,J}}} \,, \end{eqnarray} where all $h_J$ are positive constants. Since the modular symmetries are independent of one another, once one modulus field gets a VEV, the rest of the K\"ahler potential still respects the other modular symmetries.
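The outer product in Eq.~\eqref{eq:field_transformation2} furnishes a genuine representation of the product group because the Kronecker product obeys the mixed-product property. The following sketch (ours, with random stand-in matrices, not actual representation matrices) checks this numerically:

```python
# Mixed-product property: kron(A1 A2, B1 B2) = kron(A1, B1) kron(A2, B2),
# which guarantees that the outer product of representations of
# Gamma_{N_1} and Gamma_{N_2} represents the product group.
import numpy as np

rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((2, 3, 3))  # stand-ins for rho_{I,1}(gamma_1), rho_{I,1}(gamma_1')
B1, B2 = rng.standard_normal((2, 2, 2))  # stand-ins for rho_{I,2}(gamma_2), rho_{I,2}(gamma_2')

lhs = np.kron(A1 @ A2, B1 @ B2)          # representation of the product of group elements
rhs = np.kron(A1, B1) @ np.kron(A2, B2)  # product of the represented elements
assert np.allclose(lhs, rhs)
```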
For example, after $\tau_1$ gets a VEV, the K\"ahler potential is left with (up to an additive constant) \begin{eqnarray} - \sum_{J=2,...,M} h_J \log(-i\tau_J + i \overline{\tau}_J) + \sum_{i} \, \frac{1}{\langle -i \tau_1 + i \overline{\tau}_1 \rangle^{2k_{i,1}}} \frac{\overline{\phi}_i\phi_i}{\displaystyle \prod_{J=2,...,M} (-i \tau_J + i \overline{\tau}_J)^{2k_{i,J}}} \,. \end{eqnarray} Once all modulus fields get VEVs, the K\"ahler potential gives rise to kinetic terms for the scalar components of the supermultiplets $\phi_i$ and the modulus fields as \begin{eqnarray} \sum_{J=1,...,M} \frac{h_J}{\langle -i\tau_J + i \overline{\tau}_J \rangle^2} \partial_\mu \overline{\tau}_J \partial^\mu \tau_J + \sum_i \frac{\partial_\mu \overline{\phi}_i \partial^\mu \phi_i}{ \displaystyle \prod_{J=1,...,M}\langle -i\tau_J + i \overline{\tau}_J \rangle^{2k_{i,J}} } \,. \end{eqnarray} In this example, the scalar component of each modulus field behaves as a scalar field of vanishing weight under the remaining modular symmetries. The superpotential $W(\phi_i;\tau_1,..., \tau_M)$ is in general a function of the modulus fields $\tau_1$ to $\tau_M$ and the superfields $\phi_i$, and is required to be invariant under all the modular transformations \cite{Ferrara:1989bc}. Expanding the superpotential $W$ in powers of $\phi_i$, we obtain \begin{eqnarray} W(\phi_i;\tau_1,..., \tau_M) = \sum_n \sum_{\{i_1, \cdots, i_n\}} \left(Y_{(I_{Y,1},..., I_{Y,M})} \phi_{i_1} \cdots \phi_{i_n} \right)_{\mathbf{1}} \,, \end{eqnarray} where the weights of $Y_{(I_{Y,1}, ..., I_{Y,M})}$ are given by $k_{Y,J} = k_{i_1,J}+ \cdots + k_{i_n,J}$ for $J=1,...,M$.
The modular form $Y_{(I_{Y,1}, ..., I_{Y,M})}$ transforms as \begin{eqnarray} \label{eq:form_transformation2} \hspace{-5mm} &&Y_{(I_{Y,1}, ..., I_{Y,M})}(\tau_1,..., \tau_M) \to Y_{(I_{Y,1},..., I_{Y,M})}(\gamma_1 \tau_1, ..., \gamma_M \tau_M) \nonumber\\ &&\hspace{2cm}= \prod_{J=1,...,M} (c_J\tau_J + d_J)^{2k_{Y,J}} \bigotimes_{J=1,...,M} \rho_{I_{Y,J}}(\gamma_J) Y_{(I_{Y,1},..., I_{Y,M})}(\tau_1,..., \tau_M) \,. \end{eqnarray} \section{Modular $S_4$ symmetry and its remnant symmetries} \label{S4} In this section, we temporarily return to the case of a single modular symmetry, focussing on the case of a single modular $S_4$ symmetry and its remnant symmetries, before generalising the results to the case of multiple $S_4$ symmetries in the next section. \subsection{Modular $S_4$ symmetry} $S_4$ is the permutation group of four objects. In the framework of modular symmetry, the $S_4$ modular group is obtained in the series of $\Gamma_N$ by fixing $N=4$. In other words, its generators satisfy $S_\tau^2 = (S_\tau T_\tau)^3 = T_\tau^4 =I$. In previous works, it is common to use three generators $S$, $T$ and $U$, which satisfy $S^2 = T^3 = U^2 = (ST)^3 = (SU)^2 = (TU)^2 =(STU)^4=I$~\cite{King:2013eh}, to generate $S_4$. These traditional generators are related to the modular generators $S_\tau$ and $T_\tau$ as \begin{eqnarray} S = T_\tau^2 \,,~ T = S_\tau T_\tau \,,~ U = T_\tau S_\tau T_\tau^2 S_\tau \, , \end{eqnarray} which provides a useful dictionary relating the two types of generators. In the upper complex plane with the requirement $\tau = \tau +4$, $S$, $T$ and $U$ can be represented by two by two matrices as \begin{eqnarray} \label{eq:STU} S=\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}\,, ~ T=\begin{pmatrix} 0 & 1 \\ -1 & -1 \end{pmatrix} \,, ~ U=\begin{pmatrix} 1 & -1 \\ 2 & -1 \end{pmatrix} \,. \end{eqnarray} Due to the identification in Eq.~\eqref{eq:mode_N}, these representation matrices are not unique.
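As a cross-check (ours), the matrices in Eq.~\eqref{eq:STU} can be verified to satisfy the $S_4$ presentation, with equality understood up to an overall sign and modulo 4, in accordance with the identification in Eq.~\eqref{eq:mode_N}:

```python
# Verify the S4 relations S^2 = T^3 = U^2 = (ST)^3 = (SU)^2 = (TU)^2
# = (STU)^4 = I for the Gamma_4 matrices, where "= I" means that +M or -M
# reduces to the identity modulo 4.
import numpy as np

S = np.array([[1, 2], [0, 1]])
T = np.array([[0, 1], [-1, -1]])
U = np.array([[1, -1], [2, -1]])
I4 = np.eye(2, dtype=int)

def is_identity_gamma4(M):
    """True if +M or -M is the identity matrix modulo 4."""
    return any(np.array_equal(np.mod(s * M, 4), I4) for s in (1, -1))

mp = np.linalg.matrix_power
relations = (S @ S, mp(T, 3), U @ U, mp(S @ T, 3),
             mp(S @ U, 2), mp(T @ U, 2), mp(S @ T @ U, 4))
for M in relations:
    assert is_identity_gamma4(M)
```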
Using Eq.~\eqref{eq:STU}, we write out three further elements of $S_4$, namely $TS = S_\tau T_\tau^{-1} $, $ST = T_\tau S_\tau T_\tau^{-1} S_\tau$ and $STS = T_\tau^{-1} S_\tau T_\tau S_\tau$, all of order three, which will appear in our later discussion, \begin{eqnarray} \label{eq:TS} TS=\begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix}\,,~~ ST=\begin{pmatrix} 2 & -1 \\ 3 & -1 \end{pmatrix}\,,~~ STS=\begin{pmatrix} -2 & -1 \\ 3 & 1 \end{pmatrix}\,. \end{eqnarray} Modular forms of even weights in a modular $S_4$ symmetry can be explicitly constructed in terms of the Dedekind eta function $\eta(\tau)\equiv q^{1/24} \prod_{n=1}^{\infty} (1- q^n)$, with $q = e^{2\pi i \tau}$ \cite{Penedo:2018nmg}. At lowest weight $2k=2$, there are five independent modular forms. By defining \begin{eqnarray} Y(a_1, \cdots, a_6 | \tau) &=& \frac{d}{d\tau} \left[ a_1 \log \eta \left( \tau + \frac{1}{2} \right) + a_2 \log \eta \left( 4 \tau \right) + a_3 \log \eta \left( \frac{\tau}{4} \right) \right.\nonumber\\ &&\left.+ a_4 \log \eta \left( \frac{\tau+1}{4} \right) + a_5 \log \eta \left( \frac{\tau+2}{4} \right) + a_6 \log \eta \left( \frac{\tau+3}{4} \right) \right] , \end{eqnarray} with $a_1+\cdots+a_6=0$, these five independent modular forms can be constructed to be \begin{eqnarray}\label{eq:form} Y_1(\tau) &=& Y (1, 1, \omega, \omega^2, \omega, \omega^2 | \tau) \,, \nonumber\\ Y_2(\tau) &=& Y (1, 1, \omega^2, \omega, \omega^2, \omega | \tau) \,, \nonumber\\ Y_3(\tau) &=& Y (1, -1, -1, -1, 1, 1 | \tau) \,, \nonumber\\ Y_4(\tau) &=& Y (1, -1, -\omega^2, -\omega, \omega^2, \omega | \tau) \,, \nonumber\\ Y_5(\tau) &=& Y (1, -1, -\omega, -\omega^2, \omega, \omega^2 | \tau) \,, \end{eqnarray} where $\omega= e^{2\pi i/3}$.
These five independent modular forms at lowest weight $2k=2$ form a doublet $\mathbf{2}$ and a triplet $\mathbf{3}'$ of $S_4$, \begin{eqnarray} \label{eq:Y2} Y_{\mathbf{2}}^{(2)} = \begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}\,, \hspace{1cm} Y_{\mathbf{3}'}^{(2)} = \begin{pmatrix} Y_3 \\ Y_4 \\ Y_5 \end{pmatrix} \,. \end{eqnarray} Modular forms with higher even weights ($2k=4, 6, \cdots$) can be constructed from these five modular forms. In general, the dimension of the linear space formed by the modular forms of weight $2k$ and level 4 is $4k+1$ \cite{Feruglio:2017spp}. Namely, there are nine independent modular forms of weight $2k=4$, which form one $\mathbf{1}$, one $\mathbf{2}$, one $\mathbf{3}$ and one $\mathbf{3}'$. Among them, the two triplet modular forms are given by \begin{eqnarray} \label{eq:Y4} Y_{\mathbf{3}}^{(4)} = \begin{pmatrix} Y_1 Y_4 - Y_2 Y_5 \\ Y_1 Y_5 - Y_2 Y_3 \\ Y_1 Y_3 - Y_2 Y_4 \end{pmatrix} \,, &\hspace{1cm}& Y_{\mathbf{3}'}^{(4)} = \begin{pmatrix} Y_1 Y_4 + Y_2 Y_5 \\ Y_1 Y_5 + Y_2 Y_3 \\ Y_1 Y_3 + Y_2 Y_4 \end{pmatrix} \,. \end{eqnarray} At weight $2k=6$, there are 13 independent forms. They form one $\mathbf{1}$, one $\mathbf{1}'$, one $\mathbf{2}$, one $\mathbf{3}$ and two $\mathbf{3}'$s of $S_4$. Here we are only interested in the $\mathbf{3}$ and the two $\mathbf{3}'$s of $S_4$. They are given by \begin{eqnarray} \label{eq:Y6} Y_{\mathbf{3}}^{(6)} = \begin{pmatrix} -Y_1^2 Y_5 +Y_2^2 Y_4 \\ -Y_1^2 Y_3 +Y_2^2 Y_5 \\ -Y_1^2 Y_4 +Y_2^2 Y_3 \end{pmatrix} \,, \hspace{1cm} Y_{\mathbf{3}'_1}^{(6)} = \begin{pmatrix} Y_1^2 Y_5 +Y_2^2 Y_4 \\ Y_1^2 Y_3 +Y_2^2 Y_5 \\ Y_1^2 Y_4 +Y_2^2 Y_3 \end{pmatrix} \,, \hspace{1cm} Y_{\mathbf{3}'_2}^{(6)} = Y_1 Y_2 \begin{pmatrix} Y_3 \\ Y_4 \\ Y_5 \end{pmatrix} \,. \end{eqnarray} These modular forms will be used for our model building in the next section. For modular forms with weights up to 10, a full list can be found in \cite{Novichkov:2018ovf}.
Extension from a single $S_4$ modular symmetry to a series of modular $S_4$ symmetries is straightforwardly achieved by following the procedure in section~\ref{sec:multi_modular} with all levels fixed at $N_J=4$. In each $S_4^J$, we denote the generators $S$, $T$ and $U$ by $S_J$, $T_J$ and $U_J$, where the subscript is only used to distinguish the groups. Modular forms with weights $k_{Y,1}, ..., k_{Y,M}$ are multiplets of the multiple moduli, namely of $\tau_1$, ..., $\tau_M$. \subsection{Stabilisers and residual symmetries of modular $S_4$\label{sec:residual}} Although a brief discussion of residual symmetry after modular $S_4$ symmetry breaking has been given in \cite{Novichkov:2018ovf}, we note that the essential correlation between the modulus field and its residual symmetries has not been discussed. In this section, we give a thorough analysis of this case, uncovering some new results along the way. We begin by introducing and reviewing the notion of stabilisers of the symmetry, which will play a crucial role in determining residual symmetries. Given an element $\gamma$ of the modular group $S_4 \simeq \Gamma_4$, a stabiliser of $\gamma$ corresponds to a fixed point $\tau_\gamma$ in the upper complex plane which satisfies $\gamma \tau_\gamma = \tau_\gamma$. Once the modulus field $\tau$ gains a VEV at such a stabiliser, $\langle \tau \rangle = \tau_\gamma$, an Abelian residual modular symmetry generated by $\gamma$ is preserved. It is obvious that acting with $\gamma$ on a modular form at its stabiliser leaves the modular form invariant, i.e., \begin{eqnarray} \gamma: Y_I(\tau_\gamma) \to Y_I(\gamma \tau_\gamma) = Y_I(\tau_\gamma)\,. \end{eqnarray} Following the standard transformation property in Eq. \eqref{eq:form_transformation}, we obtain \begin{eqnarray} \label{eq:yukawa_eigenvector} \rho_I(\gamma) Y_I(\tau_\gamma) = (c\tau_\gamma + d)^{-2k} Y_I(\tau_\gamma) \,.
\end{eqnarray} This equation leads us to the following important properties of the stabiliser and the modular form: \begin{itemize} \item A modular form at a stabiliser, $Y_I(\tau_\gamma)$, is an eigenvector of the representation matrix $\rho_I(\gamma)$ with eigenvalue $(c\tau_\gamma + d)^{-2k}$. \item The stabiliser $\tau_\gamma$ satisfies $|c\tau_\gamma + d| = 1$, since $(c\tau_\gamma + d)^{-2k}$ is an eigenvalue of a unitary matrix. \end{itemize} A special case arises when $(c\tau_\gamma + d)^{-2k}=1$: then $\rho_I(\gamma) Y_I(\tau_\gamma) = Y_I(\tau_\gamma)$, and we recover the residual flavour symmetry generated by $\gamma$. In general, the eigenvalue does not need to be fixed at $1$ in the framework of modular symmetry. In the remainder of this subsection, we will consider the following stabilisers, \begin{eqnarray} &\tau_S= i\infty \,,~ \tau_T= \omega = - \frac{1}{2} + i \frac{\sqrt{3}}{2} \,,~ \tau_U=\frac{1}{2} + \frac{i}{2} \,,\nonumber\\ &\tau_{TS}=-\omega^2 = \frac{1}{2} + i \frac{\sqrt{3}}{2} \,,~ \tau_{ST}=\frac{1}{2}+\frac{i}{2\sqrt{3}}\,,~ \tau_{STS}=-\frac{1}{2}+\frac{i}{2\sqrt{3}} \,. \end{eqnarray} Although $\tau_T$ and $\tau_{TS}$ have been discussed in \cite{Novichkov:2018ovf} (identified with $\tau_L$ and $\tau_R$ therein, respectively), $\tau_S$, $\tau_U$ and $\tau_{ST}$ as stabilisers in the $S_4$ modular symmetry are discussed here for the first time. This notation is chosen to reflect the residual modular symmetries generated by $S$, $T$, $U$, $TS$, $ST$ and $STS$, respectively.
Following Eq.~\eqref{eq:STU}, it is straightforward to check that these stabilisers are invariant under the corresponding modular transformations, i.e., \begin{eqnarray} &&S: \tau_S \to S \tau_S= \tau_S+2 = \tau_S\,, \nonumber\\ &&T: \tau_T \to T \tau_T= \frac{-1}{\tau_T+1} = \tau_T\,, \nonumber\\ &&U: \tau_U \to U \tau_U= \frac{\tau_U-1}{2\tau_U-1} = \tau_U \,, \nonumber\\ &&TS: \tau_{TS} \to TS \, \tau_{TS} = \frac{1}{-\tau_{TS}+1} = \tau_{TS} \,, \nonumber\\ &&ST: \tau_{ST} \to ST \, \tau_{ST} = \frac{2\tau_{ST}-1}{3\tau_{ST}-1} = \tau_{ST} \,, \nonumber\\ &&STS: \tau_{STS} \to STS \, \tau_{STS} = \frac{-2\tau_{STS}-1}{3\tau_{STS}+1} = \tau_{STS} \,. \end{eqnarray} It is worth noting that these stabilisers are typical examples and do not exhaust the stabilisers of $S_4$. At a stabiliser, the multiplets formed by the modular forms can point along special directions. We will discuss how the triplet modular forms $Y_{\mathbf{3}}^{(2k)}$ or $Y_{\mathbf{3}'}^{(2k)}$ (for $k =1,2,3$) acquire these directions, based on the symmetry argument in Eq.~\eqref{eq:yukawa_eigenvector}. We begin our discussion with modular forms at the stabiliser $\tau_S$. We know that $Y_{\mathbf{3}^{(\prime)}}^{(2k)}(\tau_S)$ is the eigenvector of $\rho_{\mathbf{3}^{(\prime)}}(S)$ with eigenvalue $1^{-2k} \equiv 1$, \begin{eqnarray} \rho_{\mathbf{3}^{(\prime)}}(S) Y^{(2k)}_{\mathbf{3}^{(\prime)}}(\tau_S) = Y^{(2k)}_{\mathbf{3}^{(\prime)}}(\tau_S) \end{eqnarray} for any weight $2k$. The well-known representation matrix for $S$ in $\mathbf{3}$ or $\mathbf{3}'$ is \begin{eqnarray} \rho_{\mathbf{3}^{(\prime)}}(S) = \frac{1}{3} \begin{pmatrix} -1 & 2 & 2 \\ 2 & -1 & 2 \\ 2 & 2 & -1 \end{pmatrix} \,. \end{eqnarray} Its three eigenvalues are $1$, $-1$ and $-1$. The eigenvector corresponding to the eigenvalue $1$ is always fixed at $(1,1,1)^T$ up to an overall factor.
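The fixed-point equations above, together with the property $|c\tau_\gamma+d|=1$, can be confirmed numerically. The following sketch (ours) checks all stabilisers except $\tau_S=i\infty$, which requires taking a limit:

```python
# Check gamma tau_gamma = tau_gamma and |c tau_gamma + d| = 1 for the
# stabilisers of T, U, TS, ST and STS, using the 2x2 matrices of the text.
sqrt3 = 3 ** 0.5
elements = {  # name: ((a, b, c, d), stabiliser tau_gamma)
    "T":   ((0, 1, -1, -1), -0.5 + 1j * sqrt3 / 2),
    "U":   ((1, -1, 2, -1),  0.5 + 0.5j),
    "TS":  ((0, 1, -1, 1),   0.5 + 1j * sqrt3 / 2),
    "ST":  ((2, -1, 3, -1),  0.5 + 1j / (2 * sqrt3)),
    "STS": ((-2, -1, 3, 1), -0.5 + 1j / (2 * sqrt3)),
}
for name, ((a, b, c, d), tau) in elements.items():
    assert abs((a * tau + b) / (c * tau + d) - tau) < 1e-12  # fixed point
    assert abs(abs(c * tau + d) - 1) < 1e-12                 # |c tau + d| = 1
```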
Therefore, we conclude that $Y^{(2k)}_{\mathbf{3}^{(\prime)}}(\tau_S)$ always takes the form \begin{eqnarray} \label{eq:residual_S} Y^{(2k)}_{\mathbf{3}^{(\prime)}}(\tau_S) = y^{(2k)}_{\mathbf{3}^{(\prime)},S} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \,. \end{eqnarray} Here, $y^{(2k)}_{\mathbf{3}^{(\prime)},S}$ is an overall factor determined by the weight and representation. Inserting $\tau_S = i \infty$ into the explicit modular forms $Y_i(\tau)$ in Eq.~\eqref{eq:form}, we obtain $q=0$ and $Y_1(\tau_S) = Y_2(\tau_S) = i 3\pi/8$, $Y_3(\tau_S) = Y_4(\tau_S) = Y_5(\tau_S) = i \pi/4$. For weight $2k=2$, $y_{\mathbf{3}^{\prime},S}^{(2)}=i \pi/4$ for $\mathbf{3}'$. For $2k=4$, $y_{\mathbf{3},S}^{(4)} = 0$ and $y_{\mathbf{3}^{\prime},S}^{(4)} = -3 \pi^2/16$. For $2k=6$, $y_{\mathbf{3},S}^{(6)} = 0$, $y_{\mathbf{3}^{\prime}_1,S}^{(6)} = 2 y_{\mathbf{3}^{\prime}_2,S}^{(6)} = -i 9 \pi^3/128$. At the stabiliser $\tau_S$, since the eigenvalue is always fixed at 1 regardless of the weight, the residual $Z_2^S$ modular symmetry is identical to the residual $Z_2^S$ flavour symmetry. We now perform a similar analysis for modular forms at the stabiliser $\tau_T$. Eq.~\eqref{eq:yukawa_eigenvector} is simplified to \begin{eqnarray} \rho_{\mathbf{3}^{(\prime)}}(T) Y^{(2k)}_{\mathbf{3}^{(\prime)}}(\tau_T) = (-\tau_T-1)^{-2k} Y^{(2k)}_{\mathbf{3}^{(\prime)}}(\tau_T) = \omega^{2k} Y^{(2k)}_{\mathbf{3}^{(\prime)}}(\tau_T) \,. \end{eqnarray} Thus, the selected eigenvector corresponds to the eigenvalue $\omega^{2k}$, which is weight-dependent. In the $T$-diagonal basis used in this paper, the representation matrix for $T$ is given by \begin{eqnarray} \rho_{\mathbf{3}^{(\prime)}}(T)=\begin{pmatrix} 1 & 0 & 0 \\ 0 & \omega^2 & 0 \\ 0 & 0 & \omega \end{pmatrix} \,.
\end{eqnarray} The triplet forms, as eigenvectors of $\rho_{\mathbf{3}^{(\prime)}}(T)$, take a very simple form \begin{eqnarray} \label{eq:residual_T} && Y^{(2)}_{\mathbf{3}'}(\tau_T) = y_{\mathbf{3}',T}^{(2)} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\,,~ Y^{(4)}_{\mathbf{3}^{(\prime)}}(\tau_T) = y_{\mathbf{3}^{(\prime)},T}^{(4)} \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\,,~ Y^{(6)}_{\mathbf{3}^{(\prime)}}(\tau_T) = y_{\mathbf{3}^{(\prime)},T}^{(6)} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\,, \end{eqnarray} where the overall factors are again determined by the weight and representation. These results can be checked numerically by inserting $\tau_T$ into the explicit formulas of the modular forms. It is straightforward to obtain $Y_1(\tau_T) = Y_3(\tau_T) = Y_5(\tau_T) = 0$, and we are left with only two non-zero modular forms, $Y_2(\tau_T) = 2.11219 i$ and $Y_4(\tau_T) = -2.43895 i$. Substituting them into Eqs.~\eqref{eq:Y2}, \eqref{eq:Y4} and \eqref{eq:Y6}, we arrive at the same result as above, with $y_{\mathbf{3}',T}^{(2)} = -2.43895 i$, $y_{\mathbf{3},T}^{(4)} = - y_{\mathbf{3}',T}^{(4)} = -5.15151$, $y_{\mathbf{3},T}^{(6)} = y_{\mathbf{3}'_1,T}^{(6)} = 10.881 i$, and $y^{(6)}_{\mathbf{3}'_2,T} = 0$. In this example, only the direction $(1,0,0)^T$, corresponding to modular forms with weights $2k = 0~ ({\rm mod} ~3)$, preserves the residual flavour symmetry generated by $T$. The other two vectors do not satisfy the residual flavour symmetry, but only the residual modular symmetry. In the framework of flavour symmetry, the residual symmetry generated by $U$ is usually called $\mu$-$\tau$ symmetry. We next discuss the modular forms at the stabiliser of $U$. $Y_{\mathbf{3}^{(\prime)}}^{(2k)}(\tau_U)$ is an eigenvector of $\rho_{\mathbf{3}^{(\prime)}}(U)$ with eigenvalue $(2\tau_U-1)^{-2k} = (-1)^{k}$.
The representation matrices for $U$ are different in $\mathbf{3}$ and $\mathbf{3}'$, \begin{eqnarray} \rho_{\mathbf{3}}(U) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \,,~ \rho_{\mathbf{3}'}(U) = - \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \,. \end{eqnarray} $\rho_{\mathbf{3}}(U)$ has one eigenvalue $-1$ and two degenerate eigenvalues $+1$. The eigenvector with eigenvalue $-1$ is fixed at $(0,1,-1)^T$ up to an overall factor. The eigenvector with eigenvalue $+1$ is in principle a linear combination of the two independent vectors $(2,-1,-1)^T$ and $(1,1,1)^T$. For odd and even $k$, we can express $Y_{\mathbf{3}}^{(2k)}(\tau_U)$ as \begin{eqnarray} \label{eq:residual_U} &&Y^{(2k)}_{\mathbf{3}}(\tau_U) = y^{(2k)}_{\mathbf{3},U} \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}~ \hspace{23mm} \text{for an odd } k\,, \nonumber\\ &&Y^{(2k)}_{\mathbf{3}}(\tau_U) = y^{(2k)}_{\mathbf{3},U} \begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix} + y^{(2k)\prime}_{\mathbf{3},U} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}~ \text{for an even } k\,. \end{eqnarray} The coefficients are determined by the weight. Numerically, $Y_1(\tau_U)=-Y_2(\tau_U)=2.84287 i$, $Y_3(\tau_U)=-(2\sqrt{2}+i)a$, $Y_4(\tau_U)=Y_5(\tau_U)=(\sqrt{2}-i)a$ with $a=1.09422$. We obtain $y^{(4)}_{\mathbf{3},U}=\sqrt{2}Y_1(\tau_U) a$, $y^{(4)\prime}_{\mathbf{3},U} = - i 2 Y_1(\tau_U) a$ for $2k=4$, and $y^{(6)}_{\mathbf{3},U}=3\sqrt{2}Y_1^2(\tau_U) a$ for $2k=6$. In the $\mathbf{3}'$ representation, $\rho_{\mathbf{3}'}(U)$ has one eigenvalue $+1$ and two degenerate eigenvalues $-1$.
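The stated eigenstructure of $\rho_{\mathbf{3}}(U)$ can be confirmed with a few lines (our own check):

```python
# Verify: rho_3(U) has eigenvalues {+1, +1, -1}; the -1 eigenvector is along
# (0, 1, -1)^T, and the +1 eigenspace is spanned by (2, -1, -1)^T and (1, 1, 1)^T.
import numpy as np

rho3_U = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])

v_minus = np.array([0, 1, -1])
assert np.array_equal(rho3_U @ v_minus, -v_minus)

for v_plus in (np.array([2, -1, -1]), np.array([1, 1, 1])):
    assert np.array_equal(rho3_U @ v_plus, v_plus)

assert np.allclose(np.sort(np.linalg.eigvalsh(rho3_U)), [-1, 1, 1])

# rho_3'(U) = -rho_3(U), so its eigenvalues are flipped, as stated in the text.
assert np.allclose(np.sort(np.linalg.eigvalsh(-rho3_U)), [-1, -1, 1])
```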
For an even $k$, the direction of $Y^{(2k)}_{\mathbf{3}'}(\tau_U)$ is fixed along $(0, 1, -1)^T$, while for an odd $k$, $Y^{(2k)}_{\mathbf{3}'}(\tau_U)$ is a linear combination of $(2,-1,-1)^T$ and $(1,1,1)^T$, \begin{eqnarray} \label{eq:residual_U_prime} &&Y^{(2k)}_{\mathbf{3}'}(\tau_U) = y^{(2k)}_{\mathbf{3}',U} \begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix} + y^{(2k)\prime}_{\mathbf{3}',U} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}~ \text{for an odd } k\,, \nonumber\\ &&Y^{(2k)}_{\mathbf{3}'}(\tau_U) = y^{(2k)}_{\mathbf{3}',U} \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}~ \hspace{23mm} \text{for an even } k\,. \end{eqnarray} Specifically, for $2k=2$, we have $y_{\mathbf{3}',U}^{(2)} = - \sqrt{2} a$, $y_{\mathbf{3}',U}^{(2)\prime} = - i a$. For $2k=6$, we have $y^{(6)}_{\mathbf{3}'_1,U}=\sqrt{2}Y_1^2(\tau_U) a$, $y^{(6)\prime}_{\mathbf{3}'_1,U} = - i 2 Y_1^2(\tau_U) a$; and $y_{\mathbf{3}'_2,U}^{(6)} = \sqrt{2} Y_1^2(\tau_U) a$, $y_{\mathbf{3}'_2,U}^{(6)\prime} = i Y_1^2(\tau_U) a$, respectively. For $2k=4$, the direction is fixed along $(0,1,-1)^T$, with $y_{\mathbf{3}',U}^{(4)} = 3\sqrt{2} Y_1(\tau_U) a$. We would like to mention that although the direction $(0,1,-1)^T$ is realised in both the $\mathbf{3}$ and $\mathbf{3}'$ representations, $(0,1,-1)^T$ in $\mathbf{3}'$ preserves a $\mu$-$\tau$ flavour symmetry, while that in $\mathbf{3}$ preserves not a $\mu$-$\tau$ flavour symmetry but only a $\mu$-$\tau$ modular symmetry. In addition, we consider the stabilisers for the elements $TS$, $ST$ and $STS$. These are order-three elements, and the stabiliser of each preserves a $Z_3$ symmetry.
The representation matrices of $TS$, $ST$ and $STS$ take the forms \begin{eqnarray} \rho_{\mathbf{3}^{(\prime)}}(TS) = \frac{1}{3} \begin{pmatrix} -1 & 2 & 2 \\ 2\omega^2 & -\omega^2 & 2\omega^2 \\ 2\omega & 2\omega & -\omega \end{pmatrix} \,,\nonumber\\ \rho_{\mathbf{3}^{(\prime)}}(ST) = \frac{1}{3} \begin{pmatrix} -1 & 2\omega^2 & 2\omega \\ 2 & -\omega^2 & 2\omega \\ 2 & 2\omega^2 & -\omega \end{pmatrix} \,,\nonumber\\ \rho_{\mathbf{3}^{(\prime)}}(STS) = \frac{1}{3} \begin{pmatrix} -1 & 2\omega & 2\omega^2 \\ 2\omega & -\omega^2 & 2 \\ 2\omega^2 & 2 & -\omega \end{pmatrix} \,. \end{eqnarray} They all have the three eigenvalues $1$, $\omega$ and $\omega^2$. The corresponding eigenvectors for $TS$ are $(-1, 2\omega, 2\omega^2)^T$, $(2\omega, 2\omega^2, -1)^T$ and $(2\omega^2,-1,2\omega)^T$, respectively; the corresponding eigenvectors for $ST$ are $(-1, 2\omega^2, 2\omega)^T$, $(2\omega^2, 2\omega, -1)^T$ and $(2\omega,-1,2\omega^2)^T$, respectively; and the corresponding eigenvectors for $STS$ are $(-1, 2, 2)^T$, $(2, 2, -1)^T$ and $(2,-1,2)^T$, respectively. $Y_{\mathbf{3}^{(\prime)}}^{(2k)}(\tau_{TS})$ corresponds to the eigenvalue $(1-\tau_{TS})^{-2k} = \omega^{k}$. Thus, we directly arrive at \begin{eqnarray} \label{eq:residual_STT} Y^{(2)}_{\mathbf{3}'}(\tau_{TS}) = y^{(2)}_{\mathbf{3}',TS} \begin{pmatrix} 2\omega \\ 2\omega^2 \\ -1 \end{pmatrix},\, Y^{(4)}_{\mathbf{3}^{(\prime)}}(\tau_{TS}) = y^{(4)}_{\mathbf{3}^{(\prime)},TS} \begin{pmatrix} 2\omega^2 \\ -1 \\ 2\omega \end{pmatrix},\, Y^{(6)}_{\mathbf{3}^{(\prime)}}(\tau_{TS}) = y^{(6)}_{\mathbf{3}^{(\prime)},TS} \begin{pmatrix} -1 \\ 2\omega \\ 2\omega^2 \end{pmatrix}.
\end{eqnarray} Taking the explicit formulas of the modular forms into account, we obtain the overall factors to be $y^{(2)}_{\mathbf{3}',TS} = - Y_5(\tau_{TS}) = 0.81298i$, $y^{(4)}_{\mathbf{3}, TS} = y^{(4)}_{\mathbf{3}', TS} = -Y_1(\tau_{TS}) Y_5(\tau_{TS}) = - 1.71717$, $y^{(6)}_{\mathbf{3},TS} = - y^{(6)}_{\mathbf{3}'_1,TS} = -Y_1^2(\tau_{TS}) Y_5(\tau_{TS}) = - 3.62699 i$, and $y^{(6)}_{\mathbf{3}'_2,TS} = 0$. We turn to the modular forms at the stabiliser $\tau_{ST}$. $Y_{\mathbf{3}^{(\prime)}}^{(2k)}(\tau_{ST})$ are obtained by exchanging the second and the third entries of the above expressions, but with care due to the different weights, \begin{eqnarray} \label{eq:residual_ST} Y^{(2)}_{\mathbf{3}'}(\tau_{ST}) = y^{(2)}_{\mathbf{3}',ST} \begin{pmatrix} 2\omega \\ -1 \\ 2\omega^2 \end{pmatrix},\, Y^{(4)}_{\mathbf{3}^{(\prime)}}(\tau_{ST}) = y^{(4)}_{\mathbf{3}^{(\prime)},ST} \begin{pmatrix} 2\omega^2 \\ 2\omega \\ -1 \end{pmatrix},\, Y^{(6)}_{\mathbf{3}^{(\prime)}}(\tau_{ST}) = y^{(6)}_{\mathbf{3}^{(\prime)},ST} \begin{pmatrix} -1 \\ 2\omega^2 \\ 2\omega \end{pmatrix}, \end{eqnarray} where $y^{(2)}_{\mathbf{3}',ST} = 2.43895 i$, $y^{(4)}_{\mathbf{3}, ST} = -y^{(4)}_{\mathbf{3}', ST} = -15.4545$, $y^{(6)}_{\mathbf{3}, ST} = y^{(6)}_{\mathbf{3}'_1, ST} = -97.9287 i$ and $y^{(6)}_{\mathbf{3}'_2, ST} = 0$. They correspond to eigenvectors of $\rho_{\mathbf{3}^{(\prime)}}(ST) $ with eigenvalues $(3\tau_{ST}-1)^{-2k} = \omega^{2k}$. Finally, we list the modular forms at the stabiliser $\tau_{STS}$. $Y_{\mathbf{3}^{(\prime)}}^{(2k)}(\tau_{STS})$ are given by \begin{eqnarray} \label{eq:residual_STS} Y^{(2)}_{\mathbf{3}'}(\tau_{STS}) \!=\! y^{(2)}_{\mathbf{3}', STS} \begin{pmatrix} 2 \\ 2 \\ -1 \end{pmatrix},\, Y^{(4)}_{\mathbf{3}^{(\prime)}}(\tau_{STS}) \!=\! y^{(4)}_{\mathbf{3}^{(\prime)}, STS} \begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix},\, Y^{(6)}_{\mathbf{3}^{(\prime)}}(\tau_{STS}) \!=\!
y^{(6)}_{\mathbf{3}^{(\prime)}, STS} \begin{pmatrix} -1 \\ 2 \\ 2 \end{pmatrix}, \end{eqnarray} where $y^{(2)}_{\mathbf{3}', STS} = - 2.43895 i$, $y^{(4)}_{\mathbf{3}, STS} = y^{(4)}_{\mathbf{3}', STS} = -15.4545$, $y^{(6)}_{\mathbf{3},STS} = - y^{(6)}_{\mathbf{3}'_1,STS} = -97.9287 i$, and $y^{(6)}_{\mathbf{3}'_2,STS} = 0$. They correspond to eigenvectors of $\rho_{\mathbf{3}^{(\prime)}}(STS) $ with eigenvalues $(3\tau_{STS}+1)^{-2k} = \omega^{k}$. We summarise the directions of the triplet ($\mathbf{3}$ and $\mathbf{3}'$) modular forms for lower weights ($2k=2,4,6$) at the stabilisers ($\tau = \tau_S, \tau_U, \tau_T, \tau_{TS}, \tau_{ST}, \tau_{STS})$ in Table~\ref{tab:stabilisers}. All of the above discussion in this subsection is based on a single modular $S_4$ with a single modulus field. Extending to the case of multiple modular symmetries may allow the theory to have several different residual modular symmetries. Namely, the different moduli fields may take VEVs at different stabilisers. In the next section, we will apply this property to model building.
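The automorphy-factor eigenvalues $(c\tau_\gamma+d)^{-2k}$ quoted above for the various stabilisers can be cross-checked numerically; the following sketch (ours) tests them for $k=1,2,3$:

```python
# Check the quoted eigenvalues:
#   (-tau_T - 1)^(-2k)    = omega^(2k)   at tau_T
#   (2 tau_U - 1)^(-2k)   = (-1)^k       at tau_U
#   (1 - tau_TS)^(-2k)    = omega^k      at tau_TS
#   (3 tau_ST - 1)^(-2k)  = omega^(2k)   at tau_ST
#   (3 tau_STS + 1)^(-2k) = omega^k      at tau_STS
import cmath

sqrt3 = 3 ** 0.5
omega = cmath.exp(2j * cmath.pi / 3)

tau_T, tau_U = -0.5 + 1j * sqrt3 / 2, 0.5 + 0.5j
tau_TS = 0.5 + 1j * sqrt3 / 2
tau_ST, tau_STS = 0.5 + 1j / (2 * sqrt3), -0.5 + 1j / (2 * sqrt3)

for k in (1, 2, 3):
    assert abs((-tau_T - 1) ** (-2 * k) - omega ** (2 * k)) < 1e-12
    assert abs((2 * tau_U - 1) ** (-2 * k) - (-1) ** k) < 1e-12
    assert abs((1 - tau_TS) ** (-2 * k) - omega ** k) < 1e-12
    assert abs((3 * tau_ST - 1) ** (-2 * k) - omega ** (2 * k)) < 1e-12
    assert abs((3 * tau_STS + 1) ** (-2 * k) - omega ** k) < 1e-12
```

In particular, the factor is weight-independent only at $\tau_S$, which is why the residual modular and flavour symmetries coincide there.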
\begin{table}[h] \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c |} \hline \hline \multirow{2}{*}{$\tau$} & weight 2 & \multicolumn{2}{c|}{weight 4} & \multicolumn{3}{c|}{weight 6} \\\cline{2-7} & $\mathbf{3}'$ & $\mathbf{3}$ & $\mathbf{3}'$ & $\mathbf{3}$ & $\mathbf{3}'_1$ & $\mathbf{3}'_2$ \\ \hline \hline $\tau_S$ & $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ & $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ & $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ & $\mathbf{0}$ & $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ & $\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$ \\\hline $\tau_U$ & $\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$ & $ \begin{pmatrix} 2-i\sqrt{2} \\ -1-i\sqrt{2} \\ -1-i\sqrt{2} \end{pmatrix}$ & $\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$ & $\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}$ & $ \begin{pmatrix} 2\sqrt{2} + i \\ -\sqrt{2} + i \\ -\sqrt{2} + i \end{pmatrix}$ & $ \begin{pmatrix} 2-i\sqrt{2} \\ -1-i\sqrt{2} \\ -1-i\sqrt{2} \end{pmatrix}$ \\\hline $\tau_T$ & $\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}$ & $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$ & $\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$ & $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$ & $\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$ & $\mathbf{0}$ \\\hline $\tau_{TS}$ & $\begin{pmatrix} 2\omega \\ 2\omega^2 \\ -1 \end{pmatrix}$ & $\begin{pmatrix} 2\omega^2 \\ -1 \\ 2\omega \end{pmatrix}$ & $\begin{pmatrix} 2\omega^2 \\ -1 \\ 2\omega \end{pmatrix}$ & $\begin{pmatrix} -1 \\ 2\omega \\ 2\omega^2 \end{pmatrix}$ & $\begin{pmatrix} -1 \\ 2\omega \\ 2\omega^2 \end{pmatrix}$ & $\mathbf{0}$ \\\hline $\tau_{ST}$ & $\begin{pmatrix} 2\omega \\ -1 \\ 2\omega^2 \end{pmatrix}$ & $\begin{pmatrix} 2\omega^2 \\ 2\omega \\ -1 \end{pmatrix}$ & $\begin{pmatrix} 2\omega^2 \\ 2\omega \\ -1 \end{pmatrix}$ & $\begin{pmatrix} -1 \\ 2\omega^2 \\ 2\omega \end{pmatrix}$ & $\begin{pmatrix} -1 \\ 2\omega^2 \\ 2\omega \end{pmatrix}$ & $\mathbf{0}$ \\\hline $\tau_{STS}$ & $\begin{pmatrix} 2 \\ 2 \\ -1 \end{pmatrix}$ & 
$\begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix}$ & $\begin{pmatrix} 2 \\ -1 \\ 2 \end{pmatrix}$ & $\begin{pmatrix} -1 \\ 2 \\ 2 \end{pmatrix}$ & $\begin{pmatrix} -1 \\ 2 \\ 2 \end{pmatrix}$ & $\mathbf{0}$ \\ \hline \hline \end{tabular} \end{center} \caption{Triplet ($\mathbf{3}$ and $\mathbf{3}'$) representations of $S_4$ modular forms of low weights ($2k= 2, 4, 6$) at typical stabilisers $\tau_S= i\infty$, $\tau_U=\frac{1}{2} + \frac{i}{2}$, $\tau_T= -\frac{1}{2} + i\frac{\sqrt{3}}{2}$, $\tau_{TS}=\frac{1}{2} + i\frac{\sqrt{3}}{2}$, $\tau_{ST}=\frac{1}{2} + \frac{i}{2\sqrt{3}}$ and $\tau_{STS}=-\frac{1}{2} + \frac{i}{2\sqrt{3}}$. Here, we have ignored the overall factor if it is non-zero. $\mathbf{0}$ represents a vanishing modular form, namely, the one which has a zero overall factor. } \label{tab:stabilisers} \end{table} \section{A model with three modular $S_4$ symmetries} \label{model} \begin{table}[h] \begin{tabular}{| l | c c c c c c|} \hline \hline Field & $S_4^A$ & $S_4^B$ & $S_4^C$ & \!$2k_A$\! & \!$2k_B$\! & \!$2k_C$\!\\ \hline \hline $L$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{3}$ & 0 & 0 & 0\\ $e^c$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & 0 & 0 & \!$-6$\! \\ $\mu^c$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & 0 & 0 & \!$-4$\! \\ $\tau^c$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & 0 & 0 & \!$-2$\! \\ $N_A^c$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & \!$-6$\! & 0 & 0 \\ $N_B^c$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & 0 & \!$-4$\! & 0 \\ \hline $\Phi_{AC}$ & $\mathbf{3}$ & $\mathbf{1}$ & $\mathbf{3}$ & 0 & 0 & 0 \\ $\Phi_{BC}$ & $\mathbf{1}$ & $\mathbf{3}$ & $\mathbf{3}$ & 0 & 0 & 0 \\ \hline \hline \end{tabular} \begin{tabular}{| l | c c c c c c|} \hline \hline Yuk/Mass &$S_4^A$ & $S_4^B$ & $S_4^C$ & \!$2k_A$\! & \!$2k_B$\! 
& \!$2k_C$\!\\ \hline \hline $Y_e(\tau_C)$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{3}$ & 0 & 0 & $6$ \\ $Y_\mu(\tau_C)$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{3}$ & 0 & 0 & $4$ \\ $Y_\tau(\tau_C)$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{3}$ & 0 & 0 & $2$ \\ $Y_A(\tau_A)$ & $\mathbf{3}$ & $\mathbf{1}$ & $\mathbf{1}$ & $6$ & 0 & 0 \\ $Y_B(\tau_B)$ & $\mathbf{1}$ & $\mathbf{3}$ & $\mathbf{1}$ & 0 & $4$ & 0 \\\hline $M_A(\tau_A)$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $12$ & 0 & 0 \\ $M_B(\tau_B)$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & 0 & $8$ & 0 \\ $M_{AB}(\tau_A,\tau_B)$ & $\mathbf{1}$ & $\mathbf{1}$ & $\mathbf{1}$ & $6$ & $4$ & 0 \\ \hline \hline \end{tabular} \caption{Transformation properties of leptons, Yukawa couplings $Y$ and right-handed neutrino masses $M$ in $S_4^A \times S_4^B \times S_4^C$. } \label{tab:particle_contents} \end{table} Combining the results of the previous two sections, we see that the extension from one single modular field to multiple moduli fields, as discussed in section~\ref{multiple}, opens a window into a new type of modular model building, in which several moduli fields can appear, with each one having a different modular form with a different residual symmetry, of the kind discussed in section~\ref{S4}. \subsection{A modular $S_4^3$ model} As a concrete example, we will show how the results of the previous sections can lead to a consistent model of trimaximal TM$_1$ mixing, analogous to the traditional approach~\cite{Varzielas:2012pa, Luhn:2013vna}. At high energies, the model in Table~\ref{tab:particle_contents} is based on three modular symmetries, $S_4^A$, $S_4^B$ and $S_4^C$, with moduli fields labelled by $\tau_A$, $\tau_B$ and $\tau_C$, respectively. After the moduli fields gain different VEVs, different textures of mass matrices are realised in charged lepton and neutrino sectors. The transformation properties of the leptons are given in Table~\ref{tab:particle_contents}. 
We arrange that each lepton has no more than one non-vanishing modular weight in either $S_4^A$, $S_4^B$ or $S_4^C$. We note that: 1) the lepton doublets $L$ form a triplet of $S_4^C$ with zero weight; 2) the right-handed leptons $e^c$, $\mu^c$ and $\tau^c$ are singlets of $S_4^C$ but have different weights $2k_C=-6,-4,-2$, respectively; 3) we introduce only two right-handed neutrinos $N_A^c$ and $N_B^c$, which are both singlets but have weights $2k_A=-6$ and $2k_B=-4$ in $S_4^A$ and $S_4^B$, respectively. It is in principle possible to arrange for one field to have non-vanishing weights in more than one modular symmetry, so our choice here is just for simplicity. In addition, we introduce two scalars $\Phi_{AC}$ and $\Phi_{BC}$. These scalars are assumed to be bi-triplets in the flavour space, arranged in $S_4^A \times S_4^B \times S_4^C$ as $\Phi_{AC}\sim (\mathbf{3}, \mathbf{1}, \mathbf{3})$ and $\Phi_{BC} \sim (\mathbf{1}, \mathbf{3}, \mathbf{3})$ with zero weights. As bi-triplets, they transform as \begin{eqnarray} \Phi_{AC} &\to& \rho_{\mathbf{3}}(\gamma_A) \otimes \rho_{\mathbf{3}}(\gamma_C) \Phi_{AC} \,,\nonumber\\ \Phi_{BC} &\to& \rho_{\mathbf{3}}(\gamma_B) \otimes \rho_{\mathbf{3}}(\gamma_C) \Phi_{BC} \,, \end{eqnarray} for any elements $\gamma_A$, $\gamma_B$ and $\gamma_C$ of $S_4^A$, $S_4^B$ and $S_4^C$, respectively. These scalars are introduced to connect the three $S_4$'s together, as shown in the superpotential below, \begin{eqnarray} w_\ell &=& \frac{1}{\Lambda}\left[L \Phi_{AC} Y_A(\tau_A) N_A^c + L \Phi_{BC} Y_B(\tau_B) N_B^c \right] H_u \nonumber\\ &&+ \left[ L Y_e(\tau_C) e^c + L Y_\mu(\tau_C) \mu^c + L Y_\tau(\tau_C) \tau^c \right] H_d \nonumber\\ &&+ \frac{1}{2} M_A(\tau_A) N_A^c N_A^c + \frac{1}{2} M_B(\tau_B) N_B^c N_B^c + M_{AB}(\tau_A,\tau_B) N_A^c N_B^c\,, \end{eqnarray} where the leptonic superpotential includes the terms responsible for generating lepton masses.
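As a cross-check of the modular invariance of $w_\ell$, the weights in each $S_4$ factor must cancel term by term between the modular forms and the matter fields. The short script below, a sketch assuming the weight assignments of Table~\ref{tab:particle_contents} (the Higgs fields carry zero weight and are omitted), tallies the weights:

```python
# Weight bookkeeping for the S4^A x S4^B x S4^C superpotential.
# Tuples are (2k_A, 2k_B, 2k_C), copied from Table 2; modular invariance
# requires the weights to sum to zero in each factor, term by term.

weights = {
    "L":      (0, 0, 0),  "e^c":    (0, 0, -6), "mu^c": (0, 0, -4),
    "tau^c":  (0, 0, -2), "N_A^c":  (-6, 0, 0), "N_B^c": (0, -4, 0),
    "Phi_AC": (0, 0, 0),  "Phi_BC": (0, 0, 0),
    "Y_e":    (0, 0, 6),  "Y_mu":   (0, 0, 4),  "Y_tau": (0, 0, 2),
    "Y_A":    (6, 0, 0),  "Y_B":    (0, 4, 0),
    "M_A":    (12, 0, 0), "M_B":    (0, 8, 0),  "M_AB":  (6, 4, 0),
}

terms = [
    ["L", "Phi_AC", "Y_A", "N_A^c"],
    ["L", "Phi_BC", "Y_B", "N_B^c"],
    ["L", "Y_e", "e^c"], ["L", "Y_mu", "mu^c"], ["L", "Y_tau", "tau^c"],
    ["M_A", "N_A^c", "N_A^c"], ["M_B", "N_B^c", "N_B^c"],
    ["M_AB", "N_A^c", "N_B^c"],
]

for term in terms:
    total = tuple(sum(weights[f][j] for f in term) for j in range(3))
    assert total == (0, 0, 0), term  # every term is weight-balanced
```

All eight terms balance, which is why no further terms with these fields and lower weights are allowed.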
To be invariant under the modular transformation, $Y_{e,\mu,\tau}$ are $\mathbf{3}$-plet modular forms in the modular space $S_4^C$ with weights $2k_C=6,4,2$, respectively, and $Y_A$ and $Y_B$ are $\mathbf{3}$-plet modular forms in the modular spaces $S_4^A$, $S_4^B$ with weights $2k_A = 6$ and $2k_B = 4$, respectively. A term, e.g., $L \Phi_{AC} Y_A(\tau_A) N_A^c$, is explicitly written as \begin{eqnarray} \label{eq:example} L \Phi_{AC} Y_A(\tau_A) N_A^c &=& L_1\left[ (\Phi_{AC})_{11} (Y_A)_1 + (\Phi_{AC})_{21} (Y_A)_3 + (\Phi_{AC})_{31} (Y_A)_2 \right] N_A^c \nonumber\\ &+& L_2 \left[ (\Phi_{AC})_{13} (Y_A)_1 + (\Phi_{AC})_{23} (Y_A)_3 + (\Phi_{AC})_{33} (Y_A)_2 \right] N_A^c \nonumber\\ &+& L_3 \left[ (\Phi_{AC})_{12} (Y_A)_1 + (\Phi_{AC})_{22} (Y_A)_3 + (\Phi_{AC})_{32} (Y_A)_2 \right] N_A^c \,,\nonumber\\ &=& (L_1, L_2, L_3) P_{23} \begin{pmatrix} (\Phi_{AC})_{11} & (\Phi_{AC})_{12} & (\Phi_{AC})_{13} \\ (\Phi_{AC})_{21} & (\Phi_{AC})_{22} & (\Phi_{AC})_{23} \\ (\Phi_{AC})_{31} & (\Phi_{AC})_{32} & (\Phi_{AC})_{33} \end{pmatrix}^T P_{23} \begin{pmatrix} (Y_A)_1 \\ (Y_A)_2 \\ (Y_A)_3 \end{pmatrix} N_A^c\,, \nonumber\\ \end{eqnarray} where $L_\alpha$, $(\Phi_{AC})_{i\alpha}$ and $(Y_A)_i$ are entries of $L$, $\Phi_{AC}$ and $Y_A$, respectively, for $i, \alpha=1,2,3$, and $P_{23}$ is the (2,3) row/column-switching transformation matrix. $M_A$ and $M_B$ are singlet modular forms in the modular spaces $S_4^A$, $S_4^B$ with weights $2k_A = 12$ and $2k_B = 8$, respectively. The cross mass term between $N_A$ and $N_B$, $M_{AB}$, is not forbidden. It carries non-trivial weights in both $S_4^A$ and $S_4^B$: $2k_A = 6$ and $2k_B = 4$.
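The equality between the component expansion and the matrix form in Eq.~\eqref{eq:example} can be verified numerically. The following sketch (assuming NumPy, with random complex entries standing in for the field components) checks the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
L   = rng.normal(size=3) + 1j * rng.normal(size=3)            # lepton-doublet triplet
Phi = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # bi-triplet Phi_AC
Y   = rng.normal(size=3) + 1j * rng.normal(size=3)            # Y_A triplet

# (2,3) row/column-switching matrix
P23 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])

# component-by-component expansion (coefficient of N_A^c), as in Eq. (eq:example)
expanded = (
    L[0] * (Phi[0, 0] * Y[0] + Phi[1, 0] * Y[2] + Phi[2, 0] * Y[1])
    + L[1] * (Phi[0, 2] * Y[0] + Phi[1, 2] * Y[2] + Phi[2, 2] * Y[1])
    + L[2] * (Phi[0, 1] * Y[0] + Phi[1, 1] * Y[2] + Phi[2, 1] * Y[1])
)

# compact matrix form: L^T P23 Phi^T P23 Y
matrix_form = L @ P23 @ Phi.T @ P23 @ Y
assert np.isclose(expanded, matrix_form)
```

The two $P_{23}$ insertions implement the $S_4$ singlet contraction $(ab)_{\mathbf{1}} = a_1 b_1 + a_2 b_3 + a_3 b_2$ on both triplet indices of $\Phi_{AC}$.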
The general formulae for $M_A$, $M_B$ and $M_{AB}$ are given by \begin{eqnarray} M_A(\tau_A) &=& m_A Y_1^2(\tau_A) Y_2^2(\tau_A) \,, \nonumber\\ M_B(\tau_B) &=& m_{B,1} [Y_1^6(\tau_B) + Y_2^6(\tau_B)] + m_{B,2} Y_1^3(\tau_B) Y_2^3(\tau_B) \,,\nonumber\\ M_{AB}(\tau_A,\tau_B) &=& m_{AB} [Y_1^3(\tau_A) + Y_2^3(\tau_A)] Y_1(\tau_B) Y_2(\tau_B) \,, \end{eqnarray} where $m_A$, $m_{B,1}$, $m_{B,2}$ and $m_{AB}$ are complex free parameters with a mass dimension. \subsection{Symmetry breaking of $S_4^3$ to the diagonal $S_4$ subgroup} The modular symmetries are broken after the bi-triplet scalars $\Phi_{AC}$ and $\Phi_{BC}$ gain VEVs. Unlike the flavons introduced in most flavour models in the literature, the VEVs of these scalars are not responsible for special Yukawa textures for leptons; rather, their purpose is to break the three modular $S_4$'s to a single modular $S_4$ symmetry, identified as the diagonal subgroup and denoted as $S_4^D$, \begin{eqnarray} S_4^A \times S_4^B \times S_4^C \to S_4^D\,, \end{eqnarray} as depicted in Fig.~\ref{fig:S4s}. The VEVs of $\Phi_{AC}$ and $\Phi_{BC}$ take the following forms \begin{eqnarray} \label{eq:vev} \langle \Phi_{AC} \rangle_{i \alpha} = v_{AC} (P_{23})_{i \alpha}\,,~~ \langle \Phi_{BC} \rangle_{m \alpha} = v_{BC} (P_{23})_{m \alpha}\,. \end{eqnarray} Here again, $P_{23}$ represents the (2,3) row/column-switching transformation matrix, and $\alpha=1,2,3$ corresponds to the entries of the triplet of $S_4^C$, while $i=1,2,3$ ($m=1,2,3$) corresponds to those of $S_4^A$ ($S_4^B$). These VEV structures are not arbitrarily assumed, but can be simply achieved following the standard driving field method. They are essentially related to the group structure of $S_4$, and their explicit forms are basis-dependent \footnote{Explicit forms of scalar VEVs are dependent upon the basis of $S_4$ we use.
As shown in Appendix~\ref{app:S4}, we work in the $T$-diagonal basis in Table~\ref{tab:rep_matrix_main}, where the trivial singlet contraction for two triplets is $(ab)_{\mathbf{1}} = a_1 b_1 + a_2 b_3 + a_3 b_2$. If we had worked in the real basis in Table~\ref{tab:rep_matrix_vacuum}, where the singlet contraction can be simply given by $(\tilde{a}\tilde{b})_{\mathbf{1}} = \tilde{a}_1 \tilde{b}_1 + \tilde{a}_2 \tilde{b}_2 + \tilde{a}_3 \tilde{b}_3$, the VEVs of $\Phi_{AC}$ and $\Phi_{BC}$ would have been proportional to the identity matrix, $\langle \tilde{\Phi}_{AC} \rangle_{i \alpha} = v_{AC} \delta_{i\alpha}$, $\langle \tilde{\Phi}_{BC} \rangle_{m \alpha} = v_{BC} \delta_{m \alpha}$, following the discussion in Appendix~\ref{eq:vacuum}. }. For details of how to derive them without loss of generality, we refer the reader to Appendix~\ref{eq:vacuum}. Although $S_4^A$, $S_4^B$ and $S_4^C$ are broken by these VEVs, the diagonal subgroup $S_4^D$ survives below the symmetry breaking scale, corresponding to the associated transformation $\gamma_A=\gamma_B=\gamma_C$. In more detail, the $S_4^D$ survives since, given any $\gamma_A$ of $S_4^A$, there always exists an element $\gamma_C$ of $S_4^C$ which is identical to $\gamma_A$, and the VEV of $\Phi_{AC}$ is invariant under this ``contravariant'' transformation. Furthermore, there also exists an element $\gamma_B$ of $S_4^B$ which is identical to $\gamma_C$, and the VEV of $\Phi_{BC}$ is also invariant under the transformation. Thus, the modular $S_4^D$ symmetry corresponds to a universal transformation. 
\begin{figure}[ht] \centering \hspace*{1ex} \includegraphics[width=0.5\textwidth]{diagram_S4s.pdf} \caption{Illustration of the breaking of $S_4^A \times S_4^B \times S_4^C \to S_4^D$, identified as the diagonal subgroup, via the VEVs of $\Phi_{AC}$ and $\Phi_{BC}$.} \label{fig:S4s} \end{figure} \subsection{The effective low energy theory with modular $S_4$ symmetry} The effective low energy superpotential, below the $S_4^3$ breaking scale, involves only a single surviving modular $S_4$ symmetry, and may be written as, \begin{eqnarray} w^{\rm eff}_\ell \!&=&\! \left[ \frac{v_{AC}}{\Lambda} L Y_A(\tau_A) N_A^c + \frac{v_{BC}}{\Lambda} L Y_B(\tau_B) N_B^c \right] H_u \nonumber\\ &&+ \left[ L Y_e(\tau_C) e^c + L Y_\mu(\tau_C) \mu^c + L Y_\tau(\tau_C) \tau^c \right] H_d \nonumber\\ &&+ \frac{1}{2} M_A(\tau_A) N_A^c N_A^c + \frac{1}{2} M_B(\tau_B) N_B^c N_B^c + M_{AB}(\tau_A,\tau_B) N_A^c N_B^c\,, \end{eqnarray} where a term such as $L Y_A(\tau_A) N_A^c$ may be explicitly written as \begin{eqnarray} L Y_A(\tau_A) N_A^c &=& \left[ L_1 (Y_A)_1 + L_2 (Y_A)_3 + L_3 (Y_A)_2 \right] N_A^c \,, \end{eqnarray} which is straightforwardly obtained from Eq.~\eqref{eq:example}. This superpotential involves only the single residual $S_4^D$, together with the three modular fields $\tau_A$, $\tau_B$ and $\tau_C$. The above superpotential may be taken as a starting point for models based on a single modular $S_4$ symmetry, where the three moduli fields are introduced in an {\it ad hoc} way and taken to be independent fields. However, we have shown that such a model can consistently arise from a high energy model involving three modular groups $S_4^3$.
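That the bi-triplet contraction collapses to this simple effective term follows from $\langle \Phi_{AC} \rangle = v_{AC} P_{23}$ (Eq.~\eqref{eq:vev}) and $P_{23}^2 = \mathbb{1}$. A numerical sketch of the reduction (assuming NumPy; $v_{AC}$ set to an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=3) + 1j * rng.normal(size=3)
Y = rng.normal(size=3) + 1j * rng.normal(size=3)  # stands in for Y_A(tau_A)
v_AC = 2.7                                        # illustrative VEV scale

P23 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])
Phi_vev = v_AC * P23                              # <Phi_AC>_{i alpha} = v_AC (P23)_{i alpha}

# full contraction L^T P23 <Phi_AC>^T P23 Y ...
full = L @ P23 @ Phi_vev.T @ P23 @ Y
# ... reduces to v_AC (L1 Y1 + L2 Y3 + L3 Y2), the effective term above
reduced = v_AC * (L[0] * Y[0] + L[1] * Y[2] + L[2] * Y[1])
assert np.isclose(full, reduced)
```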
The key point of such a model is that, in the low energy effective theory, the three moduli transform under the same $S_4^D$, i.e., for any $\gamma_D \in S_4^D$, $\tau_A$, $\tau_B$ and $\tau_C$ transform in the following way, \begin{eqnarray} \label{eq:modular_transformation_D} \gamma_D: &&\tau_J \to \gamma_D \tau_J = \frac{a_D \tau_J + b_D}{c_D \tau_J + d_D}\,, \end{eqnarray} for $J=A,B,C$. We also write out transformation properties of leptons \begin{eqnarray} L &\to& L(\gamma_D) = \rho_{\mathbf{3}}(\gamma_D) L \,,\nonumber\\ \alpha^c(\tau_C) &\to& \alpha^c(\gamma_D\tau_C) = (c_D \tau_C + d_D)^{-2k_\alpha} \alpha^c(\tau_C) \,,\nonumber\\ N_A^c(\tau_A) &\to& N_A^c(\gamma_D\tau_A) = (c_D \tau_A + d_D)^{-6} N_A^c(\tau_A) \,,\nonumber\\ N_B^c(\tau_B) &\to& N_B^c(\gamma_D\tau_B) = (c_D \tau_B + d_D)^{-4} N_B^c(\tau_B) \,, \label{eq:field_transformation_D} \end{eqnarray} and those for modular forms \begin{eqnarray} Y_\alpha(\tau_C) &\to& Y_\alpha(\gamma_D\tau_C) = (c_D \tau_C + d_D)^{2k_\alpha} \rho_{\mathbf{3}}(\gamma_D) Y_\alpha(\tau_C) \,,\nonumber\\ Y_A(\tau_A) &\to& Y_A(\gamma_D\tau_A) = (c_D \tau_A + d_D)^{6} \rho_{\mathbf{3}}(\gamma_D) Y_A(\tau_A) \,,\nonumber\\ Y_B(\tau_B) &\to& Y_B(\gamma_D\tau_B) = (c_D \tau_B + d_D)^{4} \rho_{\mathbf{3}}(\gamma_D) Y_B(\tau_B) \,,\nonumber\\ M_A(\tau_A) &\to& M_A(\gamma_D\tau_A) = (c_D \tau_A + d_D)^{12} M_A(\tau_A) \,,\nonumber\\ M_B(\tau_B) &\to& M_B(\gamma_D\tau_B) = (c_D \tau_B + d_D)^{8} M_B(\tau_B) \,,\nonumber\\ M_{AB}(\tau_A,\tau_B) &\to& M_{AB}(\gamma_D\tau_A, \gamma_D\tau_B) = (c_D \tau_A + d_D)^{6} (c_D \tau_B + d_D)^{4} M_{AB}(\tau_A,\tau_B) \,, \label{eq:form_transformation_D} \end{eqnarray} where $\alpha=e,\mu,\tau$ and $k_{e,\mu,\tau} = 3,2,1$. We make a further comment on residual modular symmetries. It is well-known that in classical flavour model building, the residual symmetry for Majorana neutrinos is restricted to $Z_2$ or $Z_2 \times Z_2$. 
In the framework of modular symmetry, the residual symmetry can be relaxed, e.g., to $Z_3$ for $N_A$, as will be applied in section~\ref{sec:4.5}. The reason is that the relevant mass is not a trivial coefficient but a modular form, which can transform non-trivially under the residual modular transformation. This novel feature could be applied to other phenomenological model constructions. For example, the residual symmetry used to stabilise a dark matter candidate is not limited to a $Z_2$, while the latter is necessary in classic models of non-Abelian discrete symmetry \cite{Hirsch:2010ru}. To summarise, we have derived a low energy effective flavon-less leptonic flavour model with one modular $S_4$ symmetry and three independent moduli fields. The importance of this for model building is that, as we shall see shortly, by making use of the different moduli fields, we can access different sets of triplet modular forms, corresponding to having different residual symmetries in different sectors of the theory. This is similar to the traditional approach to model building based on $S_4$, but of course is achieved now without having to introduce flavons with certain vacuum alignments. \subsection{Flavour structure in the charged lepton sector} In the charged lepton sector, only $S_4^C$ plays a role. We assume the VEV of $\tau_C$ is fixed at $\langle \tau_C \rangle = \tau_T =\omega$. Following Eq.~\eqref{eq:residual_T}, we obtain \begin{eqnarray} Y_e(\langle\tau_C\rangle) = \begin{pmatrix} 1\\ 0\\ 0 \end{pmatrix} \,,\quad Y_\mu(\langle\tau_C\rangle) = \begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix}\,,\quad Y_\tau(\langle\tau_C\rangle) = \begin{pmatrix} 0\\ 1\\ 0 \end{pmatrix}\,, \end{eqnarray} for weights $2k_C=6, 4, 2$, respectively. This is a consequence of the residual modular $Z_3^T$ symmetry. These modular forms will lead to diagonal Yukawa couplings for the charged leptons, where all lepton mixing arises from the neutrino sector.
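The diagonality is easy to confirm: with the singlet contraction $(LY)_{\mathbf{1}} = L_1 Y_1 + L_2 Y_3 + L_3 Y_2$, the coupling vector for each right-handed lepton is $P_{23} Y_\alpha$. A minimal sketch (assuming NumPy, overall factors of the modular forms dropped):

```python
import numpy as np

# Triplet modular forms at <tau_C> = omega (overall factors dropped),
# fixed by the residual Z_3^T symmetry
Y_e, Y_mu, Y_tau = np.array([1, 0, 0]), np.array([0, 0, 1]), np.array([0, 1, 0])

# Singlet contraction of two triplets in the T-diagonal basis:
# (ab)_1 = a1 b1 + a2 b3 + a3 b2, implemented via the (2,3) swap matrix
P23 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])

# Charged-lepton Yukawa matrix: column alpha couples L_i to alpha^c
M_e = np.column_stack([P23 @ Y_e, P23 @ Y_mu, P23 @ Y_tau])
assert np.array_equal(M_e, np.eye(3, dtype=int))  # diagonal: mixing comes from neutrinos
```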
Although the diagonal Yukawa couplings are independent parameters, this model does not explain the charged lepton mass hierarchy. \subsection{Flavour structure in the neutrino sector \label{sec:4.5}} In the neutrino sector, by selecting $\langle \tau_A \rangle = \tau_{TS} = \frac{1}{2}+i\frac{\sqrt{3}}{2}$ and $\langle \tau_B \rangle = \tau_{U} = \frac{1}{2}+\frac{i}{2}$, we have residual modular symmetries $Z_3^{TS}$ and $Z_2^U$, respectively. Following the discussion in section~\ref{sec:residual}, we obtain the modular forms for the Yukawa couplings \begin{eqnarray} Y_A(\langle\tau_A\rangle) = \begin{pmatrix} -1\\ 2 \omega\\ 2 \omega^2 \end{pmatrix} \,,\quad Y_B(\langle\tau_B\rangle) = \begin{pmatrix} 0\\ 1\\ -1 \end{pmatrix}\,, \end{eqnarray} by selecting the modular weights of $N_A^c$ and $N_B^c$ in $S_4^A$ and $S_4^B$ to be $2k_A = -6$ and $2k_B = -4$, respectively. $Y_A(\langle\tau_A\rangle)$ and $Y_B(\langle\tau_B\rangle)$ give rise to the $3\times 2$ Dirac neutrino mass matrix $M_D'$. $M_A$, $M_B$ and $M_{AB}$ all take non-zero values at $\langle \tau_A \rangle = \tau_{TS}$ and $\langle \tau_B \rangle = \tau_{U}$. Thus, we obtain a $2 \times 2$ Majorana matrix for $N_A^c$ and $N_B^c$, \begin{eqnarray} M_N = \begin{pmatrix} M_A & M_{AB} \\ M_{AB} & M_B \end{pmatrix} \,. \end{eqnarray} Here, we still use $M_{A}$, $M_{B}$ and $M_{AB}$ to represent the values of $M_{A}(\tau_A)$, $M_{B}(\tau_B)$ and $M_{AB}(\tau_A,\tau_B)$ at the relevant VEVs. $M_N$ can be diagonalised by a unitary matrix $V$ via $V^T M_N V = {\rm diag}\{M_1, M_2\}$, with \begin{eqnarray} V = e^{i \alpha_3} \begin{pmatrix} \hat{c}_R & \hat{s}_R^* \\ -\hat{s}_R & \hat{c}_R^* \end{pmatrix} \,, \end{eqnarray} where $\hat{c}_R \equiv \cos \theta_R e^{i\alpha_1}$ and $\hat{s}_R \equiv \sin \theta_R e^{i\alpha_2}$.
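A property of these two directions that underlies the TM$_1$ mixing derived below is that both are orthogonal to $(2,-1,-1)^T$; since $1+\omega+\omega^2=0$, this holds exactly. A quick numerical check (assuming NumPy):

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)
Y_A = np.array([-1, 2 * omega, 2 * omega**2])  # direction at tau_TS (Z_3^TS preserved)
Y_B = np.array([0, 1, -1])                     # direction at tau_U  (Z_2^U preserved)

v = np.array([2, -1, -1])                      # first column of U_TM1 (unnormalised)
assert np.isclose(Y_A @ v, 0)                  # -2 - 2w - 2w^2 = 0 since 1 + w + w^2 = 0
assert np.isclose(Y_B @ v, 0)
```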
The Dirac mass matrix $M_D$ in the basis where the charged lepton and right-handed neutrino mass matrices are diagonal is obtained through $V$ acting on the right of $M_D'$, which mixes the columns: \begin{eqnarray} M_D = e^{i \alpha_3} \begin{pmatrix} -\hat{c}_R & - \hat{s}_R^* \\ 2\omega^2 \hat{c}_R + \hat{s}_R~ & ~2\omega^2 \hat{s}_R^* - \hat{c}_R^* \\ 2\omega \hat{c}_R - \hat{s}_R~ & ~2\omega \hat{s}_R^* + \hat{c}_R^* \end{pmatrix}\,. \end{eqnarray} Applying the seesaw formula, we obtain \begin{eqnarray} M_\nu &=& (\mu_1 \hat{c}_R^2 + \mu_2 \hat{s}_R^{*2}) \begin{pmatrix} 1 & -2 \omega ^2 & -2 \omega \\ -2 \omega ^2 & 4 \omega & 4 \\ -2 \omega & 4 & 4 \omega ^2 \\ \end{pmatrix} +(\mu_1 \hat{s}_R^2 + \mu_2 \hat{c}_R^{*2}) \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & -1 & 1 \\ \end{pmatrix} \nonumber\\ && + (\mu_1 \hat{c}_R\hat{s}_R - \mu_2 \hat{c}_R^{*} \hat{s}_R^{*} ) \begin{pmatrix} 0 & -1 & 1 \\ -1 & 4 \omega ^2 & 2 i \sqrt{3} \\ 1 & 2 i \sqrt{3} & -4 \omega \\ \end{pmatrix} \,, \end{eqnarray} where $\mu_1$ and $\mu_2$ are real up to an overall phase. There are five physical parameters $\mu_1$, $\mu_2$, $\theta_R$, $\alpha_1$ and $\alpha_2$. The PMNS matrix is obtained by diagonalising the neutrino mass matrix, $U^T M_\nu U = {\rm diag} \{ 0, m_2, m_3 \}$. Since both $Y_A$ and $Y_B$ are orthogonal to $(2,-1,-1)^T$, we directly arrive at the TM$_1$ form of the lepton mixing matrix~\cite{Xing:2006ms, Lam:2006wm, Albright:2008rp, Albright:2010ap}, \begin{equation}\label{TMM} \!\!\!\!\!\!\!\! U_{\rm TM_1} = \left( \begin{array}{ccc} \frac{2}{\sqrt{6}} & - & - \\ -\frac{1}{\sqrt{6}} & - & - \\ -\frac{1}{\sqrt{6}} & - & - \end{array} \right).
\end{equation}% $\rm{TM}_1$ lepton mixing implies three equivalent relations: \begin{equation} \tan \theta_{12} = \frac{1}{\sqrt{2}}\sqrt{1-3s^2_{13}}\ \ \ \ {\rm or} \ \ \ \ \sin \theta_{12}= \frac{1}{\sqrt{3}}\frac{\sqrt{1-3s^2_{13}}}{c_{13}} \ \ \ \ {\rm or} \ \ \ \ \cos \theta_{12}= \sqrt{\frac{2}{3}}\frac{1}{c_{13}} \label{t12p} \end{equation} leading to a prediction $\theta_{12}\approx 34^{\circ}$, in excellent agreement with current global fits, assuming $\theta_{13}\approx 8.5^{\circ}$. By contrast, the corresponding $\rm{TM}_2$ relations imply $\theta_{12}\approx 36^{\circ}$ \cite{Albright:2008rp}, which is on the edge of the three sigma region, and hence disfavoured by current data. $\rm{TM}_1$ mixing also leads to an exact sum rule relation for $\cos \delta$ in terms of the other lepton mixing angles \cite{Albright:2008rp}, \begin{equation} \cos \delta = - \frac{\cot 2\theta_{23}(1-5s^2_{13})}{2\sqrt{2}s_{13}\sqrt{1-3s^2_{13}}} \,. \label{TM1sum} \end{equation} \subsection{Numerical fit} As described in previous subsections, we obtain through the use of modular symmetries a flavon-less effective theory which fulfils TM$_1$ lepton mixing. In this section, we make use of the above analytical sum rules for TM$_1$ lepton mixing as well as the diagonalisation of the $2 \times 2$ symmetric matrices which result from the rotation of the neutrino mass matrix by the TB mixing matrix, following the analytic methods presented in~\cite{King:2015dvf}. We are thus able to express each observable (the 3 mixing angles, the squared mass ratio and the CP-violating phase $\delta$) in terms of the model parameters $(\{ x \}) = (\{\alpha_1, \alpha_2, \theta_R, \mu_1, \mu_2 \})$, i.e. the phases $\alpha_1$ and $\alpha_2$, the angle parametrizing the rotation originating from the RH neutrino sector, and the parameters governing the contributions from $Y_A$ and $Y_B$, $\mu_1$ and $\mu_2$.
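The sum rules of Eqs.~\eqref{t12p} and \eqref{TM1sum} are easy to evaluate numerically. Using the best-fit angles quoted below ($\theta_{13}=8.61^\circ$, $\theta_{23}=49.6^\circ$), a short sketch (assuming NumPy) recovers $\theta_{12}\approx 34.3^\circ$ and a CP phase close to the best-fit $\delta = 290^\circ$:

```python
import numpy as np

deg = np.pi / 180
th13, th23 = 8.61 * deg, 49.6 * deg   # best-fit angles from the numerical fit below
s13, c13 = np.sin(th13), np.cos(th13)

# TM1 solar-angle relation: sin(theta_12) = sqrt(1 - 3 s13^2) / (sqrt(3) c13)
th12 = np.arcsin(np.sqrt(1 - 3 * s13**2) / (np.sqrt(3) * c13)) / deg
assert abs(th12 - 34.3) < 0.1         # ~34 degrees, as quoted in the text

# TM1 sum rule: cos(delta) = -cot(2 th23)(1 - 5 s13^2) / (2 sqrt(2) s13 sqrt(1 - 3 s13^2))
cos_delta = -(1 - 5 * s13**2) / (
    np.tan(2 * th23) * 2 * np.sqrt(2) * s13 * np.sqrt(1 - 3 * s13**2)
)
delta = 360 - np.arccos(cos_delta) / deg  # pick the solution in the fourth quadrant
assert abs(delta - 290) < 2           # consistent with the best-fit delta
```

The sign of $\sin\delta$ is not fixed by the sum rule, so the fourth-quadrant branch is selected here by hand to match the fit.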
These formulas are somewhat complicated and not particularly illustrative, but enable us to easily run a numerical minimisation procedure on a $\chi^2$ function: \begin{equation} \label{eq:chi2} \chi^2 = \sum \left( \frac{P_i(\{ x \}) - {\rm BF}_i}{\sigma_i} \right)^2 \,, \end{equation} where $P_i$ are the model predictions, BF$_i$ the current best-fit values, and the errors $\sigma_i$ correspond here to the average of the $1 \sigma$ ranges for each observable. We use the best-fit values and $1 \sigma$ ranges from NuFit 4.0 \cite{Esteban:2018azc, nufit4}. The minimisation runs over the model parameters, and the observables tested are the 3 PMNS mixing angles, the phase $\delta$, and the absolute masses obtained from the square roots of the squared mass differences (taking into account that we have only 2 RH neutrinos and normal mass ordering, $m_1 = 0$). The obtained best-fit point (BF) corresponds to a $\chi^2 = 0.74$, with the model parameters shown in Table \ref{ta:benchmarks}, together with the respective predictions for the observables, including mixing parameters, neutrino masses, and the effective neutrino mass parameter in neutrino-less double beta decay $m_{ee}= |\mu_1 \hat{c}_R^2 + \mu_2 \hat{s}_R^{*2}|$. These observables (predicted by the analytical formulas for the specific point in parameter space) match the values obtained by performing an entirely numerical diagonalisation for the same point in parameter space. For the best-fit point the observables are all within the $1 \sigma$ range except $\delta = 290^\circ$. For comparison we present also two other benchmark points. In Benchmark 1 (B1), the observables are all within the $1 \sigma$ range except $\delta$, which is slightly smaller ($285^\circ$) than in the best-fit point. Conversely, $\theta_{23}$ deviates slightly from its best-fit value. The total $\chi^2 = 1.6$ is slightly worse. In Benchmark 2 (B2), $\delta = 254^\circ$ is within the $1 \sigma$ range.
Conversely, $\theta_{13}$ deviates slightly from its best-fit value, and $\theta_{23} = 41.5^\circ$ deviates strongly, lying outside the $1 \sigma$ range. The total $\chi^2=55$ is much worse, although we note that this value is somewhat spurious, given that the expression in Eq.~\eqref{eq:chi2} is based on Gaussian distributions, which is not the case for $\theta_{23}$, which for B2 contributes 0.99 of the total $\chi^2$. We take the best-fit value $\theta_{23} = 49.6^\circ$ from NuFit 4.0 \cite{Esteban:2018azc, nufit4}. It is worth emphasising that these predictions originate from the special directions $Y_A$ and $Y_B$ obtained from the fixed points in the respective modular symmetries. The best-fit point observables all lie within the $1 \sigma$ range except $\delta$, which nevertheless lies within its $3 \sigma$ range and takes a value close to maximal ($290^\circ$). \begin{table} \centering \begin{tabular}{| c|c | c |} \hline \hline \multirow{3}{*}{BF} & Para. & \begin{tabular}{| c | ccccc} $\chi^2$ & $\alpha_1$ & $\alpha_2$ & $\theta_R$ & $\mu_1$ & $\mu_2$ \\ \hline 0.74 & $64.53^\circ$ & $20.38^\circ$ & $43.01^\circ$ & $0.00633$\,eV & $0.0114$\,eV \\ \end{tabular} \\\cline{2-3} & Obs. & \begin{tabular}{ccccccc} $\theta_{12}$ & $\theta_{13}$ & $\theta_{23}$ & $\delta$ & $m_2$ & $m_3$ & $m_{ee}$ \\ \hline $34.33^\circ$ & $8.61^\circ$ & $49.6^\circ$ & $290^\circ$ & $0.00860$\,eV & $0.0502$\,eV & $0.00206$\,eV \end{tabular} \\ \hline \hline \multirow{3}{*}{B1} & Para. & \begin{tabular}{| c | ccccc} $\chi^2$ & $\alpha_1$ & $\alpha_2$ & $\theta_R$ & $\mu_1$ & $\mu_2$ \\ \hline 1.6 & $70.16^\circ$ & $16.62^\circ$ & $43.51^\circ$ & $0.00651$\,eV & $0.0135$\,eV \\ \end{tabular} \\\cline{2-3} & Obs.
& \begin{tabular}{ccccccc} $\theta_{12}$ & $\theta_{13}$ & $\theta_{23}$ & $\delta$ & $m_2$ & $m_3$ & $m_{ee}$ \\ \hline $34.33^\circ$ & $8.62^\circ$ & $48.6^\circ$ & $285^\circ$ & $0.00860$\,eV & $0.0502$\,eV & $0.00188$\,eV \end{tabular} \\ \hline \hline \multirow{3}{*}{B2} & Para. & \begin{tabular}{| c | ccccc} $\chi^2$ & $\alpha_1$ & $\alpha_2$ & $\theta_R$ & $\mu_1$ & $\mu_2$ \\ \hline 55 & $358.73^\circ$ & $338.89^\circ$ & $24.65^\circ$ & $0.00533$\,eV & $0.0114$\,eV \\ \end{tabular} \\\cline{2-3} & Obs. & \begin{tabular}{ccccccc} $\theta_{12}$ & $\theta_{13}$ & $\theta_{23}$ & $\delta$ & $m_2$ & $m_3$ & $m_{ee}$ \\ \hline $34.34^\circ$ & $8.56^\circ$ & $41.5^\circ$ & $254^\circ$ & $0.00860$\,eV & $0.0502$\,eV & $0.00319$\,eV \end{tabular} \\ \hline \hline \end{tabular} \caption{Model parameters (Para.) and respective observables (Obs.) for the best-fit point (BF) and two other benchmark points (B1, B2). \label{ta:benchmarks}} \end{table} \section{Conclusions and Discussion} \label{conclusion} In this paper we have considered, for the first time, leptonic flavour models based on multiple moduli fields with an extended finite modular symmetry. We reviewed the case of a single modular symmetry $\overline{\Gamma}$ with a single modulus field $\tau$ and $\mathcal{N} = 1$ supersymmetry, then extended the formalism to include a series of modular groups $\overline{\Gamma}^{1}$, $\overline{\Gamma}^{2}$, ..., $\overline{\Gamma}^{M}$, where the modulus field for each modular symmetry $\overline{\Gamma}^{J}$ is denoted as $\tau_J$, where $J=1,..., M$, resulting in the finite modular symmetry $\Gamma_{N_1}^1\times \Gamma_{N_2}^2 \times \cdots \times \Gamma_{N_M}^M$. We then returned to the case of a single modular symmetry, focussing on the case of modular $S_4$ symmetry and its remnant symmetries, exploring relations of stabilisers of modular transformations, residual symmetries and modular forms in the framework of finite modular symmetry. 
In the case of modular $S_4$ symmetry, several new stabilisers of residual symmetries were identified, where each stabiliser preserves a $Z_2$ or $Z_3$ residual symmetry. We discovered a strong correlation between the modular transformation and the modular form at its stabiliser, namely that {\it a modular form at a stabiliser of any modular transformation is an eigenvector of the representation matrix of the modular transformation.} Based on this correlation, we were able to determine some new types of modular forms without knowing exact expressions for those modular forms. As an application of the preceding results, we constructed a flavour model of leptons involving two right-handed neutrinos and three finite modular symmetries $S_4^A \times S_4^B \times S_4^C$. Here, $S_4^A$ and $S_4^B$ are the modular symmetries associated with the two right-handed neutrinos, respectively, while $S_4^C$ is the modular symmetry in the charged lepton sector. They are connected by two bi-triplet scalars. After these gain VEVs, the three $S_4$'s are broken to a single $S_4^D$, i.e., $S_4^A \times S_4^B \times S_4^C \to S_4^D$. Independent fixed points in the extra dimensions associated with $S_4^A$ and $S_4^B$ specify (flavon-less) special directions that preserve subgroups of the respective symmetries, whereas a scalar transforming as a triplet of both $S_4^A$ and $S_4^C$ and another transforming as a triplet of both $S_4^B$ and $S_4^C$ acquire vacuum expectation values that break $S_4^A \times S_4^B \times S_4^C$ to its diagonal subgroup $S_4^D$. We emphasise that these scalars do not carry any information about flavour. After the three $S_4$'s are broken, we arrive at an effective low energy flavour mixing model with a single $S_4$ modular symmetry but three independent modular fields $\tau_A$, $\tau_B$ and $\tau_C$. The independence of these modular fields allows us to assign different VEVs for them which determine the flavour structure.
We fix the VEV of $\tau_C$ at a stabiliser which preserves a modular $Z^C_3$ symmetry. A diagonal charged lepton mass matrix is obtained. The VEVs of $\tau_A$ and $\tau_B$ are fixed at two other stabilisers, which preserve a different $Z^A_3$ symmetry and a $Z^B_2$ symmetry, respectively. The residual modular symmetries justify the special directions that lead to TM$_1$ mixing. This is similar to the traditional approach to model building based on $S_4$, but of course is achieved now without having to introduce flavons with certain vacuum alignments. Finally, we performed an analysis of the predictions of the model taking into account the existence of RH neutrino mixing (in the model-building basis). When this is taken into account, the 5 observables depend on 4 real model parameters, and we obtain an excellent fit to experiment, with all 3 mixing angles and the squared mass ratio within $1 \sigma$ of their experimental values and a near-maximal value $\delta = 290^\circ$. Having only two right-handed neutrinos, the model predicts $m_1 = 0$, fixing the absolute neutrino mass scale. In conclusion, we have developed a general formalism for multiple modular symmetries, analysed the residual symmetries of modular $S_4$ symmetry, and proposed a realistic model based on modular $S_4^3$ symmetry, which yields the successful trimaximal TM$_1$ lepton mixing, without requiring any flavons. \subsection*{Acknowledgements} IdMV acknowledges funding from the Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) through the contract IF/00816/2015 and partial support by FCT through projects CFTP-FCT Unit 777 (UID/FIS/00777/2019), CERN/FIS-PAR/0004/2017 and PTDC/FIS-PAR/29436/2017 which are partially funded through POCTI (FEDER), COMPETE, QREN and EU.
SFK and YLZ acknowledge the STFC Consolidated Grant ST/L000296/1 and the European Union's Horizon 2020 Research and Innovation programme under Marie Sk\l{}odowska-Curie grant agreements Elusives ITN No.\ 674896 and InvisiblesPlus RISE No.\ 690575.
\section{System Description}\label{sec:system} This section presents our CMT system and describes how each of its components addresses the generalization gaps discussed above. \subsection{Continuous MLM Training} Pretraining a masked language model (MLM) on a domain-specific corpus has been shown to enhance downstream tasks, for example in the scientific domain~\cite{Beltagy2019SciBERT}. Following this direction, we continue MLM training of SciBERT on the CORD-19 corpus to narrow the domain gap to COVID literature. The CMT system inherits the architecture and vocabulary of SciBERT and uses WordPiece for unsupervised tokenization of COVID-19 papers. In all of our experiments, we use the full text of the documents, not just the abstracts, and, following the original BERT recipe, randomly mask 15\% of all WordPiece tokens in each sequence. \subsection{Few-Shot Learning} Few-shot learning has been extensively studied to improve model training in the low-data regime~\cite{wang2019few}. Two few-shot learning methods are applied in our system to alleviate the label scarcity of the COVID domain: weak supervision data selection and in-domain data generation. \textbf{Selective Weak Supervision.} It is natural to regard the rich medical-related annotations in MS MARCO as weak supervision data for training the neural ranker in the low-resource COVID search. 
Here we leverage ReInfoSelect~\cite{zhang2020selective}, a data selection method based on reinforcement learning, to distinguish valuable weak supervision signals for robust training. Given the medical subset of MS MARCO as the training set and a small amount of annotated data (TREC-COVID Round 1) as the target set, ReInfoSelect uses the policy gradient to propagate reward signals from the target data to a weak supervision data selector, where the evaluation result of the ranker on the target data serves as the reward. Our ranker is SciBERT, warm-started from the MLM stage described above. \textbf{Contrastive Query Generation.} There are also apparent differences between medical MS MARCO and COVID, even though both belong to the medical field. To further mitigate this gap, we use advances in query generation to automatically construct related query-document pairs in the COVID domain. In a first attempt, we trained a generator on medical MS MARCO that produces queries by encoding a single document~\cite{ma2020zero}. However, queries generated from single documents are often too general, so that a generated query may be related to many other papers. We therefore propose a simple but effective method for generating more distinguishable queries: instead of encoding a single document, our generator learns to produce queries from a pair of contrasting documents that are related but cover different topics. Obtaining a contrast document pair involves two steps. First, we treat the queries generated in the first attempt as the first generation and use BM25 to retrieve relevant documents for each of them. 
Then two documents randomly selected from the relevance list returned by BM25 are concatenated and fed to the contrast query generator to produce a second-generation query, where the contrast generator is pretrained on medical MS MARCO triples $\langle$query, positive doc, negative doc$\rangle$. \input{Diagrams/overall} \input{Diagrams/rankdepth} \subsection{Dense Retrieval} Dense retrieval, built on distributed representations, can match queries and documents at the semantic level even if they use different vocabularies, and thus complements the sparse retriever by design~\cite{gao2020complementing, karpukhin2020dense}. Given this, the CMT system combines sparse BM25 with the dense Approximate Nearest Neighbor (ANN) method. ANN searches for the nearest neighbor documents with respect to the similarity between the query $q$ and each document $d$. The core of ANN is the dense encoder $E(\cdot)$, which maps text to distributed vectors and is used to build an index over all documents available for retrieval. At search time, the encoder $E(\cdot)$ maps the query into the same representation space and retrieves the $k$ documents whose vectors are closest to the query vector. In our system, we define the similarity between queries and documents as the dot product of their vectors and use BERT as the dense encoder $E(\cdot)$: $$ sim(q, d) = E(q)^{T}E(d). $$ Training the ANN encoder can be formulated as a learning-to-rank problem over a training dataset $D = \{(q_i, d_{i}^{+}, D_{i}^{-})\}_{i=1}^{N}$ of $N$ instances, where each instance contains a query $q_i$, its relevant (positive) document $d_{i}^{+}$, and $m$ irrelevant (negative) documents $D_{i}^{-} = \{d_{ij}^{-}\}_{j=1}^{m}$. 
The training objective is to learn a distributed space in which the positive document has a higher similarity to the query than all negative documents: $$ L = -\frac{1}{N}\sum_{i=1}^{N}\text{log}\frac{e^{sim(q_i, d_{i}^{+})}}{e^{sim(q_i, d_{i}^{+})} + \sum_{j=1}^{m}e^{sim(q_i, d_{ij}^{-})}}, $$ where the training instances come from medical MS MARCO. \end{document} \section{Introduction} Recent years have witnessed continuous successes of neural ranking models in information retrieval~\cite{pang2017deeprank, dai2018convolutional, macavaney2019cedr, xiong2020approximate}. Most notably, deep pretrained language models (LMs) achieve state-of-the-art performance on several web search benchmarks~\cite{yang2019simple, nogueira2019passage, craswell2020overview}. Their success relies on semantic information learned from general-domain corpora during language model pretraining~\cite{craswell2020overview, zhang2019generic}. However, ranking models in specific domains usually face a domain adaptation problem, which stems from two generalization gaps between the general and the specific domain. The first gap derives from the discrepancy between vocabulary distributions in different domains. Taking the COVID domain as an example~\cite{wang2020cord, voorhees2020trec}, the earliest related publications appeared at the end of 2019; even pretrained LMs targeting the biomedical domain~\cite{Beltagy2019SciBERT, lee2020biobert} are unfamiliar with new medical terms like COVID-19 because their pretraining corpora did not contain such terminology. The other gap is label scarcity: in specific search scenarios, such as the biomedical and scientific domains, large-scale relevance labels are a luxury. In addition, most information retrieval (IR) systems use sparse ranking methods such as BM25 in the first-stage retrieval, which rely on term-matching signals to calculate the relevance between query and document. 
Nevertheless, these systems may fail when queries and documents use different terms to describe the same meaning, which is known as the vocabulary mismatch problem~\cite{furnas1987vocabulary, croft2010search}. Vocabulary mismatch in sparse retrieval has become an obstacle for existing IR systems, especially in specific domains with many in-domain terminologies. This paper presents a solution that alleviates the domain adaptation problem with three core techniques. The first conducts domain-adaptive pretraining (DAPT)~\cite{gururangan2020don} to help pretrained language models learn the semantics of domain-specific terminology and keep their language knowledge up to date. The second uses Contrast Query Generation (ContrastQG) and ReInfoSelect~\cite{zhang2020selective} to mitigate label scarcity in the specific domain; ContrastQG and ReInfoSelect respectively generate and filter pseudo relevance labels to further improve ranking performance. Finally, our system integrates dense retrieval to alleviate the vocabulary mismatch bottleneck of sparse retrieval. Dense retrieval encodes queries and documents into dense vectors and measures their relevance in a latent semantic space~\cite{karpukhin2020dense, gao2020complementing, luan2020sparse, chang2020pre, xiong2020approximate}. Using the above techniques, our system achieves the best performance among non-manual groups in Round 2 of TREC-COVID~\cite{voorhees2020trec}, a TREC task that evaluates information retrieval systems for searching COVID-19 related literature. The next section analyzes the generalization gaps and vocabulary mismatch faced by COVID-domain search. Sec.~\ref{sec:system} and Sec.~\ref{sec:implement} describe in detail how our system alleviates these problems. Sec.~\ref{sec:evaluation} presents the evaluation results and a hyperparameter study. 
In Sec.~\ref{sec:attempts} and Sec.~\ref{sec:concern}, we discuss our failed attempts and our concerns about the residual collection evaluation~\cite{salton1990improving} used in TREC-COVID. \section{Data Study} This section studies the generalization gaps from the web to the COVID domain, and the vocabulary mismatch problem of sparse retrieval. \textbf{Domain Discrepancy.} Most existing pretrained language models split uncommon words into subwords, which aims to alleviate the out-of-vocabulary problem~\cite{sennrich2015neural}. As shown in Figure~\ref{fig:corpus_mismatch}, the subword ratio of TREC-COVID queries is dramatically higher than that of the web-domain dataset MS MARCO~\cite{bajaj2016ms}. This shows that existing pretrained language models treat most COVID-domain terminologies as unfamiliar words, indicating a considerable discrepancy between existing pretraining corpora and the COVID domain. \textbf{Label Scarcity.} Label scarcity in COVID-domain search is very prominent: only 30 queries were judged in the second round of TREC-COVID. In contrast, medical MS MARCO, the medical subset of MS MARCO filtered in previous work~\cite{macavaney2020sledge}, contains more than 78,800 annotated queries. \textbf{Vocabulary Mismatch.} We observed that BM25 covers only 35\% of the relevant documents in its top 100 retrieved documents. This reveals that retrieving relevant documents based solely on term-matching signals hinders the search system's effectiveness. \input{Diagrams/corpus_mismatch} \section{System Description}\label{sec:system} Our system employs a two-stage retrieval architecture, which uses BM25 for base retrieval and SciBERT~\cite{Beltagy2019SciBERT} for reranking. Domain-adaptive pretraining and two few-shot learning techniques are used to mitigate the generalization gaps faced by SciBERT in the COVID domain. Dense retrieval is also incorporated into our system to alleviate BM25's vocabulary mismatch problem. 
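For reference, the term-matching behaviour of the BM25 base retrieval can be sketched in a few lines; the toy corpus is illustrative, and the $k_1$/$b$ defaults mirror common Anserini settings (an assumption of this sketch, not a detail reported in this paper). A query term absent from the corpus contributes nothing to the score, which is precisely the vocabulary mismatch problem.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=0.9, b=0.4):
    """Okapi BM25: term-matching relevance of one document to a query.
    corpus is a list of tokenized documents used for IDF and length stats."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for t in set(query_terms):
        df = sum(1 for d in corpus if t in d)
        if df == 0:
            continue  # unmatched vocabulary contributes nothing: the mismatch problem
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        norm = tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(doc_terms) / avgdl))
        score += idf * norm
    return score

# Toy corpus: three short tokenized "papers" (purely illustrative).
corpus = [["covid", "vaccine", "trial"], ["influenza", "vaccine"], ["covid", "origin"]]
s = bm25_score(["covid", "vaccine"], corpus[0], corpus)
```

A document matching both query terms outscores one matching a single term, while a query using out-of-corpus wording scores zero, however relevant the document may be semantically.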
\subsection{Domain-Adaptive Pretraining} SciBERT is used in our system since it is pretrained on scientific texts and biomedical publications. However, COVID is a new concept that did not appear in previous pretraining corpora. Therefore, we conduct domain-adaptive pretraining (DAPT)~\cite{gururangan2020don} for SciBERT. Our approach is straightforward: we continue training SciBERT on the CORD-19 corpus~\cite{wang2020cord}, a growing collection of scientific papers about COVID-19 and coronaviruses. \subsection{Few-Shot Learning} We introduce two few/zero-shot learning methods, ContrastQG and ReInfoSelect~\cite{zhang2020selective}, to alleviate the label scarcity challenge when fine-tuning the neural ranking model. Specifically, we first use ContrastQG to generate weakly supervised data in a zero-shot manner and then utilize a weak supervision data selection method, ReInfoSelect, to recognize high-quality training data. \textbf{ContrastQG} is a zero-shot data synthesis method that generates queries to form weakly supervised relevance signals. Unlike prior work~\cite{ma2020zero}, ContrastQG synthesizes a query from a pair of related texts rather than a single text, which captures the specificity between two documents and generates more meaningful queries instead of keyword-style ones. The synthesis process uses two query generators, $QG$ and $ContrastQG$, both implemented with standard GPT-2~\cite{radford2019language}. $QG$ is trained on medical MS MARCO's positive passage-query pairs ($d_+$, $q$) following the previous method~\cite{ma2020zero}. $ContrastQG$ is trained directly on medical MS MARCO's triples by encoding the concatenated text of positive and negative passages ($d_+$, $d_-$) to generate the query $q$. 
At inference time, we first leverage $QG$ to generate a query $q$ from a single COVID-domain document $d$: $$ q = QG(d). $$ Then we use BM25 to retrieve two related documents ($d'_+$, $d'_-$) that show different degrees of relevance to the generated query $q$. Finally, $ContrastQG$ generates another query $q'$ from the two contrasting documents ($d'_+$, $d'_-$): $$ q' = ContrastQG(d'_+, d'_-). $$ The synthetic triple $(q', d'_+, d'_-)$ is used as weakly supervised data to train the neural ranker. \textbf{ReInfoSelect}~\cite{zhang2020selective} uses reinforcement learning to select weak supervision data. ReInfoSelect evaluates the neural ranker's performance on the target data and regards the NDCG difference as the reward. The reward signal from the target data is then propagated to guide the data selector via the policy gradient. In our system, we use ContrastQG and medical MS MARCO to construct the weakly supervised data, and the annotated data of TREC-COVID Round 1 as the target data. The trial-and-error learning mechanism of ReInfoSelect selects appropriate weakly supervised data according to the neural ranker's performance in the target domain, which helps to further mitigate the domain discrepancy. \input{Diagrams/overall} \subsection{Dense Retrieval} Dense retrieval maps queries and documents into the same distributed representation space and retrieves related documents based on the similarities between document vectors and query vectors~\cite{karpukhin2020dense,xiong2020approximate}. Let each training instance contain a query $q$, a relevant (positive) document $d_{+}$, and $m$ irrelevant (negative) documents $D_{-} = \{d_{-}^{j}\}_{j=1}^{m}$. Dense retrieval first encodes the query $q$ and every document $d$ into dense vectors $\boldsymbol{q}$ and $\boldsymbol{d}$, and then computes their similarity $sim(\boldsymbol{q}, \boldsymbol{d})$. 
The training objective can be formulated as learning a distributed representation space in which the positive document has a higher similarity to the query than all negative documents: $$ loss(q, d_{+}, D_{-}) = -\text{log}\frac{e^{sim(\boldsymbol{q}, \boldsymbol{d_{+}})}}{e^{sim(\boldsymbol{q}, \boldsymbol{d_{+}})} + \sum_{j=1}^{m}e^{sim(\boldsymbol{q}, \boldsymbol{d_{-}^{j}})}}, $$ where the similarity $sim(\cdot, \cdot)$ is the dot product between vectors. \section{Implementation Details} \label{sec:implement} In this section, we describe the system's implementation details. \textbf{Dataset.} The testing data of TREC-COVID Round 2 contains the May 1, 2020 version of the CORD-19 document set~\cite{wang2020cord} (59,851 COVID-related papers) and 35 queries written by biomedical professionals. Among these queries, the first 30 were judged in Round 1. In our experiments, we use TREC-COVID Round 1's annotated data as the development set (30 queries) and medical MS MARCO~\cite{macavaney2020sledge} as the training data (78,895 queries). \textbf{System Setup.} For data preprocessing, we concatenate the title and abstract to represent each document and remove stop words from all queries. Our system uses the BM25 implementation of Anserini~\cite{yang2017anserini} for base retrieval and adopts the dense retrieval implementation provided by Gao et al.~\cite{gao2020complementing}. The SciBERT-based neural ranker~\cite{Beltagy2019SciBERT} is used in the dense retrieval and reranking stages~\cite{macavaney2020sledge}, trained with a learning rate of 2e-5 and a batch size of 32. We set the warm-up proportion to 0.1 and limit the maximum sequence length to 256. The NDCG@10 score on the development set is used to measure convergence and is calculated every three training steps. Our system is based on PyTorch, and the training it involves can be run on a single GeForce RTX 2080 Ti. 
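The dense-retrieval training objective formulated above is a standard softmax contrastive loss over one positive and $m$ negative documents. The numpy sketch below illustrates only the loss computation, with fixed random vectors in place of the actual BERT/SciBERT encoders; the function name and dimensions are illustrative.

```python
import numpy as np

def dense_retrieval_loss(q_vec, pos_vec, neg_vecs):
    """Softmax contrastive loss: the positive document should score
    higher (by dot product) than all negatives for the query."""
    scores = np.concatenate(([q_vec @ pos_vec],
                             [q_vec @ n for n in neg_vecs]))
    # log-softmax of the positive's score (index 0), computed stably
    scores -= scores.max()
    return -(scores[0] - np.log(np.exp(scores).sum()))

# Random stand-ins for encoded query, positive and negative documents.
rng = np.random.default_rng(0)
q = rng.normal(size=8)
d_pos, d_negs = rng.normal(size=8), rng.normal(size=(4, 8))
loss = dense_retrieval_loss(q, d_pos, list(d_negs))
```

The loss is always positive and approaches zero as the positive's dot-product score dominates all negatives, matching the formula in the text.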
\input{Diagrams/qid2ndcg} \input{Diagrams/residual} \section{Evaluation Results} \label{sec:evaluation} This section presents the evaluation results and hyperparameter studies. \input{Diagrams/rankdepth} \subsection{Overall Results} Table~\ref{tab:overall} shows the overall performance of different models on the TREC-COVID task, comparing the three top systems from the Round 2 evaluation and several variants of our system. Our system achieved the best performance in Round 2 of TREC-COVID. The detailed experimental results show that our method significantly improves the ranking performance of SciBERT in the COVID domain. Domain-adaptive pretraining (DAPT) improves SciBERT, which illustrates that learning the semantics of the new terminologies is crucial for language models. The system's performance is then further improved by about 6.5\% in NDCG@10 through ContrastQG and ReInfoSelect: ContrastQG generates a large number of pseudo relevance labels, providing more training guidance for neural rankers in the specific domain, while ReInfoSelect further boosts the model with more fine-grained selected supervision. The most significant improvement comes from the fusion with dense retrieval, which increases the P@5 score by 11.8\%. This result shows that dense retrieval can significantly improve retrieval effectiveness by alleviating sparse retrieval's vocabulary mismatch problem. \subsection{Hyperparameter Study} Among all hyperparameters, we found that the reranking depth significantly impacts the neural ranking model's effectiveness. As shown in Table~\ref{tab:rankdepth}, SciBERT's performance is severely limited at shallow reranking depths ($\leq$20), mainly because of the limited ranking accuracy of BM25. When the reranking depth increases to 50 and 100, the neural ranker shows stable performance and achieves its best results. Nevertheless, the reranking accuracy begins to drop as the depth increases further. 
A possible reason is that the neural ranker is not good enough to distinguish truly relevant documents when more noisy documents are included. \subsection{Query Analysis} Figure~\ref{fig:qid2ndcg} shows the testing results for each query. The first 30 queries were judged in Round 1, and the others are newly added in Round 2 (queries 31-35). Our system outperforms the baselines on most queries with previous annotations. Besides, our system is also comparable to the T5 Fusion system on the new queries and avoids the sharp drops of the SciBERT Fusion system (e.g., on the 34th query), which demonstrates our system's robustness. \section{Failed Attempts} \label{sec:attempts} This section discusses some of our failed attempts and the lessons learned. \textbf{Manual Labeling.} A straightforward approach to mitigating label scarcity is to manually annotate more data within the domain. We recruited three medical students who compiled 50 COVID-related queries and assigned relevance labels to the top 20 documents retrieved by BM25 for each query. However, our annotations did not reach good agreement with TREC-COVID's annotations. \textbf{Corpus Filtering.} MacAvaney et al.~\cite{macavaney2020sledge} proposed narrowing the retrieval scale by filtering out documents published before 2020. Nevertheless, our analysis found that this method excluded more than 80\% of the documents in the second-round corpus, dropping a large amount of useful COVID-related literature, such as papers on SARS and MERS. Thus, we did not adopt this method in our system. \textbf{Neural Reranker.} We also tried two other neural ranking models besides SciBERT for document reranking: BERT~\cite{devlin2019bert} and Conv-KNRM~\cite{dai2018convolutional}. Our experimental results show that BERT-Large has no obvious advantage over SciBERT-Base, and Conv-KNRM performs the worst. 
The main reason for Conv-KNRM's poor performance is that we did not use its subword version~\cite{hofstatter2019effect}, which led to a severe out-of-vocabulary problem. \textbf{Fusion Attempts.} We tried two methods to integrate dense retrieval into our system: combining dense retrieval with BM25 in the base retrieval stage, or fusing dense retrieval directly into SciBERT's reranking process. The second method worked better in our limited attempts. \section{Concerns on Residual Evaluation} \label{sec:concern} This section discusses our observations about the residual collection evaluation used in the TREC-COVID task. In residual collection evaluation, test queries can be divided into \textit{old queries} and \textit{new queries}. The old queries were annotated in previous rounds, but their annotated documents are removed from the collection before scoring. TREC-COVID allows IR systems to use the relevance judgments of old queries and classifies such systems as feedback runs. Figure~\ref{fig:residual} shows the evaluation results of the top 10 feedback systems in Round 2 of TREC-COVID. Although these systems performed closely in overall scores, they show significant differences between the old and new queries. For example, the 2nd system performs far better on the new queries than on the old ones. In contrast, some systems' ranking accuracy on the new queries is considerably lower than on the old queries, even worse than the BM25 Fusion base retrieval, as with the 3rd-5th and 9th systems. A powerful search system should achieve balanced performance on known and unknown queries. However, this result suggests that the residual collection evaluation may be biased towards seen queries, which are much easier than the unseen queries that dominate real production scenarios.
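To make the protocol just discussed concrete, the following sketch scores a ranking under residual collection evaluation: documents judged in earlier rounds are removed before computing precision on the residual list. The document IDs and judgments are invented for illustration.

```python
def residual_precision_at_k(ranking, judged_old, relevant, k=5):
    """Residual collection evaluation: drop documents already judged in
    earlier rounds, then score precision@k on the residual ranking."""
    residual = [doc for doc in ranking if doc not in judged_old]
    top = residual[:k]
    return sum(doc in relevant for doc in top) / k

# Invented example: a 7-document ranking, Round 1 judgments removed,
# Round 2 judgments used for scoring.
ranking = ["d1", "d2", "d3", "d4", "d5", "d6", "d7"]
judged_old = {"d1", "d4"}          # judged in an earlier round, removed before scoring
relevant = {"d2", "d5", "d6"}      # current-round judgments
p = residual_precision_at_k(ranking, judged_old, relevant, k=5)
```

Because previously judged documents never count, a system that merely reproduces known judgments gains nothing, but a system tuned on the old queries can still look strong overall, which is the bias discussed above.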
\section{Introduction} The resolution-of-identity (RI)\cite{Whi73,Dun79,Min82,Als88,Fey93,Vah93,Klo02} stands as a central technique in quantum chemistry, relying on the expansion of $\phi_n \phi_m $ co-densities over an auxiliary atomic basis set $\lbrace \beta \rbrace$ that scales linearly with the number of atoms. Even though the auxiliary basis sets are typically three times larger than the corresponding atomic orbital (AO) basis sets supporting the $ \phi_n $ Hartree-Fock or Kohn-Sham molecular orbitals, this represents a considerable saving both in terms of memory and number of operations to be performed, which comes at the price of a moderate accuracy loss. While RI was introduced initially to facilitate the calculation of 2-electron 4-center $(mn|kl)$ Coulomb integrals, operators that find an exact expression in the product space between occupied and virtual eigenstates can also be expressed more compactly using auxiliary bases. Such is the case of the independent-electron density-density susceptibility $\chi_0({\bf r},{\bf r}';\omega)$, which is of central importance to the present study. In particular, the scaling of the related random phase approximation (RPA) approach, \cite{Boh53,Gun76,Lan77} a popular low-order perturbative approach to correlation energies beyond density functional approximations, can be reduced from $\mathcal{O}(N^6)$ to $\mathcal{O}(N^4)$. \cite{Esh10} The computational efficiency and accuracy of RI techniques strongly depend on the scheme adopted to build the appropriate coefficients expressing the $ \phi_n \phi_m $ co-densities on the auxiliary basis. The original density-fitting (RI-SVS) approach\cite{Dun79,Als88,Fey93} expresses these coefficients as a direct overlap $\langle \phi_m \phi_n | \beta \rangle$, requiring the calculation of the sparse $\langle \alpha {\alpha}' | \beta \rangle$ coefficients, with $\lbrace \alpha \rbrace$ being the AO basis set used to expand the molecular orbitals $ \phi_n $. 
The now widely adopted Coulomb-fitting (RI-V) approximation \cite{Min82,Vah93} requires on the other hand the calculation of the denser 3-center Coulomb integrals $(\alpha {\alpha}' | \beta )$, displaying much less sparsity than $\langle \alpha {\alpha}' | \beta \rangle$ integrals due to the long-range nature of the Coulomb operator. The Coulomb-fitting formalism is known to be more accurate than the density-fitting scheme for auxiliary basis sets of similar sizes, \cite{Ren12,Duchemin16} raising the standard issue of the trade-off between accuracy and computational/memory costs. The use of short-range or attenuated Coulomb operators, \cite{Jun05,Rei08,Sod08,Ihr15} or corrective techniques such as ``multipole-preserving'' constraints to the density-fitting scheme,\cite{Als88,Duchemin16} allows one to tune the accuracy-to-cost ratio between these two standard RI approximations. In conjunction with other powerful techniques, such as the Laplace transform, \cite{Alm91} exploiting the sparsity of the RI-SVS density-fitting $\langle \alpha {\alpha}' | \beta \rangle$ coefficients in the limit of large systems was shown to allow cubic-scaling RPA calculations. \cite{Wil16} As a trade-off between accuracy and efficiency, a Coulomb-attenuated variation of the Coulomb-fitting (RI-V) RPA was recently explored to obtain a low-scaling formulation, \cite{Lue17} exploiting further the real-space decay properties of the Laplace-transformed ``pseudo'' density matrices expressed in the AO basis. \cite{Has93,Aya99,Kal15,Sch16} The efficiency of this latter family of approaches depends on the electronic properties of the system of interest, and is different in nature from the sparsity associated with specific resolution-of-identity formalisms. 
In the present study, we explore an alternative approach for reducing computational and memory loads by assessing on a large set of molecules the merits of a separable resolution-of-the-identity formalism relying on a density fitting scheme over compact sets of real-space points $\lbrace {\bf r}_k \rbrace$. Our approach preserves the use of standard Gaussian atomic orbitals and auxiliary basis sets for all-electron calculations, targeting the accuracy of the Coulomb-fitting (RI-V) formalism. The set-up of the fitting procedure scales cubically with system size. The accuracy of our approach is first validated by an extensive benchmark of the exchange and MP2 correlation energies over a large set of molecules. Combined with the Laplace transform technique, and following the so-called space-time approach for calculating the susceptibility operator,\cite{Roj95,Kal14} the calculation of the RPA correlation energy within the present real-space quadrature approach is shown to scale cubically in terms of operations and quadratically in terms of memory, without invoking any sparsity or localization properties. The accuracy of the present real-space RI-RPA formalism is further shown to match that of the standard quartic-scaling Coulomb-fitting RI-RPA calculations for a large set of molecules including the oligoacenes, $C_{60}$ and a larger octapeptide angiotensin II molecule (146 atoms including 71 H atoms) proposed by Eshuis and coworkers in the early days of RI-RPA implementations. \cite{Esh10} \section{Theory} In this Section, we briefly outline the standard RI-V and RI-SVS approximations, introducing the notations used throughout the paper. We then discuss separable resolution-of-identities and present our specific implementation preserving the use of standard atomic and auxiliary basis sets. 
The present approach relies on weighted real-space $\delta({\bf r}-{\bf r}_k)$-functions to express the density fitting coefficients, relating co-densities to auxiliary basis functions through real-space quadratures. The scheme to optimize the distribution of the ${\bf r}_k$ and related weights is presented, and compared to other real-space quadrature formalisms. We demonstrate in particular that the computational cost associated with the setup of the present RI approach scales cubically with the system size. We then show how such a separable RI allows one to obtain cubic-scaling RPA with a low crossover point with respect to the standard quartic-scaling RI-RPA formalism when combined with the Laplace transform technique. We conclude this Section by presenting the technical details and parameters adopted in this study to perform the calculations illustrating the accuracy and scaling properties of the present approach. \subsection{Standard Resolution of the Identity} The resolution-of-identity (RI) approximation\cite{Whi73,Dun79,Min82,Als88,Fey93,Vah93,Klo02} relies on the expansion of molecular orbital co-densities $ \phi \phi' $ over an auxiliary basis set $\lbrace \beta \rbrace$, namely: \begin{equation} \begin{split} \phi(\mathbf{r})\phi'(\mathbf{r}) \simeq &\sum_{\beta} \mathcal{F}_{\beta}(\phi\phi') \; \beta(\mathbf{r}) \\ \doteqdot & \, \mathcal{F}(\phi\phi';\mathbf{r}) \\ \label{generic} \end{split} \end{equation} The fit $\mathcal{F}$ is realized through an ensemble of measures $\{\mathcal{F}_{\beta}\}$, mapping the $ \phi \phi' $ product-space to the $ \beta $ auxiliary subspace defined so as to scale linearly with the number of atoms. 
Typical examples of such procedures are the standard RI-V and RI-SVS fitting approaches that use respectively: \begin{eqnarray} \mathcal{F}_{\beta}^{V}(\phi\phi') = \sum_{\beta'} [V^{-1}]_{\beta\beta'} \; (\beta'|\phi\phi') \\ \mathcal{F}_{\beta}^{SVS}(\phi\phi') = \sum_{\beta'} [S^{-1}]_{\beta\beta'} \; \langle\beta'|\phi\phi'\rangle \end{eqnarray} where $V$ and $S$ represent respectively the Coulomb $(\beta|\beta')$ and overlap $\langle\beta|\beta'\rangle$ matrices associated with the auxiliary basis set, and $[X^{-1}]_{\beta\beta'}$ denotes the $(\beta,\beta')$ entry of the $X$ inverse matrix. To explicitly define our $\langle \cdot | \cdot \rangle$ and $( \cdot | \cdot )$ notations, we write: \begin{eqnarray*} (\beta|\phi\phi') &=& \iint d{\bf r}d{\bf r}' \; \frac{ \beta({\bf r}) \phi({\bf r}')\phi'({\bf r}') }{| {\bf r} - {\bf r}' | } \\ \langle\beta |\phi\phi'\rangle &=& \int d{\bf r} \;\beta({\bf r}) \phi({\bf r})\phi'({\bf r}) \end{eqnarray*} As shown in Ref.~\citenum{Duchemin16}, both fitting techniques can be combined, preserving the Coulomb-fitting RI-V approach for low angular momentum auxiliary $ \beta $ atomic orbitals. As emphasized above, the number of $\langle\beta|\alpha\alpha'\rangle$ overlap matrix elements in the RI-SVS approximation scales linearly with system size, offering a first strategy for reducing computational cost and memory thanks to sparsity. On the contrary, the number of $(\beta |\alpha\alpha')$ Coulomb integrals scales quadratically, so that sparsity, or sparse tensor algebra, is difficult to exploit within the more accurate Coulomb-fitting (RI-V) approach. 
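The RI-V coefficients defined above amount to solving a linear system in the auxiliary Coulomb metric. The following numpy sketch illustrates this with a random symmetric positive-definite stand-in for $V$ and a random vector standing in for the 3-center integrals of a single co-density; it shows only the linear algebra, not the actual integral evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_aux = 6          # size of the auxiliary basis {beta}

# Stand-in for the Coulomb metric V_{beta beta'} = (beta|beta'):
# any symmetric positive-definite matrix has the right structure.
A = rng.normal(size=(n_aux, n_aux))
V = A @ A.T + n_aux * np.eye(n_aux)

# Stand-in for the 3-center integrals (beta|phi phi') for one co-density.
b = rng.normal(size=n_aux)

# RI-V coefficients: F_beta = sum_beta' [V^{-1}]_{beta beta'} (beta'|phi phi').
# Solving the linear system avoids forming V^{-1} explicitly.
coeffs = np.linalg.solve(V, b)

# Check: the coefficients reproduce the Coulomb projections of the fitted density.
assert np.allclose(V @ coeffs, b)
```

In practice one co-density per AO pair must be fitted, which is why the quadratic number of dense $(\beta|\alpha\alpha')$ integrals dominates the cost of RI-V.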
\subsection{Separable RI} A separable expression for the resolution-of-the-identity can be obtained through a set of separable measures $\{\langle f_k|\}$ on the co-densities, namely: \begin{equation} \begin{split} \mathcal{F}_{\beta}(\phi\phi') & = \sum_{k} [M]_{\beta k} \; \langle f_k|\phi\phi'\rangle\\ & = \sum_{k} [M]_{\beta k} \; \langle f_k|\phi\rangle \; \langle f_k|\phi'\rangle\\ \end{split} \end{equation} where the coefficients of $M$ have yet to be defined. Though it is not the only option, a trivial way to obtain separable measures is to work with $\delta(\mathbf{r}-\mathbf{r}_k)$ distributions centered on a set of $N_{k}$ real-space locations $\{\mathbf{r}_k\}$. Working with real-space (RS) measures, the $\mathcal{F}^{RS}$ density fitting procedure then takes the simple form \begin{equation} \begin{split} \mathcal{F}^{RS}_{\beta}(\phi\phi') & = \sum_{k} [M]_{\beta k} \langle \delta(\mathbf{r}-\mathbf{r}_k) |\phi\phi'\rangle\\ & = \sum_{k} [M]_{\beta k} \; \phi(\mathbf{r}_k) \; \phi'(\mathbf{r}_k) \label{rirsdef} \end{split} \end{equation} The clear advantage of separability is that the two molecular orbitals $\phi \phi'$, originally entangled in e.g. the $\mathcal{F}_{\beta}^{V}(\phi\phi')$ RI-V fitting coefficients through the $(\beta|\phi \phi')$ Coulomb integrals, are now disentangled. This will prove crucial in the calculation of linear-response or perturbation-theory-related quantities where summations over occupied/virtual pairs have to be performed, as shown below for the calculation of the RPA correlation energy. We emphasize however that, while relying on discrete values of the molecular orbitals in real-space, the present approach remains a resolution-of-the-identity in the sense that physical continuous quantities such as the co-densities, linear-response operators (e.g. the susceptibility), etc. are defined everywhere in space in terms of the $ \beta $ auxiliary basis functions. 
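The disentanglement brought by Eq.~\ref{rirsdef} can be made concrete with a toy numpy sketch: once the fit is a weighted sum over grid points, the contraction over orbital pairs can be performed factor by factor, without materializing the full set of co-densities. All arrays below are random placeholders and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_orb, n_pts, n_aux = 5, 12, 7

phi = rng.normal(size=(n_orb, n_pts))   # orbital values phi_n(r_k)
M = rng.normal(size=(n_aux, n_pts))     # fit weights [M]_{beta k}

# Entangled route: build every co-density on the grid, then fit each one.
# codens[n, m, k] = phi_n(r_k) * phi_m(r_k)
codens = np.einsum('nk,mk->nmk', phi, phi)   # O(N_orb^2 * N_pts) storage
F_pair = np.einsum('bk,nmk->bnm', M, codens)

# Separable route: contract M with each orbital factor independently,
# never materializing the co-density tensor.
F_sep = np.einsum('bk,nk,mk->bnm', M, phi, phi)

assert np.allclose(F_pair, F_sep)
```

Both routes give identical fitting coefficients, but the separable contraction is exactly what allows the occupied/virtual pair summations of the RPA susceptibility to be reorganized at reduced scaling.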
Amongst existing formalisms adopting real-space quadrature strategies, several studies focused directly on 2-electron Coulomb integrals. The chain-of-spheres (COSX) semi-numerical approach to exchange integrals,\cite{Neese09} building on Friesner's pioneering pseudo-spectral approach,\cite{Fri91} develops only one of the two co-densities forming 2-electron Coulomb integrals over a real-space grid. Alternatively, the Least-Squares Tensor Hypercontraction (LS-THC) formalism \cite{Parrish12} fully develops the 2-electron integrals as a quadrature over real-space grid points, with an $\mathcal{O}(N^4)$ computational complexity associated with the establishment of the quadrature. On the other hand, the Interpolative Separable Density Fitting (ISDF) scheme of Lu and coworkers provides an $\mathcal{O}(N^3\log(N))$ separable fit tensor by using Fourier-transform and random-projection techniques to select the $\mathbf{r}_k$ points and define the corresponding auxiliary densities, with a proof-of-concept application to a simple model system.\cite{Lu14,Ying16} The present work likewise adopts a separable form for the fit tensor (Eq.~\ref{rirsdef}) through separable measures along real-space positions, but differs in the way the real-space locations $\mathbf{r}_k$ and their associated weights $[M]_{\beta k}$ are constructed, leading to an $\mathcal{O}(N^3)$ quadrature determination process. In addition, we preserve the auxiliary atomic basis sets in use in quantum chemistry. The accuracy and efficiency of our approach are further benchmarked on a large number of molecular systems.
\subsection{Quadrature weight determination} Assuming that the $\mathbf{r}_k$ locations are known (see below), the $[M]_{\beta k}$ coefficients in Eq.~\ref{rirsdef} are obtained from the following minimization condition: \begin{equation} \argmin_{M} \sum_{\rho,\beta}\Big( \mathcal{F}_\beta^{RS} (\rho) - \mathcal{F}_\beta^{V} (\rho) \Big)^2 \label{eq:fit_LSQR} \end{equation} Namely, we aim to equate the fitting functions of the present RI-RS approach with those of the Coulomb-fitting RI-V scheme. The accuracy of the present RI-RS approach thus targets that of the RI-V approach. The set of test co-densities $\lbrace \rho \rbrace$ typically spans the $\lbrace \alpha \rbrace \otimes \lbrace \alpha \rbrace$ product space, even though it can be adjusted depending on the problem being addressed. Interestingly, the present fitting scheme allows one to recover at reduced cost the otherwise $\mathcal{O}(N^4)$ LS-THC factorization (see Supporting Information (SI) \cite{suppinfo}). The solution of Eq.~\ref{eq:fit_LSQR} can indeed be achieved with an advantageous $\mathcal{O}(N^3)$ computational complexity, as we now demonstrate. In order to detail our fitting procedure, we adopt the following matrix notations: $[D]_{k\rho}=\rho(\mathbf{r}_k)$, $[F]_{\beta\rho}=\mathcal{F}_\beta^{V} (\rho)$. Due to the localization properties of the atomic orbitals, the number of atomic orbital products scales linearly with system size, and thus the number $N_\rho$ of co-densities in the test set can be considered $\propto N_\alpha$. As a result, the matrices $D$ ($N_k\times N_\rho$) and $F$ ($N_\beta\times N_\rho$), as well as the matrix $M$ ($N_\beta\times N_k$) of Eq.~\ref{rirsdef}, are all $\mathcal{O}(N^2)$ tensors.
The fit equation Eq.~\ref{eq:fit_LSQR} can then be formulated as: \begin{equation} \argmin_{M} \Big|\Big| M\cdot D - F \Big|\Big| \label{eq:fit_LSQR_2} \end{equation} which leads to the standard least-squares estimator: \begin{equation} M = F\cdot D^\dag \cdot ( D\cdot D^\dag )^{-1}\label{eq:fit_LSQR_3} \end{equation} involving only matrix multiplications and inversions. Computation of $( D\cdot D^\dag )^{-1}$ could prove problematic if done explicitly: the matrix $D\cdot D^\dag$ is positive semi-definite, but is not guaranteed to be definite. On the other hand, due to the large number $N_\rho$ of test co-densities, application of the standard SVD technique to extract the pseudo-inverse-based estimator adds a rather significant prefactor to the otherwise $\mathcal{O}(N^3)$ pseudo-inverse procedure. We adopt instead an approach combining simple balancing and Tikhonov $L_2$ regularization. \cite{Tikho95} We first balance the problem by normalizing the rows of $D$, writing $\widetilde{D}=d\cdot D$ where $d$ is a diagonal matrix chosen such that the diagonal terms $[\widetilde{D}\cdot \widetilde{D}^\dag]_{kk}=1$. The pseudo-inverse is then calculated as: \[ ( D\cdot D^\dag )^{-1} \simeq d\cdot ( \widetilde{D}\cdot \widetilde{D}^\dag+\epsilon\mathds{I})^{-1}\cdot d \] where the $L_2$ regularization parameter $\epsilon$ is adjusted to a small value to maintain definiteness of the problem and ensure numerical stability of the inverse. We identified the value $\epsilon=4 \times 10^{-7}$ as a reasonable parameter for double-precision arithmetic and kept this value for all the results presented in the current work. The resulting final least-squares estimator is thus: \begin{equation} M = F\cdot \widetilde{D}^\dag \cdot ( \widetilde{D}\cdot \widetilde{D}^\dag+\epsilon\mathds{I})^{-1} \cdot d \label{eq:fit_LSQR_4} \end{equation} which can be computed efficiently through standard numerical inversion techniques.
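The balanced, regularized estimator of Eq.~\ref{eq:fit_LSQR_4} translates directly into a handful of dense-algebra calls. The sketch below uses random stand-ins for $D$ and $F$ and checks that, for a well-conditioned problem and the small $\epsilon$ adopted here, the result coincides with the plain pseudo-inverse estimator of Eq.~\ref{eq:fit_LSQR_3}:

```python
import numpy as np

rng = np.random.default_rng(2)
n_beta, n_k, n_rho = 6, 10, 40
D = rng.standard_normal((n_k, n_rho))     # [D]_{k rho} = rho(r_k)   (stand-in values)
F = rng.standard_normal((n_beta, n_rho))  # [F]_{beta rho} = F^V_beta(rho) (stand-in)

# Balancing: d rescales the rows of D so that diag(Dt Dt^T) = 1
d = np.diag(1.0 / np.linalg.norm(D, axis=1))
Dt = d @ D
assert np.allclose(np.diag(Dt @ Dt.T), 1.0)

# Tikhonov-regularized least-squares estimator, Eq. (fit_LSQR_4)
eps = 4e-7
M = F @ Dt.T @ np.linalg.inv(Dt @ Dt.T + eps * np.eye(n_k)) @ d

# For a well-conditioned D the result matches the plain estimator F D^T (D D^T)^-1
M_ref = F @ np.linalg.pinv(D)
assert np.allclose(M, M_ref, atol=1e-5)
```

Note that in production only the $N_k\times N_k$ Gram matrix and the $N_\beta\times N_k$ product are ever formed, never the full $D$ and $F$ simultaneously, consistent with the memory strategy described next.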
We emphasize that while computing the optimal $[M]_{\beta k}$ coefficients, there is no need to keep the 3-center Coulomb integrals or the associated $\mathcal{F}^{V}_{\beta}(\alpha\alpha')$ coefficients: these can be computed once on-the-fly and discarded immediately, thus avoiding any extra memory consumption. In other words, we never store explicitly the $F$ and $D$ matrices but only their resulting $F\cdot D^\dag$ ($N_\beta\times N_k$) and $D\cdot D^\dag$ ($N_k\times N_k$) products. \subsection{Real-space grids generation} In the present approach, the optimized $\lbrace {\bf r}_k \rbrace$ sets are generated for isolated atoms, once for every chemical species and its associated atomic basis sets. These atomic grids are then duplicated according to the molecule geometry to form the system-specific quadrature points. With Eq.~\ref{eq:fit_LSQR_4} defining the optimal $M$ for a given $\lbrace {\bf r}_k \rbrace$ set, the locations are adjusted so as to minimize the fit error for single-atom test co-densities, using the Coulomb metric: \begin{equation} \argmin_{\{\mathbf{r}_k\}} \sum_{\rho}\Big|\Big| \mathcal{F}^{RS} (\rho) - \mathcal{F}^{V} (\rho) \Big|\Big|_V^2 \label{eq:fit_LSQR_5} \end{equation} For the sake of keeping the optimization process relatively simple, we structured the $\lbrace {\bf r}_k \rbrace$ set over four different shells, each one replicated at a different number of radii. The only parameters of the optimization problem are thus the number of radii and their values.
The four base shells taken here, denoted $A_1$, $A_2$, $A_3$ and $B_1$, are subsets of the Lebedev quadrature grids\cite{LEBEDEV1975} (denoted here $L_i$ for the Lebedev grid of order $i$) in the sense that: \[ \begin{split} &L_{3\phantom{1}}=A_1 \\ &L_{5\phantom{1}}=A_1\cup A_2 \\ &L_{7\phantom{1}}=A_1\cup A_2\cup A_3 \\ &L_{11}=A_1\cup A_2\cup A_3\cup B_1 \\ \end{split} \] The number of radii associated with each shell has been determined through experimentation until satisfactory configurations were obtained. Base shells are provided in the SI Tables S4-7, and the resulting atomic quadrature grids for atoms H, C, N and O in the SI Tables S8-11.\cite{suppinfo} Freely optimizing all $\mathbf{r}_k$ locations, and not only the radii, may lead to significant improvements in the grid-size/accuracy ratio. Providing such grids falls, however, outside the scope of the present work. \subsection{Technical details} Benchmark Hartree-Fock and MP2 calculations were performed on a standard set of 28 medium-size organic molecules containing unsaturated aliphatic and aromatic hydrocarbons or heterocycles, aldehydes, ketones, amides and nucleobases. Such a test set was originally proposed by Thiel and coworkers \cite{Sch08} for reference optical excitation calculations within e.g. coupled cluster, \cite{Sch08,Sil10b} TD-DFT \cite{Jac09} and more recently Bethe-Salpeter \cite{Jac15,Bru15,Kra17} formalisms. We adopt the MP2/6-31Gd geometries supplied in Ref.~\citenum{Sch08}. The assessment of the scaling properties of the present real-space quadrature RPA implementation is further performed on the oligoacene family from benzene to hexacene using the B3LYP cc-pVTZ geometries available in Ref.~\citenum{Ran16}, complemented by the recently observed decacene \cite{Kru18} and the (hypothetical) octacene, both relaxed at the B3LYP/6-31Gd level.
Finally, we consider the $C_{60}$ fullerene (B3LYP/6-311Gd geometry provided in the SI) and the octapeptide angiotensin II molecule originally proposed by Eshuis and coworkers. \cite{Esh10} All calculations are performed with input molecular orbitals generated at the (spherical) cc-pVTZ \cite{CCPVTZ} Hartree-Fock level using the NWChem package.\cite{NWCHEM} The corresponding (cartesian) cc-pVTZ-RI auxiliary basis \cite{Wei02} was adopted in all resolution-of-the-identity (RI) approaches (RI-SVS, RI-V and real-space quadrature RI-RS). For the sake of comparison, Hartree-Fock exchange and MP2 correlation energies were calculated exactly, namely without any RI approximation, using the NWChem package as well. All calculations are performed without any frozen-core approximation. The set $\{\rho\}$ of test co-densities can be adjusted as needed, for example to match a specific subset of the wave-function co-densities. In the rest of this work, we adopt the following settings: \begin{equation} \{\rho\}=(\{\alpha\}\otimes\{\alpha'\}_{l\leq 2})\cup\{\beta\} \label{eq:fit_LSQR_7} \end{equation} for both the single-atom $\lbrace \mathbf{r}_k \rbrace$ set problem and the full-system optimization of the $M$ coefficients. Limiting the second $\{\alpha' \}$ atomic orbital (AO) basis set to $s$, $p$ and $d$ orbitals allowed us to speed up the computation with no significant change in accuracy. In the minimization process, the weights on the $s$ and $p$ AOs have also been increased by factors of 4 and 2, respectively, so as to increase the focus on the low-order multipole (charge and dipole) components of the co-densities. Inclusion of the $\{\beta\}$ auxiliary orbitals within the test set slightly improves the regularity of the errors. We adopt real-space $\lbrace {\bf r}_k \rbrace$ sets that typically contain 320 points per C, N and O atom, and 180 per hydrogen. This size corresponds to about 3 times the size of the corresponding cc-pVTZ-RI auxiliary basis set.
In the present study, we do not seek minimal grid sizes, showing below that excellent accuracy and a small crossover between RI-RS and RI-V can already be obtained with such parameters. Details about the real-space $\lbrace {\bf r}_k \rbrace$ sets, optimized for the cc-pVTZ and cc-pVTZ-RI Gaussian basis sets following Eqs.~\ref{eq:fit_LSQR} and \ref{eq:fit_LSQR_5}, are provided in the SI. \cite{suppinfo} The Laplace-transform (LT) RPA correlation energy calculations are based on time and frequency grids described in the Appendix, together with convergence tests for the benzene correlation energy, other molecules being reported in the SI.\cite{suppinfo} The present RI calculations, including the standard RI-V, RI-SVS and the newly developed real-space RI-RS, with and without Laplace transform, are performed with a specific pilot code building on the Coulomb integral libraries implemented in the {\sc{Fiesta}} code. \cite{Jac15,Duchemin16,Jin16} \section{Results} \subsection{Assessing the accuracy of the optimized real-space grid: exact exchange and MP2 correlation energies} As a first accuracy test of the present real-space RI implementation, namely to assess the quality of the co-density fits, we calculate both the exact exchange energy: \begin{equation} \begin{split} E_{xx}=& - \sum_{ij}^{occ} (ij|ij) \end{split} \label{exxri} \end{equation} written here for a spin-compensated system, and the M{\o}ller-Plesset (MP2) correlation energy: \begin{equation} E_C^{MP2} = - \sum_{ij}^{occ} \sum_{ab}^{virt} { (ia|jb) [ 2(ia|jb) - (ib|ja)] \over \varepsilon_a + \varepsilon_b - \varepsilon_i - \varepsilon_j } \label{eq:E_MP2} \end{equation} The exchange and MP2 energies are calculated using the RI expressions of the 2-electron integrals, namely e.g.: \begin{equation} (ia|jb) \overset{RI-X}{=} \sum_{\beta \beta'} \mathcal{F}^X_{\beta}(ia) V_{\beta \beta'} \mathcal{F}^X_{{\beta}'}(jb) \label{eq:RIX}\end{equation} where X=SVS, V or RS depending on the
selected scheme. Since we want to specifically address the accuracy of the fitting technique, we avoid at this stage any extra approximations such as Laplace transform techniques. \begin{figure} \includegraphics[width=10cm]{fig1} \caption{Exchange energy (Exx) error as compared to exact calculations for various RI approximations (RI-RS, RI-V and RI-SVS). Errors are given in milli-Hartree ($mHa$) per atom (excluding H atoms). For RI-SVS data (red triangles) we adopt a log scale (grey shaded area). The error bar on the RI-RS data (blue circles) is related to the variance of the exchange energy error distribution ($mHa$ per atom, computed over 40 random orientations) with respect to molecule orientation (see text). The molecules are ordered from left to right following the original order provided in Ref.~\citenum{Sch08} and in the SI Table S1.} \label{fig1} \end{figure} The results, namely the errors as compared to exact calculations for the 28 Thiel's set molecules, are provided in Fig.~\ref{fig1} for the exchange energy. Clearly, the $\lbrace {\bf r}_k \rbrace$ sets adopted in this study provide errors that are of the same magnitude as those of the targeted RI-V approximation, and much smaller than those obtained with the RI-SVS scheme. We recall that our real-space quadrature was optimized to reproduce the coefficients $\mathcal{F}_{\beta}^{V}$ of the RI-V formalism (see Eq.~\ref{eq:fit_LSQR}), so that the RI-RS approach should not be expected to yield errors smaller than RI-V. The RI-RS results are provided with an ``error bar'' that represents the variance of the error distribution when the molecules are rotated. Contrary to the $\lbrace \beta \rbrace$ Gaussian auxiliary basis, the atomic $\lbrace {\bf r}_k \rbrace$ set is not rotationally invariant. Clearly, however, such a variance remains marginal as compared e.g. to the difference in errors between the standard Coulomb-fitting (RI-V) and density-fitting (RI-SVS) schemes.
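For illustration, the assembly of the exchange energy from fitted co-densities (Eqs.~\ref{exxri} and \ref{eq:RIX}) reduces to a single tensor contraction over the auxiliary indices. The sketch below uses random stand-ins for the fit coefficients and a synthetic positive-definite Coulomb matrix, rather than real integrals:

```python
import numpy as np

rng = np.random.default_rng(3)
n_beta, n_occ = 6, 3

# SPD stand-in for the Coulomb matrix V_{beta beta'} among auxiliary functions
A = rng.standard_normal((n_beta, n_beta))
V = A @ A.T + n_beta * np.eye(n_beta)

# Stand-in fit coefficients F_beta(ij) for each occupied co-density phi_i phi_j
F = rng.standard_normal((n_beta, n_occ, n_occ))
F = 0.5 * (F + F.transpose(0, 2, 1))        # co-densities are symmetric in (i, j)

# RI 2-electron integrals (ij|ij) = sum_{bb'} F_b(ij) V_{bb'} F_b'(ij)
two_el = np.einsum('aij,ab,bij->ij', F, V, F)
E_xx = -np.sum(two_el)

# Each RI-approximated (ij|ij) is a quadratic form with SPD V, hence non-negative,
# so the exchange energy comes out negative as it should
assert np.all(two_el >= 0.0) and E_xx <= 0.0
```

The same contraction pattern, with occupied-virtual transition densities $\phi_i\phi_a$ in place of $\phi_i\phi_j$, yields the $(ia|jb)$ integrals entering the MP2 expression.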
We now turn to the MP2 correlation energies (Fig.~\ref{fig2}). Again, we observe that the RI-RS quadrature does not significantly degrade the targeted RI-V MP2 correlation energies, with an error remaining lower than a few tens of $\mu$Hartree per atom. Similarly, the variance of the error remains small. We provide in the Inset of Fig.~\ref{fig2} the actual distribution of errors obtained for a thousand independent random orientations of the benzene molecule, reporting the related variance error bar. We can conclude from the present set of benchmark calculations that the real-space representation (RI-RS), with the adopted $\lbrace {\bf r}_k \rbrace$ distribution size, accurately reproduces the RI-V Coulomb integrals involving co-densities (products $\phi_i \phi_j$ in the exact exchange expression) and transition densities (products $\phi_i \phi_a$ in the MP2 energy formula) at the core of all explicitly correlated perturbative techniques. \begin{figure} \includegraphics[width=10cm]{fig2} \caption{MP2 correlation energy error as compared to exact calculations for the standard Coulomb-fitting (RI-V) and real-space quadrature (RI-RS) approximations. Errors are given in micro-Hartree (${\mu}Ha$) per atom (excluding H atoms). The error bar on the RI-RS data (blue circles) is related to the variance of the energy error distribution (${\mu}Ha$ per atom) with respect to molecule orientation (see text). Inset: details of benzene RI-RS MP2 correlation energy error (${\mu}Ha$) over a thousand random orientations with the corresponding variance (blue segment). Reference values (no-RI) and errors are provided in SI Table S1. } \label{fig2} \end{figure} \subsection{Laplace-transformed RPA} We now turn to the central application of the present study, namely the calculation of the correlation energy within the random phase approximation (RPA).
\cite{Pin52,Noz58,Lan77} We show in particular that the real-space quadrature RI-RS, combined with the standard Laplace transform (LT) approach, \cite{Alm91} allows reducing the scaling with system size down to $\mathcal{O}(N^3)$, instead of the $\mathcal{O}(N^4)$ scaling of standard RI-RPA implementations,\cite{Esh10} without invoking any localization or sparsity arguments. Following seminal papers,\cite{Fur05,Fuc05,Fur08} we start with the adiabatic-connection fluctuation-dissipation theorem (ACFDT) formula for the RPA correlation energy: \begin{equation} E^{RPA}_C = {1 \over 2\pi } \int_0^{\infty} d\omega \; Tr \Big[ {\ln}(1-{{\chi}_0(i\omega)}\cdot v) + {{\chi}_0(i\omega)}\cdot v \Big] \label{eq::ERPA} \end{equation} where $v$ is the bare Coulomb operator and $\chi_0(i\omega)$ the independent-electron density-density susceptibility at imaginary frequency, that is, for closed-shell systems: \begin{eqnarray} \chi_0({\bf r},{\bf r}' ; i\omega) &= & 2 \sum_{ja} \frac{\phi _j^*({\bf r}) \phi _a({\bf r}) \phi _a^*({\bf r}') \phi _j({\bf r}') }{ i\omega - (\varepsilon_a - \varepsilon_j) } + cc \label{eq:chi0_rr} \\ & \overset{RI}{\simeq} &\sum_{\beta\beta'} \beta(\mathbf{r})\beta'(\mathbf{r}') \left[ 2 \sum_{ja} \frac{\mathcal{F}_{\beta}(\phi _j\phi _a) \mathcal{F}_{\beta'}(\phi _a\phi _j) } {i\omega - (\varepsilon_a - \varepsilon_j) } + cc \right] \label{eq:chi0_bb}\\ & \doteqdot & \sum_{\beta\beta'} \beta(\mathbf{r})\beta'(\mathbf{r}') \big[\chi_0^{RI}(i\omega)\big]_{\beta\beta'} \end{eqnarray} with $(j,a)$ indexing occupied/virtual molecular eigenstates. The construction of the $\chi_0^{RI}(i\omega)$ matrix according to Eq.~\ref{eq:chi0_bb} clearly scales as $\mathcal{O}(N^4)$. To discuss such scaling properties, we first compare in Fig.~\ref{fig3} the total computing time for calculating the RI-RPA correlation energy within the standard Coulomb-fitting approach (RI-V) and the novel RI-RS formalism, using the acene family from benzene to decacene as a test set.
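For a single imaginary frequency, the trace argument of Eq.~\ref{eq::ERPA} can be evaluated by symmetrizing with $v^{1/2}$, so that only real eigenvalues appear. A toy sketch with random symmetric stand-ins for $v$ and $\chi_0^{RI}$ (not actual integrals):

```python
import numpy as np

rng = np.random.default_rng(4)
n_beta = 8

# SPD stand-in for the Coulomb matrix v in the auxiliary basis
A = rng.standard_normal((n_beta, n_beta))
v = A @ A.T + n_beta * np.eye(n_beta)

# chi0 at imaginary frequency is symmetric negative (semi-)definite
B = rng.standard_normal((n_beta, n_beta))
chi0 = -(B @ B.T)

def rpa_trace(chi0, v):
    """Tr[ln(1 - chi0 v) + chi0 v] via the symmetrized product v^1/2 chi0 v^1/2,
    which shares its (real) spectrum with chi0 v."""
    w, U = np.linalg.eigh(v)
    v_half = (U * np.sqrt(w)) @ U.T                    # v^{1/2}
    lam = np.linalg.eigvalsh(v_half @ chi0 @ v_half)   # real, all <= 0
    return np.sum(np.log(1.0 - lam) + lam)

val = rpa_trace(chi0, v)
# ln(1-x) + x < 0 for x < 0: every frequency contributes negatively,
# consistent with a negative RPA correlation energy
assert val < 0.0
```

The frequency integral itself is then a weighted sum of such traces over the imaginary-frequency quadrature grid, multiplied by $1/2\pi$.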
Calculations are performed using the cc-pVTZ AO and cc-pVTZ-RI auxiliary basis sets, together with a 12-point quadrature rule for the imaginary-frequency-axis integration (see Appendix). The corresponding correlation energies are provided in the Appendix for benzene, and in the SI for other acenes, demonstrating again the accuracy of the real-space approach as compared to the targeted RI-V approach. We observe that the RI-RS scheme walltime becomes smaller than that of our RI-V implementation for acenes larger than naphthalene. We emphasize, however, that at this stage, both RI-V and RI-RS techniques offer the very same $\mathcal{O}(N^4)$ scaling, differing only by the expression of the $\mathcal{F}_{\beta}(\phi _j\phi _a)$ coefficients. This crossover is related to the effort associated with the $\mathcal{O}(N^4)$ computation of the full set of $\mathcal{F}_{\beta}(\phi _j\phi _a)$ coefficients. Without any assumption on sparsity, the RI-RS evaluation relies only on dense algebra techniques and can thus be implemented efficiently. \begin{figure} \includegraphics[width=12cm]{fig3} \caption{Total walltime for the calculation of the RPA correlation energy over the acene family (benzene to decacene) using several RI schemes. We compare in particular the standard Coulomb-fitting RI-V approach and the real-space quadrature RI-RS scheme, with and without the Laplace transform (LT) technique. The abscissa provides the size of the (cc-pVTZ) AO basis used to expand the molecular orbitals. Both axes are displayed in log scale. The crossovers between the various RI formalisms are indicated by vertical short segments with the corresponding molecules. Inset: same data points without log scales. Walltimes are given for a run on 64 processors described in Note~\citenum{CPU-details}. } \label{fig3} \end{figure} In order to reduce this scaling, one thus needs to avoid explicit calculation of the full set of $\mathcal{F}_{\beta}(\phi _j\phi _a)$ coefficients.
We achieve this by evaluating the independent-electron susceptibility directly in the real-space representation before transforming it back to the auxiliary basis representation: \begin{equation} \big[\chi_0^{RI}(i\omega)\big]_{\beta\beta'} = \sum_{kk'}\; M_{\beta k} \; \chi_0({\bf r}_k,{\bf r}_{k'} ; i\omega) \; M_{\beta' k'} \label{matmult} \end{equation} The second step consists in applying the well-known Laplace transform (LT) technique \cite{Alm91} so as to first compute $\chi_0({\bf r}_k,{\bf r}_{k'} ;i\tau)$ in the time domain, where its expression is separable, \cite{Roj95,Kal14} and transform it back to the frequency domain, using quadrature rules to form $\chi_0({\bf r}_k,{\bf r}_{k'} ;i\omega)$ (Eq.~\ref{chiOkkLT}). Such a scheme allows us to work with the factorized expression of $\chi_0({\bf r}_k,{\bf r}_{k'} ;i\tau)$ (Eq.~\ref{chiOkktau}): \begin{eqnarray} \label{chiOkkLT} & & \chi_0({\bf r}_k,{\bf r}_{k'};i\omega) = \sum_{\tau} c_\tau(\omega) \chi_0({\bf r}_k,{\bf r}_{k'};i\tau) \\ \label{chiOkktau} & & \chi_0({\bf r}_k,{\bf r}_{k'};i\tau) = G^{<}({\bf r}_k,{\bf r}_{k'};i\tau) G^{>}({\bf r}_k,{\bf r}_{k'};-i\tau) \end{eqnarray} introducing the propagators of the occupied states and of the unoccupied states, respectively: \begin{eqnarray} G^{<}({\bf r}_k,{\bf r}_{k'};i\tau) & = i &\sum_j \phi_j({\bf r}_k) \phi_j({\bf r}_{k'}) e^{\varepsilon_j \tau} \\ G^{>}({\bf r}_k,{\bf r}_{k'};-i\tau) & = -i &\sum_a \phi_a({\bf r}_k) \phi_a({\bf r}_{k'}) e^{-\varepsilon_a \tau} \end{eqnarray} with $\tau >0$ and the zero of occupied/virtual electronic energy levels taken at the Fermi level. As a result of the decoupling of occupied and virtual states, the $G^{<}$ and $G^{>}$ propagators can be obtained with $\mathcal{O}(N^3)$ operations.
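These steps can be sketched compactly: build the two propagators as dense matrix products on the grid, then obtain $\chi_0(\mathbf{r}_k,\mathbf{r}_{k'};i\tau)$ as their Hadamard product. The sketch below (random stand-ins for orbital values and energies, a single imaginary time; the $i$ and $-i$ prefactors are dropped since they cancel in the product) verifies the factorization against the explicit occupied$\times$virtual sum:

```python
import numpy as np

rng = np.random.default_rng(5)
n_k, n_occ, n_vir = 12, 3, 5
tau = 0.4   # one imaginary-time point, tau > 0

phi_occ = rng.standard_normal((n_occ, n_k))   # phi_j(r_k) (stand-in)
phi_vir = rng.standard_normal((n_vir, n_k))   # phi_a(r_k) (stand-in)
eps_occ = -rng.uniform(0.5, 2.0, n_occ)       # occupied levels below the Fermi level
eps_vir = rng.uniform(0.5, 2.0, n_vir)        # virtual levels above

# Propagators as O(N^3) dense matrix products on the grid
G_lt = (phi_occ.T * np.exp(eps_occ * tau)) @ phi_occ    # G<(r_k, r_k'; i tau)
G_gt = (phi_vir.T * np.exp(-eps_vir * tau)) @ phi_vir   # G>(r_k, r_k'; -i tau)

# chi0 in imaginary time: an O(N^2) Hadamard (element-wise) product
chi0_tau = G_lt * G_gt

# Reference: the explicit occupied x virtual double sum
ref = np.einsum('jk,ak,jl,al,j,a->kl', phi_occ, phi_vir, phi_occ, phi_vir,
                np.exp(eps_occ * tau), np.exp(-eps_vir * tau))
assert np.allclose(chi0_tau, ref)
```

The remaining steps, i.e. the imaginary-time-to-frequency quadrature and the back-transformation to the auxiliary basis, are plain matrix operations, so the whole chain stays cubic in system size.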
Further, the entire $\big[\chi_0^{RI}(i\omega)\big]_{\beta\beta'} $ matrix stems from a combination of a Hadamard product (Eq.~\ref{chiOkktau}) and standard matrix operations (Eqs.~\ref{matmult}-\ref{chiOkkLT}), yielding an overall $\mathcal{O}(N^3)$ process. We can now report in Fig.~\ref{fig3} the full calculation walltime associated with the RI-RS+LT approach. Laplace transform quadratures for each of the 12 imaginary frequencies are performed with a grid of 18 imaginary times, yielding fully converged RPA energies (see Appendix). With such running parameters, the RI-RS+LT approach becomes more efficient than the standard RI-V formalism for systems larger than anthracene, outperforming the RI-RS approach for molecules larger than pentacene. To better assess the scaling properties associated with the resolution of Eq.~\ref{eq::ERPA} within standard and Laplace-transformed RI-RPA approaches, we single out the corresponding computation walltimes in Fig.~\ref{fig4}. We assume in particular that fitted co-density coefficients are already available in the case of standard approaches, so that standard RI-V and RI-RS computational loads are equivalent. To probe larger systems, the test set is extended with the $C_{60}$ fullerene and the octapeptide angiotensin II molecule originally proposed by Eshuis and coworkers.\cite{Esh10} \begin{figure} \includegraphics[width=12cm]{fig4} \caption{Partial walltime associated with the RI-RS RPA correlation energy calculations with and without Laplace transforms (LT), removing the extra cost associated with obtaining the $\mathcal{F}_{\beta}(\phi _j\phi _a)$ coefficients in the case of the no-LT approach.\cite{CPU-details} The test covers the acene family plus the $C_{60}$ fullerene and the octapeptide angiotensin II molecule. The abscissa provides the size of the (cc-pVTZ) AO basis used to expand the molecular orbitals. Both axes are displayed in log scale. Dotted and dashed lines are a schematic guide to the eye for scaling properties.
Walltimes are given for a run on 64 processors described in Note~\citenum{CPU-details}. } \label{fig4} \end{figure} Without accounting for the co-density fitting step, the Laplace-transform RI-RS RPA algorithm outperforms the standard RI-RS (or RI-V) RPA approach for systems larger than hexacene. This delayed crossover (as compared to Fig.~\ref{fig3}) can be attributed to both the Laplace transform overheads and the fact that, with the ${\lbrace {\bf r}_k \rbrace}$ real-space point set presently used, the $\big[\chi_0(i\omega)\big]_{kk'}$ matrices are roughly $3\times 3$ times larger than the $\big[\chi_0(i\omega)\big]_{\beta\beta'}$ ones, leading to a $3^3$ prefactor in the linear algebra operations. This last fact demonstrates the importance of operating on small ${\lbrace {\bf r}_k \rbrace}$ quadrature sets. The additional cost of obtaining the fitting parameters $\mathcal{F}_{\beta}(\phi _j\phi _a)$ only adds to the cost of the no-Laplace-transform RI approaches, bringing the overall crossover down to the level of a pentacene molecule, as exemplified in Fig.~\ref{fig4}. We finally observe $\mathcal{O}(N^{3.5})$ and $\mathcal{O}(N^{2.5})$ scaling laws (see dotted lines), indicating that the expected asymptotic $\mathcal{O}(N^{4})$ and $\mathcal{O}(N^{3})$ behaviours are not yet reached for the tested molecule sizes. We conclude this section by mentioning that the total walltime for the (RI-RS+LT) calculation of the cc-pVTZ RPA correlation energy of $C_{60}$ is less than 6500 seconds on a single processor. \section{Discussion} The present implementation can be compared to the real-space-grid imaginary-time approach introduced originally in the framework of $GW$ calculations by Rojas and coworkers \cite{Roj95} or the recent real-space-grid imaginary-time RPA implementation by Kaltak and coworkers.
\cite{Kal14} In such studies, the real-space grid was obtained as the Fourier transform of the planewave basis used to expand the Bloch states in a pseudopotential or PAW framework for periodic systems. Alternatively, the work of Moussa \cite{Mou14} also demonstrated cubic scaling for the random phase approximation with second-order screened exchange, exploiting a real-space grid for both the primary and auxiliary basis sets, combined with nested low-rank approximations to energy denominators. The present RI-RS approach, while targeting all-electron atomic-basis calculations, preserves the use of the standard representation of molecular orbitals, related co-densities and response operators in terms of atomic orbitals and their associated auxiliary bases, further adopting standard Laplace transform techniques. A central issue in real-space representations, in particular when performing all-electron calculations, concerns the size of the real-space grid, which strongly influences the crossover with standard RI implementations and the memory requirements. This is all the more important in the present study since we aim to calculate and store intermediate non-local operators such as the ${\chi}^0({\bf r}_k, {\bf r}_{k'}; i\tau)$ susceptibilities, and not only local functions such as the charge density or the DFT exchange-correlation potential and energy density. Since our real-space $\lbrace {\bf r}_k \rbrace$ distribution must serve in a quadrature reproducing co-densities involving molecular orbital products, one may expect that it should be as large as the standard grids \cite{Gil93} used to represent the charge density in DFT codes. Taking as an example the Gaussian09 code, the default DFT grid involves about 7000 grid points per atom after pruning. This is consistent with the recommended ``Grid3'' in the original paper by Treutler and Ahlrichs, \cite{Tre95} yielding 5980 pruned points for elements from Li to Ne, which serves as the default in Turbomole.
Such standard DFT grid sizes are much larger than the number of real-space points used in the present RS approach, roughly 180 per hydrogen and 320 per non-H atom (C, N, O). The present $\lbrace {\bf r}_k \rbrace$ sets were optimized so that the RI-RS scheme faithfully reproduces the standard RI-V density fitting results. Each set is composed of a number of different shells of high-symmetry points, each one associated with a set of different radii. This minimization process results in a non-uniform ${\lbrace {\bf r}_k \rbrace }$ distribution of real-space points. As emphasized above, we did not seek to explore here in great detail the set-size-to-accuracy ratio, since our goal was to demonstrate that one can find an accurate real-space representation yielding a crossover with standard techniques for small system sizes. Smaller optimized sets of real-space points may certainly be explored in the future. Another advantage of the RI-RS+LT RPA scheme lies in the related $\mathcal{O}(N^2)$ memory footprint associated with the underlying matrix algebra. In the case of the $C_{60}$ molecule, the relevant sizes are 1800 spherical AOs, 6060 cartesian auxiliary orbitals, and 19140 real-space quadrature points. On a single-processor run, the setup of the RI-RS (Eq.~\ref{eq:fit_LSQR}) peaks at about 8~GB of memory, while leaving at exit a memory footprint below 1~GB for the $M_{{\beta}k}$ coefficients. Further, each $\chi_0({\bf r}_k,{\bf r}_{k'};i\tau)$ matrix requires about 3~GB. With regard to a parallelization scheme oriented towards CPU efficiency, one can benefit from storing all $n_\tau$ $\chi_0({\bf r}_k,{\bf r}_{k'};i\tau)$ matrices in memory. As emphasized above, the present cubic scaling in terms of floating-point operations, and quadratic scaling in terms of memory load, was obtained without invoking localization or sparsity considerations.
In particular, our approach does not require the use of the density-fitting (RI-SVS) approach with its sparse 3-center overlap matrix tensor. However, another class of localization properties, based on the exponential decay in real space of the one-body Green's function in gapped systems,\cite{Sch00} can be easily combined with the present approach. These localization properties, which strongly depend on the electronic properties of the system of interest, are reminiscent of the low-scaling techniques based on local AO formulations in the treatment of MP2 \cite{Has93,Aya99} or RPA \cite{Kal15,Sch16,Lue17} correlation energies. \cite{notegreens} Such additional considerations, together with the stochastic approach by Neuhauser and coworkers, \cite{Neu13} may be combined with the present scheme and explored in the future to further reduce memory and computing time. \section{Conclusion} We have introduced a separable resolution-of-the-identity based on a real-space quadrature of co-densities. The efficiency of our approach relies on setting up an optimal and compact distribution of real-space points $\lbrace {\bf r}_k \rbrace$ allowing excellent accuracy, as exemplified for the exact exchange energy and further for the MP2 and RPA correlation energies over a large set of molecular systems. Our approach preserves the use of standard Gaussian atomic orbitals and related auxiliary basis sets for all-electron calculations, the real-space set of points being used as an intermediate representation. We demonstrate that such an approach allows calculating RPA correlation energies with cubic scaling in terms of operations and quadratic scaling in terms of memory, without invoking any localization or sparsity considerations, which may nevertheless be combined with our scheme in the future. The limited number of real-space points needed allows early crossovers with traditional Coulomb-fitting RI-RPA calculations for systems as small as naphthalene or anthracene (Fig.~\ref{fig3}).
The application of such a real-space separable RI to other explicitly correlated techniques, such as the $GW$ and Bethe-Salpeter equation (BSE) formalisms for calculating charged and neutral electronic excitations in molecular systems,\cite{Bla18} is currently under exploration. \begin{acknowledgements} The authors thank Thierry Deutsch and Pierre-Francois Loos for their critical reading of the manuscript and Denis Jacquemin for running the benzene RPA calculations with the Turbomole code. This research used resources from the French GENCI supercomputing centers under project no. A0030910016. \end{acknowledgements} \section{MP2 and RPA correlation energies} \label{secSI1} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{lllr} \caption{MP2 correlation energy (Hartree) for Thiel's set of molecules.\cite{Sch08} The various RI schemes are compared. Calculations are performed at the (spherical) cc-pVTZ level with the corresponding (cartesian) cc-pVTZ-RI auxiliary basis. The total reference energy (no RI) is provided while the RI results are given as an error as compared to the reference. For the real-space (RI-RS) scheme, we provide the mean signed error (MSE) averaged over 40 random orientations of the corresponding molecules (see main manuscript). } \label{tbl:mp2SI} \\ \hline \endfirsthead \multicolumn{4}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline & ref. (no RI)\cite{NWCHEM} & RI-V err. & RI-RS MSE \\ \hline \endhead \hline \multicolumn{4}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot & ref. (no RI)\cite{NWCHEM} & RI-V err.
& RI-RS MSE \\ \hline Ethene & -0.36621712 & 2.8864e-05 & 2.6618e-05 \\ Butadiene & -0.71282637 & 5.7159e-05 & 5.3500e-05 \\ Hexatriene & -1.06131929 & 8.4513e-05 & 8.8470e-05 \\ Octatetraene & -1.41053465 & 1.1173e-04 & 1.0533e-04 \\ Cyclopropene & -0.53351982 & 2.7989e-05 & 2.9524e-05 \\ Cyclopentadiene & -0.88355154 & 6.6776e-05 & 7.6226e-05 \\ Norbornadiene & -1.25189681 & 9.9161e-05 & 1.1061e-04 \\ Benzene & -1.04326862 & 7.2164e-05 & 7.7232e-05 \\ Naphthalene & -1.72711598 & 1.2577e-04 & 1.3562e-04 \\ Furan & -0.94557368 & 2.5866e-05 & 3.3693e-05 \\ Pyrrole & -0.92636075 & 4.4035e-05 & 4.7447e-05 \\ Imidazole & -0.95859577 & 2.6609e-05 & 2.8459e-05 \\ Pyridine & -1.07638112 & 5.6094e-05 & 5.9644e-05 \\ Pyrazine & -1.11113492 & 4.0363e-05 & 3.4864e-05 \\ Pyrimidine & -1.10577313 & 3.8956e-05 & 4.4471e-05 \\ Pyridazine & -1.11510376 & 4.3281e-05 & 5.0146e-05 \\ Triazine & -1.13228003 & 2.1557e-05 & 1.5914e-05 \\ Tetrazine & -1.18621363 & 1.2456e-05 & -3.3183e-06 \\ Formaldehyde & -0.42515595 & 5.2684e-07 & -3.4332e-06 \\ Acetone & -0.80235559 & 3.3520e-05 & 3.5088e-05 \\ Benzoquinone & -1.49609284 & 6.5378e-05 & 7.1006e-05 \\ Formamide & -0.64873853 & 4.0474e-06 & -7.3104e-06 \\ Acetamide & -0.83678121 & 2.1426e-05 & 2.8016e-05 \\ Propanamide & -1.02751877 & 3.7440e-05 & 4.5876e-05 \\ Cytosine & -1.57264992 & 3.7081e-05 & 4.4385e-05 \\ Thymine & -1.78790406 & 4.0576e-05 & 4.9361e-05 \\ Uracil & -1.59383066 & 2.2851e-05 & 2.8059e-05 \\ Adenine & -1.92267910 & 5.0528e-05 & 6.1021e-05 \\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{clll} \caption{RPA correlation energy (Hartree) for the benzene molecule as obtained using the present implementation, together with the MOLGW and Turbomole codes. Available details about the calculations are provided. All calculations are performed at the (spherical) cc-pVTZ level with (spherical or cartesian) cc-pVTZ-RI auxiliary basis. 
} \label{tbl:benzeneSI} \\ \hline \multicolumn{1}{c}{energy} & \multicolumn{1}{c}{code} & \multicolumn{1}{c}{Hartree-Fock} & \multicolumn{1}{c}{cc-pVTZ-RI basis} \\ \hline \endfirsthead \multicolumn{4}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \multicolumn{1}{c}{energy} & \multicolumn{1}{c}{code} & \multicolumn{1}{c}{Hartree-Fock} & \multicolumn{1}{c}{cc-pVTZ-RI basis} \\ \hline \endhead \hline \multicolumn{4}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot -1.25150754 & present & no-RI & cartesian \\ -1.25147833 & MOLGW\cite{MOLGW} & RI-V & spherical \\ -1.25122935 & Turbomole\cite{TURBOMOLE} & RI-V & spherical \\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{lrrrrr} \caption{RPA correlation energy (Hartree) for acenes, $C_{60}$ and the octapeptide angiotensin II molecule. The various RI schemes are compared. Calculations are performed at the (spherical) cc-pVTZ level with the corresponding (cartesian) cc-pVTZ-RI auxiliary basis. The number of imaginary frequencies in the RPA integration is given by n$\omega$=12. 
The number of time points in the Laplace transform (LT) approach is given by n$\tau$.} \label{tbl:rpaSI} \\ \hline & \multicolumn{1}{c}{\multirow{2}{*}{RI-SVS}} & \multicolumn{1}{c}{\multirow{2}{*}{RI-V}} & \multicolumn{1}{c}{\multirow{2}{*}{RI-RS}} & \multicolumn{1}{c}{RI-RS+LT} & \multicolumn{1}{c}{RI-RS+LT} \\ & & & & \multicolumn{1}{c}{(n$\tau$=1.5$\times$n$\omega$)} & \multicolumn{1}{c}{(n$\tau$=2$\times$n$\omega$)} \\ \hline \endfirsthead \multicolumn{6}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline & \multirow{2}{*}{RI-SVS} & \multirow{2}{*}{RI-V} & \multirow{2}{*}{RI-RS} & RI-RS+LT & RI-RS+LT \\ & & & & (n$\tau$=1.5$\times$n$\omega$) & (n$\tau$=2$\times$n$\omega$) \\ \hline \endhead \hline \multicolumn{6}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot Benzene & -1.25171791 & -1.25150754 & -1.25145456 & -1.25145456 & -1.25145456 \\ Naphthalene & -2.04585075 & -2.04552127 & -2.04541965 & -2.04541965 & -2.04541965 \\ Anthracene & -2.84161803 & -2.84117439 & -2.84103203 & -2.84103203 & -2.84103203 \\ Tetracene & -3.63813465 & -3.63757585 & -3.63740533 & -3.63740528 & -3.63740533 \\ Pentacene & -4.43569083 & -4.43501826 & -4.43481146 & -4.43481136 & -4.43481146 \\ Hexacene & -5.23386340 & -5.23307561 & -5.23284107 & -5.23284090 & -5.23284108 \\ C60 & & -11.25611147 & -11.25546768 & -11.25546749 & -11.25546767 \\ Octapeptide & & -17.53746478 & -17.53632073 & -17.53632074 & -17.53632073 \\ \end{longtable} \end{center} \section{Point sets for H, C, N and O species} \label{secSI2} We present the sets of real-space $\lbrace {\bf r}_k \rbrace$ points generated to reproduce, within the present RI-RS scheme, the Coulomb-fitting RI-V data at the cc-pVTZ/cc-pVTZ-RI level. Point sets are generated as combinations of four different base shells associated with different radii. We start by providing the four base shells, which we denote $A_1$, $A_2$, $A_3$ and $B_1$ (S4-7). 
These shells were constructed as subsets of the Lebedev quadrature grids\cite{LEBEDEV1975} (denoted here $L_i$ for the Lebedev grid of order $i$) in the sense that: \[ \begin{split} &L_{3\phantom{1}}=A_1 \\ &L_{5\phantom{1}}=A_1\cup A_2 \\ &L_{7\phantom{1}}=A_1\cup A_2\cup A_3 \\ &L_{11}=A_1\cup A_2\cup A_3\cup B_1 \\ \end{split} \] The base shell points are located on the unit sphere, while the associated radii are provided in atomic units. As indicated below, the origin \((0.0, 0.0, 0.0)\) point was also added on top of the different shell/radius combinations for each species quadrature grid. The atomic quadrature grids for atoms H, C, N and O are reported in tables S8-11. \begin{center} \LTcapwidth=\textwidth \begin{longtable}{rrr} \caption{Lebedev point subset 1/4.} \label{tbl:LebSI1} \\ \hline \multicolumn{3}{c}{set $A_1$} \\ \hline \endfirsthead \multicolumn{3}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \multicolumn{3}{c}{set $A_1$} \\ \hline \endhead \hline \multicolumn{3}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 1.0000000000000000 & 0.0000000000000000 & 0.0000000000000001 \\ -1.0000000000000000 & 0.0000000000000001 & 0.0000000000000001\\ 0.0000000000000001 & 1.0000000000000000 & 0.0000000000000001\\ 0.0000000000000001 & -1.0000000000000000 & 0.0000000000000001\\ 0.0000000000000000 & 0.0000000000000000 & 1.0000000000000000\\ 0.0000000000000000 & 0.0000000000000001 & -1.0000000000000000\\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{rrr} \caption{Lebedev point subset 2/4.} \label{tbl:LebSI2} \\ \hline \multicolumn{3}{c}{set $A_2$} \\ \hline \endfirsthead \multicolumn{3}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \multicolumn{3}{c}{set $A_2$} \\ \hline \endhead \hline \multicolumn{3}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 0.5773502691896258 & 0.5773502691896257 & 0.5773502691896258\\ 
0.5773502691896258 & 0.5773502691896257 & -0.5773502691896257\\ 0.5773502691896258 & -0.5773502691896257 & 0.5773502691896258\\ 0.5773502691896258 & -0.5773502691896257 & -0.5773502691896257\\ -0.5773502691896257 & 0.5773502691896258 & 0.5773502691896258\\ -0.5773502691896257 & 0.5773502691896258 & -0.5773502691896257\\ -0.5773502691896257 & -0.5773502691896258 & 0.5773502691896258\\ -0.5773502691896257 & -0.5773502691896258 & -0.5773502691896257\\ \end{longtable} \end{center} \newpage \begin{center} \LTcapwidth=\textwidth \begin{longtable}{rrr} \caption{Lebedev point subset 3/4.} \label{tbl:LebSI3} \\ \hline \multicolumn{3}{c}{set $A_3$} \\ \hline \endfirsthead \multicolumn{3}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \multicolumn{3}{c}{set $A_3$} \\ \hline \endhead \hline \multicolumn{3}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 0.0000000000000000 & 0.7071067811865475 & 0.7071067811865476\\ 0.0000000000000000 & 0.7071067811865476 & -0.7071067811865475\\ 0.0000000000000000 & -0.7071067811865475 & 0.7071067811865476\\ 0.0000000000000000 & -0.7071067811865476 & -0.7071067811865475\\ 0.7071067811865475 & 0.0000000000000000 & 0.7071067811865476\\ 0.7071067811865476 & 0.0000000000000000 & -0.7071067811865475\\ -0.7071067811865475 & 0.0000000000000001 & 0.7071067811865476\\ -0.7071067811865476 & 0.0000000000000001 & -0.7071067811865475\\ 0.7071067811865476 & 0.7071067811865475 & 0.0000000000000001\\ 0.7071067811865476 & -0.7071067811865475 & 0.0000000000000001\\ -0.7071067811865475 & 0.7071067811865476 & 0.0000000000000001\\ -0.7071067811865475 & -0.7071067811865476 & 0.0000000000000001\\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{rrr} \caption{Lebedev point subset 4/4.} \label{tbl:LebSI4} \\ \hline \multicolumn{3}{c}{set $B_1$} \\ \hline \endfirsthead \multicolumn{3}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline 
\multicolumn{3}{c}{set $B_1$} \\ \hline \endhead \hline \multicolumn{3}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 0.3015113445777636 & 0.3015113445777635 & 0.9045340337332909\\ 0.3015113445777637 & 0.3015113445777636 & -0.9045340337332909\\ 0.3015113445777636 & -0.3015113445777635 & 0.9045340337332909\\ 0.3015113445777637 & -0.3015113445777636 & -0.9045340337332909\\ -0.3015113445777635 & 0.3015113445777636 & 0.9045340337332909\\ -0.3015113445777636 & 0.3015113445777637 & -0.9045340337332909\\ -0.3015113445777635 & -0.3015113445777636 & 0.9045340337332909\\ -0.3015113445777636 & -0.3015113445777637 & -0.9045340337332909\\ 0.3015113445777636 & 0.9045340337332909 & 0.3015113445777636\\ 0.3015113445777636 & -0.9045340337332909 & 0.3015113445777636\\ 0.3015113445777636 & 0.9045340337332909 & -0.3015113445777635\\ 0.3015113445777636 & -0.9045340337332909 & -0.3015113445777635\\ -0.3015113445777635 & 0.9045340337332910 & 0.3015113445777636\\ -0.3015113445777635 & -0.9045340337332910 & 0.3015113445777636\\ -0.3015113445777635 & 0.9045340337332910 & -0.3015113445777635\\ -0.3015113445777635 & -0.9045340337332910 & -0.3015113445777635\\ 0.9045340337332909 & 0.3015113445777637 & 0.3015113445777636\\ -0.9045340337332909 & 0.3015113445777637 & 0.3015113445777636\\ 0.9045340337332909 & 0.3015113445777637 & -0.3015113445777635\\ -0.9045340337332909 & 0.3015113445777637 & -0.3015113445777635\\ 0.9045340337332909 & -0.3015113445777637 & 0.3015113445777636\\ -0.9045340337332909 & -0.3015113445777637 & 0.3015113445777636\\ 0.9045340337332909 & -0.3015113445777637 & -0.3015113445777635\\ -0.9045340337332909 & -0.3015113445777637 & -0.3015113445777635\\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{cc} \caption{Hydrogen cc-pVTZ/cc-pVTZ-RI species quadrature points (a.u.)} \label{tbl:HptsSI} \\ \hline H point set \\ \hline \endfirsthead \multicolumn{1}{l}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} 
\\ \hline H point set \\ \hline \endhead \hline \multicolumn{1}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 0.0 0.0 0.0 \\ $A_1$ $\otimes$ 0.2550495692028164\\ $A_2$ $\otimes$ 0.4414503914404356\\ $A_3$ $\otimes$ 0.5875929206946565\\ $A_1$ $\otimes$ 0.8500881858181466\\ $A_2$ $\otimes$ 0.8767919926845705\\ $A_3$ $\otimes$ 1.1376855741676248\\ $A_1$ $\otimes$ 1.4020561072906210\\ $A_2$ $\otimes$ 1.4031674307559165\\ $A_3$ $\otimes$ 1.6948362308564306\\ $B_1$ $\otimes$ 2.0508682846396238\\ $A_1$ $\otimes$ 2.3084240560165830\\ $A_2$ $\otimes$ 2.4853014001377569\\ $B_1$ $\otimes$ 2.7541840112271343\\ $A_3$ $\otimes$ 3.3784459365547317\\ $A_1$ $\otimes$ 3.9085612166141659\\ $A_2$ $\otimes$ 4.5534786191179117\\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{cc} \caption{Carbon cc-pVTZ/cc-pVTZ-RI species quadrature points (a.u.)} \label{tbl:CptsSI} \\ \hline C point set \\ \hline \endfirsthead \multicolumn{1}{l}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline C point set \\ \hline \endhead \hline \multicolumn{1}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 0.0 0.0 0.0 \\ $A_1$ $\otimes$ 0.0697148142120944\\ $A_2$ $\otimes$ 0.1453952389747425\\ $A_1$ $\otimes$ 0.2342968516739921\\ $A_2$ $\otimes$ 0.3167194257881547\\ $A_3$ $\otimes$ 0.3991658485572637\\ $A_1$ $\otimes$ 0.4921463978748848\\ $A_2$ $\otimes$ 0.5452940327070877\\ $A_3$ $\otimes$ 0.6928808595836493\\ $A_1$ $\otimes$ 0.7271808943305131\\ $A_2$ $\otimes$ 0.8364419359040032\\ $A_3$ $\otimes$ 0.9550764536689218\\ $B_1$ $\otimes$ 1.0042552897048800\\ $A_1$ $\otimes$ 1.2308082245100480\\ $A_2$ $\otimes$ 1.2515921948505666\\ $A_3$ $\otimes$ 1.3170945413732240\\ $B_1$ $\otimes$ 1.5668473153583786\\ $A_2$ $\otimes$ 1.7967421791285734\\ $A_1$ $\otimes$ 1.8562800941359989\\ $A_3$ $\otimes$ 1.9026574596444366\\ $B_1$ $\otimes$ 2.2023376401947057\\ $A_2$ $\otimes$ 2.4701298890775587\\ $A_1$ $\otimes$ 
2.5826940890377630\\ $A_3$ $\otimes$ 2.6506301435069486\\ $B_1$ $\otimes$ 3.0810767533391212\\ $A_2$ $\otimes$ 3.4221828573728055\\ $A_3$ $\otimes$ 3.8904929263108845\\ $A_1$ $\otimes$ 4.0340344838211628\\ $A_2$ $\otimes$ 4.8577261225759178\\ $A_1$ $\otimes$ 5.2951816730851577\\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{cc} \caption{Nitrogen cc-pVTZ/cc-pVTZ-RI species quadrature points (a.u.)} \label{tbl:NptsSI} \\ \hline N point set \\ \hline \endfirsthead \multicolumn{1}{l}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline N point set \\ \hline \endhead \hline \multicolumn{1}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 0.0 0.0 0.0 \\ $A_1$ $\otimes$ 0.0740291727840731\\ $A_2$ $\otimes$ 0.1517711025394544\\ $A_1$ $\otimes$ 0.2292386711035654\\ $A_2$ $\otimes$ 0.2831000248738570\\ $A_3$ $\otimes$ 0.3644185443171212\\ $A_1$ $\otimes$ 0.4288518251583348\\ $A_2$ $\otimes$ 0.4799688137540949\\ $A_3$ $\otimes$ 0.5882996510447190\\ $A_1$ $\otimes$ 0.6142000220234174\\ $A_2$ $\otimes$ 0.6875219702081756\\ $B_1$ $\otimes$ 0.8246416514000185\\ $A_3$ $\otimes$ 0.8295086553115903\\ $A_1$ $\otimes$ 0.9989841825701087\\ $A_2$ $\otimes$ 1.0363846320748618\\ $A_3$ $\otimes$ 1.0745653323543152\\ $B_1$ $\otimes$ 1.2644981933147030\\ $A_2$ $\otimes$ 1.4818776161693972\\ $A_3$ $\otimes$ 1.5303532634022801\\ $A_1$ $\otimes$ 1.5656970159055199\\ $B_1$ $\otimes$ 1.7742342680710006\\ $A_2$ $\otimes$ 2.0608562398153891\\ $A_3$ $\otimes$ 2.0667401311291220\\ $A_1$ $\otimes$ 2.0980231282257580\\ $B_1$ $\otimes$ 2.4760916823114418\\ $A_2$ $\otimes$ 2.8170652651714376\\ $A_3$ $\otimes$ 3.1838378537539076\\ $A_1$ $\otimes$ 3.2554062082350321\\ $A_3$ $\otimes$ 4.0189700979204375\\ $A_1$ $\otimes$ 4.1136811865325891\\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{cc} \caption{Oxygen cc-pVTZ/cc-pVTZ-RI species quadrature points (a.u.)} \label{tbl:OptsSI} \\ \hline O 
point set \\ \hline \endfirsthead \multicolumn{1}{l}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline O point set \\ \hline \endhead \hline \multicolumn{1}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot 0.0 0.0 0.0 \\ $A_1$ $\otimes$ 0.0652447257128732\\ $A_2$ $\otimes$ 0.1325503255253438\\ $A_1$ $\otimes$ 0.2050193077495113\\ $A_2$ $\otimes$ 0.2483307608247183\\ $A_3$ $\otimes$ 0.3296725279121236\\ $A_1$ $\otimes$ 0.3695681974012790\\ $A_2$ $\otimes$ 0.4317151572904991\\ $A_3$ $\otimes$ 0.5216903985417337\\ $A_1$ $\otimes$ 0.5314942941974161\\ $A_2$ $\otimes$ 0.6280697001504593\\ $B_1$ $\otimes$ 0.7138025771749562\\ $A_3$ $\otimes$ 0.8056864183075758\\ $A_1$ $\otimes$ 0.8746946212148640\\ $A_2$ $\otimes$ 0.9021904874234906\\ $A_3$ $\otimes$ 0.9801554595808909\\ $B_1$ $\otimes$ 1.0933052463892226\\ $A_2$ $\otimes$ 1.2908793119692796\\ $A_3$ $\otimes$ 1.3264294868555391\\ $A_1$ $\otimes$ 1.3499143280546431\\ $B_1$ $\otimes$ 1.5285596452003496\\ $A_2$ $\otimes$ 1.7616890999626422\\ $A_3$ $\otimes$ 1.8056343561817432\\ $A_1$ $\otimes$ 1.8079536046419200\\ $B_1$ $\otimes$ 2.1159638624942851\\ $A_2$ $\otimes$ 2.4301369998386630\\ $A_3$ $\otimes$ 2.6133556138762177\\ $A_1$ $\otimes$ 2.8080543419973161\\ $A_2$ $\otimes$ 3.3111645919590016\\ $A_1$ $\otimes$ 3.7734143044955140\\ \end{longtable} \end{center} \section{Octacene, decacene and $C_{60}$ geometries } \label{secSI3} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{crrr@{\hskip 20mm}crrr} \caption{Octacene (B3LYP/6-31Gd geometry), Angstr\"{o}m} \label{tbl:octaceneSI} \\ \hline \endfirsthead \multicolumn{8}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \endhead \hline \multicolumn{8}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot C & -3.714926 & -1.409887 & 0.000000 & C & -0.011113 & -0.732348 & -0.000001 \\ C & -1.248764 & -1.411259 & -0.000000 & C & -6.188589 & 1.412129 & 0.000000 \\ C & -7.363538 & 
0.717885 & 0.000001 & C & -7.363537 & -0.717890 & 0.000001 \\ C & -6.188588 & -1.412133 & 0.000001 & C & -4.922719 & -0.729554 & 0.000000 \\ C & -4.922720 & 0.729551 & 0.000000 & C & -3.714927 & 1.409885 & -0.000000 \\ C & -2.467867 & 0.731169 & -0.000000 & C & -2.467867 & -0.731170 & -0.000000 \\ C & -1.248763 & 1.411258 & -0.000000 & C & 6.142064 & -1.411136 & -0.000001 \\ C & 7.361111 & -0.731209 & -0.000000 & C & 7.361110 & 0.731206 & -0.000000 \\ C & 6.142064 & 1.411132 & -0.000000 & C & 4.904303 & 0.732349 & -0.000001 \\ C & 4.904303 & -0.732351 & -0.000001 & C & 3.678182 & -1.411821 & -0.000001 \\ C & 2.446613 & -0.732708 & -0.000001 & C & 2.446614 & 0.732706 & -0.000001 \\ C & 1.215008 & -1.411813 & -0.000001 & C & 3.678182 & 1.411817 & -0.000001 \\ C & -0.011112 & 0.732347 & -0.000001 & C & 1.215010 & 1.411811 & -0.000001 \\ H & 3.677950 & 2.500601 & -0.000001 & H & 1.214600 & 2.500594 & -0.000001 \\ H & 6.142287 & 2.499967 & -0.000000 & H & -3.715252 & -2.498850 & 0.000000 \\ H & -1.249306 & -2.500074 & -0.000000 & H & 6.142287 & -2.499974 & -0.000001 \\ H & 3.677949 & -2.500606 & -0.000001 & H & 1.214597 & -2.500597 & -0.000001 \\ H & -6.186989 & 2.500307 & 0.000000 & H & -8.312222 & 1.248416 & 0.000001 \\ H & -3.715254 & 2.498849 & -0.000000 & H & -1.249305 & 2.500074 & -0.000001 \\ H & -6.186987 & -2.500311 & 0.000001 & H & -8.312221 & -1.248423 & 0.000001 \\ C & 11.081835 & -1.412118 & 0.000000 & C & 12.256783 & -0.717907 & 0.000001 \\ C & 12.256783 & 0.717901 & 0.000001 & C & 11.081835 & 1.412113 & 0.000001 \\ C & 9.815989 & 0.729526 & 0.000000 & C & 9.815990 & -0.729530 & 0.000000 \\ C & 8.608127 & -1.409805 & -0.000000 & C & 8.608127 & 1.409800 & 0.000000 \\ H & 8.607914 & 2.498780 & 0.000000 & H & 13.205553 & 1.248275 & 0.000001 \\ H & 11.079963 & 2.500292 & 0.000001 & H & 11.079964 & -2.500297 & 0.000000 \\ H & 13.205553 & -1.248280 & 0.000001 & H & 8.607914 & -2.498787 & -0.000000 \\ \end{longtable} \end{center} \begin{center} 
\LTcapwidth=\textwidth \begin{longtable}{crrr@{\hskip 20mm}crrr} \caption{Decacene (B3LYP/6-31Gd geometry), Angstr\"{o}m} \label{tbl:decaceneSI} \\ \hline \endfirsthead \multicolumn{8}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \endhead \hline \multicolumn{8}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot C & -3.731097 & -1.410141 & 0.000001 & C & -0.026152 & -0.732916 & -0.000001 \\ C & -1.265287 & -1.411634 & -0.000000 & C & -6.204536 & 1.412316 & 0.000001 \\ C & -7.379252 & 0.718055 & 0.000002 & C & -7.379253 & -0.718054 & 0.000003 \\ C & -6.204537 & -1.412316 & 0.000002 & C & -4.938337 & -0.729814 & 0.000001 \\ C & -4.938337 & 0.729814 & 0.000001 & C & -3.731097 & 1.410140 & 0.000000 \\ C & -2.483272 & 0.731546 & -0.000000 & C & -2.483272 & -0.731547 & 0.000000 \\ C & -1.265286 & 1.411632 & -0.000001 & C & 6.122035 & -1.412588 & -0.000002 \\ C & 7.349879 & -0.733604 & -0.000002 & C & 7.349879 & 0.733601 & -0.000002 \\ C & 6.122034 & 1.412585 & -0.000002 & C & 4.891019 & 0.733767 & -0.000002 \\ C & 4.891020 & -0.733772 & -0.000002 & C & 3.660053 & -1.412724 & -0.000002 \\ C & 2.432210 & -0.733597 & -0.000002 & C & 2.432210 & 0.733590 & -0.000002 \\ C & 1.197899 & -1.412387 & -0.000001 & C & 3.660052 & 1.412719 & -0.000002 \\ C & -0.026152 & 0.732911 & -0.000001 & C & 1.197900 & 1.412382 & -0.000001 \\ H & 3.659639 & 2.501473 & -0.000002 & H & 1.197345 & 2.501150 & -0.000001 \\ H & 6.122042 & 2.501360 & -0.000002 & H & -3.731487 & -2.499098 & 0.000001 \\ H & -1.265932 & -2.500440 & -0.000000 & H & 6.122044 & -2.501362 & -0.000002 \\ H & 3.659641 & -2.501477 & -0.000002 & H & 1.197343 & -2.501153 & -0.000001 \\ H & -6.202983 & 2.500490 & 0.000001 & H & -8.327996 & 1.248473 & 0.000003 \\ H & -3.731487 & 2.499098 & 0.000000 & H & -1.265929 & 2.500438 & -0.000001 \\ H & -6.202984 & -2.500490 & 0.000002 & H & -8.327996 & -1.248472 & 0.000003 \\ C & 9.808174 & -0.732855 & -0.000001 & C & 8.584081 & -1.412288 & 
-0.000002 \\ C & 8.584080 & 1.412286 & -0.000001 & C & 15.986667 & -1.412324 & 0.000002 \\ C & 17.161322 & -0.718096 & 0.000002 & C & 17.161321 & 0.718105 & 0.000003 \\ C & 15.986665 & 1.412331 & 0.000002 & C & 14.720357 & 0.729858 & 0.000001 \\ C & 14.720358 & -0.729853 & 0.000001 & C & 13.513231 & -1.410121 & 0.000000 \\ C & 12.265248 & -0.731559 & -0.000000 & C & 12.265248 & 0.731559 & -0.000000 \\ C & 11.047375 & -1.411596 & -0.000001 & C & 13.513229 & 1.410124 & 0.000001 \\ C & 9.808174 & 0.732854 & -0.000001 & C & 11.047374 & 1.411596 & -0.000001 \\ H & 13.513335 & 2.499087 & 0.000001 & H & 11.047441 & 2.500404 & -0.000000 \\ H & 18.110119 & 1.248420 & 0.000003 & H & 15.984999 & 2.500508 & 0.000002 \\ H & 8.583708 & -2.501073 & -0.000002 & H & 15.985003 & -2.500500 & 0.000001 \\ H & 18.110122 & -1.248409 & 0.000003 & H & 13.513339 & -2.499083 & 0.000000 \\ H & 11.047442 & -2.500403 & -0.000001 & H & 8.583706 & 2.501072 & -0.000001 \\ \end{longtable} \end{center} \begin{center} \LTcapwidth=\textwidth \begin{longtable}{crrr@{\hskip 20mm}crrr} \caption{C60 (B3LYP/6-311Gd geometry), Angstr\"{o}m} \label{tbl:C60SI} \\ \hline \endfirsthead \multicolumn{8}{c}% {\tablename\ \thetable\ -- \textit{Continued from previous page}} \\ \hline \endhead \hline \multicolumn{8}{r}{\textit{Continued on next page}} \\ \endfoot \hline \endlastfoot C & 0.725748 & -0.998906 & 3.321895 & C & -0.725748 & -0.998906 & 3.321895 \\ C & -1.421859 & -1.957021 & 2.589960 & C & -0.696111 & -2.955928 & 1.826864 \\ C & 0.696111 & -2.955928 & 1.826864 & C & 1.421859 & -1.957021 & 2.589960 \\ C & 1.174285 & 0.381548 & 3.321895 & C & 0.000000 & 1.234716 & 3.321895 \\ C & -1.174285 & 0.381548 & 3.321895 & C & -2.300617 & 0.747516 & 2.589960 \\ C & -2.596144 & -1.575473 & 1.826864 & C & -1.421859 & -3.191737 & 0.592148 \\ C & -0.725748 & -3.417918 & -0.592148 & C & 0.725748 & -3.417918 & -0.592148 \\ C & 1.421859 & -3.191737 & 0.592148 & C & 2.596144 & -2.338570 & 0.592148 \\ C & 2.596144 & 
-1.575473 & 1.826864 & C & 3.026364 & -0.251390 & 1.826864 \\ C & 2.300617 & 0.747516 & 2.589960 & C & 0.000000 & 2.419011 & 2.589960 \\ C & 1.174285 & 2.800560 & 1.826864 & C & 2.300617 & 1.982232 & 1.826864 \\ C & 3.026364 & 1.746422 & 0.592148 & C & 3.474901 & 0.365967 & 0.592148 \\ C & 3.474901 & -0.365967 & -0.592148 & C & 3.026364 & -1.746422 & -0.592148 \\ C & 2.300617 & -1.982232 & -1.826864 & C & 1.174285 & -2.800560 & -1.826864 \\ C & -2.596144 & -2.338570 & 0.592148 & C & -1.174285 & -0.381548 & -3.321895 \\ C & -2.300617 & -0.747516 & -2.589960 & C & -3.026364 & 0.251390 & -1.826864 \\ C & -2.596144 & 1.575473 & -1.826864 & C & -1.421859 & 1.957021 & -2.589960 \\ C & 0.725748 & 0.998906 & -3.321895 & C & 1.174285 & -0.381548 & -3.321895 \\ C & 0.000000 & -1.234716 & -3.321895 & C & 0.000000 & -2.419011 & -2.589960 \\ C & -1.174285 & -2.800560 & -1.826864 & C & -2.300617 & -1.982232 & -1.826864 \\ C & -3.474901 & -0.365967 & -0.592148 & C & -3.474901 & 0.365967 & 0.592148 \\ C & -3.026364 & 1.746422 & 0.592148 & C & -2.596144 & 2.338570 & -0.592148 \\ C & -1.421859 & 3.191737 & -0.592148 & C & -0.696111 & 2.955928 & -1.826864 \\ C & 0.696111 & 2.955928 & -1.826864 & C & 1.421859 & 1.957021 & -2.589960 \\ C & 2.300617 & -0.747516 & -2.589960 & C & 3.026364 & 0.251390 & -1.826864 \\ C & 2.596144 & 1.575473 & -1.826864 & C & 2.596144 & 2.338570 & -0.592148 \\ C & 1.421859 & 3.191737 & -0.592148 & C & 0.725748 & 3.417918 & 0.592148 \\ C & -0.725748 & 3.417918 & 0.592148 & C & -1.174285 & 2.800560 & 1.826864 \\ C & -2.300617 & 1.982232 & 1.826864 & C & -3.026364 & -1.746422 & -0.592148 \\ C & -3.026364 & -0.251390 & 1.826864 & C & -0.725748 & 0.998906 & -3.321895 \\ \end{longtable} \end{center} \section{Relation with the LS-THC estimator} \label{secSITHC} The LS-THC estimator for expressing 2-electron Coulomb integrals using a real-space quadrature approach reads (eq.21 Ref.~\citenum{Parrish12}): \begin{equation} \argmin_{Z} \sum_{\rho,\rho'}\Big|\Big| 
(\rho|\rho') - \sum_{kk'} \rho(\mathbf{r}_k)\cdot Z_{kk'} \cdot \rho'(\mathbf{r}_{k'})\Big|\Big|^2, \label{eq:fit_LSTHC} \end{equation} \noindent Adopting the RI-V approximation to express the $(\rho|\rho')$ Coulomb integrals: \begin{equation} \begin{split} (\rho|\rho') & = \sum_{\beta'} (\rho|\beta' ) \mathcal{F}_{\beta'}^{V} (\rho') \\ & = \sum_{\beta\beta'} \mathcal{F}_{\beta}^{V} (\rho) (\beta|\beta' ) \mathcal{F}_{\beta'}^{V} (\rho') \\ \end{split} \end{equation} \noindent and using the previously introduced matrix notations $[D]_{k\rho}=\rho(\mathbf{r}_k)$, $[F]_{\beta\rho}=\mathcal{F}_\beta^{V} (\rho)$ and writing $[V]_{\beta\beta'}= (\beta|\beta' )$, one obtains: \begin{equation} [ (\rho|\rho')] = F^\dag \cdot V \cdot F \end{equation} \noindent and the estimator in eq.~\ref{eq:fit_LSTHC} reads: \begin{equation} \begin{split} & \argmin_{Z} \Big|\Big|F^\dag \cdot V \cdot F - D^\dag \cdot Z \cdot D \Big|\Big|^2 \\ = & \argmin_{Z} \, tr \Big[ (F^\dag \cdot V \cdot F - D^\dag \cdot Z \cdot D)^\dag \cdot(F^\dag \cdot V \cdot F - D^\dag \cdot Z \cdot D) \Big]\\ \label{eq::LS-THC_trace} \end{split} \end{equation} \noindent Differentiating expression \ref{eq::LS-THC_trace} with respect to $Z$ gives: \begin{equation} \begin{split} & \frac{d}{dZ} \Big|\Big|F^\dag \cdot V \cdot F - D^\dag \cdot Z \cdot D \Big|\Big|^2 \\ = & 2 \Big( D \cdot D^\dag \cdot Z \cdot D \cdot D^\dag - D \cdot F^\dag \cdot V \cdot F \cdot D^\dag \Big) =0 \end{split} \end{equation} \noindent resulting in the following expression for $Z$: \begin{equation} \begin{split} Z & = ( D \cdot D^\dag)^{-1} \cdot D \cdot F^\dag \cdot V \cdot F \cdot D^\dag \cdot ( D \cdot D^\dag)^{-1}\\ & = M^\dag \cdot V \cdot M\\ \end{split} \end{equation} \noindent with $M$ the fitting matrix as defined by equation (8) of our main manuscript: \[ M = F\cdot D^\dag \cdot ( D\cdot D^\dag )^{-1}\label{eq:fit_LSQR_3} \] \noindent We can thus see that the formulation of equation (6) of our main manuscript allows one to recover the LS-THC estimator of eq.~\ref{eq:fit_LSTHC} with $\mathcal{O}(N^3)$ computational effort. \nocite{*}
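The least-squares construction above can be checked numerically. The following self-contained numpy sketch (toy array sizes and names are ours, not the production code of this work) builds the fitting matrix $M = F\cdot D^\dag\cdot(D\cdot D^\dag)^{-1}$ and verifies that, in the square invertible case where the fit is exact, $D^\dag\cdot Z\cdot D$ with $Z = M^\dag\cdot V\cdot M$ reproduces $F^\dag\cdot V\cdot F$:

```python
import numpy as np

# Toy-sized check of the LS-THC / RI-RS fitting matrix and separable kernel.
rng = np.random.default_rng(0)
n_pts = n_aux = n_prod = 6                 # square case: the fit is exact

D = rng.standard_normal((n_pts, n_prod))   # [D]_{k,rho} = rho(r_k)
F = rng.standard_normal((n_aux, n_prod))   # [F]_{beta,rho}: RI-V coefficients
A = rng.standard_normal((n_aux, n_aux))
V = A @ A.T                                # (beta|beta') metric, SPD by construction

M = F @ D.T @ np.linalg.inv(D @ D.T)       # fitting matrix, eq.(8) of the manuscript
Z = M.T @ V @ M                            # separable real-space kernel

approx = D.T @ Z @ D                       # should reproduce F^T . V . F
exact = F.T @ V @ F
print(np.max(np.abs(approx - exact)))      # roundoff-level in this invertible case
```

With more co-densities than points, the same $M$ gives the least-squares optimal $Z$ rather than an exact reproduction.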
\section{Introduction} Although autoregressive models have achieved great success in various NLP tasks and in speech recognition \cite{bahdanau2014neural, chorowski2015attention, chan2016listen, vaswani2017attention, kim2017joint, dong2018speech}, their autoregressive nature results in a large latency during inference \cite{lee2018deterministic}. Most attention-based sequence-to-sequence models generate the target sequence in an autoregressive fashion: they predict the next token conditioned on the previously generated tokens and the source state sequence. By contrast, a non-autoregressive model gets rid of this temporal dependency and is able to perform parallel computation, greatly improving the speed of inference. Non-autoregressive transformers (NAT) have achieved results comparable to autoregressive models in neural machine translation and speech recognition \cite{lee2018deterministic, gu2017non, ma2019flowseq, wang2019non, chen2019non, libovicky2018end, moritz2019triggered}. Different from the autoregressive sequence-to-sequence model, the NAT takes a fixed-length mask sequence as input to predict the target sequence. The setting of this predefined length is very important: if the length is shorter than the actual length, it causes many deletion errors; on the contrary, a longer length causes the model to generate duplicate tokens and consumes additional computation. To the best of our knowledge, there are three ways to estimate the length of the target sequence. Firstly, some works introduce a neural network module behind the encoder to predict the target length \cite{lee2018deterministic, gu2017non, ma2019flowseq}. These methods cannot guarantee the accuracy of the predicted lengths; during inference, it is necessary to sample different lengths to select the optimal sequence. Secondly, \cite{wang2019non, chen2019non} set an empirical (or maximum) length based on the length of the source sequence. 
To guarantee the performance of the model, this length is often much longer than the actual length of the target sequence, which results in extra computational cost and affects the inference speed. Thirdly, \cite{libovicky2018end} utilizes the CTC loss function instead of the cross entropy to optimize the model, which allows the model to generate tokens without calculating the length of the target sequence. However, the characteristics of CTC cause the model to generate some duplicate tokens and a large number of blanks during inference, and this approach does not accelerate the inference speed. \vspace{-2pt} For speech recognition, the number of valid characters or words contained in a piece of speech is affected by various factors such as the speaker's speech rate, silence, and noise. It is unreasonable to set a fixed length only according to the duration of the audio. To estimate the length of the target sequence accurately and accelerate inference, we propose a spike-triggered non-autoregressive transformer (ST-NAT) for end-to-end speech recognition, which introduces a CTC module to predict the length of the target sequence and accelerate convergence. The CTC loss plays three important roles in our proposed model. Firstly, the ST-NAT utilizes the CTC module to predict the length of target sequences. The CTC module generates spike-like label posterior probabilities, and the number of spikes accurately reflects the length of the target sequence \cite{ma2019flowseq, moritz2019streaming}. During inference, the ST-NAT can count the number of spikes to avoid redundant calculations. Secondly, the ST-NAT adopts the encoded states corresponding to the positions of the spikes as the input of the decoder. We assume that the triggered encoded state sequence provides more prior information than a mask sequence, which may improve the performance of the model. 
Thirdly, the ST-NAT adopts the CTC loss as an auxiliary loss to speed up training and convergence \cite{kim2017joint}. Additionally, a non-autoregressive transformer cannot model the inter-dependencies between the outputs. Therefore, we improve the model performance by integrating the output probabilities predicted by the ST-NAT with those of a neural language model. All experiments are conducted on the public Chinese Mandarin dataset AISHELL-1. The results show that the ST-NAT can predict the length of the target sequence accurately and achieves performance comparable to the most advanced end-to-end models. The probability of missing words or characters is less than 2\%. What's more, the model achieves a real-time factor (RTF) of 0.0056, which surpasses all mainstream speech recognition models. \vspace{-2pt} The remainder of this paper is organized as follows. Section 2 describes our proposed spike-triggered non-autoregressive transformer. Section 3 presents our experimental setup and results. The conclusions and future work are given in Section 4. \section{Spike-Triggered Non-Autoregressive Transformer} \subsection{Model Architecture} The spike-triggered non-autoregressive transformer consists of an encoder, a decoder, and a CTC module, as depicted in Fig.~1. Both the encoder and the decoder are composed of multi-head attention layers and feed-forward layers \cite{vaswani2017attention}, similar to the speech transformer \cite{dong2018speech}. As shown in Fig.~1, we put a 2D convolution front end at the bottom of the encoder to pre-process the input speech feature sequences, including dimension transformation (from 40 to 320), time-axis down-sampling, and adding sine-cosine positional information. The multi-head attention (MHA) layer allows the model to focus on information from different positions. Each head $h_i$ is a complete self-attention component. $Q$, $K$ and $V$ represent queries, keys and values respectively, and $d_k$ is the dimension of the keys. 
$W^Q\in\mathbb{R}^{d_m\times{d_q}}$, $W^K \in \mathbb{R}^{d_m\times{d_k}}$, $W^V\in\mathbb{R}^{d_m\times{d_v}}$ and $W^O\in\mathbb{R}^{d_m\times{d_m}}$ are projection parameter matrices. \begin{equation} \label{eq:self-attention} \text{SelfAttn}(Q,K,V)=\text{softmax}(\frac{QK^T}{\sqrt{d_k}})V \end{equation} \begin{equation} \begin{split} \text{MultiHead}(Q,K,V)&=\text{Concat}(h_1,h_2,...h_{n_h})W^O \\ \text{where } h_i =& \text{SelfAttn}(QW_i^Q,KW_i^K,VW_i^V) \end{split} \end{equation} The feed-forward network (FFN) contains two linear layers and a gated linear unit (GLU) \cite{dauphin2017language} activation function \cite{Tian2019, fan2019speaker}. \begin{equation} \text{FFN}(x)=\text{GLU}(xW_1+b_1)W_2+b_2 \end{equation} where the parameters $W_1 \in \mathbb{R}^{d_m\times{2d_{ff}}}$, $W_2\in \mathbb{R}^{d_{ff}\times{d_{m}}}$, $b_1\in\mathbb{R}^{2d_{ff}}$ and $b_2\in\mathbb{R}^{d_{m}}$ are learnable. The sine and cosine positional embeddings proposed by \cite{vaswani2017attention} are applied in all the experiments in this paper. Besides, the model also applies residual connections and layer normalization. The ST-NAT introduces a CTC module to predict the length of the target sequence and accelerate the convergence. The CTC module consists of only a linear projection layer. Most non-autoregressive transformer models adopt a fixed-length sequence filled with '$\langle\textit{MASK}\rangle$' as the input of the decoder. These sequences do not contain any useful information. In fact, a CTC spike is usually located within the range of one specific word. Therefore, the ST-NAT utilizes the encoded states corresponding to the CTC spikes as the input of the decoder. We assume that the triggered encoded state sequence contains some prior information on the target words, which makes the decoding process more purposeful than guessing from an empty sequence.
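As a concrete illustration of the two attention equations above, the scaled dot-product attention and its multi-head combination can be sketched in a few lines of NumPy. The shapes and random weights below are illustrative assumptions, not the trained model parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attn(Q, K, V):
    # SelfAttn(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

def multi_head(x, Wq, Wk, Wv, Wo, n_heads):
    # Wq/Wk/Wv are lists of per-head projection matrices; Wo is (d_m, d_m)
    heads = [self_attn(x @ Wq[i], x @ Wk[i], x @ Wv[i]) for i in range(n_heads)]
    return np.concatenate(heads, axis=-1) @ Wo
```

This mirrors the equations directly: each head projects the shared input with its own $W_i^Q, W_i^K, W_i^V$, and the concatenated heads are mixed by $W^O$.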
\begin{figure}[t] \centering \label{fig:whole} \includegraphics[width=1.0\linewidth]{model3.pdf} \caption{The spike-triggered non-autoregressive transformer has three components: an encoder, a decoder, and a CTC module. The encoder processes the input feature sequences into an encoded state sequence. The CTC module computes spike-like posteriors from the encoded states. The decoder then takes as input the encoded states extracted from the positions corresponding to the spikes. The whole system is trained jointly.} \end{figure} \subsection{Training} It is very important to predict the length of the target sequence accurately. When the predicted length $T^\prime$ is shorter than the target length $T$, there is no doubt that the generated sequence will miss many words or characters, which causes many deletion errors. Conversely, when the predicted length $T^\prime$ is longer than the target length $T$, it costs extra computation and may even generate many duplicate tokens. The ST-NAT can predict the length of the target sequence accurately by counting the number of spikes produced by the CTC module. When the probability that the CTC module generates a non-blank token is greater than the trigger threshold $\beta$, the corresponding trigger position is recorded. This process can be described as follows. \begin{equation} POS(i)= \left\{ \begin{array}{lr} triggered, & 1 - p_b \ge \beta \\ ignored, & 1 - p_b < \beta \end{array} \right. \end{equation} where $POS(i)$ denotes the $i$-th position of the encoder output states, and $p_b$ is the blank probability predicted by the CTC module, so the probability of a non-blank token is $1 - p_b$. The ST-NAT also inserts an end-of-sentence token '$\langle\textit{EOS}\rangle$' into the target sequence to guarantee that the model is still able to generate a correct sequence when the predicted length $T^\prime$ is larger than the target length $T$.
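The trigger rule above amounts to thresholding the CTC blank posteriors and gathering the encoder states at the surviving positions. A minimal sketch (the array names and shapes are illustrative assumptions):

```python
import numpy as np

def trigger_positions(p_blank, beta):
    # a position i is triggered when the non-blank probability 1 - p_b >= beta
    return np.flatnonzero(1.0 - p_blank >= beta)

def triggered_states(enc_states, p_blank, beta):
    # gather encoder states at the spike positions; their count is the
    # predicted target length T'
    pos = trigger_positions(p_blank, beta)
    return enc_states[pos], pos.size
```

The number of gathered rows is the predicted length $T^\prime$, so no fixed-length mask sequence is needed at the decoder input.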
Furthermore, it has been widely shown that the CTC loss function \cite{graves2006connectionist} effectively helps the model accelerate training and convergence \cite{kim2017joint}. It is difficult to train a non-autoregressive model from scratch. Therefore, we use the CTC loss as an auxiliary loss function to optimize the model. \begin{equation} \mathcal{L} = \left\{ \begin{array}{lr} \alpha\mathcal{L}_{CTC} + (1 - \alpha)\mathcal{L}_{CE}, & T^\prime \ge T \\ \mathcal{L}_{CTC}, & T^\prime < T \end{array} \right. \end{equation} where $\mathcal{L}_{CE}$ is the cross entropy loss \cite{de2005tutorial} and $\mathcal{L}_{CTC}$ is the CTC loss. $\alpha$ is the weight of CTC in the joint loss function, $T^\prime$ is the predicted target length, and $T$ is the real target length. If $T^\prime$ is smaller than $T$, the ST-NAT only utilizes the CTC loss to optimize the encoder. Thanks to the CTC module, the ST-NAT can be trained from scratch without any pre-training or other tricks. \subsection{Inference} During inference, we simply select the token with the highest probability at each position. Generating the token '$\langle\textit{EOS}\rangle$' or the last word in the sequence marks the end of the decoding process. A non-autoregressive model cannot capture the temporal dependencies between the output labels, which largely limits the model performance. We therefore also introduce a transformer-based language model into the decoding process; the neural language model compensates for this weakness of the non-autoregressive model. The joint decoding process can be described as \begin{equation} \begin{array}{lr} \hat{y} = \arg \underset{y}{\max}(\log P(y|x)+\lambda \log P_{LM}(y)) \end{array} \end{equation} where $\hat{y}$ is the predicted sequence, $P_{LM}(y)$ is the probability given by the language model, and $\lambda$ is the weight of the language model probabilities.
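The length-dependent loss switch and the language-model fusion rule above can both be expressed as small helper functions. The scalar losses and log-probabilities in the usage are placeholders, not values from the paper:

```python
def joint_loss(loss_ctc, loss_ce, alpha, T_pred, T_target):
    # fall back to the CTC loss alone when the predicted length T' is
    # shorter than the target length T
    if T_pred < T_target:
        return loss_ctc
    return alpha * loss_ctc + (1.0 - alpha) * loss_ce

def fused_score(log_p_model, log_p_lm, lam):
    # shallow fusion: log P(y|x) + lambda * log P_LM(y),
    # maximized over candidate sequences y
    return log_p_model + lam * log_p_lm
```

During decoding, `fused_score` would be evaluated for each candidate hypothesis and the highest-scoring one kept.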
\section{Experiments and Results} \subsection{Dataset} In this work, all experiments are conducted on a public Mandarin speech corpus, AISHELL-1\footnote{http://www.openslr.org/13/}. The training set contains about 150 hours of speech (120,098 utterances) recorded by 340 speakers. The development set contains about 20 hours (14,326 utterances) recorded by 40 speakers. About 10 hours (7,176 utterances / 36,109 seconds) of speech is used as the test set. The speakers of the different sets do not overlap. \subsection{Experiment Setup} For all experiments, we use 40-dimensional FBANK features computed on a 25ms window with a 10ms shift. We chose 4233 characters (including a padding symbol '$\langle\textit{PAD}\rangle$', an unknown symbol '$\langle\textit{UNK}\rangle$' and an end-of-sentence symbol '$\langle\textit{EOS}\rangle$') as model units. Our proposed model and baseline models are built on OpenTransformer\footnote{https://github.com/ZhengkunTian/OpenTransformer}. The ST-NAT model consists of 6 encoder blocks and 6 decoder blocks. There are 4 heads in multi-head attention. The 2D convolution front end utilizes a two-layer time-axis CNN with ReLU activation, stride size 2, 320 channels, and kernel size 3. The output size of the multi-head attention and the feed-forward layers is 320. We adopt an Adam optimizer with 12000 warmup steps and the learning rate scheduler reported in \cite{vaswani2017attention}. After 80 epochs, we average the parameters saved in the last 20 epochs. We also use the time mask and frequency mask method proposed in \cite{park2019specaugment} for the baseline transformer, SAN-CTC, and all non-autoregressive models. During inference, we use a beam search with a width of 5 for the baseline Transformer model, the SAN-CTC model and the ST-NAT with a language model. We use the character error rate (CER) to evaluate the performance of the different models.
For evaluating the inference speed of the different models, we decode utterances one by one to compute the real-time factor (RTF) on the test set. The RTF is the time taken to decode one second of speech. All experiments are conducted on a GeForce GTX TITAN X 12G GPU. \subsection{Results} \subsubsection{Explore the effects of different weights and trigger thresholds.} We train the ST-NAT model with different CTC weights $\alpha$ and trigger thresholds $\beta$ from scratch. As shown in Table~\ref{tab:trigger}, the ST-NAT model with CTC weight 0.7 and trigger threshold 0.3 achieves a CER of 7.66\% on the test set. At the same threshold, the ST-NAT with weight 0.6 achieves the best performance on the development set. The CTC weights and trigger thresholds affect the performance of the model in different respects: the CTC weight $\alpha$ balances the CTC trigger module against the decoder, while the trigger threshold $\beta$ determines how many encoder states are triggered. Both play important roles in the performance of the model. \begin{table}[t] \caption{Comparison of the model with different CTC weights $\alpha$ and trigger thresholds $\beta$. We evaluate the CER(\%) on the development and test sets, respectively.} \label{tab:trigger} \centering \begin{tabular}{c|c|c|c|c} \toprule \textbf{CTC} & \multicolumn{4}{c}{\textbf{Trigger Threshold} $\bm{\beta}$} \\ \cline{2-5} \textbf{Weight} $\bm{\alpha}$&0.1&0.3&0.5&0.7\\ \hline 0.1&7.67/8.66&7.56/8.50&7.66/8.45&7.56/8.50\\ 0.3&7.37/8.14&7.25/8.12&7.26/8.19&7.21/8.01\\ 0.5&7.06/7.97&7.10/7.88&7.38/8.14&7.30/8.12\\ 0.6&7.06/7.88&\textbf{6.88}/7.67&7.05/7.77&7.01/7.70\\ 0.7&7.26/8.05&6.91/\textbf{7.66}&7.03/7.87&7.39/8.02\\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \caption{Comparison of the effects of different trigger thresholds on the inference speed.
We record the time that the ST-NAT spends decoding the test set and calculate the real-time factor.} \label{tab:rtf} \centering \begin{tabular}{c|cccc} \toprule \textbf{Threshold} $\bm{\beta}$ & \textbf{0.1} & \textbf{0.3} & \textbf{0.5} & \textbf{0.7} \\ \hline Performance & 7.88 & 7.67 & 7.77 & 7.70 \\ Seconds & 212.04 & 202.59 & 200.62 & 198.44 \\ RTF & 0.0059 & 0.0056 & 0.0055 & 0.0054 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Explore the effects of different trigger thresholds on the inference speed.} We evaluate our ST-NAT with different trigger thresholds on the inference speed. All the ST-NAT models are trained with a CTC weight of 0.6. It is clear from Table~\ref{tab:rtf} that the larger the threshold, the faster the model decodes an utterance. When the trigger threshold is 0.7, the model achieves an RTF of 0.0054, which means the model has a latency of only about 20 milliseconds. However, a large threshold does not guarantee the best performance: a large trigger threshold may cause the predicted length generated by the CTC trigger to be shorter than the target length, which in turn hurts the performance of the model. Fortunately, the different trigger thresholds have only a negligible effect on the inference speed. \subsubsection{Analysis of the trigger mechanism.} We analyze the spike-triggered non-autoregressive transformer from the following two perspectives. On the one hand, we explore the relationship between the length predicted by the model and the target length, as shown in Fig.2. The histogram records the difference between the target length and the predicted length. When the value is less than or equal to zero, the predicted length is greater than or equal to the target length. This does not cause irreversible effects, since the decoder is still able to predict an end-of-sentence token. We find that the vast majority of predicted lengths have no errors at all.
What's more, the probability of missing words or characters is even less than 2\%. For most weights (0.3, 0.5 and 0.7), the maximum prediction error does not exceed 4. Therefore, we conclude that the CTC module can predict the length of the target sequence almost exactly. However, if the value is larger than zero, the model will miss some words permanently. We can fix this problem by adding a padding bias to the predicted length. On the other hand, Fig.\ref{fig:ctc} shows the relationship between the trigger positions and the word pronunciation boundaries. There is no triggered spike within the range of silence. Within the scope of the last pronounced word, there are two triggered spikes, because we also take an end-of-sentence token into consideration during training. It is clear that each spike falls within the boundary of a word. Therefore, our assumption that the triggered encoded state sequence contains more prior information on the target sequence is reasonable. It is also clear from Fig.\ref{fig:attn} that the ST-NAT model aligns the target sequence well to the encoded state sequence. What's more, the center of the alignment position almost coincides with the trigger position, which again verifies our assumption. \begin{figure}[t] \centering \label{fig:len} \includegraphics[width=\linewidth]{length2.pdf} \caption{The analysis of the predicted length. The histogram shows the difference between the target length and the predicted length.} \end{figure} \vspace{-10pt} \begin{figure}[t] \centering \subfigure[The relationship between triggers and word boundaries]{ \label{fig:ctc} \includegraphics[width=\linewidth]{ctc_trigger.pdf} } \subfigure[Attention mechanism visualization]{ \label{fig:attn} \includegraphics[width=\linewidth]{attn.pdf} } \caption{We visually analyze the test set sentence 'BAC009S0764W0149'. (a) The line chart shows the relationship between trigger positions and character pronunciation boundaries.
The dotted lines indicate the pronunciation boundaries, and the spikes present the spike-like posterior probability of the CTC module. (b) is from the 4th source attention mechanism of the decoder.} \vspace{-10pt} \end{figure} \begin{table}[t] \caption{Comparison with other models in performance and real-time factor.} \label{tab:models} \centering \begin{threeparttable} \begin{tabular}{c|ccc} \toprule \textbf{Model} & \textbf{DEV} & \textbf{TEST} & \textbf{RTF} \\ \hline TDNN-Chain (Kaldi) \cite{povey2016purely} & - & 7.45 & - \\ LAS\cite{8682490} & - & 10.56 & - \\ Speech-Transformer * & 6.57 & 7.37 & 0.0504 \\ SA-Transducer $\dagger$ \cite{Tian2019} & 8.30 & 9.30 & 0.1536 \\ SAN-CTC * \cite{salazar2019self} & 7.83 & 8.74 & 0.0168 \\ Sync-Transformer $\dagger$ \cite{tian2019synchronous} & 7.91 & 8.91 & 0.1183 \\ NAT-MASKED * \cite{chen2019non} & 7.16 & 8.03 & 0.0058 \\ ST-NAT(ours) & 6.88 & 7.67 & \textbf{0.0056} \\ ST-NAT+LM(ours) & \textbf{6.39} & \textbf{7.02} & 0.0292 \\ \bottomrule \end{tabular} \begin{tablenotes} \item[*] These models are re-implemented by ourselves according to the papers. \item[$\dagger$] We supplement the RTF of our previous two models. \end{tablenotes} \end{threeparttable} \vspace{-5pt} \end{table} \subsubsection{Comparison with other models.} We also compare our proposed ST-NAT model with various mainstream models, e.g., a traditional model, a CTC-based model, a transducer model, and attention-based sequence-to-sequence models. Under the same training conditions and with the same model parameters, we train a Speech-Transformer \cite{dong2018speech}, NAT-MASKED \cite{chen2019non}, and our proposed ST-NAT model, where the Speech-Transformer applies a beam search with beam width 5 to decode utterances. From Table~\ref{tab:models}, we find that the ST-NAT model achieves comparable performance with the advanced Speech-Transformer model \cite{dong2018speech} and the TDNN-Chain model \cite{povey2016purely}, and is better than LAS.
From another perspective, the ST-NAT has the fastest inference speed among them, only about 1/10 that of the Speech-Transformer. The ST-NAT with a transformer language model achieves the best CER of 7.02\% on the test set with an RTF of 0.0292. Compared with streaming end-to-end models, e.g., SAN-CTC \cite{salazar2019self}, Sync-Transformer \cite{tian2019synchronous}, and SA-Transducer \cite{Tian2019}, the ST-NAT achieves not only the best performance but also the fastest inference speed. We suppose this is because the ST-NAT decodes an utterance with full context and without temporal dependencies. By contrast, we also re-implement a NAT-MASKED model in a BERT-like way \cite{chen2019non}, which adopts a fixed-length (set to 60) mask sequence as the input. The NAT-MASKED model has the same parameters as our ST-NAT except for the CTC module. We find that the ST-NAT achieves better performance; we conjecture that it is difficult for the masked model to learn to predict the target words (or characters) and the target length jointly. Both models have a very close inference speed. \section{Conclusions and Future Work} To estimate the length of the target sequence accurately and accelerate the inference speed, we proposed a spike-triggered non-autoregressive transformer (ST-NAT) for end-to-end speech recognition, which introduces a CTC module to predict the target length and accelerate the convergence. The ST-NAT adopts the encoded states corresponding to the positions of the spikes as the input of the decoder. In the inference process, the ST-NAT counts the number of spikes to avoid redundant calculations. We conduct all experiments on a public Chinese Mandarin dataset, AISHELL-1. The results show that the CTC module can accurately predict the length of the target sequence. The ST-NAT model achieves comparable performance with the advanced speech transformer model, while having a real-time factor of 0.0056, which exceeds all mainstream models.
What's more, the ST-NAT with a language model can still have a very high inference speed. In the future, we will try to utilize the CTC module for joint decoding to improve the performance of the model during inference. \bibliographystyle{IEEEtran}
\subsection{Emission and absorption} The atmosphere interferes with microwave-band observations for two main reasons. On the one hand, some components of the atmosphere, for example oxygen molecules and water vapor molecules, absorb microwave radiation; on the other hand, the movement of these molecules generates additional emission. Oxygen molecules have two absorption bands in the microwave region, with wavelengths of 4-6 mm (frequencies of 46.45-71.05 GHz) and 2.53 mm (frequency of 118.75 GHz); water vapor molecules also have two absorption bands in the microwave region, with absorption spectral lines at 1.35 cm (frequency of 22.235 GHz) and 1.64 mm (frequency of 183 GHz). Therefore, the windows for terrestrial CMB experiments are mainly distributed below 50 GHz, near 100 GHz, near 220 GHz, and near 270 GHz. The ongoing CMB telescopes have been built at the driest places on Earth. So far, there are four established sites: the South Pole in Antarctica (2835\;m) and the Chajnantor Plateau in Chile (4990\;m) in the southern hemisphere, and Summit Station in Greenland (3216\;m) and the Ali observatory in Tibet (5250\;m) in the northern hemisphere. The PWV values remain very low at these sites; the median values measured by MERRA-2 during the observation season are usually below $\rm 1\;mm$\cite{2017arXiv170909053L,2017ApJ...848...64K}. On the other hand, the strong emission of the atmosphere also poses a serious challenge for ground-based CMB observations. It adds a term $E(\nu)$ to the total background emission loading the detectors. The atmospheric temperature near the Earth's surface is about 280 K\cite{2015ApJ...809...63E}, which contributes about $\rm 20\;K$ to the total background emission at $\rm 150 \; GHz$.
The additional term $E(\nu)$ can be written as \begin{equation} E(\nu)= [1-T(\nu)] B_{\nu}(T_{atm}), \end{equation} where $T_{atm} \sim 280\,$K is the atmospheric temperature, $B_{\nu}(T_{atm})$ is the thermal-equilibrium blackbody spectrum at the atmospheric temperature $T_{atm}$, and $T(\nu)$ is the transmittance of the atmosphere at frequency $\nu$. For the detector, the atmospheric background emission manifests mainly as white noise. \iffalse \begin{table}[h!] \centering \caption{The PWV of four sites} \begin{tabular}{l c c c c} \hline \hline & \multirow{2}{*}{elev.(m)} & & PWV(mm) & \\ \cline{3-5} & & 25\% & 50 \% & 75 \% \\ \hline South Pole & 2835 & 0.349 & 0.461 & 0.600 \\ Ali1 & 5250 & 0.712 & 0.997 & 1.325 \\ Ali2 & 6100 & 0.356 & 0.564 & 0.850 \\ Chajnantor & 5190 & 0.532 & 0.806 & 1.228 \\ Greenland & 3216 & 0.737 & 1.086 & 1.439 \\ \hline \end{tabular} \label{table:noise} \end{table}\fi \subsection{Fluctuation} A typical atmospheric physical temperature above a site is about $280$ K, leading to a background emission of $\sim 20$ K\cite{2015ApJ...809...63E}. In addition to the photon noise on the detector induced by the atmospheric background emission, which is directly related to the PWV level, the non-uniformity of the emission caused by atmospheric fluctuations, e.g. clouds, turbulence, etc., affects the detection of CMB photons. Along the line of sight, the turbulence of the atmospheric emission is recorded in the time stream of the detector readings and results in a strong correlation between the time streams of adjacent detectors. The signal of atmospheric fluctuation is predominantly distributed at low frequencies, where its power spectral density is inversely proportional to frequency, and it is often referred to as $1/f$ noise\cite{2020MNRAS.491.4254C,2016arXiv160609584H}. This scale-dependent noise contamination, unlike white noise, cannot be averaged out by multiple measurements or stacking and needs to be filtered out.
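A quick numerical check of the emission term $E(\nu)=[1-T(\nu)]\,B_{\nu}(T_{atm})$ from Section 2.1 can be sketched with the Planck function. The transmittance value in the usage is an illustrative assumption, chosen only to reproduce a loading of roughly 20 K at 150 GHz:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]

def planck(nu, T):
    # blackbody spectral radiance B_nu(T) [W m^-2 Hz^-1 sr^-1]
    x = H * nu / (KB * T)
    return 2.0 * H * nu**3 / C**2 / np.expm1(x)

def atm_emission(nu, transmittance, T_atm=280.0):
    # E(nu) = [1 - T(nu)] B_nu(T_atm)
    return (1.0 - transmittance) * planck(nu, T_atm)

def brightness_temperature(E, nu):
    # Rayleigh-Jeans equivalent temperature of the emission
    return E * C**2 / (2.0 * KB * nu**2)
```

For example, an assumed transmittance of 0.93 at 150 GHz yields a Rayleigh-Jeans brightness temperature near the $\sim 20$ K loading quoted in the text.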
For the ACT experiment, the median atmospheric temperature drift is roughly $220$ mK over a 15-minute observation, with a PWV value below $\rm 1\;mm$ on a good observing night\cite{2013ApJ...762...10D}. Atmospheric emission fluctuations are correlated at different spatial scales\cite{2018RPPh...81d4901S}. The structure of atmospheric turbulence can be modeled by the Kolmogorov model, given by the formula $P( k)\propto k^m$, where $k$ denotes the modes of atmospheric turbulence and $m=-\frac{11}{3}$\cite{2005ApJ...622.1343B,2000ApJ...543..787L}. Based on the Kolmogorov model, many articles have studied the correlation of the atmospheric emission between different spatial modes. Church et al. \cite{1995MNRAS.272..551C} discussed the influence of the wind speed and the typical scale of turbulence on the PSD of atmospheric emission fluctuations, based on a three-dimensional model. Lay \& Halverson\cite{2000ApJ...543..787L} established a two-dimensional atmospheric model. On the basis of Church's research, Errard\cite{2015ApJ...809...63E} studied how scanning strategies and atmospheric conditions affect the atmospheric correlation in detector data streams, and derived a correlation function that gives a quantitative description of this effect. These models have been verified in real experimental observations. For example, the ACT experiment\cite{2013ApJ...762...10D} gives the relationship between the power spectrum of the TOD and the frequency in the form of a power law, $P(f)\sim f^{-\beta}$, where $f$ is the frequency and $\beta = 1 \sim 3.5$. In the ACT experiment, the higher the PWV value of the atmosphere, the larger the value of $\beta$, which is also consistent with the model. The correlation property of the atmospheric emission fluctuations indicates that the atmosphere-induced noise at low frequencies cannot be removed effectively by accumulating the data of different detectors, and specific filters are needed in the data processing.
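The power-law behaviour $P(f)\sim f^{-\beta}$ of the atmospheric $1/f$ noise can be mimicked by drawing random-phase Fourier modes with the desired spectral slope. This is a toy generator for test time streams, not a full Kolmogorov simulation; the length, sample rate, and seed are placeholders:

```python
import numpy as np

def power_law_noise(n, beta, fs=1.0, seed=0):
    # draw random-phase Fourier modes with |a(f)|^2 proportional to f^(-beta)
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)   # skip the DC mode
    spec = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=freqs.size))
    return np.fft.irfft(spec, n=n)
```

The resulting stream has its power concentrated at low frequencies, which is exactly why simple averaging over samples does not remove it.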
\subsection{Polynomial filter } Polynomial fitting is a widely used method in ground-based CMB experiments\cite{2018ApJ...852...97H,2019ApJ...870..102T,2010ApJ...711.1123C,2011ApJ...741...81B,2013ApJ...765...64M}. With the least-squares fitting method, we can determine the optimal fitting curve by minimizing the sum of squared errors between the fitting curve and the data. Given a data set $(x_i,y_i ),i=1,2,\ldots,m,$ fitted with a polynomial $P(x_i)=a_0+a_1 x_i+a_2 x_i^2+\dotsb+a_nx_i^n$ of order $n$, we can derive a system of linear equations: \begin{equation} \begin{cases} \begin{array}{l} P_n\left( x_1 \right) =a_0+a_1x_1+a_2x_{1}^{2}+...+a_nx_{1}^{n}\\ P_n\left( x_2 \right) =a_0+a_1x_2+a_2x_{2}^{2}+...+a_nx_{2}^{n}\\ \end{array}\\ \,\, \vdots\\ P_n\left( x_m \right) =a_0+a_1x_m+a_2x_{m}^{2}+...+a_nx_{m}^{n}\\ \end{cases} \end{equation} The sum of squared errors between the polynomial and the data set is: \begin{equation} J=\sum_{i=1}^m{\left[ P\left( x_i \right) -y_i \right] ^2} \end{equation} Written in matrix form, it follows: \begin{equation} \begin{aligned} J&=\left( \left[ \begin{matrix} 1& x_1& x_{1}^{2}& ...& x_{1}^{n}\\ 1& x_2& x_{2}^{2}& ...& x_{2}^{n}\\ 1& x_3& x_{3}^{2}& ...& x_{3}^{n}\\ \vdots& \vdots& \vdots& \ddots& \vdots\\ 1& x_m& x_{m}^{2}& ...& x_{m}^{n}\\ \end{matrix} \right] \cdot \left[ \begin{array}{l} a_0\\ a_1\\ a_2\\ \vdots\\ a_n\\ \end{array} \right] -\left[ \begin{array}{l} y_1\\ y_2\\ y_3\\ \vdots\\ y_m\\ \end{array} \right] \right) ^T\cdot \left( \left[ \begin{matrix} 1& x_1& x_{1}^{2}& ...& x_{1}^{n}\\ 1& x_2& x_{2}^{2}& ...& x_{2}^{n}\\ 1& x_3& x_{3}^{2}& ...& x_{3}^{n}\\ \vdots& \vdots& \vdots& \ddots& \vdots\\ 1& x_m& x_{m}^{2}& ...& x_{m}^{n}\\ \end{matrix} \right] \cdot \left[ \begin{array}{l} a_0\\ a_1\\ a_2\\ \vdots\\ a_n\\ \end{array} \right] -\left[ \begin{array}{l} y_1\\ y_2\\ y_3\\ \vdots\\ y_m\\ \end{array} \right] \right) \\ &=\left( Xa-Y \right) ^T\left( Xa-Y \right) .
\end{aligned} \end{equation} where \begin{equation} X=\left[ \begin{matrix} 1& x_1& x_{1}^{2}& ...& x_{1}^{n}\\ 1& x_2& x_{2}^{2}& ...& x_{2}^{n}\\ 1& x_3& x_{3}^{2}& ...& x_{3}^{n}\\ \vdots& \vdots& \vdots& \ddots& \vdots\\ 1& x_m& x_{m}^{2}& ...& x_{m}^{n}\\ \end{matrix} \right] ,\;\;\;a=\left[ \begin{array}{l} a_0\\ a_1\\ a_2\\ \vdots\\ a_n\\ \end{array} \right] ,\;\;\;Y=\left[ \begin{array}{l} y_1\\ y_2\\ y_3\\ \vdots\\ y_m\\ \end{array} \right] , \end{equation} and $X$ is known as the Vandermonde matrix. The final equation for estimating the coefficients that minimize the total error can be written as: \begin{equation} \begin{aligned} \frac{\partial J}{\partial a}&=\frac{\partial \left[ \left( Xa-Y \right) ^T\left( Xa-Y \right) \right]}{\partial a}\\ &=X^TXa-X^TY\\ &=0.\\ \end{aligned} \end{equation} From this we derive the polynomial coefficients: \begin{equation} a=\left( X^TX \right) ^{-1}X^TY. \end{equation} When applying polynomial fitting in the CMB data processing procedure, we first determine a proper polynomial order, then calculate the polynomial coefficients to obtain the fitting curve, and finally subtract it from the TOD to get the filtered data. \subsection{Wiener filter} Among linear filters using statistical information from the data, the generalized Wiener filter is widely used in data processing as the maximum a posteriori solution given the signal and noise statistics\cite{2013A&A...549A.111E,2019MNRAS.490..947K}. We discuss the Wiener filter solution of this problem in the frequency domain. Given an input $Y(f)$ at frequency $f$, after the filtering operation $W(f)$, the output signal can be expressed as $\hat{X}(f)=W(f)Y(f)$. The error, defined as the difference between the desired signal and the filtered output, is a key quantity for describing the efficiency of the filtering, and it can be written as \begin{equation} \begin{aligned} \mathscr{E}=X(f)-\hat{X}(f)=X(f)-W(f)Y(f).
\end{aligned} \end{equation} The smaller the error, the better the filtering effect, so by minimizing the error we obtain the best filtering of the noise. We define the mean square error (MSE): \begin{footnotesize} \begin{equation} \begin{aligned} \mathbb{E}[|\mathscr{E}(f)|^2]&=\mathbb{E}[(X(f)-W(f)Y(f))^*(X(f)-W(f)Y(f))]\\ &=\mathbb{E}[|X(f)|^2]-W(f)\mathbb{E}[Y(f)X^*(f)]-W^*(f)\mathbb{E}[X(f)Y^*(f)]+|W(f)|^2\mathbb{E}[|Y(f)|^2 ]\\ &=P_{XX}-W(f)P_{YX}-W^*(f)P_{XY}+|W(f)|^2P_{YY}. \end{aligned} \end{equation} \end{footnotesize} To minimize the MSE, we take the derivative of the above equation with respect to $W^*(f)$ and set it to zero: \begin{equation} \begin{aligned} \frac{\partial \mathbb{E}[|\mathscr{E}(f)|^2]}{\partial W^*(f)}=W(f)P_{YY}(f)-P_{XY}(f)=0, \end{aligned} \end{equation} where $P_{YY}(f)=\mathbb{E}[Y(f)Y^*(f)]$ and $P_{XY}(f)=\mathbb{E}[X(f)Y^*(f)]$ are the power spectral density (PSD) of the input signal $Y(f)$ and the cross-PSD of $X(f)$ and $Y(f)$, respectively. From the above equation, we finally obtain the Wiener filter with minimum MSE: \begin{equation} \begin{aligned} W(f)=P_{XY}(f)/P_{YY}(f). \end{aligned} \end{equation} Since there is no inherent correlation between the atmospheric emission and the CMB signal, we have the following relations: \begin{equation} \begin{aligned} P_{XY}(f)&=P_{XX}(f), \\ P_{YY}(f)&=P_{XX}(f)+P_{NN}(f), \end{aligned} \end{equation} where $P_{XX}(f)$ and $P_{NN}(f)$ are the PSDs of the desired signal $X(f)$ and the atmospheric emission $N(f)$. The final Wiener solution can be written as: \begin{align} W(f) &= P_{XX}(f)/(P_{XX}(f)+P_{NN}(f)) \\ \nonumber &= 1/(1+\rho^{-1}(f)), \end{align} where $\rho(f) = P_{XX}(f)/P_{NN}(f)$ is the signal-to-noise ratio.
Therefore, the estimated value after Wiener filtering is: \begin{equation} \begin{aligned} \hat X(f)=W(f)Y(f)=P_{XX}(f)/(P_{XX}(f)+P_{NN}(f))\,Y(f), \end{aligned} \end{equation} where $P_{XX}(f)$ is estimated from the TOD without the atmospheric fluctuation, and $P_{XX}(f)+P_{NN}(f)$ is estimated from the total TOD itself. \subsection{High-pass filter} A high-pass filter performs noise removal in frequency space by cutting off the signal in a specific frequency range that is considered to be dominated by noise. Here a cut-off frequency needs to be set to define the frequency limits for passing and blocking. The extent to which low frequencies are attenuated depends on the design of the filter. The transfer function of the ideal high-pass filter is a step function of the following form: $$ w(f)=\left\{ \begin{array}{ll} 0 \;\;\;f<f_{cut-off},\\ 1 \;\;\;f>f_{cut-off} \end{array} \right. $$ A high-pass filter is well suited to ground-based CMB observation time streams for filtering out atmospheric noise. Atmospheric emission dominates the loading of the detectors, and it is difficult to remove it from the data in the time domain, especially when the signal-to-noise ratio is very low. However, in frequency space, since the atmospheric noise is mainly concentrated in the low frequency range, we can easily reduce it with a high-pass filter. In Section 5.1, we apply a high-pass Butterworth filter to the TOD to suppress atmospheric noise.
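Both frequency-domain filters reduce to multiplying the TOD spectrum by a weight: $W(f)=P_{XX}/(P_{XX}+P_{NN})$ for the Wiener filter, and a step function for the ideal high-pass. A minimal sketch (the sampling rate, cut-off, and PSD estimates are placeholders; the Butterworth filter used in Section 5.1 would replace the ideal step):

```python
import numpy as np

def ideal_highpass(tod, fs, f_cut):
    # ideal step filter: zero every Fourier mode below the cut-off frequency
    spec = np.fft.rfft(tod)
    freqs = np.fft.rfftfreq(tod.size, d=1.0 / fs)
    spec[freqs < f_cut] = 0.0
    return np.fft.irfft(spec, n=tod.size)

def wiener_filter(tod, P_signal, P_total):
    # W(f) = P_XX / (P_XX + P_NN); P_total = P_XX + P_NN estimated from the TOD
    spec = np.fft.rfft(tod)
    W = P_signal / np.maximum(P_total, 1e-30)
    return np.fft.irfft(W * spec, n=tod.size)
```

Applied to a mixture of a low-frequency drift and a higher-frequency signal, the ideal high-pass removes the drift exactly when the cut-off sits between the two.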
\iffalse \begin{enumerate} \item \textbf{Beam mapping:} \item \textbf{Optical efficiency:} \item \textbf{Band pass and spectra response:} \item \textbf{....:} \end{enumerate} \fi \section{Introduction}\label{sec:intro} \input{introduction.tex} \section{Atmosphere effects}\label{sec:atmosphere-rev} \input{atm-review.tex} \section{Filters}\label{sec:metho} \input{filter-intro.tex} \section{Data Simulations and processing}\label{sec:simu} \input{simulation.tex} \section{Analysis and results}\label{sec:result} \input{results.tex} \section{Summary and discussion} \input{conclusion.tex} \acknowledgments We thank Dr. Zi-Rui Zhang, Zi-Xuan Zhang, Yi-Ming Wang, and Hua Zhai for useful discussions. This study is supported in part by the National Key R\&D Program of China No.2020YFC2201600 and No.2021YFC2203100, and by the NSFC No.11653002. \nocite{*} \subsection{Analysis at the level of time ordered data} The filtering operation is usually performed on the time-ordered data series, so we can evaluate a filter's performance by comparing the signal and noise components of the TOD before and after the filtering operation. We define the RMSE parameter as follows: \begin{equation} E=\left( \frac{1}{N}\sum^{N}_{i=1}(\hat x_i-x_i)^2\right) ^{\frac{1}{2}} \end{equation} where $\hat x_i$ is the $i$-th estimated value of the desired signal, and $x_i$ is the corresponding value of the input simulated data. It gives the root mean square of the difference between the filter output and the input signal over the $N$ simulated data points. A smaller RMSE value means the output of the filter is closer to the desired signal. \begin{figure}[bthp] \begin{center} \includegraphics[width=0.8\textwidth]{figures/atm_noise/rmse_low.pdf} \end{center} \caption{The RMSE of three filters at low noise level.
The red, blue, and green lines show the RMSE in the TOD domain for the polynomial filter, high-pass filter, and Wiener filter, respectively.}\label{fig:rmse_low} \end{figure} \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/rmse_high.pdf} \end{center} \caption{The RMSE of three filters at high noise level. }\label{fig:rmse_high} \end{figure} In Figure \ref{fig:rmse_low} and Figure \ref{fig:rmse_high}, we show the RMSE of the three filters for the low and high noise level cases, respectively. The results show that the polynomial fitting gives the largest RMSE value for both noise level cases, followed by the Wiener filter, with the smallest value given by the high-pass filter. For the two white noise levels, $1000\,\mu K\sqrt{s}$ and $1400\,\mu K\sqrt{s}$, the RMSE values of all three filters increase with the noise level. The main reason is that white noise interferes with the efficiency of the filters. The increase in the RMSE of the polynomial fitting is more pronounced than for the high-pass and Wiener filters, indicating that the performance of the polynomial fitting degrades significantly as the noise level increases. To compare the filtering efficiency more intuitively in the TOD domain, we also compute the ratio of the RMSE of the input TOD to that of the output TOD, which gives a more direct indication of the filtering efficiency. The ratio is defined as follows: \begin{equation} \mathrm{ratio} = \frac{\mathrm{RMSE(input)}}{\mathrm{RMSE(output)}}~. \end{equation} The distributions of the ratio for the low and high noise levels are shown in Figure \ref{fig:snr}, with the polynomial filter having significantly smaller values than the other two. The ratio characterizes the amount of noise reduction achieved on the TOD data after applying the filters, so all ratio values should normally be greater than one.
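The RMSE and ratio defined above reduce to a few lines of numpy (the function names are ours):

```python
import numpy as np

def rmse(estimate, truth):
    """Root-mean-square error between a filtered TOD and the input signal."""
    estimate, truth = np.asarray(estimate), np.asarray(truth)
    return np.sqrt(np.mean((estimate - truth) ** 2))

def filtering_ratio(tod_in, tod_out, truth):
    """RMSE(input)/RMSE(output); values > 1 mean the filter reduced noise."""
    return rmse(tod_in, truth) / rmse(tod_out, truth)
```

For example, a filter that shrinks the noise component by a factor of ten yields a ratio of ten.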
The results show that all three filters improve the data quality to some extent, with the high-pass and Wiener filters performing better. Especially when the noise is relatively high, on average they reduce the RMSE by a factor of 20, which is more than four times better than the polynomial filter. \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/snr.pdf} \end{center} \caption{The distributions of the ratio for the low noise (left) and high noise (right) levels. }\label{fig:snr} \end{figure} \subsection{Analysis in the map domain} Data processing at the TOD level is not the end product of a CMB experiment; the measurement of the CMB angular power spectra, including the estimation of their variance, is the final quantity of interest. Therefore, we further map the filtered TOD and analyze the effect of the filters on the angular power spectrum at the map level. We focus on the temperature maps and ignore polarization in this paper. \subsubsection{Map difference after filters} We perform map making by projecting the filtered TOD into maps and trace the effects of the filters in the map domain. \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/mapdifflow.pdf} \end{center} \caption{The difference between the output and the input after filtering at low noise levels, for the polynomial filter (left), high-pass filter (centre) and Wiener filter (right), respectively. }\label{fig:mapdifflow} \end{figure} \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/mapdiffhigh.pdf} \end{center} \caption{The difference between the output and the input after filtering at high noise levels, for the polynomial filter (left), high-pass filter (centre) and Wiener filter (right), respectively.}\label{fig:mapdiffhigh} \end{figure} In Figure \ref{fig:mapdifflow} we plot the difference between the input data and the output data after the filtering operation at low noise levels.
This difference is mainly the noise residual, but it also includes part of the variation of the signal. From left to right are the results given by the polynomial fit, the high-pass filter and the Wiener filter, respectively. As can be seen in the leftmost panel, for the polynomial fit many large-scale streak structures are evident in the map, which is typical of the leakage caused by filtering atmospheric radiation, indicating that the noise residuals left by the polynomial fit are very visible at large scales. For the high-pass filter, noise residuals on large scales also exist, but at a much lower amplitude than for the polynomial fit. The Wiener filter gives a map that does not show any particularly pronounced deviation from the structure of the CMB temperature fluctuations. In Figure \ref{fig:mapdiffhigh}, the results are given at high noise levels. As the noise level increases, the noise residuals after filtering also increase significantly. Those from the polynomial fit are clearly higher than those of the high-pass and Wiener filters, with the residuals of the Wiener filter being more uniform and smaller in amplitude than those of the high-pass filter. \subsubsection{Noise and signal residuals after filtering} In this section we quantify the noise-removal efficiency of the filters. The simulated noise includes the $1/f$ noise of atmospheric radiation and white noise. The main goal of the filtering is to remove the atmospheric $1/f$ noise, which dominates the low frequency domain, as completely as possible; at the same time, the filters also suppress the white noise to some extent. \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/noise_residual.pdf} \end{center} \caption{The angular power spectrum of the noise residual for the three filters.
The gray solid line is the input noise power spectrum without any filter operation, and the red dotted line, green dashed line, and blue dot-dashed line are the residual noise power spectra after applying the polynomial filter, high-pass filter and Wiener filter, respectively. The left and right panels are for the low and high noise levels, respectively.}\label{fig:noise_res} \end{figure} The residual noise power spectra at the two different noise levels are shown in the left and right panels of Figure \ref{fig:noise_res}, respectively. It can be seen that for the low noise level of $10^3\mu K \sqrt{s}$, the high-pass and Wiener filters effectively reduce the noise spectrum by about three to four orders of magnitude at large scales ($l<100$). The polynomial filter is more modest: it mainly removes $1/f$ noise on very large scales of $l<11$, with a significantly smaller suppression. For the high noise level of $1400\mu K \sqrt{s}$, the three filters give similar results, that is, the Wiener and high-pass filters suppress the noise spectrum by five to six orders of magnitude on very large scales, while the polynomial filter is much less effective. As the noise level increases, the efficiency of the Wiener and high-pass filters increases significantly; for the polynomial fitting, although its order is increased from 15 to 20, the noise reduction does not reach the level of the high-pass and Wiener filters, and the residual noise is relatively high. Since the TOD is a linear combination of signal and noise, we expect a filter to affect the input signal in a similar way as the noise, i.e., the filter also removes part of the signal while suppressing the noise. The fidelity of the signal for the three filters is presented in Figure \ref{fig:snl_res}, in which the suppression of the signal is clearly visible and shows a similar trend for both noise levels.
The numerical calculation shows that the polynomial fitting causes the least damage to the signal spectrum, as seen from the curves. The high-pass filter, however, causes significant suppression of the signal on large scales of $l<100$ at the low noise level, and shows the same trend in the high noise case, with heavy suppression occurring on scales of $l<200$. Unlike the polynomial fitting and high-pass filters, the Wiener filter suppresses the signal at almost all scales. \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/snl_residual.pdf} \end{center} \caption{The angular power spectrum of the signal residual for the three filters. The gray solid line is the input signal power spectrum, and the red dotted line, green dashed line, and blue dot-dashed line are the residual signal power spectra after applying the polynomial filter, high-pass filter and Wiener filter, respectively. The left and right panels are for the low and high noise levels, respectively. }\label{fig:snl_res} \end{figure} \subsubsection{Recovery of filtered signals} The suppression factor of the input signal caused by the filtering can be estimated statistically from a number of simulations of the filtering operation, so that the signal can be recovered by dividing by this factor. In this study, we applied the filtering to 50 maps and estimated the average suppression factor over these 50 realizations. With the obtained suppression factors, we can recover the signals that were suppressed during the filtering process. In the process of recovering the signals in the maps, the suppression factors also amplify the residual noise, so the final recovered noise level is higher than the residual noise without recovery. As signal recovery is a necessary step in CMB data analysis, our comparison of the final noise levels is based on the results after recovery.
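A minimal sketch of this debiasing procedure, assuming the input and filtered signal power spectra of each simulation are available as arrays (`cls_in` and `cls_out` are placeholder names, not quantities computed in the text):

```python
import numpy as np

def suppression_factor(cls_in, cls_out):
    """Average multiplicative suppression of the signal power spectrum.

    cls_in, cls_out : arrays of shape (n_sims, n_ell) holding the input and
    filtered signal power spectra of each simulation.
    Returns F_ell, the per-multipole mean of C_ell^out / C_ell^in.
    """
    return np.mean(np.asarray(cls_out) / np.asarray(cls_in), axis=0)

def recover(cl_filtered, f_ell):
    """Undo the filter suppression; note this also amplifies residual noise."""
    return cl_filtered / f_ell
```

Averaging over many realizations (50 in our case) reduces the scatter of the estimated factor before it is applied to the data.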
\begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/noise_recovery.pdf} \end{center} \caption{The angular power spectrum of the recovered noise. The gray solid line is the input noise power spectrum without any filter operation, and the red dotted line, green dashed line, and blue dot-dashed line are the recovered noise power spectra of the polynomial filter, high-pass filter and Wiener filter, respectively. The left and right panels are for the low and high noise levels. }\label{fig:noise_rec} \end{figure} We compare the recovered noise residual power spectra of the three filters in Figure~\ref{fig:noise_rec}. It shows that at low noise levels the recovered noise spectra of the Wiener filter and polynomial fit are lower than that of the high-pass filter at large scales, while at high noise levels the polynomial filter still yields more noise than the other two filters over most of the scale range. It is worth pointing out that, after signal recovery, the high-pass filter has higher noise levels on large scales than the unfiltered input noise. On small scales, the Wiener and high-pass filters provide lower noise levels than the polynomial filter for both the high and low noise cases. \subsubsection{Standard deviation of the filtered maps} After recovery of the filtered maps, the signals are corrected to the same level as the input values for all three filters. We provide statistics on the standard deviation of the corrected power spectra as a way of assessing the contribution of the data filtering to the uncertainty of the CMB measurements. \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/binned.pdf} \end{center} \caption{Standard deviation of binned power spectra after filtering. Red, blue, and green lines stand for the results after applying the polynomial filter, high-pass filter and Wiener filter, respectively.
Dots and Y-axis error bars represent the mean value and the one-sigma error of the $\sigma(D_{\ell}^{\mathrm{TT}})$ values in each bin. X-axis error bars show the multipole range. The left and right panels are for the low and high noise levels, respectively.}\label{fig:std} \end{figure} The smaller the standard deviation of the power spectrum, the more accurate the measurement of the power spectrum. Figure \ref{fig:std} shows the standard deviation of the power spectra obtained after the data has been filtered. The curves show that at low noise levels the polynomial and Wiener filters give comparable standard deviations of the power spectrum, while the high-pass filter gives much higher values at large scales; e.g. at $l<50$ its standard deviation can be an order of magnitude higher than the values from the polynomial fit and the Wiener filter. On smaller scales the high-pass filter performs well, giving smaller standard deviations than the polynomial and Wiener filters. In the high noise case, the standard deviations of the power spectra given by the three filters are comparable in magnitude on very large scales ($l<30$), while on other scales the results of the high-pass and Wiener filters are comparable, both being much better than the standard deviations given by the polynomial filter. \subsection{Sky simulation in microwave band} For the simulation of the CMB sky, we generate a HEALPix\cite{2005ApJ...622..759G} realization of the full-sky CMB maps from the angular power spectra obtained with CAMB\cite{Lewis:1999bs}, and the input cosmology is the best-fit $\Lambda$CDM model of the Planck 2018 data release\cite{2020A&A...641A...5P}.
For the foreground emission simulation, we consider only the Gaussian part. We start from the measured angular power spectra of the various foreground components, such as synchrotron, dust and free-free emission, given by the Planck satellite measurements\cite{2015A&A...576A.107P}, and use HEALPix to convert the angular power spectra into sky maps for each component. The spectra of free-free, synchrotron and thermal dust emission follow: \begin{equation} \begin{aligned} C_{\ell}^{FF}&=0.068\times \left( \frac{f_{sky}}{0.6} \right) ^{\left[ 6.10+3.90\ln\left( f_{sky}/0.6 \right) \right]}\times \left( \frac{\ell}{100} \right) ^{-2.2}\times \left( \frac{\nu}{\nu_b} \right) ^{-4.28}, \\ C_{\ell}^{sync}&=2.96\times 10^9 \times \left( \frac{f_{sky}}{0.6} \right) ^{\left[ 2.12+2.67\ln\left( f_{sky}/0.6 \right) \right]}\times \left( \frac{\ell}{100} \right) ^{-2.5}\times \left( \frac{\nu}{\nu_c} \right) ^{-6.0}, \\ C_{\ell}^{dust}&=0.086\times \left( \frac{f_{sky}}{0.6} \right) ^{\left[ 4.60+7.11\ln\left( f_{sky}/0.6 \right) \right]}\times \left( \frac{\ell}{100} \right) ^{-2.4}\times D_{\nu}, \end{aligned} \end{equation} where $f_{sky}$ is the sky fraction covered by the scanning, $\nu_b=23\,\mathrm{GHz}$, $\nu_c=0.408\,\mathrm{GHz}$, and $D_\nu$ is a spectral model of the dust emission. \subsection{Atmospheric emission} For the simulation of the atmospheric contamination in the microwave band, we generate the $1/f$ noise and the white noise separately, and then add the two types of noise to the time-ordered data. \begin{figure}[bthp] \begin{center} \includegraphics[width=1\textwidth]{figures/atm_noise/psd.pdf} \end{center} \caption{Left: PSD of observed data from the ACT experiment. Middle: the TOD generated from the PSDs in the left panel. Right: the PSD of the generated TOD.
The simulated TOD are consistent with the ACT observations.}\label{fig:psd} \end{figure} \begin{itemize} \item $1/f$ noise: this is generated from the power spectrum $P(f)=\alpha f^{-\beta}$, where $\alpha$ and $\beta$ denote the fluctuation amplitude and the spectral index, respectively. To determine these two parameters, we refer to observed data from the ACT experiment to ensure that the noise level and emission fluctuations are as reasonable and as close to the real situation as possible\cite{2013ApJ...762...10D}. In Figure \ref{fig:psd}, the left panel shows the PSD of observed data from the ACT experiment, with the different power-law indices labeled in five different colors. The dotted lines mark the knee frequencies at which the atmospheric noise PSD and the white noise PSD intersect for the different power-law indices. The middle panel shows the TOD generated for the different power laws of the PSD. Without loss of generality, we simulate two amplitude values, for the high and low noise levels; the adopted parameters are listed in Table \ref{table:noise}. \item White noise: we generate the white noise as Gaussian-distributed random numbers. \end{itemize} \begin{table}[ht!] \centering \caption{Parameters of noise spectra} \begin{tabular}{c|c|c|c} \hline Noise level & $\beta$ & Knee freq./Hz & White noise level \\ \hline High & $-3.2$ & $2.0$ & $1400 \mu K \sqrt{s}$ \\ \hline Low & $-1.5$ & $0.9$ & $1000 \mu K \sqrt{s}$ \\ \hline \end{tabular} \label{table:noise} \end{table} \subsection{TOD generation} We first model a virtual, idealized telescope to simulate the TOD data; in this simplified model the telescope has 2000 detectors\footnote{In this model, all detectors are located at the center of the focal plane and no realistic beam size is included, for simplicity.}.
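For reference, a white-noise level quoted in $\mu K\sqrt{s}$ (as in Table 1) fixes the per-sample scatter once a sampling rate is chosen. A minimal sketch, assuming the usual NET convention $\sigma = \mathrm{NET}\sqrt{f_s}$ (our assumption; the text does not state the conversion explicitly):

```python
import numpy as np

def white_noise_stream(net_uk_sqrt_s, fs, n_samples, seed=0):
    """Gaussian white noise stream for a detector with a noise level of
    net_uk_sqrt_s (in uK*sqrt(s)), sampled at fs (Hz).

    Assumes the per-sample standard deviation is sigma = NET * sqrt(fs).
    """
    sigma = net_uk_sqrt_s * np.sqrt(fs)
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma, n_samples)
```

At the low noise level of $1000\,\mu K\sqrt{s}$ and a 50 Hz sampling rate, this gives a per-sample scatter of roughly $7000\,\mu K$.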
A scan strategy is required to get the trajectory of each detector's pointing over the pre-computed sky maps. The basic scanning mode of the telescope is 360-degree azimuthal scanning at a fixed elevation. The values of the basic parameters, such as the elevation angle, scanning speed and sampling frequency, are listed in Table \ref{tab:my_label}. We assume that during the mission the telescope observes for 400 days and works for 7 hours per day. Note that for simplicity we take the spinning of the Earth into account while ignoring its orbital motion, so that daily scans perfectly overlap with each other. It is convenient to call each 360-degree scan a ring scan. In total, each dataset includes 400 days of data; each day has 420 ring scans, and each ring lasts 60 seconds and contains 3,000 samples according to the scan strategy. \begin{table}[h!] \centering \caption{Major parameters of the scan strategy taken in the simulation} \begin{tabular}{c|c|c|c|c} \hline Site latitude & Elevation & Azimuth range & Scanning speed & Sampling frequency \\ \hline $32^\circ N$ & $50^{\circ}$ & $360^{\circ}$ & $\rm 6^{\circ}/s$ & $\rm 50Hz$ \\ \hline \end{tabular} \label{tab:my_label} \end{table} We obtain the signal time streams by sampling the CMB maps as well as the foreground maps. To generate the TOD of the atmospheric $1/f$ noise and white noise, we apply an inverse Fast Fourier Transform (iFFT) to the square root of their power spectral density, before which the following procedure is taken: \begin{itemize} \item Convert the power spectral density to an amplitude: $A(f) = \sqrt{2P(f)} = \sqrt{2\alpha f^{-\beta}}$, where the factor of $2$ accounts for the double-sideband PSD; this gives the sample amplitude $A(f)$ in the frequency domain. \item Generate a Gaussian white noise time series $n(t)$: the length of the white noise series is set equal to the number of samples, and its variance is set to one.
\item Apply an FFT to the white noise: $N(f) = \mathrm{FFT}(n(t))$. This step provides random variables with phases uniformly distributed in $(0, 2\pi)$. \item Multiply by the sample amplitude: $S(f) = A(f)N(f)$. The spectrum of the sampled frequency components is generated by multiplying the sample amplitude spectrum $A(f)$ by the phase spectrum $N(f)$; here $Var_{noise}=1$ guarantees $|N(f)|=1$ on average. \item Apply an iFFT to the frequency-domain representation: $S(t) = \mathrm{iFFT}(S(f))$. Note that in the actual computation we apply an irFFT to $S(f)$ instead of an iFFT, using the Python package Numpy, since the time-domain samples are purely real and the frequency-domain input is Hermitian-symmetric; we take the real part of the result as the generated $1/f$ noise time series. \end{itemize} By summing the time streams obtained from the CMB and foreground sky maps with the atmospheric noise and white noise streams obtained by Fourier inversion, we finally obtain the TOD of the virtual telescope. \begin{figure}[bthp] \begin{center} \includegraphics[width=0.6\textwidth]{figures/atm_noise/tod.pdf} \end{center} \caption{The generated TOD of four components: CMB and foreground (green line), atmospheric noise at the low level (red dotted line), atmospheric noise at the high level (red solid line), and white noise (gray line).}\label{fig:tod} \end{figure} \subsection{Map making with TOD} We implement the map-making operation with the simulated TOD as follows: \begin{enumerate}[(a)] \item Building the pointing matrix. This matrix is derived from the TOD simulation procedure. Note that under our assumption all detectors share the boresight pointing direction of the virtual telescope. \item Projecting. We project the filtered TODs onto the map. Each pixel of the map is filled with the arithmetic average of all the samples falling in it. \end{enumerate}
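The $1/f$ noise generation steps above can be sketched with numpy's real-FFT routines. Note a sign subtlety: the text writes $P(f)=\alpha f^{-\beta}$ while Table 1 quotes negative $\beta$ values; the sketch below takes the exponent directly, so a negative exponent yields red noise (the parameters are illustrative):

```python
import numpy as np

def one_over_f_noise(n_samples, fs, alpha, exponent, seed=0):
    """Generate a 1/f noise stream via the iFFT recipe above.

    P(f) = alpha * f**exponent; a negative exponent (e.g. -1.5 or -3.2,
    matching the beta values quoted in Table 1) yields red noise.
    """
    rng = np.random.default_rng(seed)
    # Step 1: unit-variance Gaussian white noise time series
    n_t = rng.standard_normal(n_samples)
    # Step 2: the FFT of the white noise supplies the random phases
    N_f = np.fft.rfft(n_t)
    # Step 3: amplitude A(f) = sqrt(2 P(f)); the f = 0 mode is zeroed out
    f = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    A_f = np.zeros_like(f)
    A_f[1:] = np.sqrt(2.0 * alpha * f[1:] ** exponent)
    # Step 4: multiply and invert; irfft of a Hermitian spectrum is real
    return np.fft.irfft(A_f * N_f, n=n_samples)
```

The white noise stream is added on top of this, and the sum of the signal and noise streams forms the final simulated TOD.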
\section{Introduction} \label{sec:intro} Recent advances in the field of \ac{NLU}~\citep{devlin2018bert, adiwardana2020towards, brown2020language} have enabled natural language interfaces to help users find information beyond what typical search engines provide, through systems such as open domain and task-oriented dialogue engines~\citep{li2018dialogue,li2020guided} and conversational recommenders~\citep{christakopoulou2016towards}, among others. However, most existing systems still suffer from one or both of the following limitations: (1)~answers are typically constrained to relatively simple and primarily factoid-style requests in natural language~\cite{kwiatkowski2019natural,DBLP:conf/eacl/SoleimaniMW21}, as is the case with search engines; and (2)~a reliance on the availability of inferred user preferences~\citep{kostric2021soliciting}. However, user information needs, when expressed in natural language, can be inherently complex and contain many interdependent constraints, as shown in Figure~\ref{fig:running_example}. When issuing such requests, users may be considered to be in \emph{exploratory mode}: they are looking for suggestions to pick from, rather than a single concrete answer. The task becomes especially challenging since most real applications~\citep{christakopoulou2018towards} need to constantly deal with cold-start users~\citep{kiseleva2016beyond, sepliarskaia2018preference}, for whom little to no preferential knowledge is known a priori. This may be due to infrequent visits, rapid changes in user preferences~\citep{bernardi2015continuous, kiseleva2014modelling, kiseleva2015behavioral}, or general privacy-preserving constraints. In this work, we aim to bridge this gap in processing complex information-seeking requests in natural language from unknown users by developing a new type of application, which works as illustrated in Figure~\ref{fig:running_example}.
Concretely, our proposed solution is capable of jointly processing complex natural language requests, inferring user preferences, and suggesting new ones for users to explore, given real-time interactions with the \acf{IA}. \begin{figure}[t!] \centering \includegraphics[width=0.485\textwidth]{Figures/problemdef.png} \vspace{-2em} \caption{An example of a user request expressing complex information needs in natural language, which is processed by the \ac{IA} Pluto to retrieve a list of suggestions that at least partially satisfy the specified constraints. The search constraints contained in the request are highlighted in yellow.} \label{fig:running_example} \vspace*{-6mm} \end{figure} One of the major bottlenecks in tackling the proposed problem of processing complex information-seeking requests is the lack of an existing interactive system to collect data and observe user interactions. Therefore, we designed a platform, called Pluto, that allows users to submit complex information-seeking requests. Using Pluto, we leverage human agents in the loop to help users accomplish their informational needs while collecting data on complex search behavior and user interactions~\citep[e.g.,][]{holzinger2016interactive,li2016dialogue}. Furthermore, we propose a novel \ac{IA} that seeks to replace the human agents in the loop in order to scale Pluto out to a significantly broader audience while simultaneously making responses a near real-time experience. The proposed \ac{IA} contains an \ac{NLU} unit that extracts a semantic representation of the complex request. It also integrates a novel score that estimates the completeness of a user's intent at each interactive step. Based on the semantic representation and completion score, the \ac{IA} interacts with users through a \ac{RL} loop that guides them through the process of specifying intents and expressing new ones.
The proposed model leverages a user interface to present a ranked list of suggested intents that users may not have previously thought about, or even known about. Online user feedback from these interactions is leveraged to automatically improve and update the reinforcement learner's policies. Another important aspect we touch on is a simple, straightforward evaluation of the proposed approach. We adopt pre-retrieval metrics~\citep[e.g.,][]{sarnikar2014query,roitman2019study} as a means to evaluate the extent to which the refinement of the complex request afforded by the \ac{IA} better represents the actual user intent, or narrows down the search space. Our evaluation demonstrates that a better formulated complex request results in a more reliable and accurate retrieval process. For the retrieval phase, we break down the complex request based on the contained slots and generate a list of queries from the user intent, slots, and location. A search engine API is used to extract relevant documents, after which a GPT-3 based ranker re-ranks the final results based on the actual slot values or aspects. The final re-ranker takes the user preferences into account through the aspect values for the slots in the reformulated query.
To summarize, the main contributions of this work are: \begin{enumerate}[leftmargin=*,label=\textbf{C\arabic*},nosep] \item \emph{Designing} a novel interactive platform to collect data for handling complex information-seeking tasks, which enables integration with humans-in-the-loop for initial processing of the user requests and with search engines to retrieve relevant suggestions in response to refined user requests (Section~\ref{sec:infrastructure}) \item \emph{Formalizing a new general problem} of interactive intent modeling for retrieving a list of suggestions in response to users' complex information-seeking requests expressed in natural language, such as the one presented in Figure~\ref{fig:running_example}, where there is no prior information about user preferences (Section~\ref{sec:model}) \item \emph{Proposing a hybrid model}, which we name \acf{IA}, consisting of an \acf{NLU} and a \acf{RL} component. This model, inspired by conversational agents, encourages and empowers users to explicitly describe their search intents so that they may be more easily satisfied (Section~\ref{sec:solution}) \item \emph{Suggesting an evaluation metric}, \ac{CIS}, which estimates the degree to which the intent is completely expressed at each step.
This metric is used to continue the interactive loop so that users can express the maximum preferential information in a minimum number of steps (Section~\ref{sec:eval_metrics}) \end{enumerate} \noindent \begin{figure*}[!h] \centering \vspace{-2em} \includegraphics[width = 480pt]{Figures/Proposed-IntelligentAssitant.png} \vspace{-1em} \caption{\small The proposed \acf{IA} model, where (1)~represents the \ac{NLU} section; (2)~shows the interactive intent modeling via the \ac{RL} model; and (3)~is the retrieval component.} \vspace{-1em} \label{fig:model} \end{figure*} \section{Background and Related Work} \label{sec:rel_work} Our work is relevant to four broad strands of research, on multi-armed bandits, search engines, language as an interface for interactive systems, and exploratory search and trails, which we review below. \vspace{-0.5em} \subsubsection*{\bf Contextual bandits for recommendation} Multi-armed bandits are a classical exploration-exploitation framework from \acf{RL}, where user feedback is available at each iteration~\citep{parapar2021diverse, cortes2018adapting, li2010contextual}. They are becoming popular for online applications such as ads ranking and recommendation systems~\cite[e.g.,][]{ban2021local, joachims2020reveal}, where information about user preferences is unavailable (cold-start users~\citep{bernardi2015continuous,kiseleva2016beyond}) \citep{felicio2017multi}. \citet{parapar2021diverse} proposed a multi-armed bandit model for personalized recommendations by diversifying the user preferences. Others examined the application of contextual bandit models in healthcare, finance, dynamic pricing, and anomaly detection \citep{bouneffouf2019survey}. \smallskip \noindent Our work adapts the contextual bandit paradigm to the new problem of interactive intent modeling for complex information-seeking tasks.
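For illustration only (this is not the policy actually used in our \ac{IA}), the exploration-exploitation loop of a bandit over candidate aspects can be sketched as an $\epsilon$-greedy learner:

```python
import random
from collections import defaultdict

class EpsilonGreedyBandit:
    """Toy epsilon-greedy bandit over candidate aspects (illustration only)."""

    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon
        self.counts = defaultdict(int)
        self.values = defaultdict(float)   # running mean reward per arm

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best arm
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental update of the running mean reward for this arm
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In our setting the "arms" would correspond to candidate aspects to suggest and the reward to observed user feedback; a contextual variant additionally conditions the choice on a feature vector describing the current request and interaction state.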
\vspace{-0.5em} \subsubsection*{\bf Search Engines} Commonly used search engines such as Google and Bing provide platforms focused on the document retrieval process through search sessions~\cite{Hassan_wsdm_2010, kiseleva2014modelling, kiseleva2015behavioral,ageev_sigir_2011}. Developing retrieval models that can extract the most relevant documents from an extensive collection has been well studied~\citep{croft2010search} for decades. These retrieval models focus on retrieving the documents most relevant to the search query, given the user's textual and contextual information within or across search sessions~\citep{kotov2011modeling}. Although extracting relevant documents is necessary, it is not always sufficient, especially when users have a complex information-seeking task~\citep{ingwersen2006turn}. \vspace{-1em} \subsubsection*{\bf Language as an interface for interactions} \ac{NLU} has been an important direction for human-computer interaction and information search for decades~\citep{woods1972lunar, codd1974seven, hendrix1978developing}. The recent impressive advances in the capabilities of \ac{NLU}~\citep{devlin2018bert, LiuRoberta_2019, clark2020electra, adiwardana2020towards, roller2020recipes, brown2020language}, powered by large-scale deep learning and increasing demand for new applications, have led to a major resurgence of natural language interfaces in the form of virtual assistants, dialog systems, conversational search, semantic parsing, and question answering systems~\citep{liu2017iterative, liu2018adversarial, dinan2020second, zhang2019dialogpt}. The scope of natural language interfaces has been significantly expanding from databases~\citep{copestake1990natural} to knowledge bases~\citep{berant2013semantic}, robots~\citep{tellex2011understanding}, virtual assistants~\citep{kiseleva2016understanding, kiseleva2016predicting}, and various other forms of interaction~\citep{fast2018iris, desai2016program, young2013pomdp}.
Recently, the community has focused on continuous learning through interactions, including systems that learn a new task from instructions~\citep{li-etal-2020-interactive}, assess their uncertainty~\citep{yao-etal-2019-model}, and ask for feedback from humans in cases of uncertainty~\citep{aliannejadi2021building, Aliannejadi_convAI3} or to correct possible mistakes~\citep{elgohary-etal-2020-speak}. The interfaces discussed above model user intent through a step-by-step process. Leveraging this mechanism is effective; however, given the current state of dialogue manager design, such systems struggle to model long-term dependencies, and a slight mistake can cascade to higher levels, causing user dissatisfaction. Moreover, conversational search systems do not allow users to express all of their intent at once initially, resulting in incompletely specified user intent. \subsubsection*{\bf Exploratory search and trails} Exploratory search refers to an information-seeking process in which the system assists the searcher in understanding the information space for iterative exploration and retrieval of information~\citep{ruotsalo2018interactive, hassan2014supporting, white2008evaluating}. The anomalous state of knowledge hypothesis is the main motivation behind the demand for information-seeking search systems. According to this hypothesis, users usually cannot accurately formulate their information need because they are missing some essential information~\citep{liu2015personalizing, white2009exploratory}. In such cases, the system should assist the user in specifying their intent~\citep{marchionini2006exploratory}. \citet{odijk2015struggling} show that in a large portion of searches users struggle to formulate their query, or explore in order to navigate to a particular Web page. New search interfaces need to be designed to support searchers through their information-seeking process \citep{villa2009aspectual}.
Trails are another group of tools developed to guide users through search tasks. \citet{olston2003scenttrails} proposed ScentTrails, which combines browsing and searching in a single interface and highlights potentially relevant hyperlinks. WebWatcher~\cite{joachims1997webwatcher}, like ScentTrails, underlines relevant hyperlinks and improves its model based on implicit feedback collected during previous tours. \smallskip \noindent To summarize, the \textbf{key distinctions} of our work compared to previous efforts are as follows. Like exploratory search, trails, and conversational search, our model proposes an iterative information-seeking process and designs an interface for user interactions to guide struggling users and help them better understand the information space. Unlike exploratory search, trails, and conversational search, which focus only on user interaction modeling and limit users to issuing short and imprecise queries and utterances, our model provides a platform for users to express their information needs in the form of long and complex requests. Users can utilize this capability to express their intent more accurately and to prune significant parts of the search space for the exploratory search process. Adding this capability requires an advanced \ac{NLU} step and different machine learning components to understand and guide the end user through the search process. To this end, the proposed system has two new components, an \textbf{intent ontology} and a \textbf{profile}, for partitioning the information space so that the \ac{IA} can be more effective in exploring the search space. \section{Problem Setup} \label{sec:problem} Consider the example interaction in Figure~\ref{fig:model}. While this is not a real user\footnote{Exact user information cannot be reproduced as the data is proprietary.}, the interaction is typical of those observed during real user interactions with our system. In a regular search process in our system, a user may have an initial interest or information need that can be expressed in the form of a complex query (e.g., ``camping idea near Seattle, with gorgeous scenery, and access to ziplining''). The system understands users' complex queries and, in an iterative process, helps users specify their intent. Finally, the refined request is leveraged to extract the most relevant recommendations for the user. We define the context-aware interactive user intent modeling problem as follows: \begin{figure}[h!] \centering \noindent\begin{tabular}{l l} \hline \hline \textbf{Setting:} & context-aware interactive search engine with an\\ &intelligent assistant system for cold-start users\\ \textbf{Given:} & A complex search query $C$, containing a sequence of\\ &sentences $S_{0..s}$, describing a user\\ &search intent $i \in I$ with a set of possible aspects $a \in A$ \\ & (e.g., user intents or {\em \bf mini-skills}). \\ \textbf{Problem:} & Extract a semantic representation $semantic_{rep}$ of the\\ &user query. Then, based on the extracted semantic \\ &representation at interaction turn $j$, predict a list of \\ & aspects $A_{0,..j}$ to suggest for the current user $u$, to narrow the search.
\\ & Interact with the user until the user intent is complete\\ & Finally, recommend to $u$ a list of potential results\\ \hline \end{tabular} \label{fig:cts-problem} \caption*{Definition 1: Context-aware interactive user intent modeling problem statement.} \end{figure} Note that this definition focuses on a new (cold-start) user's interaction with the system and does not explicitly consider the user's future satisfaction or engagement with the selected intent (e.g., as measured in reference~\cite{choi2019offline}). We plan to explore the connection between context-aware aspect suggestion and ultimate user satisfaction with the conversation in future work. We also emphasize that we formulate our model based only on short-term history (the current user session) and not on long-term user interests. Long-term personalization is also a promising direction for future work. \section{Pluto: data collection infrastructure} \label{sec:infrastructure} Since the proposed problem is novel and requires non-trivial user interaction data, we designed a new pipeline to collect such data. Users of Pluto were supplied with a consent form explaining that their requests and interactions would be viewed by human agents and some members of the development team. Further, the human agents in the loop also consented to have their interactions with the system recorded. All data and interactions were anonymized, and no personal identifiers of users or agents were retained in any of the modeling and experimentation in our work. This section presents the details of the developed infrastructure, called Pluto, which leverages human-in-the-loop for data collection and curation.
Pluto comprises two main components: \begin{enumerate}[leftmargin=*, label=\textbf{Phase\arabic*}, nosep] \item \emph{Refinement of the complex user request in natural language}; \item \emph{Refinement of the retrieved list of suggestions.} \end{enumerate} \subsubsection*{\bf{Complex user request refinement.}} When a user issues a request in natural language to express their complex information need, which potentially has many expressed constraints (see Figure~\ref{fig:running_example} for several mentioned in the example), GPT-3~\cite{brown2020language} is leveraged to understand the request's intent and identify explicitly mentioned aspects. Once GPT-3 has identified these aspects, they are used as the initial set for the request. To further expand this set, this phase identifies an additional list of aspects to be presented to users as a supplemental set of relevant considerations for their request. As stated, Pluto integrates human-in-the-loop into its pipeline. The goal of human agents is to intervene at certain stages of the system to offer human judgment. One such intervention occurs when agents review users' requests, at which point they can correct the aspects this phase identified in the request as well as add new ones to better serve the user's needs. \subsubsection*{\bf{Suggestions refinement.}} Here, Pluto performs two tasks. First, it receives the slots selected by the user for processing and suggests additional slots (so as to further narrow down the request, with the aid of the user). These new slots can be generated via GPT-3 or by intervention from the human agents. Second, Pluto leverages the search engine to produce a series of suggestions that meet the slots for the request as well as the new slot proposals. GPT-3 is leveraged at this stage to help determine which potential suggestions meet which aspects from the request so that the system can rank them.
Human agents then make final decisions on which suggestions to present to users. Once that is done, users can either accept the suggestions if they are satisfactory, or request another iteration of the retrieval phase. When users request another iteration, they may change the language of the request or add/remove aspects from it (including the newly suggested ones). Additionally, for any iteration of this phase, users can provide feedback that is captured via a form to help refine the system. Finally, human agents are responsible for another very valuable and essential contribution: intent and aspect curation. In either of the phases described above, GPT-3 may suggest various aspects and intents that are sometimes not as relevant or useful. All of these are considered entries into the dynamic intent ontology; however, human agents then curate them. Intents and aspects that the agents consider higher quality are given more weight when suggesting aspects in either of the two phases. Next, we formally describe the problem and elaborate on the problem formulation and the proposed interactive agent. \section{Problem description} \label{sec:model} \label{sec:model_overview} In this section, we formalize the problem of interactive intent modeling for supporting complex information-seeking tasks. \subsubsection*{\bf Notation.} \label{sec:problem_formulation} We begin by formally defining the notation used: \begin{itemize}[leftmargin=*,nosep] \item \emph{User request $(cr)$:} a complex information-seeking task expressed in natural language, which contains multiple functional desiderata, preferences, and conditions (e.g.,~Figure~\ref{fig:running_example}). \item \emph{Request topic $(\tau)$:} the topic the request belongs to, e.g., ``activity'' or ``service'', where $\tau \in T$ is the list of all existing topics. \item \emph{User intent $(i)$:} for each topic $\tau$, a list of user intents can be defined.
An intent identifies what a user wants to find. For example, in Figure~\ref{fig:running_example}, the user request has an ``activity'' topic with a ``hiking'' intent. This definition allows identical intents in different domains, where $i \in I$ is the list of all intents. \item \emph{Slot $(\xi)$ and aspect $(\alpha)$:} for each specific topic $\tau$ and user intent $i$, a list of slots $\xi$ is defined that describes features and properties of the intent $i$ in topic $\tau$; an aspect (value) $\alpha$ is a restriction on the slot $\xi$. For example, in Figure~\ref{fig:running_example}, ``date'' is a slot related to ``hiking'', with the aspect value ``May 9th to May 29th, 2021''. \item \emph{Intent completion score $(ICS_i)$:} a score estimating the completeness of user intent $i$ at interaction step $j$. \item \emph{Semantic representation $(\sigma)$:} an information frame providing an abstract representation of the $cr$ as $(\tau,i, [ \xi_{0 \dots n} ],ICS_{(\tau,i)} )$. \item \emph{Intent ontology $(\Omega)$:} the graph structure representing relations among the defined domains, intents, and slots. \item \emph{Intent profile $(\Phi)$:} the list of all conditional distributions $P(\xi|i,\tau)$, for slot $\xi$ with respect to topic $\tau$ and intent $i$. It changes over time via user interactions with the specific intent $i$ and topic $\tau$. Figure~\ref{fig:intentontology} shows the steps used to generate the intent profile. \item \emph{List of retrieved suggestions $S=(Sug_{0},\dots,Sug_n)$:} the list of retrieved suggestions in response to $cr$.
\end{itemize} \vspace{-1em} \subsubsection*{\bf Problem formulation.} \begin{algorithm} \label{alg:1} \caption{The proposed interactive user intent modeling for supporting complex information-seeking tasks.} \begin{algorithmic} \State \textbf{Input: } User complex search request \State \State \textit{NLU component starts} \State Pre-processing of complex request $cr$ \State $\sigma = ( \tau$, $i$, $\xi_{0 \dots n}$, $ICS_{(\tau,i)} ) \gets$ \textbf{NLU}($cr$) \State Initialize context $c$ using one of the methods in Section~\ref{sec:interactive} \State \State \textit{RL model starts} \While{$ICS_{(\tau,i)} \leq \mu(P(\xi_k | i, \tau )) + sd(P(\xi_k | i, \tau ))$} \If{User Feedback} \State 1- Retrain the RL policy based on user feedback \State 2- Update context $c$ based on user feedback \State 3- Update $ICS_{(\tau,i)}$ via Eq.~\ref{eq:ics}: \State $ICS_{(\tau,i)} = \Sigma^{n_{\tau,i}}_{k=1}{P(\xi_k | i, \tau )} + \Sigma^{m_{\tau,i}}_{j=1}{P(\xi_j | i, \tau )}$ \Else \hspace{5pt} break \EndIf \EndWhile \State \State \textit{Retrieval step starts} \State $p_s$ = [] \Comment{/*potential suggestions*/} \For{slot $\xi$ in context $c$} \State query $\gets i + \text{``with''} + \xi + \text{``in''} + $ \textit{location} \State top-10 documents $\gets$ search\_engine\_API (query) \State Update $ p_s \gets p_s \cup$ top-10 documents \EndFor \State \textbf{return} top-K results from \textbf{GPT-3 Ranker}($p_s$) \end{algorithmic} \end{algorithm} This section provides a high-level problem formulation. The desired \ac{IA} aims to map a request expressing a complex information-seeking task $cr$ to a set of relevant suggestions $S$, as illustrated in Figure~\ref{fig:model}.
The proposed model comprises three main components: \begin{enumerate}[leftmargin=*, nosep] \item \textbf{\acf{NLU} component:} consists of a topic classifier, an intent classifier, and a slot tagger that extract the topic $\tau$, user intent $i$, and a list of slots $\xi_{0 \dots n}$, respectively. The unit leverages GPT-3 to improve and generalize the predictions for unseen slots. Finally, NLU generates the semantic representation $\sigma = (\tau,i,[\xi_{0 \dots n}], ICS_{(\tau,i)})$ for a complex request $cr$. \item \textbf{Interactive intent modeling component:} an iterative model leveraging contextual multi-armed bandits~\citep{cortes2018adapting} that receives the semantic representation $\sigma$ and context $c$ for the request $cr$ from the NLU unit and predicts the most relevant set of slots for $c$. \item \textbf{Retrieval component:} generates a sequence of sub-queries based on the list of slots and their corresponding aspects. Relevant documents are retrieved and ranked by GPT-3 to provide the final list of retrieved suggestions $S=\{Sug_0, \dots, Sug_k\}$. \end{enumerate} \smallskip \noindent To summarize, this section formally defined the problem (Algorithm~1) that we address in the next section. \section{Method Description} \label{sec:solution} This section presents a detailed description of the proposed strategy for modeling the \acf{IA}. \vspace{-1em} \subsection{Creating intent profile $\Phi$} \label{sec:dynamicontology} \begin{figure} \centering \vspace{-1em} \includegraphics[width = 240pt]{Figures/longterm.png} \vspace{-2em} \caption{\small Intent profile creation through historical user interactions, where $Sug_i$ represents the $i$-th suggestion with its associated slots.} \vspace{-1em} \label{fig:intentontology} \end{figure} Based on the intent ontology $\Omega$ created in Section~\ref{sec:infrastructure} and historical user interactions with topic $\tau$, intent $i$, and slot $\xi$, a dynamic intent profile $\Phi$ can be formed.
To do so, for each individual $\xi$, $i$, and $\tau$, the intent profile stores a conditional probability, which can be updated in real time using new user interactions with the triple $(\tau,i,\xi)$. The conditional probability $P(\xi | i,\tau )$ is computed as follows: \begin{equation} P(\xi_k | i, \tau ) = \frac{Frequency(\xi_{(\tau,i,k)})}{\Sigma^{N_\xi}_{j=1} {Frequency(\xi_{(\tau,i,j)})}} \label{eq:ontology} \end{equation} where $\xi_{(\tau,i,k)}$ is the $k$-th slot for intent $i$ and topic $\tau$, and $N_\xi$ is the number of slots for intent $i$ and topic $\tau$ in the intent ontology $\Omega$. \subsection{NLU component} \label{sec:nlu} The NLU unit contains three main components: (1) a topic classifier, (2) an intent classifier, and (3) a slot tagger. For each incoming complex request $cr$, this unit generates a semantic representation as follows: $\sigma = (\tau, i, [\xi_{0 \dots n}],ICS_{(\tau,i)})$. \subsubsection*{ \bf GPT-3} To generate the semantic representation, we leveraged GPT-3~\citep{brown2020language}, a very large language model trained on massive amounts of textual data that has proven capable of natural language generalization and task-agnostic reasoning. One of the hallmarks of GPT-3 is its ability to generate realistic natural language outputs from few or even no training examples (few-shot and zero-shot learning). The creativity of the model in generating arbitrary linguistic outputs can be controlled using a temperature hyperparameter. We use an internal deployment of GPT-3\footnote{Based on \url{https://beta.openai.com/}.} as the basis for our NLU. \subsubsection*{ \bf Intent Completion score} We propose a score, \ac{ICS}, to manage the number of interactions in the interactive steps. The ICS value can be calculated using the semantic representation $\sigma$ and the generated dynamic intent profile $\Phi$. The initial ICS value is equal to the summation of the conditional probabilities of all slots mentioned in the request.
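To make the estimation concrete, the conditional slot distribution of Eq.~\ref{eq:ontology} and the resulting initial ICS value can be sketched as follows. This is a minimal illustration with hypothetical function and variable names, not the production implementation:

```python
from collections import Counter

def build_intent_profile(interactions):
    """interactions: iterable of (topic, intent, slot) triples from historical logs.
    Returns {(topic, intent): {slot: P(slot | intent, topic)}} as in Eq. (ontology)."""
    counts = {}
    for topic, intent, slot in interactions:
        counts.setdefault((topic, intent), Counter())[slot] += 1
    profile = {}
    for key, slot_counts in counts.items():
        total = sum(slot_counts.values())  # normalizer: total slot frequency for (topic, intent)
        profile[key] = {s: f / total for s, f in slot_counts.items()}
    return profile

def initial_ics(profile, topic, intent, mentioned_slots):
    """Initial ICS: sum of conditional probabilities of the explicitly mentioned slots."""
    dist = profile.get((topic, intent), {})
    return sum(dist.get(s, 0.0) for s in mentioned_slots)
```

New interactions can be folded in by incrementing the underlying counts and re-normalizing, which matches the real-time update described above.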
Then, in the following steps, ICS is updated with the new slots that the user selects. \begin{equation} \label{eq:ics} ICS_{(\tau,i)} = \Sigma^{n_{\tau,i}}_{k=1}{P(\xi_k | i, \tau )} + \Sigma^{m_{\tau,i}}_{j=1}{P(\xi_j | i, \tau )} \end{equation} where $n_{\tau,i}$ is the number of explicitly \textbf{mentioned} slots in the $cr$ and $m_{\tau,i}$ is the number of \textbf{selected} slots through the interactive steps. Also, $P(\xi | i, \tau )$ is the conditional probability extracted from the intent profile $\Phi$ in Eq.~\ref{eq:ontology}. \subsection{Interactive user intent modeling} \label{sec:interactive} \begin{algorithm} \label{alg:cap} \caption{Contextual Multi-armed Bandit Model. $CBM_{(\tau,i)}$ is the contextual bandit model trained on topic $\tau$ and intent $i$; $\Pi_\theta (.|c)$ is the policy used to train $CBM_{(\tau,i)}$ with respect to context $c$.} \begin{algorithmic} \State \textbf{Input: } semantic representation: $\sigma = (\tau,i,[\xi_{0 \dots n}],ICS_{(\tau,i)})$ \State Generate context vector $c$ using one of Methods 1--3 (Section~\ref{sec:interactive}) \State Select the $CBM_{(\tau,i)}$ based on the $(\tau, i)$ tuple \State Initialize policy $\Pi_\theta (.|c)$ with random weights (a feed-forward neural network or a linear regression) \For{each interaction step with context $c$ } \State Sample actions $a_ {0 \dots k} $ from the list of actions: $a_t \sim \Pi_\theta (.|c)$ \State Receive reward $\epsilon$ (user feedback by selecting actions) \State Add sampled actions $a_{{0 \dots k}}$ to the list of observed arms \State Update policy \textbf{$\Pi_\theta$} with the new reward \State Update $ c \gets c \cup a_s$ \Comment{/*$a_s$ is the set of selected actions $\subset a_{{0 \dots k}}$*/} \EndFor \end{algorithmic} \end{algorithm} \begin{figure} \centering \vspace{-1em} \includegraphics[width = 240pt]{Figures/SRS.png} \vspace{-2em} \caption{ \small Deep neural network for slot suggestion.} \vspace{-1em} \label{fig:session} \end{figure} We leveraged contextual multi-armed bandits to
model online user interactions. In each iteration, the system interacts with users, receives user feedback, and updates its policies. Multi-armed bandits~\citep{barraza2020introduction} are a type of RL model in which the reward is immediately available after the agent interacts with the environment. Contextual multi-armed bandits are an extension of multi-armed bandits in which the context of the environment is also modeled when predicting the next step. Contextual multi-armed bandits are utilized in the interactive agent as users are capable of providing feedback to the agent at each step. We trained a separate contextual multi-armed bandit to represent each $(\tau, i)$ pair. The corresponding bandit model is then invoked at inference time, based on the semantic representation $\sigma$. One of the main design decisions for contextual bandits is how to represent the context $c$. To this end, we propose three different methods, described in the following sections. \subsubsection*{\bf Method 1:} \label{method1} This method uses a one-hot representation of the semantic representation $\sigma$. During the interactions with our agent, the one-hot representation is updated by adding newly selected slots. As a result, the size of the context $c$ equals the number of slots for each specific intent. \vspace{-0.45cm} \begin{equation} c = \vec{O_k}, \qquad [\vec{O_k}]_j = \begin{cases} 1 & x_{(j,k)} \in \xi_i \\ 0 & x_{(j,k)} \notin \xi_i \end{cases}, \quad j = 1, \dots, N \end{equation} where $\vec{O_k}$ is the one-hot vector of the collected slots at interaction step $k$, $N$ is the total number of slots at interaction step $k$, and $\xi_i$ is the set of slots belonging to intent $i$. \subsubsection*{\bf Method 2} \label{method2} In Method 2, the request representation is concatenated with the one-hot representation of the slots to enrich the context representation.
We used the Google Universal Sentence Encoder (USE)~\cite{cer2018universal}, which is trained with a deep averaging network (DAN) encoder, to encode each request into a 512-dimensional vector. \begin{equation} \vspace{-0.5em} c = \vec{O_k} \oplus \vec{USE_Q} \end{equation} where $\oplus$ denotes concatenation and $\vec{O_k}$ is the one-hot vector of the collected slots at step $k$. \vspace{-0.2cm} \subsubsection*{\bf Method 3} \label{sec:popularity} Inspired by session-based recommender systems~\citep{wu2017session}, we developed the deep learning model in Figure~\ref{fig:session} to extract the slot representations. Users were excluded from the model as we focus on intent modeling independent of the user. The goal is to predict the list of slots most likely to be selected by the user, given the input request and the explicitly mentioned list of slots in the semantic representation $\sigma$. The model consists of (1) an embedding layer, (2) a representation layer, and (3) a prediction layer. We used sigmoid cross-entropy to compute the loss since the task is a multi-label problem: a subset of slots is predicted for an input list of slots and the request representation. Finally, max-pooling is applied across all the slot embeddings, and the result is concatenated with the request embedding vector to represent $c$: \begin{equation} c = MaxPooling (\vec{O_k} * \xi_{(\tau,i,j,k)}) \oplus \vec{USE_Q} \end{equation} where $\xi_{(\tau,i,j,k)}$ is the $j^{th}$ slot embedding with respect to intent $i$ and topic $\tau$, and $\vec{O_k}$ is the one-hot vector of the collected slots at step $k$. \subsubsection*{\bf Threshold to stop iterations:} We leverage the $ICS_{(\tau,i)}$ score, whose value steadily increases through the interactions, to stop the contextual bandit iterations. When this value becomes greater than a threshold, the contextual bandit model stops iterating. The threshold varies per $(\tau, i)$ pair.
Hence, we set the threshold to $\mu(P(\xi_k | i, \tau )) + sd(P(\xi_k | i, \tau ))$, the mean plus the standard deviation of the slot distribution within $(\tau, i)$; the interaction stops once $ICS_{(\tau,i)}$ exceeds this value. \subsection{Retrieval Component} \label{sec:retrieval} To extract the final recommendations for the users, we use a retrieval engine that consists of two main components: (1) search retrieval and (2) ranking. For the retrieval part, we need to collect a corpus that is representative of the search space on the Web. Then, we can evaluate the pre-retrieval metrics discussed in Section~\ref{sec:eval_metrics} for both initial requests and reformulated requests at inference time. \subsubsection*{\bf Corpus collection:} To generate the corpus, we need to issue a series of queries to a search engine that will capture the search space of the Web. Algorithm~\ref{alg:subquery} shows the steps we used to generate these queries and collect the corpus. In essence, we leveraged a pool of sub-queries derived from the internal intent ontology. To create these sub-queries, we used the idea of request refinement via request sub-topics~\cite{nallapati2006evaluating} and generated a list of them by combining each selected topic/intent with the set of aspects associated with it. Finally, these queries were issued to the Bing Web Search API, and the top 100 results (consisting of the page's title, URL, and snippet) for each query were added to the corpus. \subsubsection*{\bf Few-shot Ranker:} A few-shot GPT-3 model, fine-tuned on a limited number of training samples, is deployed on the pool of potential suggestions extracted from the Web Search API. The GPT-3 ranker then ranks all the potential suggestions with respect to the evolved user intent and the actual aspect values $\alpha$. The GPT-3 ranker also considers the user's preferences for the final ranking results.
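The retrieval step described above, one sub-query per selected slot, a pooled top-10 per sub-query, then a final ranking pass, can be sketched as follows. Here `search_api` and `ranker` are hypothetical stand-ins for the Bing Web Search API and the few-shot GPT-3 ranker:

```python
def retrieve_suggestions(intent, slots, location, search_api, ranker, k=10):
    """Sketch of the retrieval loop: build one sub-query per slot, pool the
    per-query top-10 results, then rank the pool and return the top-K."""
    pool, seen = [], set()
    for slot in slots:
        query = f"{intent} with {slot} in {location}"   # sub-query template
        for doc in search_api(query)[:10]:              # top-10 per sub-query
            if doc["url"] not in seen:                  # de-duplicate the pool
                seen.add(doc["url"])
                pool.append(doc)
    return ranker(pool)[:k]                             # final top-K suggestions
```

Any callable returning a ranked list of `{"url": ..., ...}` dicts can be plugged in for `search_api` and `ranker`, which keeps the loop independent of the specific search backend.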
\section{Datasets} \label{sec:datasets} To evaluate the proposed interactive model, we leveraged real data collected through user interactions with Pluto. We collected more than $16,699$ user requests with $166,990$ user interactions for training, and $1,140$ user requests with $13,840$ interactions for testing. In Section~\ref{sec:nludatacollection}, we describe a crowd-sourcing procedure designed to collect annotated data, which is used to train and test the slot tagger in the \acf{NLU} unit. Section~\ref{sec:datasetcollection} describes the interactive data collected via Pluto (Section~\ref{sec:infrastructure}).\footnote{The datasets cannot be shared publicly due to privacy concerns. However, we believe the dataset collection can be reproduced using the presented descriptions.} \subsection{Dataset Collected for \ac{NLU} unit} \label{sec:nludatacollection} To collect the data for training and evaluating the \ac{NLU} model, we use a crowd-sourcing platform that provides an easy way for researchers to build interfaces for data collection and labeling. Using the platform, we developed a simple interface that presented annotators with a natural language request paired with up to five possible slots. Annotators were then asked to mark relevant slots and given the opportunity to highlight the sections of the request that map the slot $\xi$ in question to its corresponding aspect $\alpha$. The set of requests and slots presented to annotators was created from a seed set of $3,246$ requests, where each request was paired with all the slots from the subsuming intent. Three annotators then used the interface to map slots to requests as appropriate. \paragraph{Evaluating quality of the collected dataset.} Requests were randomly selected from two different topics and 14 user intents (Table~\ref{tab:kripendorf}). We chose only two topics as the selected intents were all part of these two topics.
Three different human annotators manually labeled these queries through the data collection interface described in the previous section. Table~\ref{tab:kripendorf} presents Krippendorff's alpha scores~\citep{krippendorff2011computing} across all the intents. A score of 0.667 or above is often considered to indicate good reliability. The results demonstrate acceptable agreement among all annotators, except for the ``hike'' intent, which shows moderate agreement~\cite{krippendorff2011computing}. After inspecting the $\Omega$, we noticed that the slots for the ``hike'' intent overlap, meaning there are slots that refer to the same thing with different textual representations. These semantic overlaps persisted even after normalization via clustering, which sometimes confused annotators. \begin{table}[!t] \small \begin{center} \begin{tabular}{p{0.8cm}|p{1.3cm}|c|c|p{1.3cm}|c} \toprule \bf Topic &\bf Intent&\bf K (dist)& \bf Topic&\bf Intent&\bf K (dist)\\ \midrule Service & restaurants & 0.74 \footnotesize(12\%) & Service& appliance & 0.71 \footnotesize(11\%)\\ Service& electrician & 0.79 \footnotesize(13\%) & Service& hotel &0.71 \footnotesize(2\%)\\ Service&landscaping & 0.67 \footnotesize(16\%) & Service& handyman& 0.75 \footnotesize(2\%)\\ Activity& hike & 0.58 \footnotesize(10\%)& Service& cleaners& 0.69 \footnotesize(4\%)\\ Activity& general & 0.74 \footnotesize(8\%) & Service& remodeling& 0.82 \footnotesize(3\%)\\ Activity& springbreak & 1.00 \footnotesize(5\%) & Activity& daytrip& 0.73 \footnotesize(2\%)\\ Activity& campground & 0.74 \footnotesize(6\%) & Activity& summercamp& 0.75 \footnotesize(6\%)\\ \bottomrule \end{tabular} \end{center} \caption{Krippendorff's score across all $(\tau,i)$ tuples in the $\Omega$.
Here, $K$ and dist stand for Krippendorff's alpha and the tuple's distribution in the dataset.} \vspace{-2em} \label{tab:kripendorf} \end{table} \subsection{Dataset Collected via Pluto} \label{sec:datasetcollection} The data for training and evaluating our proposed model comes from six months of proprietary Pluto interaction logs, described in Section~\ref{sec:infrastructure}. We used the first five months to form the training set and reserved the last month for testing. Since GPT-3 is a generative model, the slots suggested during data collection may not be expressed identically despite representing the same underlying intent (e.g., ``access to parking'' and ``parking availability''). To address this issue, we used a universal sentence encoder~\cite{cer2018universal} to softly match a generated slot to a slot in $\Omega$: the slot with the lowest cosine distance is considered the target slot. Pluto is capable of covering hundreds of different user intents. In this study, however, we selected the $14$ most frequent search intents in the logs because we observed a sharp drop-off in frequency after that. Table~\ref{tab:kripendorf} lists these intents with their corresponding topics. Each sample in the collected interactive dataset has the form $\xi_{j \dots n} \rightarrow \xi_{k \dots m}$, where there is no intersection between the two sets of slots, $\xi_{j \dots n} \cap \xi_{k \dots m} = \emptyset$. The selected slots are the slots the user selects during the interaction with the interactive agent. \subsection{Corpus Collection} To generate the evaluation corpus, we issued a series of queries to a search engine to capture the search space of the Web, as described in Section~\ref{sec:retrieval}. Algorithm~\ref{alg:subquery} shows the steps used to generate these queries and collect the corpus.
\begin{algorithm} \caption{Algorithm to generate the corpus for evaluation. \textbf{L} is the list of all major US cities.} \label{alg:subquery} \begin{algorithmic} \State \textbf{Input: } $\Omega_{(\tau, i)}$ and the location list \textbf{L}. \State corpus = [] \For{each \textit{location} in database \textbf{L} } \For{each $(\tau,i)$ in $\Omega$ } \For{each $\xi$ in $\Omega_{(\tau,i)}$ } \State query $\gets i + \text{near} + location + \text{with} + \xi $ \State top-100 documents $\gets$ search\_engine\_API (query) \State corpus = corpus $\cup$ top-100 documents \EndFor \EndFor \EndFor \State \textbf{Output: } corpus \end{algorithmic} \end{algorithm} \section{Experimental Setup and Results} \label{sec:experiements} \subsection{Methods} \vspace{-0.5em} \subsubsection*{\bf Baseline: Popularity Method} \label{sec:baseline} The popularity-based method is a heuristic that suggests the next set of related slots based on their overall frequency (popularity) in the intent profile $\Phi$. The order of suggestions can change over time as some slots become more popular for specific intents. The models proposed in this paper are unique in modeling the user intent, and they can serve as baselines for future research in this area, since current conversational and exploratory search models~\cite{louvan2020recent,white2009exploratory, kostric2021soliciting} are not applicable to the described task. For convenience, we summarize the methods compared in the experimental results as follows: \vspace{-0.5em} \subsubsection*{\bf Group 1: Contextual Multi-armed Bandit Policies} \label{sec:Bandits} We report results for $13$ different contextual bandit policies, including ``Bootstrapped Upper Confidence Bound'', ``Bootstrapped Thompson Sampling'', ``Epsilon Greedy'', ``Adaptive Greedy'', and ``SoftMax Explorer'', which have been extensively investigated in~\citep{cortes2018adapting}.
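As an illustration of the simplest of these policies, the exploration/exploitation core of epsilon-greedy can be sketched as follows. The sketch is shown context-free for brevity; contextual variants additionally condition the reward estimates on the context $c$, and all names here are illustrative rather than the library's implementation:

```python
import random

class EpsilonGreedyBandit:
    """With probability eps explore a random arm (candidate slot);
    otherwise exploit the arm with the highest running mean reward."""

    def __init__(self, arms, eps=0.1, seed=0):
        self.eps = eps
        self.rng = random.Random(seed)
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}   # running mean reward per arm

    def select(self):
        if self.rng.random() < self.eps:
            return self.rng.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)    # exploit

    def update(self, arm, reward):
        # incremental mean update after observing user feedback
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

In the interactive agent, an "arm" corresponds to a candidate slot and the reward to the user's selection feedback.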
The library implementing these policies is available online.\footnote{\url{https://contextual-bandits.readthedocs.io/en/latest/}} \vspace{-0.5em} \subsubsection*{\bf Group 2: Different context representations:} We report results for the three proposed context representations $c$ described in Section~\ref{sec:interactive}. \vspace{-1em} \subsection{Evaluation Metrics} \label{sec:eval_metrics} Evaluating complex search tasks has always been quite challenging. Since the task is not supervised and there is no available dataset or labels, we could not directly evaluate the results. In addition, our goal is to refine requests in a way that leads to better suggestions. Therefore, we propose to employ \ac{QPP} metrics for evaluation purposes. The \ac{QPP} task is defined as predicting the performance of a retrieval method on a given input request~\citep{carmel2010estimating,cronen2002predicting,he2004inferring}. In other words, query performance predictors predict the quality of retrieved items with respect to the query. QPP methods have been used in different applications such as query reformulation, query routing, and intelligent systems~\citep{sarnikar2014query,roitman2019study}. QPP methods are a promising indicator of retrieval performance and are categorized into pre-retrieval and post-retrieval methods~\cite{carmel2010estimating}. Post-retrieval QPP methods generally show superior performance compared to pre-retrieval ones, whereas pre-retrieval QPP methods are more often used in real-life applications and can address more practical problems, since their prediction occurs before retrieval. In addition, almost all post-retrieval methods rely on the relevance scores of the retrieved list of documents; in our case, the relevance score was not available from the search engine API, so we employed only pre-retrieval QPP methods for this work's evaluation.
Having said that, we predict and compare the performance of the initial complex requests as well as our reformulated requests using SOTA pre-retrieval QPP methods, which have been shown to correlate well with retrieval performance on different corpora \cite{hashemi2019performance,arabzadeh2020neural,arabzadeh2020neural1,zhao2008effective,hauff2008survey,hauff2009combination,carmel2012query,he2004inferring}. The intuition behind evaluating our proposed method with pre-retrieval QPP methods is that QPP methods have been shown to be a promising indicator of performance. Therefore, we can compare the predicted performance of the initial complex request and of our reformulated request and predict which one is more likely to perform better. Simply put, higher \ac{QPP} values mean that there is a higher probability that the request will be easily satisfied, while lower \ac{QPP} values indicate a higher chance of poor retrieval results. In the following, we elaborate on the SOTA pre-retrieval QPP methods that showed promising performance over different corpora and query sets and that we leveraged to evaluate this work. \textbf{Simplified Clarity Score (SCS):} SCS is a specificity-based QPP method, capturing the intuition that the more specific a query is, the more likely a system is to satisfy it \cite{he2004inferring,plachouras2004university}. SCS measures the KL divergence between the query and the corpus language model, thereby capturing how well the query is distinguishable from the corpus. \textbf{Similarity of Corpus and Query (SCQ): } SCQ leverages the intuition that if a query is more similar to the collection, there is a higher potential to find an answer in the collection \cite{zhao2008effective}. Concretely, the metric measures the similarity between the collection and the query for each term and then aggregates over the query, reporting the average of each query term's individual score.
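As a rough illustration (not the exact implementation used in our experiments), the two predictors can be sketched as follows; the SCS sketch uses the KL divergence between unsmoothed query and corpus language models, and the SCQ sketch uses one common TF--IDF-style formulation attributed to \cite{zhao2008effective}. Smoothing and normalization details may differ from the originals.

```python
import math
from collections import Counter

def scs(query_terms, corpus_tf, corpus_len):
    """Simplified Clarity Score: KL divergence between the maximum-likelihood
    query language model and the corpus language model."""
    q_tf = Counter(query_terms)
    score = 0.0
    for term, tf in q_tf.items():
        p_q = tf / len(query_terms)
        p_c = corpus_tf.get(term, 0) / corpus_len
        if p_c > 0:
            score += p_q * math.log2(p_q / p_c)
    return score

def scq_avg(query_terms, corpus_tf, doc_freq, n_docs):
    """Averaged SCQ: per-term similarity between query and collection,
    aggregated by taking the mean over query terms."""
    scores = []
    for term in query_terms:
        tf_c = corpus_tf.get(term, 0)
        df = doc_freq.get(term, 0)
        if tf_c == 0 or df == 0:
            scores.append(0.0)  # an unseen term contributes nothing
        else:
            scores.append((1 + math.log(tf_c)) * math.log(1 + n_docs / df))
    return sum(scores) / len(scores)
```

On a toy corpus, a rare (more specific) term receives a higher SCS than a frequent one, matching the intuition described above.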
\textbf{Neural embedding based QPPs (Neural-CC):} Neural embedding-based \ac{QPP} metrics have shown excellent performance on several \ac{IR} benchmarks. They go beyond the traditional term-frequency based \ac{QPP} metrics and capture the semantic aspects of terms \cite{zamani2018neural,roy2019estimating,arabzadeh2019geometric,arabzadeh2020neural,arabzadeh2020neural1,khodabakhsh2021semantics,roitman2020ictir,hashemi2019performance}. We adapted one of the recently proposed \ac{QPP} metrics, which builds a network between query terms and their most similar neighbors in the embedding space. Similar to \citep{he2004inferring,plachouras2004university}, this metric is based on query specificity. The intuition behind this metric is that specific queries play a more central and vital role in their neighborhood network than more generic ones. Here, as suggested in~\citep{arabzadeh2020neural,arabzadeh2020neural1,arabzadeh2019geometric}, we adapted the Closeness Centrality (CC) of query terms within their neighborhood network, which has been shown to have the highest correlation across different \ac{IR} benchmarks. \begin{figure} \centering \vspace{-1em} \includegraphics[width = 230pt]{Figures/mainresults_percentage_sig_new.png} \vspace{-1em} \caption{\small Results of three \ac{QPP} metrics on reformulated queries, as percentage difference with the original queries, on all intents. The darker bars indicate statistically significant improvement with $\alpha=0.05.$} \vspace{-1em} \label{fig:mainresult} \end{figure} \subsubsection*{\bf Training Parameters:} For the contextual bandit and GPT-3 models, the default parameters of the available libraries were used, and no parameter tuning was done. To train the model described in Section~\ref{sec:popularity}, we use an Adam optimizer with a learning rate of $\eta=0.001$, a mini-batch of size 8 for training, and embeddings of size 100 for both words and aspects.
A dropout rate of 0.5 is applied at the fully connected and ReLU layers to prevent potential overfitting. \vspace{-1em} \subsection{Experimental Results} \label{sec:exp_results} We compare the result of \ac{QPP} metrics on our best policies and on popular attributes with the original requests in Figure~\ref{fig:mainresult}, where we report the percentage difference w.r.t.\ the full form, that is, the extent to which the QPP metrics predict that the reformulated requests are likely to perform better than the original ones. We examine the difference between the average of the QPP metrics on requests reformulated with the best policy (adaptive active greedy) and on the full form of the requests. In addition, we compared the requests reformulated with popular attributes against the full form of the requests and report them in the same figure. As shown in Figure~\ref{fig:mainresult}, the adaptive active greedy policy showed improvements on all three QPP metrics and on all intents. The bars in Figure~\ref{fig:mainresult} can be interpreted as the percentage of predicted improvement of the reformulated requests over the full form of the requests. For instance, for the restaurants intent, the SCQ, SCS, and neural embedding QPP methods improved by 3.3\%, 3.1\%, and 22.5\%, respectively. We measure statistical significance using a paired t-test with a significance level of 0.05. We note that while the improvements made by the adaptive active greedy policy were consistently statistically significant on all intents under the SCQ and neural embedding QPP metrics, the gains were statistically significant on only 4 intents under the SCS metric: ``Restaurants'', ``Landscaping'', ``Home cleaners'', and ``Home Remodeling.'' It should be noted that while QPP methods are potential indicators of performance, every QPP method focuses on a different quality aspect of the query. Therefore, they do not necessarily agree on the predicted performance for different queries, corpora, or retrieval methods.
This observation has been made for different SOTA QPP methods on various well-known corpora, such as the TREC corpora or MS MARCO and their associated query sets~\cite{carmel2010estimating,arabzadeh2021bert,hashemi2019performance}. Thus, we conclude that the level of agreement can strengthen our confidence in the query performance prediction. In other words, the more the QPP metrics agree on query performance, the more confidence we have in that prediction. In addition, we can interpret each QPP method's prediction based on the intuition behind it. For example, the SCS method relies on the query's clarity level, while the SCQ method counts on the existence of potential answers in the corpus. When two QPP methods do not agree on a query's performance, we take it to mean that the query satisfies the intuition behind one of the QPP methods while failing to satisfy the other. For example, take the `activity' intent in Figure~\ref{fig:mainresult}, for which the SCQ method showed significant improvement but the SCS method did not. We interpret this observation as follows: the clarity of the query was not significantly increased by our refinement, but the query was expanded so that the number of potential answers present in the corpus increased. \subsubsection*{\bf NLU evaluation} To evaluate the topic and intent classifiers, we used the test set described in Section~\ref{sec:datasetcollection} and achieved 99.3\% and 95.2\% accuracy for topics and intents, respectively. For evaluating the slot tagger, we leveraged the annotated data collected by three different judges described in Section~\ref{sec:nludatacollection}, performing 4-fold cross-validation, and achieved a 0.75 macro-F1 across all the intents and slots. The results for slot tagging are promising despite the challenges, e.g., a small amount of labeled data, a large number of slots per intent, and overlapping slots across user intents. The results indicate the ability of GPT-3 to generalize in few-shot learning.
\subsection{Ablation Analysis} \begin{figure} \centering \vspace{-1em} \includegraphics[width = 210pt]{Figures/spec-broad.png} \vspace{-1em} \caption{\small Comparing Specific Requests vs Broad Requests in terms of 3 different pre-retrieval QPP metrics.} \vspace{-2em} \label{fig:broad-spec} \end{figure} \subsubsection*{\bf Broad vs. Specific:} Studying a system's performance on a per-query basis can reveal where the system fails, i.e., which queries it fails to answer and which groups of queries can be handled easily. Thus, it can potentially lead to future improvements of the system. As such, exploring query characteristics has attracted a lot of attention in the IR and NLP communities, because query broadness has been shown to be a crucial factor that can lead to unsatisfactory retrieval~\cite{song2009identification,clarke2009effectiveness,sanderson2008ambiguous,nel2019effect,min2020ambigqa}. Here, we separately study the performance of our proposed method on two groups of queries, broad and specific. We are interested in examining whether our proposed method can address both types of requests consistently. We define \textit{broad requests} as requests with less complex information-seeking tasks and fewer preferences expressed; they are short and contain a small number of slots/values ($\leq$ 3), hence requiring more steps for the \ac{RL} model to refine the user intent. On the contrary, \textit{specific requests} are defined as longer ones that contain many slots/values, so that users need fewer steps to finalize their intent. Figure~\ref{fig:broad-spec} demonstrates the evaluation results for broad and specific requests. As demonstrated in this figure, although all the employed QPP metrics agreed that both types of requests were improved, Adaptive Active Greedy performs relatively better on broad queries than on specific ones.
This is expected because specific requests are more complex than broad ones, and more criteria must be addressed to satisfy them. Moreover, suggesting popular slots has a deteriorating effect on all the metrics across the intents for specific requests, showing a challenging reformulation process, while the proposed model improves the QPP on all metrics. \subsubsection*{\bf Different Context $c$:} We compare the three proposed contexts described in Section~\ref{sec:interactive} in terms of percentage difference w.r.t.\ the original form of the requests on the performance predicted by the QPP metrics in Figure~\ref{fig:dif-context}, on the top-5 most popular intents. The results show that all three proposed contexts outperform the original representation across all metrics and intents. We observe that the QPP metrics do not consistently agree on the predicted performance of these three methods. While neural-cc predicts that methods 2 and 3 for defining $c$ perform better than method 1, we noticed that SCS and SCQ sometimes behave in the opposite way. We hypothesize that this difference may arise because neural-cc works based on neural embeddings while SCS and SCQ work based on term frequency and corpus statistics; therefore, each group might capture different aspects of the requests. Although all the proposed contexts $c$ significantly outperform the original query, we cannot conclude which one among them outperforms the others. \subsubsection*{\bf Policy evaluation for contextual bandit model: } We performed an off-policy evaluation experiment for the contextual bandits, selecting the most popular intents. Offline assessment of contextual bandits is complicated because these models are designed to interact with online environments. There are multiple methods for off-line policy evaluation, such as Rejection Sampling (RS)~\citep{li2010contextual} and Normalised Capped Importance Sampling (NCIS)~\citep{gilotte2018offline}.
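A minimal sketch of the two estimators follows (the RS sketch assumes a uniform logging policy, and the capping threshold in NCIS is a hypothetical choice; the deployed evaluation may differ in details):

```python
def rejection_sampling_value(logs, target_policy):
    """Rejection-sampling (RS) estimate of a target policy's average reward
    from (context, action, reward) logs collected under a *uniform* logging
    policy: keep only events where the target agrees with the logged action."""
    kept = [reward for context, action, reward in logs
            if target_policy(context) == action]
    return sum(kept) / len(kept) if kept else 0.0

def ncis_value(logs, target_prob, logged_prob, cap=10.0):
    """Normalised Capped Importance Sampling (NCIS): reweight each logged
    reward by the capped ratio of target to logging action probabilities,
    then normalise by the total weight."""
    num = den = 0.0
    for context, action, reward in logs:
        w = min(target_prob(context, action) / logged_prob(context, action), cap)
        num += w * reward
        den += w
    return num / den if den else 0.0
```

The normalisation in NCIS trades a small bias for a large variance reduction, which is why it tends to give more realistic estimates than plain importance sampling.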
All the results are reported based on the best-arm performance. Since the system can expose users to multiple slots, the final performance in the proposed setting will be better than the reported results. Table~\ref{tab:comp} shows the results for the three different models. \begin{figure} \centering \vspace{-1em} \includegraphics[width = 220pt]{Figures/g3_2.png} \caption{\small Comparison analysis between different contexts w.r.t.\ the original form of requests based on pre-retrieval QPP metrics. } \label{fig:dif-context} \vspace{-1em} \end{figure} \begin{table}[t] \small \begin{center} \begin{tabular}{l c c c cc} \toprule Avg. reward &restaurant&landscaping &hike&activity&appliance\\ \midrule RS & 0.538 & 0.352 & 0.455& 0.25 &0.375\\ NCIS & 0.413 & 0.469 & 0.555& 0.407 &0.654\\ Real & 0.378 & 0.440 & 0.495& 0.396& 0.670\\ \bottomrule \end{tabular} \end{center} \caption{Policy evaluation results for RS and NCIS models.} \vspace{-3em} \label{tab:comp} \end{table} According to the results, RS sometimes underestimates the performance on intents like ``restaurant'' and ``appliance repair'' while overestimating others such as ``hike.'' The NCIS method provides a more accurate and realistic estimate. \section{Conclusion and Future Work} \label{sec:conclusion} This paper proposed a novel application of natural language interfaces, allowing cold-start users to submit and receive responses for complex information-seeking requests expressed in natural language. Unlike traditional search engines, where a single most relevant result is expected, users of our system are presented with a set of suggestions for further exploration. We have designed and deployed a system that permitted us to conduct initial data collection and potential future online experimentation using the A/B testing paradigm.
To complement this platform for complex user requests, we developed a novel interactive agent, based on contextual bandits, that guides users to express their initial request more clearly by refining their intents and adding new preferential desiderata. During this guided interaction, a \ac{NLU} unit is used to build a structured semantic representation of the request. The system also uses a proposed \acf{CIS} that estimates the degree to which the intent is fully expressed at each interaction step. When the system determines that an optimal request has been expressed, it leverages a search API to retrieve a list of suggestions. To demonstrate the efficacy of the proposed modeling paradigm, we have adopted various pre-retrieval metrics that capture the extent to which guided interactions with our system yield better retrieval results. In a suite of experiments, we demonstrated that our method significantly outperforms several robust baseline approaches. In future work, we plan to design an online experiment that will involve business metrics, such as user satisfaction and the ratio of returning users, and that will interactively collect ratings for the list of suggestions made by our system. This will allow us to learn jointly from language and rating data. Another possible direction is designing intent ontologies in a more complex hierarchical form.
\section{#1} \setcounter{equation}{0}} \newcommand{\norm}[3]{\|#3\|_{#1,#2}} \setcounter{secnumdepth}{2} \newcommand{\Degin}[2]{{d^{\,\text{in}}_{{#1},{#2}}}} \newcommand{\Degout}[2]{{d^{\,\text{out}}_{{#1},{#2}}}} \newcommand{\degin}[1]{{d^{\,\text{in}}_{{#1}}}} \newcommand{\degout}[1]{{d^{\,\text{out}}_{{#1}}}} \newcommand{\is}[1]{{\mathbf{#1}}} \newcommand{\isd}[1]{{\mathbf{#1}^\bullet}} \newcommand{\dt}[1]{{\dot{#1}}} \newcommand{\isb}[1]{{\overline{\mathbf{#1}}}} \newcommand{\mbox{$\;|\;$}}{\mbox{$\;|\;$}} \newcommand{\pitchfork}{\pitchfork} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mbox{$\phi_\Pi^\star$}}{\mbox{$\phi_\Pi^\star$}} \newcommand{\displaystyle}{\displaystyle} \newcommand{\mbox{$\varepsilon$}}{\mbox{$\varepsilon$}} \newcommand{{\mathbb{R}}}{{\mathbb{R}}} \newcommand{\mbox{$\mathbb{R}$}}{\mbox{$\mathbb{R}$}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mbox{$\mathbb{K}$}}{\mbox{$\mathbb{K}$}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mbox{$\mathbb Q$}}{\mbox{$\mathbb Q$}} \newcommand{\mbox{$\mathbb{H}$}}{\mbox{$\mathbb{H}$}} \newcommand{{\mathbb{Z}}}{{\mathbb{Z}}} \newcommand{{\mathbf{x}}}{{\mathbf{x}}} \newcommand{{\mathbf{z}}}{{\mathbf{z}}} \newcommand{\dot{\is{x}}}{\dot{\is{x}}} \newcommand{\dot{\is{y}}}{\dot{\is{y}}} \newcommand{\dot{\is{z}}}{\dot{\is{z}}} \newcommand{\dot{\is{w}}}{\dot{\is{w}}} \newcommand{\dot{{W}}}{\dot{{W}}} \newcommand{\dot{w}}{\dot{w}} \newcommand{\dot{w}}{\dot{w}} \newcommand{\mc}[1]{{\mathcal{#1}}} \newcommand{{\dot{x}}}{{\dot{x}}} \newcommand{\dot{z}}{\dot{z}} \newcommand{\stackrel{\mathrm{def}}{=}}{\stackrel{\mathrm{def}}{=}} \newcommand{{\boldsymbol{\ell}}}{{\boldsymbol{\ell}}} \newcommand{{\boldsymbol{\Delta}}}{{\boldsymbol{\Delta}}} \newcommand{\mathcal{N}}{\mathcal{N}} \newcommand{\widehat{\mathcal{N}}}{\widehat{\mathcal{N}}} \newcommand{\widehat{N}}{\widehat{N}} \newcommand{{\is{A}}}{{\is{A}}} \newcommand{{\is{y}}}{{\is{y}}} \newcommand{{\is{v}}}{{\is{v}}} \newcommand{{\is{w}}}{{\is{w}}} 
\newcommand{{\is{W}}}{{\is{W}}} \newcommand{{\is{u}}}{{\is{u}}} \newcommand{{\is{q}}}{{\is{q}}} \newcommand{{\is{X}}}{{\is{X}}} \newcommand{{\is{Y}}}{{\is{Y}}} \newcommand{{\is{f}}}{{\is{f}}} \newcommand{\mathbf{Z}}{\mathbf{Z}} \newcommand{\mbox{$\mathbb{T}$}}{\mbox{$\mathbb{T}$}} \newcommand{\mbox{$\mathbb{N}$}}{\mbox{$\mathbb{N}$}} \newcommand{\mbox{sgn}}{\mbox{sgn}} \newcommand{{\rightarrow}}{{\rightarrow}} \newcommand{{\longrightarrow}}{{\longrightarrow}} \newcommand{\mbox{$\rightarrow$}}{\mbox{$\rightarrow$}} \newcommand{\COM}[1]{\noindent \\{\bf COMMENT: {#1}}\\} \newcommand{{\bf GL}}{{\bf GL}} \newcommand{{\bf SL}}{{\bf SL}} \newcommand{{\bf SO}}{{\bf SO}} \newcommand{{\bf SE}}{{\bf SE}} \newcommand{\operatorname{Diff}}{\operatorname{Diff}} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\mathbf{M}}{\mathbf{M}} \newcommand{\overline{\Mb}}{\overline{\mathbf{M}}} \newcommand{\overline{\XX}}{\overline{{\is{X}}}} \newcommand{\rset}[2]{\left\lbrace #1 \mbox{$\;|\;$} #2\right\rbrace} \newcommand{\lset}[2]{\left\lbrace\left. 
#1\;\right|\,#2\,\right\rbrace} \newcommand{\set}[2]{\rset{#1}{#2}} \newcommand{\tset}[2]{\big\lbrace #1\,\big|\;#2\big\rbrace} \newcommand{\sset}[1]{\left\lbrace #1\right\rbrace} \newcommand{\tsset}[1]{\big\lbrace #1\big\rbrace} \newcommand{\bu}[1]{{#1}^{\bullet}} \newcommand{\iz}[1]{\bu{\is{#1}}} \newcommand{\simp}[1]{\boldsymbol{\Delta}_{#1}} \newcommand{\orth}[1]{\text{O}_{#1}} \newcommand{\diag}[1]{{\mathbf{D}({#1})}} \newcommand{\mathbf{N}}{\mathbf{N}} \newcommand{\boldsymbol{\nu}}{\boldsymbol{\nu}} \newcommand{\smallsetminus}{\smallsetminus} \newtheorem{lemma}{Lemma}[section] \newtheorem{header}{}[section] \newtheorem{prop}[lemma]{Proposition} \newtheorem{thm}[lemma]{Theorem} \newtheorem{cor}[lemma]{Corollary} \newtheorem{alg}[lemma]{Algorithm} \theoremstyle{definition} \newtheorem{Def}[lemma]{Definition} \newtheorem{exam}[lemma]{Example} \newtheorem{exams}[lemma]{Examples} \newtheorem{exer}[lemma]{Exercise} \newtheorem{exers}[lemma]{Exercises} \theoremstyle{remark} \newtheorem{rem}[lemma]{Remark} \newtheorem{rems}[lemma]{Remarks} \newtheorem{con}[lemma]{Conjecture} \newenvironment{pfof}[1]{\vspace{1ex}\noindent{\bf Proof of #1}\hspace{0.5em}}{\hfill\qed\vspace{1ex}} \newcommand{\mbox{\bf O}}{\mbox{\bf O}} \newcommand{\varepsilon}{\varepsilon} \newcommand{{\it et al.}}{{\it et al.}} \newcommand{\textnormal{range}}{\textnormal{range}} \title{Synchrony and Anti-synchrony in Weighted Networks} \author{Manuela Aguiar} \address{Manuela Aguiar, Faculdade de Economia, Centro de Matem\'atica, Universidade do Porto, Rua Dr Roberto Frias, 4200-464 Porto, Portugal.} \email[Corresponding author]{maguiar@fep.up.pt} \author{Ana Dias} \address{Ana Dias, Departamento de Matem\'atica, Centro de Matem\'atica, Universidade do Porto, Rua do Campo Alegre, 687, 4169-007 Porto, Portugal} \email{apdias@fc.up.pt} \date{\today} \begin{document} \maketitle \begin{abstract} We consider weighted coupled cell networks, that is networks where the interactions between any two cells have an 
associated weight that is a real-valued number. Weighted networks are ubiquitous in real-world applications. We take a dynamical systems perspective by associating to each network a set of continuous dynamical systems, the ones that respect the graph structure of the network. For weighted networks it is natural for the admissible coupled cell systems to have an additive input structure. We present a characterization of the synchrony subspaces and the anti-synchrony subspaces for a weighted network depending on the restrictions that are imposed on their admissible input-additive coupled cell systems. These subspaces are flow-invariant under those systems and are generalized polydiagonal subspaces, that is, they are characterized by conditions on the cell coordinates of the types $x_i = x_j$ and/or $x_k = -x_l$ and/or $x_m=0$. The existence and identification of the synchrony and anti-synchrony subspaces for a weighted network are deeply relevant from the applications and dynamics points of view. Our characterization of the synchrony and anti-synchrony subspaces of a weighted network follows from our results giving necessary and sufficient conditions for a generalized polydiagonal to be left invariant by the adjacency matrix and/or the Laplacian matrix of the network. \vspace{3mm} \noindent AMS classification scheme numbers: 05C50 05C22 05C90 34C15 \vspace{3mm} \noindent Keywords: weighted network, adjacency matrix, Laplacian matrix, generalized polydiagonal, coupled cell system with additive structure, synchrony, anti-synchrony. \end{abstract} \tableofcontents \section{Introduction} Networks are often used to model many applications in a huge set of scientific areas, see Arenas~{\it et al.}~\cite{ADMZ08} and references therein.
From the dynamical systems perspective, an ultimate goal is to use properties of the {\it network}, as a graph object, to induce features for the associated {\it coupled cell systems}, the ones that respect the graph structure of the network. Here, we consider systems of ordinary differential equations. In the coupled cell systems formalism of Stewart, Golubitsky and collaborators~\cite{GSP03,GST05} and in the one of Field~\cite{F05}, the network connections are assigned nonnegative integer values. When the values associated with the connections can be any real number, we have {\it weighted networks}. See Aguiar, Dias and Ferreira~\cite{ADF17} and Aguiar and Dias~\cite{AD18}. In the context of weighted networks, it is common to assume that the coupled cell systems with structure consistent with the network have {\it additive input structure}, that is, the input to any cell is a sum of the {\it pairwise interactions} between the cell and the cells connected to it, scaled by the weight of the connection. See, for example, Field~\cite{F15}, Bick and Field~\cite{BF17} and Newman~\cite{N10}. An important achievement in the two mentioned coupled cell systems formalisms is the characterization of the {\it synchrony spaces} for a network, the polydiagonals defined by equalities of cell coordinates, which are flow-invariant under any coupled cell system associated with the network structure. Moreover, their existence and characterization rely only on the network structure. In fact, algorithms exist that determine the set of network synchrony spaces using solely the network adjacency matrix (or matrices in case there is more than one type of interaction between cells). See, for example, Aguiar and Dias~\cite{AD14}. Remark 2.11 of Aguiar and Dias~\cite{AD18} and Theorem 2.4 of Aguiar, Dias and Ferreira~\cite{ADF17} state that the same holds for weighted networks considering coupled cell systems with additive input structure.
In this work we consider a combination of the additive input structure of the coupled cell systems with restrictions on the internal dynamics and on the coupling functions and show that this can lead to a drastic increase in the type of robust phenomena that the systems can exhibit. As is widely known, the existence of robust flow-invariant subspaces has a strong impact on the dynamics and favors the existence of non-generic dynamical behavior like robust heteroclinic cycles and networks and bifurcation phenomena. See, for example, Aguiar~{\it et al.}~\cite{AADF11}, Field~\cite{F15}, Golubitsky~{\it et al.}~\cite{GNS04} and Golubitsky and Lauterbach~\cite{GL09}. Let $G$ be an $n$-cell weighted network. Consider a coupled cell system with additive input structure associated with $G$ given by $\dot{x} = f(x)$, where $f = (f_1, \ldots, f_n)$ so that the equation $\dot{x}_j = f_j(x)$ is associated with cell $j$ and it has the form: \begin{equation} \dot{x}_j=g(x_j) +\sum_{i=1}^n {w_{ji}h\left(x_j,x_i\right)}\, . \label{eq:intro_EDOsystem} \end{equation} Here, $g:\mbox{$\mathbb{R}$} \rightarrow \mbox{$\mathbb{R}$}$ and $h:\mbox{$\mathbb{R}$} \times \mbox{$\mathbb{R}$} \rightarrow \mbox{$\mathbb{R}$}$ are smooth functions that characterize the internal dynamics and the coupling function, respectively; each $w_{ji}\in \mbox{$\mathbb{R}$}$ is the value of the weight of the coupling strength of the directed edge from cell $i$ to cell $j$. If there is no directed edge from cell $i$ to cell $j$, the weight is assumed to be zero. Let $W_G = [w_{ij}]$ denote the $n \times n$ adjacency matrix of $G$. A polydiagonal space $\Delta$ is left invariant under any coupled cell system of the form (\ref{eq:intro_EDOsystem}) if and only if it is left invariant under $W_G$. See~\cite{GSP03, GST05, F05, ADF17,AD18}.
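For a polydiagonal given purely by equalities of coordinates, invariance under $W_G$ amounts to the familiar balanced condition: for any two synchrony classes, the summed weight received from one class is the same for every cell of the other class. As a purely numerical aside (a minimal sketch, not part of the formal development), the right-hand side of (\ref{eq:intro_EDOsystem}) and this invariance test read as follows:

```python
def input_additive_rhs(x, W, g, h):
    """Right-hand side of the input-additive system:
    dx_j/dt = g(x_j) + sum_i w_ji * h(x_j, x_i)."""
    n = len(x)
    return [g(x[j]) + sum(W[j][i] * h(x[j], x[i]) for i in range(n))
            for j in range(n)]

def leaves_polydiagonal_invariant(W, partition, tol=1e-12):
    """The polydiagonal `cells in the same block share a coordinate' is
    invariant under W iff, for every pair of blocks, the row sums of W over
    the second block agree for all rows indexed by the first block."""
    for block in partition:
        for other in partition:
            sums = [sum(W[j][i] for i in other) for j in block]
            if max(sums) - min(sums) > tol:
                return False
    return True
```

For instance, for the three-cell network with $w_{12}=1$, $w_{21}=2$, $w_{32}=3$ used as an example below, the partition $\{\{1,2\},\{3\}\}$ fails this test, consistent with the absence of synchrony spaces for that network.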
In the literature, it is common to assume, in addition to this additive input structure of the systems (\ref{eq:intro_EDOsystem}), certain restrictions on the coupling function $h$. One usual restriction is \begin{equation}\label{eq:asump_h} h(x,x) = 0,\, \forall x \in \mbox{$\mathbb{R}$}\, . \end{equation} Now observe that $h$ satisfies the hypothesis (\ref{eq:asump_h}) if and only if \begin{equation}\label{eq:form_h} h(x,y) = (x-y) h_1(x,y),\, \forall x,y \in \mbox{$\mathbb{R}$}, \end{equation} for some smooth function $h_1:\, \mbox{$\mathbb{R}$}^2 \to \mbox{$\mathbb{R}$}$. Using this notation, equation (\ref{eq:intro_EDOsystem}) becomes \begin{equation} \dot{x}_j=g(x_j) +\sum_{i=1}^n {w_{ji} (x_j - x_i) h_1\left(x_j,x_i\right)}\, . \label{eq:3intro_EDOsystem} \end{equation} Note that, for equations of the form (\ref{eq:3intro_EDOsystem}), we have the full synchronized space $\Delta_0 = \{ x:\, x_1 = x_2 = \cdots = x_n\}$, as $\Delta_0$ is left invariant under the flow for any choice of the functions $g$ and $h_1$. Examples of coupled cell systems of the form (\ref{eq:3intro_EDOsystem}) are the {\it exo-difference-coupled cell systems} considered by Neuberger, Sieben, and Swift~\cite{NSS19} and the {\it diffusive networks} addressed for example by Poignard, Pade and Pereira~\cite{PPP19} where \begin{equation} \label{eq:dif} h(x,y) = H(x-y) \end{equation} for smooth $H:\, \mbox{$\mathbb{R}$} \to \mbox{$\mathbb{R}$}$ such that $H(0) = 0$. In \cite{PPP19}, the authors address how the coupling structure of the graph affects the transverse stability of the full synchronized space $\Delta_0$.
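For completeness, one explicit choice of the factor $h_1$ in (\ref{eq:form_h}) comes from writing $h$ as the integral of its derivative along the segment joining $(y,y)$ to $(x,y)$ (a standard Hadamard-type argument, included here only as a reading aid):

```latex
h(x,y) \;=\; h(x,y) - h(y,y)
       \;=\; \int_0^1 \frac{d}{dt}\, h\bigl(y + t(x-y),\, y\bigr)\, dt
       \;=\; (x-y)\, \underbrace{\int_0^1 (\partial_1 h)\bigl(y + t(x-y),\, y\bigr)\, dt}_{h_1(x,y)},
```

using $h(y,y)=0$ from (\ref{eq:asump_h}); smoothness of $h_1$ follows by differentiation under the integral sign.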
Note also that coupled cell systems where cell equations have the form (\ref{eq:3intro_EDOsystem}) are consistent with the network obtained from $G$ by considering the entries $w_{jj}$ equal to zero, and so, a polydiagonal space $\Delta$ is left invariant under any coupled cell system of the form (\ref{eq:3intro_EDOsystem}) if and only if it is left invariant under the adjacency matrix obtained from $W_G$ by taking the diagonal entries equal to zero. Trivially, that is equivalent to checking if $\Delta$ is invariant under the Laplacian matrix $L_G$. Recall that, taking the adjacency matrix $W_G$ of the weighted network $G$, the {\it Laplacian matrix} associated with it is given by $L_G=D_G-W_G$, where $D_G$ is the diagonal matrix with the input valencies of the cells at the diagonal. In Aguiar and Dias~\cite{AD18}, we consider synchronization for weighted networks and end with examples of flow-invariant subspaces whose definition includes, besides cell coordinates that are equal, cell coordinates with the same magnitude but opposite sign. Using the terminology of Neuberger, Sieben, and Swift~\cite{NSS19}, these are {\it anti-synchrony subspaces}. In Neuberger, Sieben, and Swift~\cite{NSS19}, the authors consider four nested sets of difference-coupled systems, for $0-1$ undirected networks, by adding restrictions on the internal dynamics and coupling functions, and characterize the synchrony and anti-synchrony subspaces for a network with respect to those four subsets of admissible difference-coupled systems. Anti-synchronization has been observed in different coupled cell systems scenarios, such as coupled oscillators, Kim~{\it et al.}~\cite{KRKRP03}, neural networks, Meng and Wang~\cite{MW07} and multi-agent systems, Hu and Zheng~\cite{HZ13}.
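As a numerical aside (a minimal sketch, not part of the formal development), the construction $L_G = D_G - W_G$, and the fact that every row of $L_G$ sums to zero, so that the fully synchronized vector always lies in the kernel of $L_G$, read as follows:

```python
def laplacian(W):
    """Laplacian L = D - W, with the input valencies (row sums of W) on the
    diagonal of D; any self-weight w_jj cancels in L[j][j] = d_j - w_jj."""
    n = len(W)
    L = [[-W[j][i] for i in range(n)] for j in range(n)]
    for j in range(n):
        L[j][j] += sum(W[j])
    return L

# Adjacency matrix of the three-cell network of Figure 1 below:
# edges 2 -> 1 (weight 1), 1 -> 2 (weight 2), 2 -> 3 (weight 3).
W = [[0, 1, 0], [2, 0, 0], [0, 3, 0]]
L = laplacian(W)
# Every row of L sums to zero, so L annihilates the constant vector:
# the full synchrony space Delta_0 is always invariant under the Laplacian.
```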
Motivated by the works of Aguiar and Dias~\cite{AD18} and Neuberger, Sieben, and Swift~\cite{NSS19}, we introduce the definition of generalized polydiagonal subspace as a space where the defining conditions, besides equalities of cell coordinates, can also include conditions such as $x_i = -x_j$ or $x_k = 0$, and characterize the generalized polydiagonals that are left invariant by the adjacency matrix and/or by the Laplacian matrix of a weighted network. We then get the synchrony and anti-synchrony subspaces for different subclasses of the class of the input additive coupled cell systems of the network. Those subclasses are defined by considering restrictions on the internal dynamics and on the coupling functions, namely, that they be odd, even, or linear. As more restrictions are imposed, more types of flow-invariant spaces can occur for any such systems. We next give a very simple illustration of this by presenting a weighted network that has no robust synchrony spaces for general coupled cell systems with additive input structure but for which, when a restriction is added on the coupling function, the opposite occurs.
\begin{figure}[!h] \begin{tabular}{cc} \begin{tikzpicture} [scale=.15,auto=left, node distance=1.5cm] \node[fill=magenta,style={circle,draw}] (n1) at (4,0) {\small{1}}; \node[fill=magenta,style={circle,draw}] (n2) at (14,0) {\small{2}}; \node[fill=white,style={circle,draw}] (n3) at (24,0) {\small{3}}; \draw[->, thick] (n1) edge[thick, bend left=30] node [near end, above=0.1pt] {{\tiny $2$}} (n2); \draw[->, thick] (n2) edge[thick, bend left=30,] node [near end, below=0.1pt] {{\tiny $1$}} (n1); \draw[->, thick] (n2) edge[thick] node [near end, above=0.1pt] {{\tiny $3$}} (n3); \end{tikzpicture} \qquad & \qquad \begin{tikzpicture} [scale=.15,auto=left, node distance=1.5cm] \node[fill=magenta,style={circle,draw}] (n1) at (4,0) {\small{$[1]$}}; \node[fill=white,style={circle,draw}] (n3) at (14,0) {\small{$[3]$}}; \draw[->, thick] (n1) edge[thick] node [near end, above=0.1pt] {{\tiny $3$}} (n3); \end{tikzpicture} \end{tabular} \caption{(Left) A three-cell network. (Right) A two-cell network.} \label{f:simples} \end{figure} Coupled cell systems with additive input structure consistent with the three-cell network in Figure~\ref{f:simples} have the form: $$ \left\{ \begin{array}{rcl} \dot{x}_1 & = & g(x_1) + h(x_1,x_2) \\ \dot{x}_2 & = & g(x_2) + 2 h(x_2,x_1) \\ \dot{x}_3 & = & g(x_3) + 3 h(x_3,x_2) \end{array} \, . \right. $$ Note that for this network, there are no synchrony spaces. Assume now hypothesis (\ref{eq:form_h}). Thus we have the coupled cell system given by $$ \left\{ \begin{array}{rcl} \dot{x}_1 & = & g(x_1) + (x_1 -x_2) h_1(x_1,x_2) \\ \dot{x}_2 & = & g(x_2) + 2 (x_2 - x_1) h_1(x_2,x_1) \\ \dot{x}_3 & = & g(x_3) + 3 (x_3 - x_2) h_1(x_3,x_2) \end{array} \right. $$ and $\Delta = \{ x:\, x_1 = x_2\}$ is left invariant under the flow of any such coupled cell system. Moreover, the restriction to $\Delta$ gives the coupled cell system $$ \left\{ \begin{array}{rcl} \dot{x}_1 & = & g(x_1) \\ \dot{x}_3 & = & g(x_3) + 3 (x_3 - x_1) h_1(x_3,x_1) \end{array} \right.
$$ which is consistent with the two-cell network in Figure~\ref{f:simples}. In this paper, we characterize the set of the synchrony and anti-synchrony subspaces of a general weighted network $G$, which correspond to the generalized polydiagonals invariant under the adjacency and/or Laplacian matrices of $G$. More precisely, the synchrony and anti-synchrony subspaces of a general weighted network $G$ given by the generalized polydiagonals that are flow-invariant under every coupled cell system with input additive structure that is linear-balanced, that is, every system where the internal dynamics function is odd and the coupling function is odd and linear, are in correspondence with the generalized polydiagonals invariant under the network Laplacian matrix. See Proposition~\ref{prop:IGL}. Analogously, the synchrony and anti-synchrony subspaces of $G$ given by the generalized polydiagonals that are flow-invariant under every coupled cell system with input additive structure that is even-odd-balanced, that is, every system where the internal dynamics function is odd and the coupling function is even in the first variable and odd in the second variable, are in correspondence with the generalized polydiagonals invariant under the network adjacency matrix. See Proposition~\ref{prop:eoia}. It thus follows from our results that, for the above classes of coupled cell systems with input additive structure, the characterization of the set of the synchrony and anti-synchrony subspaces of a general weighted network follows from the characterization of the generalized polydiagonals invariant under the adjacency and/or Laplacian matrices of the network. Using an extension of the results of Aguiar and Dias~\cite{AD14}, we characterize the synchrony and anti-synchrony subspaces of a general weighted network by considering generalized polydiagonal subspaces and using the eigenvalue and eigenvector structures of the adjacency matrix and the Laplacian matrix. 
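The flow-invariance of $\Delta = \{ x:\, x_1 = x_2\}$ for the three-cell example above can also be confirmed numerically. In the Python sketch below, the choices of the internal dynamics $g$ and of the coupling factor $h_1$ are illustrative; nothing in the invariance argument depends on them.

```python
import numpy as np

# Illustrative choices; the invariance argument works for any smooth g, h_1.
def g(x):
    return -x + np.sin(x)

def h1(x, y):
    return 1.0 + np.cos(x * y)

def rhs(x):
    """Input-additive system for the three-cell network of Figure 1,
    with coupling of the form h(x, y) = (x - y) * h1(x, y)."""
    x1, x2, x3 = x
    return np.array([
        g(x1) + (x1 - x2) * h1(x1, x2),
        g(x2) + 2.0 * (x2 - x1) * h1(x2, x1),
        g(x3) + 3.0 * (x3 - x2) * h1(x3, x2),
    ])

def flow(x0, dt=1e-3, steps=2000):
    """Forward-Euler integration, enough to observe invariance of x1 = x2."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * rhs(x)
    return x

# Start on Delta (x1 = x2): the trajectory stays on Delta.
x_final = flow([0.7, 0.7, -1.2])
```

Starting on $\Delta$, the forward-Euler updates of cells $1$ and $2$ coincide at every step, so the computed trajectory never leaves $\Delta$, in agreement with the restricted system on $\Delta$ displayed above.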
See Section~\ref{sec:algm}. The paper is organized in the following way. In Section~\ref{sec:wn}, we establish notation and a few facts concerning weighted networks. In Section~\ref{sec_gen_tag}, we introduce the definitions of generalized polydiagonal subspace and of the associated tagged partition of the network set of cells. These include, as particular cases, the definitions of polydiagonal subspace and of the associated partition of the network set of cells. The characterization of the generalized polydiagonal subspaces that are left invariant by the adjacency matrix and/or the Laplacian matrix of a weighted network appears in Section~\ref{sec:gen_poly}. This is done through necessary and sufficient conditions on the blocks of any block structure of the adjacency and Laplacian matrices of the network adapted to the generalized polydiagonal subspace, and leads to the definition of several kinds of tagged partitions. In Section~\ref{sec_CCNS}, we review coupled cell systems with additive input structure and, following the terminology in Neuberger, Sieben, and Swift~\cite{NSS19}, define subclasses of these coupled cell systems, namely, exo-input-additive, odd-input-additive, and linear-input-additive coupled cell systems. We also define the class of even-odd-input-additive coupled cell systems. In Sections~\ref{sec_bal}-\ref{sec_eo_bal}, using the results obtained in Section~\ref{sec:gen_poly}, we characterize the synchrony and the anti-synchrony subspaces for weighted coupled cell networks depending on the additional restrictions imposed on their input-additive admissible coupled cell systems. In Section~\ref{sec:algm}, we show, for the adjacency matrix and for the Laplacian matrix of a network, that the set of the generalized polydiagonals that are left invariant by the matrix is a lattice. 
We show that the work in Aguiar and Dias~\cite{AD14} generalizes easily to the lattice of synchrony and anti-synchrony subspaces and how to apply the algorithm there to find these two lattices and, thus, the set of the synchrony and anti-synchrony subspaces of the network. We end with some conclusions in Section~\ref{sec:conclu}. \section{Weighted networks}\label{sec:wn} We consider {\it weighted networks}, that is, networks given by directed graphs whose edges have associated real-valued weights. If $G$ is an $n$-cell weighted network, with set of cells $C=\{1,\ldots,n\}$, its $n \times n$ {\it weighted adjacency matrix} is $W_G = [w_{ij}]_{1 \le i,j \le n}$, where $w_{ij}$ is the weight of the edge from cell $j$ to cell $i$, or zero if there is no such edge. The {\it input valency} of a cell $i \in C$, denoted by $v(i)$, is the sum of the weights of the edges directed to the cell $i$, that is, $v(i) = \sum_{j \in C} w_{ij}$. A network is said to be {\it regular} when all the network cells have the same input valency, that is, $v(i) = v(j)$, for all $i,j \in C$. When the network is regular, its weighted adjacency matrix $W_G$ has constant row sum, say $v_W = \sum_{k=1}^n w_{ik}$, for $i= 1, \ldots, n$. In that case, we also say that $W_G$ is {\it regular} of {\it valency} $v_W$. \begin{Def} \normalfont Define the {\it row sum operator} ${\mathrm rs}:\, M_{s,t}(\mbox{$\mathbb{R}$}) \to M_{s,1}(\mbox{$\mathbb{R}$})$ which maps an $s\times t$ matrix $M$ to the $s \times 1$ column matrix whose $i$-th entry is the sum of the entries of the $i$-th row of $M$, for $i=1, \ldots, s$. \hfill $\Diamond$ \end{Def} \begin{rem}\normalfont Note that $\left({\mathrm rs}\left( W_G\right)\right)_i = v(i)$, for $W_G$, the weighted adjacency matrix of a weighted network $G$ with set of cells $C$, and $v(i)$, the input valency of cell $i \in C$. 
\hfill $\Diamond$ \end{rem} We recall that, given an $n$-cell weighted network $G$ with adjacency matrix $W_G=[w_{i j}]_{n\times n}$, the corresponding {\it Laplacian matrix} is given by $L_G=D_G-W_G$, where $D_G$ is the diagonal matrix with principal diagonal given by the entries of ${\mathrm rs}\left( W_G\right)$, that is, with the input valencies of the cells on the diagonal. Entrywise, the Laplacian matrix $L_G = [l_{ij}]_{n\times n}$ is defined by: $$l_{ij} = \left\{ \begin{array}{l} -w_{ij} \mbox{ if } i\not=j; \\ v(i) - w_{ii} \mbox{ if } i=j\, . \end{array} \right. $$ \begin{rem}\normalfont \label{rmk:Laplacian} The Laplacian matrix $L_G$ of a weighted network $G$ is regular with valency $0$. Thus, it can be seen as the weighted adjacency matrix of another weighted network, which is regular as the input valency of each cell is zero. \hfill $\Diamond$ \end{rem} \begin{Def} Given a weighted network $G$, we denote by $G_L$ the regular weighted network with weighted adjacency matrix the Laplacian matrix $L_G$ of $G$. \hfill $\Diamond$ \end{Def} \section{Generalized polydiagonals and tagged partitions}\label{sec_gen_tag} A polydiagonal subspace $\Delta$ of $\mbox{$\mathbb{R}$}^n$ is a subspace of $\mbox{$\mathbb{R}$}^n$ characterized by equalities of the form $x_i = x_j$ where $x_i,x_j$ denote coordinates of cells $i,j$. Such a polydiagonal can be associated with a partition of $C = \{ 1, \ldots, n\}$ into a certain number of nonempty pairwise disjoint parts such that $i,j$ belong to the same part if and only if $x_i = x_j$ is a condition in the definition of $\Delta$. In this section, we generalize the notion of polydiagonal subspace of $\mbox{$\mathbb{R}$}^n$ to possibly include equalities of the form $x_k = -x_l$ or $x_m = 0$. \begin{Def} \label{def:tag_part} \normalfont Let $C = \{ 1, \ldots, n\}$ and $p,q,r$ be nonnegative integers, where $0\leq q\leq p \leq n$ and $r \in \{0,1\}$. 
\\ (i) A {\it tagged partition} of $C$ determined by $p,q,r$ is a partition of $C$ into the disjoint union of $p+q+r$ parts, $P_k$, for $k=1,\ldots,p$ if $p>0$, $\overline{P}_l$, for $l =1,\ldots,q$ if $q>0$ and $P_0$ if $r=1$. If $q > 0$, then for $l =1,\ldots,q$, each part $\overline{P}_l$ is the {\it counterpart} of $P_l$. If $r =1$ then the part $P_0$ is called the {\it zero part}. If $r=0$ then there is no zero part. \\ (ii) A tagged partition with no counterparts and no zero part, that is, a tagged partition determined by $p,q,r$ where $q=0$ and $r=0$, is called a {\it standard partition} of $C$ into $p$ disjoint parts, $P_1, \ldots, P_p$. \\ (iii) There is a unique tagged partition such that $p=0$, which we call the {\it null partition}. In that case, $q=0$ and $r=1$ and it corresponds to the partition of $C$ with only the zero part, that is, $P_0 = C$. \hfill $\Diamond$ \end{Def} We can associate with a tagged partition a subspace of $\mbox{$\mathbb{R}$}^n$ in the following way: \begin{Def} \label{def:gen_poly} \normalfont (i) Given a tagged partition $\mathcal{P}$ of $C = \{ 1, \ldots, n\}$, a {\it generalized polydiagonal subspace} of $\mbox{$\mathbb{R}$}^n$ is a subspace of the form {\small $$ \hspace{-2mm} \begin{array}{l} \Delta_{\mathcal{P}} = \left\{ x \in \mbox{$\mathbb{R}$}^n:\ x_j = x_i \ (x_j = -x_i) \mbox{ if $j$ is in the same part as $i$ (if $j$ is in the counterpart of the part of $i$)}; \right. \\ \left. \qquad \qquad \qquad \ \ x_j=0 \mbox{ if $j$ is in the zero part}\right\}\, . \end{array} $$ } \\ (ii) A {\it polydiagonal subspace} of $\mbox{$\mathbb{R}$}^n$ is a particular case of a generalized polydiagonal subspace associated with a standard partition of $C$. \\ (iii) The {\it null subspace} $\{ (0, \ldots, 0)\}$ of $\mbox{$\mathbb{R}$}^n$ is the generalized polydiagonal subspace associated with the null partition of $C$. 
\hfill $\Diamond$ \end{Def} \begin{exam} \normalfont Consider the following tagged partitions of $C = \{1,2,3,4,5\}$: $$ \begin{array}{l} \mathcal{P}_1 = \left\{ P_1 = \{1,2\},\, P_2 = \{3\},\, \overline{P}_1 = \{4\}, P_0 = \{5\} \right\}, \\ \mathcal{P}_2 = \left\{ P_1 = \{1,2\},\, P_2 = \{3\},\, P_3 = \{5\},\, \overline{P}_1 = \{4\} \right\}, \\ \mathcal{P}_3 = \left\{ P_1 = \{1,2\},\, P_2 = \{3\},\, P_3 = \{4\},\, P_0 = \{5\} \right\}, \\ \mathcal{P}_4 = \left\{ P_1 = \{1,2\},\, P_2 = \{3\},\, P_3 = \{4,5\} \right\}\, . \end{array} $$ The associated generalized polydiagonal subspaces are: $$ \begin{array}{l} \Delta_{\mathcal{P}_1} = \left\{ x \in \mbox{$\mathbb{R}$}^5:\, x_1 = x_2 = -x_4,\, x_5=0 \right\}, \ \Delta_{\mathcal{P}_2} = \left\{ x \in \mbox{$\mathbb{R}$}^5:\, x_1 = x_2 = -x_4 \right\}, \\ \Delta_{\mathcal{P}_3} = \left\{ x \in \mbox{$\mathbb{R}$}^5:\, x_1 = x_2,\, x_5 = 0 \right\}, \ \Delta_{\mathcal{P}_4} = \left\{ x \in \mbox{$\mathbb{R}$}^5:\, x_1 = x_2,\, x_4 = x_5 \right\}\, . \end{array} $$ \hfill $\Diamond$ \end{exam} \begin{Def} \label{def:inputvalency} \normalfont Let $G$ be a weighted network with set of cells $C$ and $\mathcal{P}$ a tagged partition of $C$. We denote by $v_{P} (i)$ the {\it input valency of a cell $i$ relative to the part $P \in \mathcal{P}$}, given by $$ v_{P} (i) = \sum_{j \in P} w_{ij}. $$ \hfill $\Diamond$ \end{Def} We call the {\it input relation} on $G$ the equivalence relation corresponding to the partition of the network set of cells where two cells are in the same part if and only if they have the same input valency. \begin{Def} \label{def:partition} \normalfont Let $G$ be a weighted network with set of cells $C=\{ 1, \ldots, n\}$ and $\mathcal{P} = \{ P_1, P_2, \ldots, P_p, \overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q, P_0\}$ a tagged partition of $C$. Cells in $C$ can be renumbered, if necessary, so that the $n \times n$ adjacency matrix $W_G$ (resp. 
Laplacian matrix $L_G$) of $G$ has a block form in which each submatrix of $W_G$ (resp. $L_G$) represents the edges between the cells of two parts of $\mathcal{P}$: \begin{equation} \left( \begin{array}{cccc|cccc|c} Q_{11} & Q_{12} & \cdots &Q_{1p} & R_{11} & R_{12} & \cdots &R_{1q} & Z_{10} \\ \vdots& \vdots & \cdots &\vdots & \vdots& \vdots & \cdots &\vdots & \vdots \\ Q_{p1} & Q_{p2} & \cdots &Q_{pp} & R_{p1} & R_{p2} & \cdots &R_{pq} & Z_{p0} \\ & & & & & & & & \\ \hline & & & & & & & & \\ \overline{R}_{11} & \overline{R}_{12} & \cdots &\overline{R}_{1p} & \overline{Q}_{11} & \overline{Q}_{12} & \cdots &\overline{Q}_{1q} & \overline{Z}_{10} \\ \vdots& \vdots & \cdots &\vdots & \vdots& \vdots & \cdots &\vdots & \vdots \\ \overline{R}_{q1} & \overline{R}_{q2} & \cdots &\overline{R}_{qp} & \overline{Q}_{q1} & \overline{Q}_{q2} & \cdots &\overline{Q}_{qq} &\overline{Z}_{q0} \\ & & & & & & & & \\ \hline & & & & & & & & \\ Z_{01} & Z_{02} & \cdots &Z_{0p} & \overline{Z}_{01} & \overline{Z}_{02} & \cdots &\overline{Z}_{0q} & Z_{00} \\ \end{array} \right)\, . \label{eq:oddbf} \end{equation} Thus, matrices $Q_{ij}$, $R_{ij}$ and $Z_{i0}$ represent the connections to part $P_i$ from parts $P_j$, $\overline{P}_j$ and $P_0$, respectively. Matrices $\overline{R}_{ij}$, $\overline{Q}_{ij}$ and $\overline{Z}_{i0}$ represent the connections to part $\overline{P}_i$ from parts $P_j$, $\overline{P}_j$, and $P_0$, respectively. Matrices $Z_{0j}$, $\overline{Z}_{0j}$ and $Z_{00}$ represent the connections to part $P_0$ from parts $P_j$, $\overline{P}_j$ and $P_0$, respectively. We say that the {\it enumeration of the network set of cells is adapted to the (tagged) partition} $\mathcal{P}$. \hfill $\Diamond$ \end{Def} \begin{rem}\normalfont \label{rmk_block_match} The row sum operator can be applied to each block matrix in (\ref{eq:oddbf}) of Definition~\ref{def:partition}. 
If $B$ is a block matrix representing the connections to part $L$ from part $H$, we have that $$ v_{H} (k) = \left( {\mathrm rs} \left( B \right) \right)_k , \quad k \in L\, . $$ \hfill $\Diamond$ \end{rem} \section{Generalized polydiagonals invariant by the adjacency matrix and/or by the Laplacian matrix of a weighted network}\label{sec:gen_poly} \begin{prop} \label{prop:subset} Let $G$ be an $n$-cell weighted network with adjacency matrix $W_G$ and Laplacian matrix $L_G$. We have:\\ (i) The set of the generalized polydiagonal subspaces that are invariant by the adjacency matrix $W_G$ coincides with the set of the generalized polydiagonal subspaces that are invariant by the Laplacian matrix $L_G$ if and only if $G$ is regular.\\ (ii) If $G$ is not regular, the set of the polydiagonal subspaces that are invariant by the adjacency matrix $W_G$ is strictly contained in the set of the polydiagonal subspaces that are invariant by the Laplacian matrix $L_G$. \end{prop} \begin{proof} (i) We have that $L_G = D_G - W_G$. Moreover, $G$ is a regular network with valency $v_W$ if and only if $D_G = v_W I$, with $I$ the identity matrix of order $n$. In that case, we have that a space is invariant under $W_G$ if and only if it is invariant under $L_G$. In particular, that holds for invariant generalized polydiagonal spaces. If $G$ is not regular, then at least the diagonal space $\{ x:\, x_i = x_j, \mbox{ for all } i,j\}$ is invariant under $L_G$ but not under $W_G$. Thus there is at least one generalized polydiagonal that is left invariant under $L_G$ but not under $W_G$. \\ (ii) Given a polydiagonal subspace $\Delta$, consider the associated (standard) partition $\mathcal{P}$. That is, $\Delta=\Delta_{\footnotesize{\mathcal{P}}}$. If $\Delta_{\footnotesize{\mathcal{P}}}$ is invariant by the adjacency matrix $W_G$, we have that any two cells $i,j$ in the same part have the same input valency $v(i) = v(j)$. 
It follows that the entries $ii$ and $jj$ of the diagonal matrix $D_G$ are equal and, thus, that $\Delta_{\footnotesize{\mathcal{P}}}$ is also left invariant by $D_G$ and, consequently, by $L_G$. As already mentioned in the proof of (i), since the Laplacian matrix $L_G$ is regular, the polydiagonal subspace where all the variables are identified is always invariant by $L_G$ but, when $G$ is not regular, it is not invariant by $W_G$. Thus, when $G$ is not regular, the set of the polydiagonals invariant by $W_G$ is strictly contained in the set of the polydiagonals invariant by $L_G$. \end{proof} \begin{rem}\normalfont Given Proposition~\ref{prop:subset}~(ii), we can ask if, when $G$ is not a regular network, the set of the generalized polydiagonal subspaces that are invariant by the adjacency matrix $W_G$ is contained in the set of the generalized polydiagonal subspaces that are invariant by the Laplacian matrix $L_G$. The following example shows that there can be generalized polydiagonal subspaces that are invariant by the adjacency matrix $W_G$ but not by the Laplacian matrix $L_G$. \hfill $\Diamond$ \end{rem} \begin{exam} \label{ex:notequal} Let $G$ be the four-cell non-regular network in Figure~\ref{fig:notequal} with adjacency matrix $$ W_{G} = \left( \begin{array}{cc|cc} 3 & 1 & 1& 1 \\ 1 & 1 & 0& 0 \\ \hline 0 & 0 & 5& -3 \\ 4 & 2 & 5& 3 \end{array} \right) = \displaystyle \left( \begin{array}{c|c} Q_{11} & R_{11} \\ \hline \overline{R}_{11} & \overline{Q}_{11} \end{array} \right) \, . $$ The generalized polydiagonal subspace $$\Delta_{\footnotesize{\mathcal{P}}}=\{ x \in \mbox{$\mathbb{R}$}^4: \ x_1 = x_2=-x_3=-x_4 \} $$ is left invariant by the adjacency matrix $W_{G}$ but not by the Laplacian matrix $$ L_{G} = \left( \begin{array}{cc|cc} 3 & -1 & -1& -1 \\ -1 & 1 & 0& 0 \\ \hline 0 & 0 & -3& 3 \\ -4 & -2 & -5& 11 \end{array} \right) \, . $$ \hfill $\Diamond$ \end{exam} \begin{figure}[h!] 
\begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (1) at (-5cm, 4cm) [fill=white!60] {$1$}; \node[node] (4) at (-5cm, 1cm) [fill=white!20] {$4$}; \node[node] (3) at (-7cm, 2.5cm) [fill=white] {$3$}; \node[node] (2) at (-3cm, 2.5cm) [fill=white] {$2$}; \path (1) edge [loop above] node {{\tiny $3$}} (1) (1) edge [bend left=10, thick] node [pos=0.75, sloped, above] {{\tiny $1$}} (2) (1) edge[bend left=10, thick] node [pos=0.75, sloped, below] {{\tiny $4$}} (4) (2) edge [loop right] node {{\tiny $1$}} (2) (2) edge [bend left=10, thick] node [pos=0.75, sloped, below] {{\tiny $1$}} (1) (2) [->] edge node [pos=0.75, sloped, above] {{\tiny $2$}} (4) (3) edge [loop left] node {{\tiny $5$}} (3) (3) edge[bend left=10, thick] node [pos=0.75, sloped, above] {{\tiny $1$}} (1) (3) edge [bend left=10, thick] node [pos=0.75, sloped, above] {{\tiny $5$}} (4) (4) edge [bend left=10, thick] node [pos=0.7, sloped, above] {{\tiny $1$}} (1) (4) edge [loop right] node {{\tiny $3$}} (4) (4) edge [bend left=10, thick] node [pos=0.75, sloped, below] {{\tiny $-3$}} (3); \end{tikzpicture} \end{center} \caption{The four-cell weighted network $G$ in Example~\ref{ex:notequal}.} \label{fig:notequal} \end{figure} In the next result, we characterize, for a weighted network, the generalized polydiagonals that are invariant under its adjacency matrix (resp. Laplacian matrix). \begin{prop}\label{thm:mainLaplacian} Let $G$ be a weighted network with set of cells $C = \{ 1, \ldots, n\}$ and $\Delta_{\mathcal{P}}$ a non-null generalized polydiagonal subspace of $\mbox{$\mathbb{R}$}^n$ where $\mathcal{P}$ is a tagged partition $\{ P_1, P_2, \ldots, P_p, \overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q, P_0 \}$ of $C$. Consider an enumeration of $C$ adapted to the partition $\mathcal{P}$. The adjacency matrix $W_G$ (resp. 
the Laplacian matrix $L_G$) of $G$ leaves invariant the generalized polydiagonal $\Delta_{\mathcal{P}}$ if and only if the block structure (\ref{eq:oddbf}) of $W_G$ (resp. $L_G$) satisfies the following conditions: \\ \ \\ When $q>0$: \begin{equation}\label{eq:equal_Lap} {\small \begin{array}{l} \left\{ \begin{array}{ll} {\mathrm rs}\left(Q_{ij}\right) - {\mathrm rs}\left( R_{ij}\right),\ {\mathrm rs}\left(\overline{Q}_{ij}\right) - {\mathrm rs}\left( \overline{R}_{ij}\right) \mbox{ are regular of the same valency} & \left( 1 \leq i, j \leq q \right);\\ {\mathrm rs}\left(Q_{ij}\right), \ -{\mathrm rs}\left( \overline{R}_{ij}\right) \mbox{ are regular of the same valency} & \left( 1 \leq i \leq q;\ q+1 \leq j \leq p \right); \\ {\mathrm rs}\left(Q_{ij}\right) - {\mathrm rs}\left( R_{ij}\right) \mbox{ is regular} &\left(q+1 \leq i \leq p,\ 1 \leq j \leq q\right);\\ Q_{ij} \mbox{ is regular} & \left(q+1 \leq i, j \leq p \right); \\ \mbox{If } r=1 \mbox{ then } {\mathrm rs}\left(Z_{0j}\right) = {\mathrm rs}\left(\overline{Z}_{0j}\right) & \left( 1 \leq j \leq q\right); \\ \qquad \qquad \qquad {\mathrm rs}\left(Z_{0j}\right) = 0 & \left( q+1 \leq j \leq p \right). \end{array} \right. \end{array}} \end{equation} \ \\ When $q=0$: \begin{equation}\label{eq:equal_Lap_Z} \begin{array}{l} \left\{ \begin{array}{ll} Q_{ij} \mbox{ is regular} & \left( 1 \leq i,j \leq p \right);\\ \mbox{If } r=1 \mbox{ then } {\mathrm rs}\left(Z_{0j}\right) = 0 & \left( 1 \leq j \leq p \right). \end{array} \right. \end{array} \end{equation} \end{prop} \begin{proof} Assume the tagged partition $\mathcal{P}$ has $q>0$. Denote by $X_i$, for $1\leq i\leq p$ (resp. $-X_i$, for $1\leq i \leq q$), the coordinates corresponding to the cells in part $P_i$ (resp. $\overline{P}_i$). Applying the matrix $W_G$ (resp. 
$L_G$) with block structure (\ref{eq:oddbf}) to the column vector $X = \left(X_1, \ldots, X_q, X_{q+1}, \ldots, X_p, -X_1, \ldots, -X_q, {\bf 0} \right) \in \Delta_{\mathcal{P}}$, where cell coordinates corresponding to the zero part $P_0$ are set to zero (in case $r=1$), we obtain a column vector that must again belong to $\Delta_{\mathcal{P}}$, which imposes the following conditions:\\ (i) The components corresponding to the cells in the $q$ parts $P_1, \ldots, P_q$ are the entries of the column vector $$ \begin{array}{l} \left( \begin{array}{cccc} Q_{11} & Q_{12} & \cdots &Q_{1q} \\ \vdots& \vdots & \cdots &\vdots \\ Q_{q1} & Q_{q2} & \cdots &Q_{qq} \end{array} \right) \left( \begin{array}{c} X_1\\ \vdots \\ X_q \end{array} \right) + \left( \begin{array}{cccc} R_{11} & R_{12} & \cdots &R_{1q} \\ \vdots& \vdots & \cdots &\vdots \\ R_{q1} & R_{q2} & \cdots &R_{qq} \end{array} \right) \left( \begin{array}{c} -X_1\\ \vdots \\ -X_q \end{array} \right) \\ \\ + \left( \begin{array}{cccc} Q_{1,q+1} & Q_{1,q+2} & \cdots &Q_{1p} \\ \vdots& \vdots & \cdots &\vdots \\ Q_{q,q+1} & Q_{q,q+2} & \cdots &Q_{qp} \end{array} \right) \left( \begin{array}{c} X_{q+1}\\ \vdots \\ X_p \end{array} \right); \end{array} $$ \noindent The components corresponding to the cells in the counterparts $\overline{P}_1, \ldots, \overline{P}_q$ are the entries of the column vector: $$ \begin{array}{l} \left( \begin{array}{cccc} \overline{R}_{11} & \overline{R}_{12} & \cdots &\overline{R}_{1q} \\ \vdots& \vdots & \cdots &\vdots \\ \overline{R}_{q1} & \overline{R}_{q2} & \cdots &\overline{R}_{qq} \end{array} \right) \left( \begin{array}{c} X_1\\ \vdots \\ X_q \end{array} \right) + \left( \begin{array}{cccc} \overline{Q}_{11} & \overline{Q}_{12} & \cdots & \overline{Q}_{1q} \\ \vdots& \vdots & \cdots &\vdots \\ \overline{Q}_{q1} & \overline{Q}_{q2} & \cdots &\overline{Q}_{qq} \end{array} \right) \left( \begin{array}{c} -X_1\\ \vdots \\ -X_q \end{array} \right) \\ \\ + \left( \begin{array}{cccc} \overline{R}_{1,q+1} & 
\overline{R}_{1,q+2} & \cdots &\overline{R}_{1p} \\ \vdots& \vdots & \cdots &\vdots \\ \overline{R}_{q,q+1} & \overline{R}_{q,q+2} & \cdots &\overline{R}_{qp} \end{array} \right) \left( \begin{array}{c} X_{q+1}\\ \vdots \\ X_p \end{array} \right); \end{array} $$ It follows that ${\mathrm rs}\left( Q_{ij} \right)- {\mathrm rs}\left( R_{ij} \right) = {\mathrm rs} \left( \overline{Q}_{ij} \right) - {\mathrm rs} \left( \overline{R}_{ij} \right)$, for $1 \leq i,j \leq q$. Similarly, ${\mathrm rs}\left( Q_{ij} \right) = -{\mathrm rs} \left( \overline{R}_{ij} \right)$, for $1 \leq i \leq q$ and $q+1 \leq j \leq p$. Moreover, all these column vectors (of the form ${\mathrm rs} (M)$) have constant entries, that is, they are regular. \\ \noindent (ii) The components corresponding to the cells in the parts $P_{q+1}, \ldots, P_p$ are the entries of the column vector: $$ \begin{array}{l} \left( \begin{array}{cccc} Q_{q+1,1} & Q_{q+1,2} & \cdots &Q_{q+1,q} \\ \vdots& \vdots & \cdots &\vdots \\ Q_{p1} & Q_{p2} & \cdots &Q_{pq} \end{array} \right) \left( \begin{array}{c} X_1\\ \vdots \\ X_q \end{array} \right) + \left( \begin{array}{cccc} R_{q+1,1} & R_{q+1, 2} & \cdots &R_{q+1,q} \\ \vdots& \vdots & \cdots &\vdots \\ R_{p1} & R_{p2} & \cdots &R_{pq} \end{array} \right) \left( \begin{array}{c} -X_1\\ \vdots \\ -X_q \end{array} \right) \\ \\ + \left( \begin{array}{cccc} Q_{q+1,q+1} & Q_{q+1,q+2} & \cdots &Q_{q+1,p} \\ \vdots& \vdots & \cdots &\vdots \\ Q_{p,q+1} & Q_{p,q+2} & \cdots &Q_{pp} \end{array} \right) \left( \begin{array}{c} X_{q+1}\\ \vdots \\ X_p \end{array} \right); \end{array} $$ \noindent Thus ${\mathrm rs} \left( Q_{ij} \right) - {\mathrm rs} \left( R_{ij} \right)$ is regular, for $q+1 \leq i \leq p$ and $1 \leq j \leq q$. Also, $Q_{ij}$ is regular for $q+1 \leq i,j \leq p$. 
\\ \noindent (iii) Finally, the components corresponding to the cells in the part $P_0$ in case $r=1$ are the entries of the column vector: $$ \begin{array}{l} \left( \begin{array}{cccc} Z_{01} & Z_{02} & \cdots &Z_{0q} \end{array} \right) \left( \begin{array}{c} X_1\\ \vdots \\ X_q \end{array} \right) + \left( \begin{array}{cccc} \overline{Z}_{01} & \overline{Z}_{02} & \cdots & \overline{Z}_{0q} \end{array} \right) \left( \begin{array}{c} -X_1\\ \vdots \\ -X_q \end{array} \right) \\ \\ + \left( \begin{array}{cccc} Z_{0,q+1} & Z_{0,q+2} & \cdots &Z_{0p} \end{array} \right) \left( \begin{array}{c} X_{q+1}\\ \vdots \\ X_p \end{array} \right)\, . \end{array} $$ Thus, ${\mathrm rs}\left(Z_{0j}\right) = {\mathrm rs}\left(\overline{Z}_{0j}\right)$ for $1 \leq j \leq q$ and ${\mathrm rs}\left(Z_{0j}\right) = 0$ for $q+1 \leq j \leq p$. We conclude that $\Delta_{\mathcal{P}}$ is left invariant under $W_G$ (resp. $L_G$) if and only if conditions (\ref{eq:equal_Lap}) hold. Now, for tagged partitions where $q=0$, that is, with no counterparts, we obtain conditions (\ref{eq:equal_Lap_Z}), since in that case, applying the matrix $W_G$ (resp. 
$L_G$) with block structure (\ref{eq:oddbf}) to the column vector $X = \left(X_1, \ldots, X_p, {\bf 0} \right) \in \Delta_{\mathcal{P}}$, where in case $r=1$, as before, cell coordinates corresponding to $P_0$ are set to zero, we obtain a column vector where:\\ (i) The components corresponding to the cells in the $p$ parts $P_1, \ldots, P_p$ are the entries of the column vector $$ {\tiny \left( \begin{array}{cccc} Q_{11} & Q_{12} & \cdots &Q_{1p} \\ \vdots& \vdots & \cdots &\vdots \\ Q_{p1} & Q_{p2} & \cdots &Q_{pp} \end{array} \right) \left( \begin{array}{c} X_1\\ \vdots \\ X_p \end{array} \right)}; $$ \noindent (ii) The components corresponding to the cells in the part $P_0$ in case $r=1$ are the entries of the column vector: $$ {\tiny \left( \begin{array}{cccc} Z_{01} & Z_{02} & \cdots &Z_{0p} \end{array} \right) \left( \begin{array}{c} X_1\\ \vdots \\ X_p \end{array} \right)}\, . $$ Hence, conditions (\ref{eq:equal_Lap_Z}) follow. \end{proof} \begin{exam} Let $G$ be the five-cell non-regular network with adjacency matrix $$ W_{G} = \left( \begin{array}{c|cc|c|c} 0 & -\frac{3}{2} & -\frac{3}{2} & 1 & \frac{23}{10} \\ \hline -2 & 0 & 1 & 1 & 1 \\ -1 & 1 & 0 & 2 & 0 \\ \hline 2 & 3 & 0 & 1 & \frac{11}{10} \\ \hline 1 & 1 & -1 & 1 & -3 \end{array} \right) = \displaystyle \left( \begin{array}{c|c|c|c} Q_{11} & Q_{12} & R_{11} & Z_{10} \\ \hline Q_{21} & Q_{22} & R_{21} & Z_{20} \\ \hline \overline{R}_{11} & \overline{R}_{12} & \overline{Q}_{11} & \overline{Z}_{10} \\ \hline Z_{01} & Z_{02} & \overline{Z}_{01} & Z_{00} \end{array} \right) \, . $$ Consider the generalized polydiagonal subspace $$\Delta_{\footnotesize{\mathcal{P}}}=\{ x \in \mbox{$\mathbb{R}$}^5: \ x_1 =-x_4,\ x_2=x_3,\ x_5=0\} $$ for the tagged partition $$\mathcal{P} = \{ P_1 =\{1\}, P_2= \{2,3\}, \overline{P}_1 =\{4\}, P_0 =\{5\} \} $$ of $C$. Thus, $p=2$, $q=1$ and $r=1$. Note that the enumeration of the network set of cells is adapted to $\mathcal{P}$, providing the above block structure of $W_G$. 
We have that:\\ \begin{equation*} \begin{array}{l} \left\{ \begin{array}{ll} {\mathrm rs}\left(Q_{11}\right) - {\mathrm rs}\left( R_{11}\right) = -{\mathrm rs}\left( \overline{R}_{11}\right)+{\mathrm rs}\left(\overline{Q}_{11}\right) =(-1);\\ {\mathrm rs}\left(Q_{12}\right) = -{\mathrm rs}\left( \overline{R}_{12}\right) =(-3); \\ {\mathrm rs}\left(Q_{21}\right) - {\mathrm rs}\left( R_{21}\right) \mbox{ is regular of valency } -3;\\ Q_{22} \mbox{ is regular of valency } 1; \\ {\mathrm rs}\left(Z_{01}\right) = {\mathrm rs}\left(\overline{Z}_{01}\right) = (1); \\ {\mathrm rs}\left(Z_{02}\right) = (0). \end{array} \right. \end{array} \end{equation*} It follows from Proposition~\ref{thm:mainLaplacian} that $\Delta_{\footnotesize{\mathcal{P}}}$ is left invariant by the adjacency matrix $W_{G}$. \hfill $\Diamond$ \end{exam} The next corollary gives a characterization of the generalized polydiagonals that are invariant under the Laplacian matrix of a weighted network, but in terms of its adjacency matrix. \begin{cor}\label{cor:mainLaplacian} Let $G$ be a weighted network with set of cells $C = \{ 1, \ldots, n\}$ and $\Delta_{\mathcal{P}}$ a generalized polydiagonal subspace of $\mbox{$\mathbb{R}$}^n$ for a tagged partition $\mathcal{P}$ of $C$. Consider an enumeration of $C$ adapted to the partition $\mathcal{P}$. 
The Laplacian matrix $L_G$ leaves invariant the generalized polydiagonal $\Delta_{\mathcal{P}}$ if and only if the block structure (\ref{eq:oddbf}) of the adjacency matrix $W_G$ of $G$ satisfies the following conditions: \\ \noindent The matrices $Q_{ij},\, R_{ij},\, Z_{i0}$ and $\overline{Q}_{ij}, \, \overline{R}_{ij},\ \overline{Z}_{i0}$ are related in the following way:\\ \noindent When $q > 0$,\\ \noindent (i) For each $i \in \{1,\ldots,q\}$, the column vectors {\small $$ \left\{ \begin{array}{lr} \displaystyle \sum_{k=1, k\not= i}^{p} {\mathrm rs}\left(Q_{ik}\right) + \sum_{k=1, k\not= i}^{q} {\mathrm rs}\left(R_{ik}\right) + 2 {\mathrm rs}\left(R_{ii}\right) + {\mathrm rs}\left( Z_{i0} \right) & \\ & \mbox{ are regular of the same valency $r_i$;}\\ \displaystyle \sum_{k=1, k\not= i}^{q} {\mathrm rs}\left(\overline{Q}_{ik}\right) + \sum_{k=1, k\not= i}^{p} {\mathrm rs}\left(\overline{R}_{ik}\right) + 2 {\mathrm rs}\left(\overline{R}_{ii}\right) + {\mathrm rs}\left( \overline{Z}_{i0} \right) & \end{array}\right. $$} \noindent and, for each $j \in \{1,\ldots,q\}$ with $j\not=i$, the column vectors \\ \\ \noindent $ {\small \left\{ \begin{array}{lr} \\ - {\mathrm rs}(Q_{ij}) + {\mathrm rs}(R_{ij}) & \\ & \mbox{ are regular of the same valency $q_{ij}$}. \\ {\mathrm rs}\left(\overline{R}_{ij}\right) - {\mathrm rs}\left(\overline{Q}_{ij}\right) & \end{array}\right.} $ \\ \\ \noindent (ii) For each $i \in \{1,\ldots,q\}$ and $j \in \{q+1,\ldots,p\}$, the column vectors \\ \\ \noindent $ \left\{ \begin{array}{lr} - {\mathrm rs}\left(Q_{ij}\right) & \\ & \mbox{ are regular of the same valency } q_{ij}.\\ {\mathrm rs}\left( \overline{R}_{ij}\right) & \end{array}\right. 
$\\ \\ \noindent For all $q \in \mbox{$\mathbb{N}$}_0$, \\ (iii) For each $i \in \{q+1,\ldots,p\}$ and $j \in \{1,\ldots,q\}$, $$ \begin{array}{l} \displaystyle \sum_{k=1, k\not= i}^{p} {\mathrm rs}\left(Q_{ik}\right) + \sum_{k=1}^{q} {\mathrm rs}\left(R_{ik}\right) + {\mathrm rs}\left( Z_{i0} \right) \mbox{ is regular of valency $r_i$;}\\ \\ - {\mathrm rs}\left(Q_{ij}\right) + {\mathrm rs}\left( R_{ij}\right) \mbox{ is regular of valency } q_{ij}. \end{array} $$ \noindent (iv) For each $i \in \{q+1,\ldots,p\}$ and $ j \in \{q+1,\ldots,p\},\ j\not=i$, \\ $$-Q_{ij} \mbox{ is regular of valency $q_{ij}$} \, .$$ \noindent The matrices $Z_{0j}$ and $\overline{Z}_{0j}$ satisfy:\\ (v) If $q > 0$, for each $j \in \{1,\ldots,q\}$, we have: \\ $$ {\mathrm rs}(Z_{0j}) = {\mathrm rs}\left(\overline{Z}_{0j}\right)\, . \ $$ \noindent (vi) For all $q \in \mbox{$\mathbb{N}$}_0$, for each $j \in \{q+1,\ldots,p\}$, \\ $$ Z_{0j} \mbox{ is regular of valency zero.} $$ \end{cor} \begin{proof} Let $G$ be a weighted network with set of cells $C = \{ 1, \ldots, n\}$, adjacency matrix $W_G$ and Laplacian matrix $L_G = D_G - W_G$. Given a generalized polydiagonal $\Delta_{\mathcal{P}}$, we consider the associated tagged partition $\mathcal{P}$ of $C$ determined by $p,q,r$ where $0 \leq q \leq p \leq n$ and $r \in \{0,1\}$. Denote the parts of $\mathcal{P}$ by $P_1, P_2, \ldots, P_p,$ the counterparts by $\overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q$ and the zero part by $P_0$. Note that if $q=0$, then there are no counterparts and if $r=0$ then there is no zero part. Consider an enumeration of $C$ adapted to the partition $\mathcal{P}$ providing a block structure (\ref{eq:oddbf}) for $W_G$ and corresponding block structures for $L_G$ and $D_G$. For the blocks in $L_G$ and $D_G$ we use superscripts, respectively, $L$ and $D$. 
By Proposition~\ref{thm:mainLaplacian}, the generalized polydiagonal $\Delta_{\mathcal{P}}$ is left invariant by the Laplacian matrix $L_G$ if and only if conditions (\ref{eq:equal_Lap}) and (\ref{eq:equal_Lap_Z}) are verified for the blocks $Q^L_{ij}$, $R^L_{ij}$, $Z^L_{0j}$, $\overline{Q}^L_{ij}$, $\overline{R}^L_{ij}$ and $\overline{Z}^L_{0j}$ of $L_G$. For $i=j$, we have $$ Q^L_{ii} = Q^D_{ii} - Q_{ii}, \quad R^L_{ii} = - R_{ii}, \quad \overline{Q}^L_{ii} = \overline{Q}^D_{ii} - \overline{Q}_{ii}, \quad \overline{R}^L_{ii} = - \overline{R}_{ii}. $$ Thus, for $i=j$, the first condition in (\ref{eq:equal_Lap}) is equivalent to $$ {\mathrm rs}\left(Q^D_{ii}\right) - {\mathrm rs}\left(Q_{ii}\right) + {\mathrm rs}\left(R_{ii}\right) = {\mathrm rs}\left(\overline{R}_{ii}\right) + {\mathrm rs}\left(\overline{Q}^D_{ii}\right)-{\mathrm rs}\left(\overline{Q}_{ii}\right), $$ where the column vectors on both sides of this equality are regular. Since {\tiny $$ {\mathrm rs}\left(Q^D_{ii}\right) = \sum_{j=1}^p {\mathrm rs}\left(Q_{ij}\right) + \sum_{j=1}^q {\mathrm rs}\left(R_{ij}\right) + {\mathrm rs}\left(Z_{i0}\right), \quad {\mathrm rs}\left(\overline{Q}^D_{ii}\right) = \sum_{j=1}^q {\mathrm rs}\left(\overline{Q}_{ij}\right) + \sum_{j=1}^p {\mathrm rs}\left(\overline{R}_{ij}\right) + {\mathrm rs}\left(\overline{Z}_{i0}\right), $$} the above equality simplifies, for $1\leq i \leq q$, to {\tiny $$ \sum_{j=1, j\not= i}^{p} {\mathrm rs}\left(Q_{ij}\right) + \sum_{j=1, j\not= i}^{q} {\mathrm rs}\left(R_{ij}\right) + 2 {\mathrm rs}\left(R_{ii}\right) + {\mathrm rs}\left(Z_{i0}\right) = \sum_{j=1, j\not= i}^{q} {\mathrm rs}\left(\overline{Q}_{ij}\right) + \sum_{j=1, j\not= i}^{p} {\mathrm rs}\left(\overline{R}_{ij}\right) + 2 {\mathrm rs}\left(\overline{R}_{ii}\right) + {\mathrm rs}\left(\overline{Z}_{i0}\right)\, . $$ } Thus we obtain the first equality in condition (i) of Corollary~\ref{cor:mainLaplacian}.
Moreover, for $i\ne j$, we have $$ Q^L_{ij} = - Q_{ij}, \quad R^L_{ij} = - R_{ij}, \quad \overline{Q}^L_{ij} = - \overline{Q}_{ij}, \quad \overline{R}^L_{ij} = - \overline{R}_{ij}. $$ Thus, for $i\ne j$, the first condition in (\ref{eq:equal_Lap}) is equivalent to $$ - {\mathrm rs}\left(Q_{ij}\right) + {\mathrm rs}\left(R_{ij}\right) = {\mathrm rs}\left(\overline{R}_{ij}\right) -{\mathrm rs}\left(\overline{Q}_{ij}\right), $$ where the column vectors on both sides of this equality are regular (of the same valency). Thus we obtain the second equality in condition (i) of Corollary~\ref{cor:mainLaplacian}. Finally, the remaining conditions in (\ref{eq:equal_Lap}) and (\ref{eq:equal_Lap_Z}) of Proposition~\ref{thm:mainLaplacian} are equivalent to (ii)-(vi) of Corollary~\ref{cor:mainLaplacian}. \end{proof} In Proposition~\ref{thm:mainLaplacian}, if we restrict to polydiagonal subspaces, we obtain the following. \begin{cor} \label{thm:mainpart_exo} Let $G$ be a weighted network with set of cells $C = \{ 1, \ldots, n\}$ and $\Delta_{\mathcal{P}}$ a polydiagonal subspace of $\mbox{$\mathbb{R}$}^n$. Consider the associated (standard) partition $\mathcal{P}$ of $C$ with $p>0$ parts, say $P_1, P_2, \ldots, P_p$, and take an enumeration of $C$ adapted to the partition $\mathcal{P}$. \\ (i) The adjacency matrix $W_G$ of $G$ leaves invariant the polydiagonal $\Delta_{\mathcal{P}}$ if and only if in the block structure (\ref{eq:oddbf}) of $W_G$ every matrix $Q_{ij}$ is regular, for $i,j \in \{1, \ldots, p\}$. \\ (ii) The Laplacian matrix $L_G$ of $G$ leaves invariant the polydiagonal $\Delta_{\mathcal{P}}$ if and only if in the block structure (\ref{eq:oddbf}) of $W_G$ every matrix $Q_{ij}$, with $i \ne j$, is regular, for $i,j \in \{1, \ldots, p\}$.
\end{cor} \begin{proof} The statement (i) follows directly from Proposition~\ref{thm:mainLaplacian}, considering that the partition $\mathcal{P}$ is standard, i.e., it is determined by $p>0$ and $q=r=0$, that is, there are neither counterparts nor a zero part. To conclude (ii), note that, applying (i) to $L_G$, as $L_G$ is regular of valency zero, it follows that every matrix $Q^L_{ij}$ is regular, for all $i,j \in \{1, \ldots, p\}$, if and only if $Q^L_{ij} = -Q_{ij}$ is regular, for all $i,j \in \{1, \ldots, p\}$ with $i\not=j$. \end{proof} \subsection*{Exo-balanced and balanced standard partitions} We recall the concepts of balanced and exo-balanced (standard) partitions for weighted networks, as in Aguiar and Dias~\cite{AD18}. The concept of balanced partition was first introduced in the formalism of Golubitsky, Stewart and collaborators, where the network connections have associated nonnegative integer values, and was extended to the weighted formalism in Aguiar and Dias~\cite{AD18}. \begin{Def} \normalfont \label{def:balanced} Let $G$ be a weighted network with set of cells $C$ and a standard partition $\mathcal{P} = \left\{ P_1, P_2, \ldots, P_p \right\}$ of $C$.\\ (i) The partition $\mathcal{P}$ is said to be {\it exo-balanced} when the corresponding polydiagonal subspace $\Delta_{\mathcal{P}}$ is left invariant by the Laplacian matrix $L_G$ of $G$, that is, when $$v_{P} (i) = v_{P} (i')$$ for $[i]=[i']$ and for all $P \in \mathcal{P} \setminus \{ [i] \}$. We denote by $\mathcal{P}_{G,exo}$ the {\it set of exo-balanced (standard) partitions} of $G$. \\ (ii) The partition $\mathcal{P}$ is said to be {\it balanced} when the corresponding polydiagonal subspace $\Delta_{\mathcal{P}}$ is left invariant by the adjacency matrix $W_G$ of $G$, that is, when $$v_{P} (i) = v_{P} (i')$$ for $[i]=[i']$ and for all $P \in \mathcal{P}$. We denote by $\mathcal{P}_{G,bal}$ the {\it set of balanced (standard) partitions} of $G$.
\hfill $\Diamond$ \end{Def} \begin{rem}\normalfont Recalling Remark~\ref{rmk_block_match} and using the notation of Corollary~\ref{thm:mainpart_exo} for the adjacency matrix $W_G$ of a weighted network $G$, we have that a standard partition $\mathcal{P}$ is exo-balanced when every matrix $Q_{ij}$, with $i \ne j$, is regular, for $i,j \in \{1, \ldots, p\}$. Also, a standard partition $\mathcal{P}$ is balanced when every matrix $Q_{ij}$ is regular, for all $i,j \in \{1, \ldots, p\}$. \hfill $\Diamond$ \end{rem} \begin{rem}\normalfont \label{rmk:bal_G_L} Let $G$ be a weighted network with weighted adjacency matrix $W_G$ and consider the network $G_L$ associated with the Laplacian matrix $L_G$ of $G$. We have that $\mathcal{P}_{G_L,bal} = \mathcal{P}_{G,exo}$. \hfill $\Diamond$ \end{rem} From Proposition~\ref{prop:subset} we obtain the following relation between the set of the balanced and the set of the exo-balanced partitions of a network $G$, which is a generalization to weighted networks of Proposition 3.15 in Neuberger {\it et al.} \cite{NSS19}. \begin{cor} \label{prop:reg} Let $G$ be a weighted network. We have:\\ (i) $\mathcal{P}_{G,bal} \subseteq \mathcal{P}_{G,exo}$; \\ (ii) $\mathcal{P}_{G,bal} = \mathcal{P}_{G,exo}$ if and only if $G$ is regular. \end{cor} It follows from Corollary~\ref{prop:reg} that if a network $G$ is not regular, then $\mathcal{P}_{G,exo} \setminus \mathcal{P}_{G,bal} \not= \emptyset$. We have then the following definition. \begin{Def} \normalfont A standard partition in $\mathcal{P}_{G,exo} \setminus \mathcal{P}_{G,bal}$ is said to be {\it strictly exo-balanced}. \hfill $\Diamond$ \end{Def} A standard partition $\mathcal{P}$ is thus strictly exo-balanced if and only if the subspace $\Delta_{\mathcal{P}}$ is left invariant by the Laplacian matrix $L_G$ of $G$ but not by its adjacency matrix $W_G$.
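The two invariance conditions are easy to test numerically: balanced amounts to constant row sums in every block $Q_{ij}$, exo-balanced only in the off-diagonal blocks. The following sketch (ours, for illustration only; it assumes the convention that entry $w_{ij}$ of $W_G$ is the weight of the edge from cell $j$ into cell $i$) checks both conditions on the six-cell network $G$ of Figure~\ref{f:um}.

```python
import numpy as np

def block_row_sums_constant(W, rows, cols):
    """True if the sub-block W[rows, cols] has constant row sums,
    i.e. the block is 'regular' in the paper's terminology."""
    sums = W[np.ix_(rows, cols)].sum(axis=1)
    return np.allclose(sums, sums[0])

def is_balanced(W, parts):
    # every block Q_ij must be regular, including the diagonal ones
    return all(block_row_sums_constant(W, Pi, Pj)
               for Pi in parts for Pj in parts)

def is_exo_balanced(W, parts):
    # only the off-diagonal blocks Q_ij, i != j, must be regular
    return all(block_row_sums_constant(W, Pi, Pj)
               for Pi in parts for Pj in parts if Pi != Pj)

# Six-cell network G of Figure 1 (cells 0-indexed); W[i, j] is the
# weight of the edge from cell j+1 into cell i+1.
W = np.zeros((6, 6))
W[2, 0] = 1; W[2, 1] = 2; W[3, 1] = 1; W[4, 2] = 1; W[5, 3] = 2
parts = [[0, 1, 2], [3, 4], [5]]   # {1,2,3}, {4,5}, {6}

print(is_balanced(W, parts), is_exo_balanced(W, parts))  # False True
```

The partition is strictly exo-balanced, in agreement with the example of Figure~\ref{f:um}: the polydiagonal is invariant under $L_G$ but not under $W_G$.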
\begin{exam} \normalfont Every weighted network has at least one exo-balanced standard partition: the trivial partition with only one part, the whole set of cells of the network. If the network is not regular, then the trivial (standard) partition is strictly exo-balanced. \hfill $\Diamond$ \end{exam} \begin{rem}\normalfont \label{rmk:bal_exo_L} Let $G$ be a weighted network with weighted adjacency matrix $W_G$ and consider the network $G_L$ associated with the Laplacian matrix $L_G$ of $G$. By Remark~\ref{rmk:bal_G_L}, the set of strictly exo-balanced partitions for $G$ is formed by the balanced partitions of $G_L$ which are not balanced for $G$, that is, $\mathcal{P}_{G_L,bal} \setminus \mathcal{P}_{G,bal}$. \hfill $\Diamond$ \end{rem} \subsubsection*{Quotient networks for balanced and exo-balanced standard partitions} We recall the concept of quotient network for balanced standard partitions of networks in the formalism of Golubitsky, Stewart and collaborators, where the network connections have associated nonnegative integer values. This concept is also valid and extends trivially to the weighted formalism, as stated in Aguiar and Dias~\cite{AD18}. Following Section 2 of \cite{AD18}, given a balanced standard partition $\mathcal{P}$ of the set of cells of a weighted network $G$, the associated {\it quotient network} $G_{\footnotesize{\mathcal{P}}}$ is the weighted network defined in the following way: each cell in $G_{\footnotesize{\mathcal{P}}}$ corresponds to a part in $\mathcal{P}$; denoting by $[i]$ the part in $\mathcal{P}$ containing $i$, there is an edge from $[j]$ directed to $[i]$ if and only if there exists in $G$ an edge directed from $j'$ to $i'$, with $j' \in [j]$ and $i' \in [i]$. Moreover, the weight of the edge directed from $[j]$ to $[i]$ is $v_{[j]} (i)$.
That is, if the balanced partition $\mathcal{P}$ has $p$ parts and $W_{G_{\footnotesize{\mathcal{P}}}}=[q_{[i],[j]}]_{p \times p}$ is the weighted adjacency matrix of the quotient network $G_{\footnotesize{\mathcal{P}}}$, we have $q_{[i],[j]} = v_{[j]} (i)$. The network $G$ is said to be a {\it lift} of $G_{\footnotesize{\mathcal{P}}}$ by a balanced partition. From Definition~\ref{def:balanced} (ii) and Corollary~\ref{thm:mainpart_exo} (i), we have: \begin{prop} \label{prop:quotient_bal} Let $G$ be a weighted network and $W_G$ the corresponding weighted adjacency matrix. Let $\mathcal{P}$ be a balanced standard partition of the set of cells of $G$ with parts $P_1, \ldots, P_p$ and assume an enumeration of the network set of cells adapted to the partition $\mathcal{P}$ providing a block structure (\ref{eq:oddbf}). The adjacency matrix of the quotient network $G_{\footnotesize{\mathcal{P}}}$ is the $p \times p$ matrix $W_{G_{\footnotesize{\mathcal{P}}}} = [q_{ij}]$ with $q_{ij} = v_{Q_{ij}}$. \end{prop} Given a strictly exo-balanced standard partition $\mathcal{P}$ on the set of cells of a weighted network $G$, we have that $\mathcal{P}$ is balanced for $G_{-L}$ by Remark~\ref{rmk:bal_G_L}, since changing the sign of all the entries does not affect the regularity of the blocks. It follows that we can take the quotient network of $G_{-L}$ by $\mathcal{P}$, as defined above, where the $ij$ entry is $v_{[j]} (i)$ if $i \not= j$. We define: \begin{Def} \normalfont \label{def:quo:_exo} Let $G$ be a weighted network with set of cells $C$. Let $W=[w_{i,j}]_{n\times n}$ be the weighted adjacency matrix of $G$, $\mathcal{P}$ a strictly exo-balanced standard partition on $C$ with $p$ parts and $Q_{-L}$ the weighted quotient network of $G_{-L}$ by the balanced partition $\mathcal{P}$.
Then, we define the {\it quotient of $G$ by $\mathcal{P}$} to be the network $Q_{\mathcal{P}}$ with adjacency matrix $[q_{i j}]_{1\leq i,j \leq p}$ obtained from the adjacency matrix of $Q_{-L}$ by setting to zero the diagonal entries: $$ q_{i j} = \left\{ \begin{array}{ll} 0, & \mbox{ if } [i] = [j] \\ \\ v_{[j]} (i), & \mbox{ if } [i] \ne [j] \end{array} \right. \, . $$ \hfill $\Diamond$ \end{Def} \begin{exam} In Figure~\ref{f:um} we show a six-cell network $G$ and the standard partition $\mathcal{P} = \left\{ [1] = \{1,2,3\},\, [4] = \{4,5\},\, [6] = \{6\} \right\}$ of its set of cells. Note that $\mathcal{P}$ is not balanced for $G$ but it is balanced for $H \equiv G_{-L}$. Thus $\mathcal{P}$ is strictly exo-balanced for $G$. On the right of Figure~\ref{f:um} we show the corresponding quotient networks as defined above. \hfill $\Diamond$ \end{exam} \begin{figure}[!h] \begin{tabular}{cc} \begin{tikzpicture} [scale=.15,auto=left, node distance=1.5cm] \node[fill=magenta,style={circle,draw}] (n1) at (4,0) {\small{1}}; \node[fill=magenta,style={circle,draw}] (n2) at (4,-6) {\small{2}}; \node[fill=magenta,style={circle,draw}] (n3) at (14,0) {\small{3}}; \node[fill=white,style={circle,draw}] (n4) at (14,-6) {\small{4}}; \node[fill=white,style={circle,draw}] (n5) at (24,0) {\small{5}}; \node[fill=green,style={circle,draw}] (n6) at (24,-6) {\small{6}}; \draw[->, thick] (n1) edge[thick] node [near end, above=0.1pt] {{\tiny $1$}} (n3); \draw[->, thick] (n2) edge[thick] node [above=0.1pt] {{\tiny $2$}} (n3); \draw[->, thick] (n2) edge[thick] node [near end, above=0.1pt] {{\tiny $1$}} (n4); \draw[->, thick] (n3) edge[thick] node [near end, above=1pt] {{\tiny $1$}} (n5); \draw[->, thick] (n4) edge[thick] node [near end, above=1pt] {{\tiny $2$}} (n6); \end{tikzpicture} & \begin{tikzpicture} [scale=.15,auto=left, node distance=1.5cm] \node[fill=magenta,style={circle,draw}] (n2) at (4,-6) {$\small{[1]}$}; \node[fill=white,style={circle,draw}] (n4) at (14,-6) {$\small{[4]}$}; 
\node[fill=green,style={circle,draw}] (n6) at (24,-6) {$\small{[6]}$}; \draw[->, thick] (n2) edge[thick] node [near end, above=0.1pt] {{\tiny $1$}} (n4); \draw[->, thick] (n4) edge[thick] node [near end, above=1pt] {{\tiny $2$}} (n6); \end{tikzpicture} \\ $G$ & $Q_{{\mathcal P}}$ \\ \\ \begin{tikzpicture} [scale=.15,auto=left, node distance=1.5cm] \node[fill=magenta,style={circle,draw}] (n1) at (4,0) {\small{1}}; \node[fill=magenta,style={circle,draw}] (n2) at (4,-6) {\small{2}}; \node[fill=magenta,style={circle,draw}] (n3) at (14,0) {\small{3}}; \node[fill=white,style={circle,draw}] (n4) at (14,-6) {\small{4}}; \node[fill=white,style={circle,draw}] (n5) at (24,0) {\small{5}}; \node[fill=green,style={circle,draw}] (n6) at (24,-6) {\small{6}}; \draw[->, thick] (n1) edge[thick] node [near end, above=0.1pt] {{\tiny $1$}} (n3); \draw[->, thick] (n2) edge[thick] node [above=0.1pt] {{\tiny $2$}} (n3); \draw[->, thick] (n2) edge[thick] node [near end, above=0.1pt] {{\tiny $1$}} (n4); \draw[->, thick] (n3) edge[thick] node [near end, above=1pt] {{\tiny$1$}} (n5); \draw[->, thick] (n4) edge[thick] node [near end, above=1pt] {{\tiny $2$}} (n6); \draw[->, thick] (n3) edge[loop, out=20, in=70, looseness=5] node [above left, pos=.45] {{\tiny $-3$}} (n3); \draw[->, thick] (n4) edge[loop, out=210, in=260, looseness=5] node [below right, pos=.45] {{\tiny $-1$}} (n4); \draw[->, thick] (n5) edge[loop, out=20, in=70, looseness=5] node [above left, pos=.45] {{\tiny $-1$}} (n5); \draw[->, thick] (n6) edge[loop, out=210, in=260, looseness=5] node [below right, pos=.45] {{\tiny $-2$}} (n6); \end{tikzpicture} & \begin{tikzpicture} [scale=.15,auto=left, node distance=1.5cm] \node[fill=magenta,style={circle,draw}] (n2) at (4,-6) {$\small{[1]}$}; \node[fill=white,style={circle,draw}] (n4) at (14,-6) {$\small{[4]}$}; \node[fill=green,style={circle,draw}] (n6) at (24,-6) {$\small{[6]}$}; \draw[->, thick] (n2) edge[thick] node [near end, above=0.1pt] {{\tiny $1$}} (n4); \draw[->, thick] (n4) 
edge[thick] node [near end, above=1pt] {{\tiny $2$}} (n6); \draw[->, thick] (n4) edge[loop, out=210, in=260, looseness=5] node [below right, pos=.45] {{\tiny $-1$}} (n4); \draw[->, thick] (n6) edge[loop, out=210, in=260, looseness=5] node [below right, pos=.45] {{\tiny $-2$}} (n6); \end{tikzpicture} \\ \\ $H \equiv G_{-L}$ & $H_{{\mathcal P}}$ \end{tabular} \caption{Two six-cell networks $G$ and $G_{-L}$ and a partition $\mathcal{P} = \left\{ [1] = \{1,2,3\},\, [4] = \{4,5\},\, [6] = \{6\} \right\}$ of their sets of cells. (Top) The partition $\mathcal{P}$ is exo-balanced but not balanced for $G$. The network $Q_{{\mathcal P}}$ is the three-cell quotient network of $G$ by the exo-balanced partition $\mathcal{P}$. (Bottom) The partition $\mathcal{P}$ is balanced for $H \equiv G_{-L}$. The network $H_{{\mathcal P}}$ is the quotient network of $H$ by $\mathcal{P}$.} \label{f:um} \end{figure} \subsection*{Linear-balanced, even-odd-balanced and odd-balanced tagged partitions} We define next, for general weighted networks, the concepts of linear-balanced, even-odd-balanced and odd-balanced partitions. We use here the terminology of linear-balanced and odd-balanced partitions in Definitions 4.17 and 4.6 of \cite{NSS19}, respectively, for the class of undirected networks $G$. \begin{Def} \normalfont \label{def:linear} Let $G$ be a weighted network with set of cells $C$. A non-standard tagged partition $\mathcal{P} = \left\{ P_1, P_2, \ldots, P_p, \overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q, P_0\right\}$ of $C$ is said {\it linear-balanced} (resp. {\it even-odd-balanced}) if the corresponding generalized polydiagonal subspace $\Delta_{\mathcal{P}}$ is left invariant by the Laplacian matrix $L_G$ (resp. adjacency matrix $W_G$) of $G$. 
We denote by $\mathcal{P}_{G,lin}$ the set of linear-balanced partitions of $G$ and by $\mathcal{P}_{G,eo}$ the set of even-odd-balanced partitions of $G$. \hfill $\Diamond$ \end{Def} \begin{rem}\normalfont By Proposition~\ref{prop:subset}, for a regular network $G$, we have $\mathcal{P}_{G,lin} = \mathcal{P}_{G,eo}$. \hfill $\Diamond$ \end{rem} \begin{Def} \normalfont \label{def:odd} Let $G$ be a weighted network with set of cells $C$. A non-standard tagged partition $\mathcal{P} = \left\{ P_1, P_2, \ldots, P_p, \overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q, P_0\right\}$ of $C$ is said to be {\it odd-balanced} if, given an enumeration of the network set of cells adapted to the partition $\mathcal{P}$ providing a block structure (\ref{eq:oddbf}) for the adjacency matrix $W_G$, we have:\\ (a) All the blocks, excluding the blocks $Q_{ii}$, $\overline{Q}_{ii}$, $Z_{0j},\, \overline{Z}_{0j}$ and $Z_{00}$, are regular.\\ (b) If $q >0$, for $1 \le i,j \le q $, each pair of blocks of the type $Q_{ij},\, \overline{Q}_{ij}$, for $i\not=j$, $R_{ij},\, \overline{R}_{ij}$ and $Z_{i0},\, \overline{Z}_{i0}$ has the same valency. \\ (c) If $q>0$ and $r=1$, the blocks $Z_{0j},\, \overline{Z}_{0j}$ satisfy ${\mathrm rs}(Z_{0j}) = {\mathrm rs}\left(\overline{Z}_{0j}\right)$ for $j \in \{1,\ldots,q\}$. \\ (d) If $r=1$, ${\mathrm rs}\left(Z_{0j}\right) = 0$, for $j \in \{q+1,\ldots,p\}$. \\ We denote by $\mathcal{P}_{G,odd}$ the set of odd-balanced partitions of $G$. \hfill $\Diamond$ \end{Def} \begin{rem}\normalfont In the definition of linear-balanced and odd-balanced tagged partitions, the blocks $Q_{ii},\,\overline{Q}_{ii}$ for all $i$ and $Z_{00}$ have no restrictions.
\hfill $\Diamond$ \end{rem} \begin{rem}\normalfont (i) Given a weighted network $G$, the conditions in Definition~\ref{def:odd} of odd-balanced tagged partition imply the conditions in Proposition~\ref{thm:mainLaplacian} for the corresponding generalized polydiagonal subspace $\Delta_{\mathcal{P}}$ to be left invariant by the Laplacian matrix $L_G$. Thus we have $\mathcal{P}_{G,odd} \subseteq \mathcal{P}_{G,lin}$. \\ (ii) A linear-balanced partition of a network set of cells does not have to be odd-balanced, as we show in Example~\ref{exs:linear}. \\ (iii) An odd-balanced partition may not be even-odd-balanced and an even-odd-balanced partition may not be odd-balanced, as we show in Examples~\ref{ex:odd-eo} and \ref{ex:eo-odd}, respectively. \hfill $\Diamond$ \end{rem} \begin{figure}[h!] \begin{center} {\tiny \begin{tabular}{cc} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (6) at (-6cm, 3cm) [fill=black!70] {$6$}; \node[node] (3) at (-6cm, 1cm) [fill=black!20] {$3$}; \node[node] (1) at (-7cm, 2cm) [fill=white] {$1$}; \node[node] (4) at (-5cm, 2cm) [fill=black!50] {$4$}; \node[node] (5) at (-3.5cm, 2cm) [fill=black!50] {$5$}; \node[node] (2) at (-2cm, 2cm) [fill=white] {$2$}; \path (4) edge node {} (5) (5) edge node {} (4) (5) edge node {} (2) (2) edge node {} (5) (3) edge node {} (4) (4) edge node {} (3) (1) edge node {} (3) (3) edge node {} (1) (1) edge node {} (6) (6) edge node {} (1) (6) edge node {} (4) (4) edge node {} (6); \end{tikzpicture} \qquad & \qquad \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (6) at (-6cm, 3cm) [fill=black!70] {$6$}; \node[node] (5) at (-6cm, 1cm) [fill=black!70] {$5$}; \node[node] (1) at (-7cm, 2cm) [fill=white] {$1$}; \node[node] (3) at (-5cm, 2cm) [fill=black!20] {$3$}; \node[node] (4) at (-3.5cm, 2cm) [fill=black!20] {$4$}; \node[node] (2) at (-2cm, 2cm) [fill=white] {$2$}; \path (4) 
edge node {} (2) (2) edge node {} (4) (3) edge node {} (4) (4) edge node {} (3) (5) edge node {} (3) (3) edge node {} (5) (6) edge node {} (3) (3) edge node {} (6) (1) edge node {} (6) (6) edge node {} (1) (1) edge node {} (5) (5) edge node {} (1); \end{tikzpicture} \end{tabular}} \end{center} \caption{A six-cell bidirectional network $G$. Two linear-balanced tagged partitions which are not odd-balanced. (Left) $\mathcal{P} = \left\{ P_1 = \{1,2\},\, P_2 = \{ 3\},\, \overline{P}_1 = \{4,5\},\, \overline{P}_2 = \{ 6\} \right\}$; (Right) $\mathcal{P} = \{ P_1 =\{1,2\}, \, \overline{P}_1 = \{ 3, 4\},\, P_0 =\{5,6\}\}$.} \label{f:2ndsixSwift} \end{figure} \begin{exams} \label{exs:linear} \normalfont Take the two isomorphic six-cell networks in Figure~\ref{f:2ndsixSwift} which correspond to the six-cell bidirectional network in Figure 9 of \cite{NSS19}. \\ (i) Consider the network on the left of Figure~\ref{f:2ndsixSwift} and take the tagged partition of its set of cells $\mathcal{P} = \{ P_1 =\{1,2\}, \, P_2 = \{ 3\},\, \overline{P}_1 =\{4,5\}, \, \overline{P}_2 = \{ 6\} \}$. The adjacency matrix has block form: $$ W_{G} = \left( \begin{array}{cc|c|cc|c} 0 & 0 & 1& 0 & 0& 1 \\ 0 & 0 & 0& 0 & 1& 0 \\ \hline 1 & 0 & 0& 1 & 0& 0 \\ \hline 0 & 0 & 1& 0 & 1 & 1 \\ 0 & 1 & 0& 1 & 0& 0 \\ \hline 1 & 0 & 0& 1 & 0& 0 \end{array} \right) = \displaystyle \left( \begin{array}{cc|cc} Q_{11} & Q_{12} & R_{11} & R_{12}\\ Q_{21} & Q_{22} & R_{21} & R_{22}\\ \hline \overline{R}_{11} & \overline{R}_{12} & \overline{Q}_{11} & \overline{Q}_{12} \\ \overline{R}_{21} & \overline{R}_{22} & \overline{Q}_{21} & \overline{Q}_{22} \end{array} \right) \, . $$ We have that $\mathcal{P}$ is linear-balanced. 
By Corollary~\ref{cor:mainLaplacian}, this follows from the equalities: $$ {\tiny \begin{array}{l} \left( \begin{array}{l} 2\\ 2 \end{array} \right) = {\mathrm rs}(Q_{12}) + {\mathrm rs}(R_{12}) + 2 {\mathrm rs}(R_{11}) = {\mathrm rs}\left(\overline{Q}_{12}\right) + {\mathrm rs}\left(\overline{R}_{12}\right) + 2 {\mathrm rs}\left(\overline{R}_{11}\right) = \left( \begin{array}{l} 1\\ 0 \end{array} \right) + \left( \begin{array}{l} 1\\ 0 \end{array} \right) + 2 \left( \begin{array}{l} 0\\ 1 \end{array} \right); \ \\ \ \\ \left( \begin{array}{l} 0\\ 0 \end{array} \right) = - {\mathrm rs}(Q_{12}) + {\mathrm rs}(R_{12}) = - \left( \begin{array}{l} 1\\ 0 \end{array} \right) + \left( \begin{array}{l} 1\\ 0 \end{array} \right) = {\mathrm rs}\left(\overline{R}_{12}\right) - {\mathrm rs}\left(\overline{Q}_{12}\right) = \left( \begin{array}{l} 1\\ 0 \end{array} \right) - \left( \begin{array}{l} 1\\ 0 \end{array} \right); \ \\ \ \\ \left( \begin{array}{l} 2 \end{array} \right) = {\mathrm rs}(Q_{21}) + {\mathrm rs}(R_{21}) + 2 {\mathrm rs}(R_{22}) = {\mathrm rs}\left(\overline{Q}_{21}\right) + {\mathrm rs}\left(\overline{R}_{21}\right) + 2 {\mathrm rs}\left(\overline{R}_{22}\right) = \left( \begin{array}{l} 1 \end{array} \right) + \left( \begin{array}{l} 1 \end{array} \right) + 2 \left( \begin{array}{l} 0 \end{array} \right);\\ \ \\ \ \\ \left( \begin{array}{l} 0 \end{array} \right) = - {\mathrm rs}(Q_{21}) + {\mathrm rs}(R_{21}) = - \left( \begin{array}{l} 1 \end{array} \right) + \left( \begin{array}{l} 1 \end{array} \right) = {\mathrm rs}\left(\overline{R}_{21}\right) - {\mathrm rs}\left(\overline{Q}_{21}\right) = \left( \begin{array}{l} 1 \end{array} \right) - \left( \begin{array}{l} 1 \end{array} \right)\, . 
\end{array} } $$ The tagged partition $\mathcal{P}$ is not odd-balanced as, for example, the block $R_{11}$ is not regular.\\ \noindent (ii) Consider the network on the right of Figure~\ref{f:2ndsixSwift} and take the tagged partition of its set of cells $\mathcal{P} = \{ P_1 =\{1,2\}, \, \overline{P}_1 = \{ 3, 4\},\, P_0 =\{5,6\}\}$. The adjacency matrix has block form: $$ W_{G} = \left( \begin{array}{cc|cc|cc} 0 & 0 & 0& 0 & 1 & 1 \\ 0 & 0 & 0& 1 & 0& 0 \\ \hline 0 & 0 & 0& 1 & 1& 1 \\ 0 & 1 & 1 & 0 & 0& 0 \\ \hline 1 & 0 & 1& 0 & 0 & 0 \\ 1 & 0 & 1 &0 & 0& 0 \end{array} \right) = \left( \begin{array}{c|c|c} Q_{11}& R_{11} & Z_{10}\\ \hline \overline{R}_{11} & \overline{Q}_{11} & \overline{Z}_{10} \\ \hline Z_{01} & \overline{Z}_{01} & Z_{00} \end{array} \right) \, . $$ By Corollary~\ref{cor:mainLaplacian}, we have that $\mathcal{P}$ is linear-balanced given the following equalities: $$ {\small \begin{array}{l} \left( \begin{array}{l} 2\\ 2 \end{array} \right) = 2 {\mathrm rs}(R_{11}) + {\mathrm rs}(Z_{10}) = 2 {\mathrm rs}\left(\overline{R}_{11}\right) + {\mathrm rs}\left(\overline{Z}_{10}\right) = 2\left( \begin{array}{l} 0\\ 1 \end{array} \right) + \left( \begin{array}{l} 2\\ 0 \end{array} \right); \\ {\mathrm rs}(Z_{01}) = {\mathrm rs}\left(\overline{Z}_{01}\right) = \left( \begin{array}{l} 1\\ 1 \end{array} \right) \, . \end{array} } $$ The tagged partition $\mathcal{P}$ is not odd-balanced as, for example, the block $R_{11}$ is not regular. \hfill $\Diamond$ \end{exams} \begin{figure}[ht!] 
\begin{center} \vspace{-4mm} \hspace{-4mm} {\small \begin{tikzpicture} [-,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (1) at (-75mm, 4cm) [fill=white] {$1$}; \node[node] (2) at (-6cm, 4cm) [fill=white] {$2$}; \node[node] (3) at (-45mm, 4cm) [fill=white] {$3$}; \node[node] (4) at (-30mm, 4cm) [fill=white] {$4$}; \node[node] (5) at (-15mm, 4cm) [fill=white] {$5$}; \node[node] (6) at (0mm, 4cm) [fill=white] {$6$}; \path (1) edge node {} (2) (2) edge node {} (3) (3) edge node {} (4) (4) edge node {} (5) (5) edge node {} (6) ; \end{tikzpicture}} \caption{The network in Figure 6 (iii) of Neuberger {\it et al.}~\cite{NSS19}.} \label{fig:6cnetwork_2} \end{center} \end{figure} \begin{exam} \label{ex:odd-eo} \normalfont Consider the network $G$ in Figure~\ref{fig:6cnetwork_2} which corresponds to the network in Figure 6 (iii) of Neuberger {\it et al.}~\cite{NSS19}. The tagged partition $\mathcal{P} = \{ P_1 =\{1,6\}, \, \overline{P}_1 =\{3,4\} , \, P_0 =\{2,5\} \}$ of the set of cells of $G$ is odd-balanced but not even-odd-balanced. Considering the ordering $1,6,3,4,2,5$ of the cells of $G$ adapted to the tagged partition $\mathcal{P}$, the adjacency matrix of $G$ has the following block structure: $$ W_G =\left( \begin{array} {cc|cc|cc} 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \hline 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \\ \hline 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \end{array} \right) = \left( \begin{array} {c|c|c} Q_{11} & R_{11} & Z_{10}\\ \hline \overline{R}_{11} & \overline{Q}_{11} & \overline{Z}_{10}\\ \hline Z_{01} & \overline{Z}_{01} & Z_{00} \end{array} \right). $$ The blocks $R_{11},\, \overline{R}_{11}$ are regular of the same valency $0$ and $Z_{10},\, \overline{Z}_{10}$ are regular of the same valency $1$. Moreover, ${\mathrm rs}\left(Z_{01} \right)={\mathrm rs}\left(\overline{Z}_{01} \right) = \left( \begin{array} {c} 1\\ 1 \end{array} \right). 
$ Thus, by Definition~\ref{def:odd}, the partition $\mathcal{P}$ is odd-balanced. However, by Definition~\ref{def:linear}, $\mathcal{P}$ is not even-odd-balanced as ${\mathrm rs}\left(Q_{11} \right) - {\mathrm rs}\left(R_{11} \right) = \left( \begin{array} {c} 0\\ 0 \end{array} \right) \ne {\mathrm rs}\left(\overline{Q}_{11} \right) - {\mathrm rs}\left(\overline{R}_{11} \right) = \left( \begin{array} {c} 1\\ 1 \end{array} \right) $ and, thus, it fails the first condition in (\ref{eq:equal_Lap}) of Proposition~\ref{thm:mainLaplacian}. \hfill $\Diamond$ \end{exam} \begin{exam} \label{ex:eo-odd} \normalfont Consider the weighted network $G$ with set of cells $\{1,\ldots,8\}$ and adjacency matrix $$ W_G =\left( \begin{array} {cc|cc|cc|cc} 3 & 2 & 2 & 1 & 1 & 1 & 1 & 0 \\ 3 & 1 & 2 & 2 & 1 & 0 & 1 & 1 \\ \hline 2 & 2 & 2 & 2 & 1 & 1 & 1 & 0 \\ 1 & 1 & 3 & 3 & 0 & 0 & 2 & 1 \\ \hline -1 & 0 & 2 & 2 & 1 & 1 & 4 & 2 \\ 0 & 1 & 2 & 0 & 2 & 2 & 2 & 2 \\ \hline 0 & 1 & 1 & 1 & 3 & 0 & 5 & 0 \\ 0 & 1 & 0 & 2 & 2 & 1 & 1 & 4 \end{array} \right) = \left( \begin{array} {c|c|c|c} Q_{11} & Q_{12} & R_{11} & R_{12}\\ \hline Q_{21} & Q_{22} & R_{21} & R_{22}\\ \hline \overline{R}_{11} & \overline{R}_{12} & \overline{Q}_{11} & \overline{Q}_{12}\\ \hline \overline{R}_{21} & \overline{R}_{22} & \overline{Q}_{21} & \overline{Q}_{22} \end{array} \right), $$ which is regular of valency $11$, and consider the tagged partition $\mathcal{P} = \{ P_1 =\{1,2\}, \, P_2 =\{3,4\}, \, \overline{P}_1 =\{5,6\} , \, \overline{P}_2 =\{7,8\} \}$ of the set of cells of $G$. By Proposition~\ref{thm:mainLaplacian} and Definition~\ref{def:linear}, $\mathcal{P}$ is even-odd-balanced (and, since $G$ is regular, also linear-balanced) as, for $1 \le i,j \le 2$, we have that ${\mathrm rs}\left(Q_{ij} \right) - {\mathrm rs}\left(R_{ij} \right)$ and ${\mathrm rs}\left(\overline{Q}_{ij} \right) - {\mathrm rs}\left(\overline{R}_{ij} \right)$ are regular of the same valency.
Clearly, $\mathcal{P}$ is not odd-balanced as, for example, the block $Q_{12}$ is not regular. \hfill $\Diamond$ \end{exam} In \cite{NSS19}, for the particular class of networks with symmetric $(0,1)$-adjacency matrices (undirected graphs), Neuberger {\it et al.} show in Proposition 5.6 that in an odd-balanced partition each part $P_r$ and its counterpart $\overline{P}_r$ have the same number of cells. Moreover, they conjecture that this is also true for the linear-balanced partitions. The next example shows that Proposition 5.6 (and, thus, Conjecture 5.3) in \cite{NSS19} does not hold for general weighted networks. \begin{exam} \label{ex:odd_dif_num} Consider the four-cell weighted network $G$ with set of cells $\{1,2,3,4\}$ and adjacency matrix $$ W_{G} = \left( \begin{array}{c|cc|c} 0 & \frac{1}{2} & \frac{1}{2} & \frac{6}{5} \\ \hline 1 & 0 & 0 & \frac{6}{5} \\ 1 & 0 & 0 & \frac{6}{5} \\ \hline 1 & 0 & 1 & 0 \end{array} \right) = \left( \begin{array}{c|c|c} Q_{11} & R_{11} & Z_{10}\\ \hline \overline{R}_{11} & \overline{Q}_{11} & \overline{Z}_{10} \\ \hline Z_{01} & \overline{Z}_{01} & Z_{00} \end{array} \right) \, . $$ Take the tagged partition $${\mathcal P} = \left\{ P_1=\{1\}, \overline{P}_1=\{ 2,3\}, P_0=\{ 4\} \right\}$$ and note that the enumeration of the network set of cells is adapted to this partition. As $R_{11}, \overline{R}_{11}$ are regular of the same valency, $Z_{10}, \overline{Z}_{10}$ are regular of the same valency and ${\mathrm rs}\left( Z_{01} \right) = {\mathrm rs} \left( \overline{Z}_{01}\right)$, we have that $\mathcal{P}$ is odd-balanced for $G$. Note that $\# P_1 \ne \# \overline{P}_1$. \hfill $\Diamond$ \end{exam} In the following remark, we consider tagged partitions where $r=0$, that is, there is no zero part $P_0$, and we relate the concepts of exo-balanced and odd-balanced partitions given their definitions in Definitions~\ref{def:balanced} (i) and~\ref{def:odd}, respectively. 
Observe that, by definition, an odd-balanced partition is, in particular, a non-standard tagged partition, while an exo-balanced partition is standard. So, in the next remark, we relate the two concepts for a partition $\mathcal{P}$ with no zero part, by interpreting its $q >0$ counterparts as counterparts when referring to $\mathcal{P}$ as odd-balanced, and as independent parts when interpreting $\mathcal{P}$ as a standard partition. \begin{rem}\normalfont Let $G$ be a weighted network with set of cells $C$ and adjacency matrix $W_G$. Consider a non-standard tagged partition $\mathcal{P} =\{ P_1, P_2, \ldots, P_p,$ $\overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q \}$ of $C$ and consider an ordering of the cells adapted to $\mathcal{P}$ so that $W_G$ has a block form (\ref{eq:oddbf}). Set $P_{p+1} = \overline{P}_1, \ldots, P_{p+q} = \overline{P}_q$. We have:\\ (i) If $\mathcal{P}$ is odd-balanced then the standard partition $\{ P_1, P_2, \ldots, P_p, P_{p+1}, \ldots, P_{p+q}\}$ is exo-balanced. The converse is not true. \\ (ii) Assume the standard partition $\{ P_1, P_2, \ldots, P_p, P_{p+1}, \ldots, P_{p+q}\}$ is exo-balanced. Then $\mathcal{P}$ is odd-balanced if and only if for all $i \not=j$, with $1 \leq i,j \leq q$, the regular matrices $Q_{ij}, \overline{Q}_{ij}$ have the same valency, and for all $1 \leq i, j \leq q$, the regular matrices $R_{ij}, \overline{R}_{ij}$ have the same valency. \hfill $\Diamond$ \end{rem} \subsubsection*{Quotient networks for odd-balanced, linear-balanced and even-odd-balanced partitions} We define next the concepts of quotient networks of weighted networks by odd-balanced, linear-balanced and even-odd-balanced partitions. In the next section, we show an application of these concepts to coupled cell systems with additive linear input.
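As with the balanced conditions, the odd-balanced conditions of Definition~\ref{def:odd} reduce to row-sum checks on the blocks and can be verified mechanically. The sketch below (ours, for illustration only; the function name is not part of the formalism) treats the simplest case $p=q=r=1$ and is run on the four-cell weighted network of Example~\ref{ex:odd_dif_num}, where a part and its counterpart have different sizes.

```python
import numpy as np

def row_sums(B):
    return B.sum(axis=1)

def regular(B):
    """A block is regular when all its row sums coincide."""
    rs = row_sums(B)
    return np.allclose(rs, rs[0])

def is_odd_balanced_1part(W, P1, P1bar, P0):
    """Odd-balanced test in the simplest case p = q = r = 1: one part
    P1, its counterpart P1bar and a zero part P0.  The diagonal blocks
    Q_11, Qbar_11 and Z_00 are unconstrained."""
    R11, R11bar = W[np.ix_(P1, P1bar)], W[np.ix_(P1bar, P1)]
    Z10, Z10bar = W[np.ix_(P1, P0)],   W[np.ix_(P1bar, P0)]
    Z01, Z01bar = W[np.ix_(P0, P1)],   W[np.ix_(P0, P1bar)]
    # (a)+(b): R_11, Rbar_11 regular of the same valency, and likewise
    # for the pair Z_10, Zbar_10
    ok = (regular(R11) and regular(R11bar)
          and np.isclose(row_sums(R11)[0], row_sums(R11bar)[0]))
    ok = ok and (regular(Z10) and regular(Z10bar)
                 and np.isclose(row_sums(Z10)[0], row_sums(Z10bar)[0]))
    # (c): rs(Z_01) = rs(Zbar_01)
    ok = ok and np.allclose(row_sums(Z01), row_sums(Z01bar))
    return bool(ok)

# Four-cell network of the example above, with P1 = {1},
# P1bar = {2,3} and P0 = {4} (cells 0-indexed below).
W = np.array([[0.0, 0.5, 0.5, 1.2],
              [1.0, 0.0, 0.0, 1.2],
              [1.0, 0.0, 0.0, 1.2],
              [1.0, 0.0, 1.0, 0.0]])
print(is_odd_balanced_1part(W, [0], [1, 2], [3]))  # True
```

For partitions with more parts the same blockwise row-sum checks apply, following conditions (a)-(d) of Definition~\ref{def:odd}.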
\begin{Def}\label{def:quo_odd} Let $G$ be a network with set of cells $C = \{1, \ldots, n\}$ and $\mathcal{P}$ a non-standard tagged partition of $C$ with $p+q+1$ parts, $P_1, P_2, \ldots, P_p, \overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q, P_0$, and recall Definitions~\ref{def:linear} and~\ref{def:odd}. \\ (i) If $\mathcal{P}$ is odd-balanced, we define the {\it symbolic quotient} of $G$ by the odd-balanced partition $\mathcal{P}$ to be the $(p+q+1)$-cell network, where the cells are the parts of $\mathcal{P}$ and the edges are defined in the following way. For $i, j_1=1, \ldots, p$ and $j_2=1, \dots, q$ such that $i\not=j_1$, there are directed edges from $P_{j_1}$ ($\overline{P}_{j_2}$) to $P_i$ with weight the valency of $Q_{i,j_1}$ ($R_{i,j_2}$). For $i=1, \ldots, p$, there are directed edges from $P_0$ to $P_i$ with weight the valency of $Z_{i0}$. \\ (ii) If $\mathcal{P}$ is linear-balanced, take an enumeration of the network set of cells adapted to the tagged partition $\mathcal{P}$ so that the adjacency matrix $W_G$ of $G$ has a block structure (\ref{eq:oddbf}) satisfying the conditions (i)-(v) of Corollary~\ref{cor:mainLaplacian}. We call the following matrix the adjacency matrix of the {\it symbolic quotient} of $G$ by the linear-balanced partition $\mathcal{P}$: \ \\ \begin{equation} \left( \begin{array}{cccc|c} 0 & q_{12} & \cdots &q_{1p} & r_{1} \\ \vdots& \vdots & \cdots &\vdots & \vdots \\ q_{p1} & q_{p2} & \cdots & 0 & r_{p} \end{array} \right)\, . \label{eq:quolin} \end{equation} (iii) If $\mathcal{P}$ is even-odd-balanced, take an enumeration of the network set of cells adapted to the tagged partition $\mathcal{P}$ so that the adjacency matrix $W_G$ of $G$ has a block structure (\ref{eq:oddbf}) satisfying the conditions (\ref{eq:equal_Lap})-(\ref{eq:equal_Lap_Z}) of Proposition~\ref{thm:mainLaplacian}.
Denoting by $q_{ij}$ the valency of ${\mathrm rs}\left(Q_{ij}\right) - {\mathrm rs}\left( R_{ij}\right)$ for $1 \leq i \leq p$, $1 \leq j \leq q$, and of ${\mathrm rs}\left(Q_{ij}\right)$ for $1 \leq i \leq p$, $q+1 \leq j \leq p$, we call the following matrix the adjacency matrix of the {\it symbolic quotient} of $G$ by the even-odd-balanced partition $\mathcal{P}$: \ \\ \begin{equation} \left( \begin{array}{cccc} q_{11} & q_{12} & \cdots &q_{1p} \\ \vdots& \vdots & \cdots &\vdots \\ q_{p1} & q_{p2} & \cdots & q_{pp} \end{array} \right)\, . \label{eq:quoeo} \end{equation} \hfill $\Diamond$ \end{Def} \begin{exam}\normalfont \label{ex:simples} Take the three-cell network $G$ at the left of Figure~\ref{fig:odd3cell} and the tagged partition $$\mathcal{P} = \left\{P_1 = \{1\},\, \overline{P}_1 = \{2,3\} \right\}\, .$$ The adjacency matrix of $G$ has the block form $$\left( \begin{array}{c|cc} 0&1&1\\ \hline 2&0&0\\ 2&0&0 \end{array} \right) = \left( \begin{array}{c|c} Q_{11}&R_{11}\\ \hline \overline{R}_{11}& \overline{Q}_{11} \end{array} \right),$$ where the block $R_{11} = (1 \, 1)$ is regular of valency $2$, that is, it has row sum $2$, and the block $\overline{R}_{11} = (2 \, 2)^t$ is regular of valency $2$, since the entry of each row is $2$. The partition $\mathcal{P}$ is exo-balanced since $R_{11}$ and $\overline{R}_{11}$ are regular. The quotient network is network $Q_2$ in Figure~\ref{fig:odd3cell}. As $R_{11}$ and $\overline{R}_{11}$ are regular of the same valency, we have that $\mathcal{P}$ is odd-balanced. The symbolic quotient network is network $Q_1$ in Figure~\ref{fig:odd3cell}. If the entries of the block $ \overline{R}_{11}$ were $3$, instead of $2$, then the partition $\mathcal{P}$ would also be exo-balanced but not odd-balanced. \hfill $\Diamond$ \end{exam} \begin{figure}[h!]
\begin{center} \begin{tabular}{ccc} $G$ & $Q_1$ & $Q_2$ \\ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (2) at (-5cm, 2cm) [fill=black!60] {{\tiny $2$}}; \node[node] (1) at (-35mm, 2cm) [fill=white] {{\tiny $1$}}; \node[node] (3) at (-20mm, 2cm) [fill=black!60] {{\tiny $3$}}; \path (2) edge node {} (1) (3) edge node {} (1) (1) [->] edge[bend left=30, thick] node {{\tiny $2$}} (2) (1) [->] edge[bend right=30, thick] node [below right, pos=.35] {{\tiny $2$}} (3); \end{tikzpicture} \qquad & \qquad \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (1) at (-35mm, 2cm) [fill=white] {{\tiny $[1]$}}; \node[node] (2) at (-20mm, 2cm) [fill=black!60] {{\tiny $-[1]$}}; \path (2) edge node [above right, pos=.50, near end] {{\tiny $2$}} (1); \end{tikzpicture} \qquad & \qquad \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (1) at (-35mm, 2cm) [fill=white] {{\tiny $[1]$}}; \node[node] (2) at (-20mm, 2cm) [fill=black!60] {{\tiny $[2]$}}; \path (2) edge[bend left=30, thick] node [below right, near end] {{\tiny $2$}} (1) (1) edge[bend left=30, thick] node [above right, near end] {{\tiny $2$}} (2); \end{tikzpicture} \end{tabular} \end{center} \caption{A three-cell regular network $G$ of valency two. The tagged partition $\mathcal{P} = \left\{[1] = P_1 = \{1\},\, [2] = \overline{P}_1 = \{2,3\} \equiv -[1] \right\}$ is odd-balanced and exo-balanced for $G$. On the center we see the symbolic quotient network $Q_1$ of $G$ by the odd-balanced tagged partition $\mathcal{P}$. 
On the right, $Q_2$ is the quotient network of $G$ by the exo-balanced partition $\mathcal{P}$.} \label{fig:odd3cell} \end{figure} \begin{exam}\normalfont Take the six-cell network $G$ in Figure~\ref{f:sixSwift} and the tagged partition of the network set of cells $\mathcal{P} = \{ [1] = P_1 =\{1\}, \, -[1] = \overline{P}_1 = \{ 2\},\, [3] = P_0 =\{3,4,5,6\}\}$. This network is the bidirectional network in Figure 9 of \cite{NSS19}. The adjacency matrix of $G$ has the following block form: $$ W_{G} = \left( \begin{array}{c|c|cccc} 0 & 0 & 1& 1 & 0 & 0 \\ \hline 0 & 0 & 1& 1 & 0 & 0 \\ \hline 1 & 1 & 0& 0 & 0& 0 \\ 1 & 1 & 0& 0 & 1 & 0 \\ 0 & 0 & 0& 1 & 0& 1 \\ 0 & 0 & 0& 0 & 1 & 0 \end{array} \right) = \left( \begin{array}{c|c|c} Q_{11} & R_{11} & Z_{10}\\ \hline \overline{R}_{11} & \overline{Q}_{11} & \overline{Z}_{10} \\ \hline Z_{01} & \overline{Z}_{01} & Z_{00} \end{array} \right) \, . $$ Note that both $R_{11}$ and $\overline{R}_{11}$ are regular of valency $0$, and $Z_{10}$ and $\overline{Z}_{10}$ are regular of valency $2$. Moreover, $ {\mathrm rs}(Z_{01}) = {\mathrm rs}\left(\overline{Z}_{01}\right) = (1 \, 1 \, 0 \, 0)^t$. We have that $\mathcal{P}$ is odd-balanced. See in Figure~\ref{f:sixSwift} the symbolic quotient of $G$ by the odd-balanced partition $\mathcal{P}$. However, $\mathcal{P}$ is not exo-balanced, precisely because the blocks $Z_{01}$ and $\overline{Z}_{01}$ are not regular: cells $3,4$ of the class $P_0$ receive one input from cells in the class $P_1$ (resp. $\overline{P}_1$), whereas cells $5,6$ receive no inputs from cells in the class $P_1$ (resp. $\overline{P}_1$). \hfill $\Diamond$ \end{exam} \begin{figure}[h!]
\begin{center} \begin{tabular}{cc} $G$ & $Q$ \\ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (1) at (-5cm, 3cm) [fill=black!60] {{\tiny $1$}}; \node[node] (2) at (-5cm, 1cm) [fill=black!20] {{\tiny $2$}}; \node[node] (3) at (-7cm, 2cm) [fill=white] {{\tiny $3$}}; \node[node] (4) at (-3cm, 2cm) [fill=white] {{\tiny $4$}}; \node[node] (5) at (-1cm, 2cm) [fill=white] {{\tiny $5$}}; \node[node] (6) at (1cm, 2cm) [fill=white] {{\tiny $6$}}; \path (4) edge node {} (5) (5) edge node {} (4) (5) edge node {} (6) (6) edge node {} (5) (3) edge node {} (1) (1) edge node {} (3) (3) edge node {} (2) (2) edge node {} (3) (1) edge node {} (4) (4) edge node {} (1) (2) edge node {} (4) (4) edge node {} (2); \end{tikzpicture} \qquad & \qquad \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (1) at (-40mm, 2cm) [fill=black!60] {{\tiny $[1]$}}; \node[node] (3) at (-20mm, 2cm) [fill=white] {{\tiny $[3]$}}; \node[node] (2) at (-30mm, 3cm) [fill=black!20] {{\tiny $-[1]$}}; \path (3) edge[thick] node [below right, near end] {{\tiny $2$}} (1); \end{tikzpicture} \end{tabular} \end{center} \caption{A six-cell bidirectional network $G$. The tagged partition $\mathcal{P} = \left\{[1]= P_1 = \{1\},\, -[1]= \overline{P}_1 = \{2\},\, [3] = P_0 = \{ 3,4,5,6\} \right\}$ is odd-balanced for $G$ but not exo-balanced. On the right, $Q$ is the symbolic quotient network of $G$ by the odd-balanced partition $\mathcal{P}$.} \label{f:sixSwift} \end{figure} \section{Coupled cell systems with additive input structure}\label{sec_CCNS} Let $G$ be an $n$-cell network with weighted adjacency matrix $W_G$. We consider the cells of $G$ as individual dynamical systems, given by ordinary differential equations. We assume that the cells are all of the same type, that is, have the same phase space and the same internal dynamics. 
The dynamical systems that we associate to $G$ are such that the couplings between the cells, that is, the way they influence the dynamical evolution of each other, are determined by the edges of $G$ and corresponding weights. These are called {\it coupled cell systems}. More precisely, we take a cell to be a system of ordinary differential equations and we consider coupled cell systems with {\it additive input structure}~\cite{F15,BF17}. Let $C=\{1, \ldots,n\}$ be the set of cells of $G$ where each cell $c$ has phase space $P_c=\mbox{$\mathbb{R}$}^k$. A coupled cell system with {\it additive input structure} is given by $\dot{x} = f(x)$, where $f = (f_1, \ldots, f_n)$ so that the equation $\dot{x}_j = f_j(x)$ is associated with cell $j$ and it has the form: \begin{equation} \dot{x}_j=g(x_j) +\sum_{i=1}^n {w_{ji}h\left(x_j,x_i\right)} \quad \left( j=1, \ldots, n\right) \label{eq:EDOsystem} \end{equation} where $g:\mbox{$\mathbb{R}$}^k \rightarrow \mbox{$\mathbb{R}$}^k$ and $h:\mbox{$\mathbb{R}$}^k \times \mbox{$\mathbb{R}$}^k \rightarrow \mbox{$\mathbb{R}$}^k$ are smooth functions; each $w_{ji}\in \mbox{$\mathbb{R}$}$ is the weight of the coupling from cell $i$ to cell $j$. The function $g$ characterizes the {\it internal dynamics} and the function $h$ is the \textit{coupling function}. Systems of ordinary differential equations where cells are governed by equations of the form (\ref{eq:EDOsystem}) are said to be $G$-{\it admissible} as they encode the network structure of $G$. \begin{rem} \normalfont The difference-coupled vector fields considered in Neuberger {\it et al.}~\cite{NSS19} are a particular class of input additive coupled cell systems where the coupling matrix is a symmetric $(0,1)$-matrix and $h(u,v) = \tilde h(u-v)$, for some function $\tilde h$.
\hfill $\Diamond$ \end{rem} \subsection{Additive coupled cell systems and additional restrictions} We present next four subclasses of coupled cell systems with additive input structure associated with general weighted networks where restrictions are imposed on the internal dynamics and coupling functions. The first three subclasses are an extension to general weighted networks of Definition 2.2 in Neuberger {\it et al.}~\cite{NSS19} for undirected networks with $(0,1)$-adjacency matrices. \begin{Def} \normalfont Let $G$ be an $n$-cell coupled cell network with weighted adjacency matrix $W_G$. Given a choice of cell phase spaces, take an input additive coupled cell system admissible by $G$ as defined by (\ref{eq:EDOsystem}) where $w_{ji}$ is the entry $ji$ of $W_G$. Denote by $I_G$ the set of these coupled cell systems admissible by $G$ and define the following subsets: \\ (i) $I_{G,0} = \{ f \in I_G |\ h(x,y) = 0 \mbox{ if } x=y\}$ is the set of {\it exo-input-additive coupled cell systems}.\\ (ii) $I_{G,odd} = \{f \in I_{G,0} |\ g, h \mbox{ are odd} \}$ is the set of {\it odd-input-additive coupled cell systems}.\\ (iii) $I_{G,l} = \{ f \in I_{G,0}|\ g \mbox{ is odd and } h \mbox{ is linear}\}$ is the set of {\it linear-input-additive coupled cell systems}.\\ (iv) $I_{G,eo} = \{ f \in I_{G}|\ g \mbox{ is odd; } h \mbox{ is even in $x$ and odd in $y$}\}$ is the set of {\it even-odd-input-additive coupled cell systems}. \hfill $\Diamond$ \end{Def} \begin{rem} \normalfont (i) For any choice of cell phase spaces, we have that $I_{G,0}$ is a proper subspace of $I_G$. In particular, it is natural to expect the existence of subspaces that, for any choice of cell phase spaces, are flow-invariant under every coupled cell system in $I_{G,0}$ but not under every coupled cell system in $I_G$.
This issue is addressed in the next section.\\ (ii) Note that, in (\ref{eq:EDOsystem}), if the coupling function $h$ is linear then $h(-x,-y) = -h(x,y)$ for all $x,y$. Thus we have the following inclusions: $I_{G,l} \subseteq I_{G,odd} \subseteq I_{G,0} \subset I_G$. Moreover, the conditions defining $I_{G,l}$ imply that the linear coupling function $h$ satisfies $h(x,-x) = 2h(x,0)$, for all $x \in \mbox{$\mathbb{R}$}^k$, as $h(x,-x) = h(x,0) + h(0,-x) = h(x,0) - h(0,x)$ and from $h(x,x) = 0$, we have that $h(0,x) = -h(x,0)$. \hfill $\Diamond$ \end{rem} We now describe the general form of the smooth coupling functions under the restrictions defining $I_{G,0}$, $I_{G,odd}$, $I_{G,l}$ and $I_{G,eo}$. \begin{prop} \label{prop:gen_form_h} Take $a = (a_1, \ldots, a_k) \in \mbox{$\mathbb{R}$}^k$ and $b = (b_1, \ldots, b_k) \in \mbox{$\mathbb{R}$}^k$. The coupling function $h$ in (\ref{eq:EDOsystem}) has the following form:\\ (i) If $f$ in $I_{G,0}$ then $$ h(a,b) = (a_1-b_1) l_1(a,b) + \cdots + (a_k-b_k) l_k(a,b), $$ where for $j=1, \ldots, k$, the function $l_j:\, \mbox{$\mathbb{R}$}^k \times \mbox{$\mathbb{R}$}^k \to \mbox{$\mathbb{R}$}^k$ is smooth. \\ (ii) If $f$ in $I_{G,odd}$ then $$ h(a,b) = (a_1-b_1) m_1(a_1^2, \ldots, a_k^2, b_1^2, \ldots, b_k^2) + \cdots + (a_k-b_k) m_k(a_1^2, \ldots, a_k^2, b_1^2, \ldots, b_k^2), $$ where for $j=1, \ldots, k$, the function $m_j:\, \mbox{$\mathbb{R}$}^k \times \mbox{$\mathbb{R}$}^k \to \mbox{$\mathbb{R}$}^k$ is smooth. \\ (iii) If $f$ in $I_{G,l}$ then $$ h(a,b) = (a_1-b_1) (a_{11}, \ldots, a_{1k}) + \cdots + (a_k-b_k) (a_{k1}, \ldots, a_{kk}) $$ where $a_{ij} \in \mbox{$\mathbb{R}$}$ for all $i,j=1, \ldots, k$. \\ (iv) If $f$ in $I_{G,eo}$ then $$ h(a,b) = b_1 m_1(a_1^2, \ldots, a_k^2, b_1^2, \ldots, b_k^2) + \cdots + b_k m_k(a_1^2, \ldots, a_k^2, b_1^2, \ldots, b_k^2), $$ where for $j=1, \ldots, k$, the function $m_j:\, \mbox{$\mathbb{R}$}^k \times \mbox{$\mathbb{R}$}^k \to \mbox{$\mathbb{R}$}^k$ is smooth.
\\ \end{prop} \begin{proof} The proof of (i) follows from an adaptation of Lemma~3.1 in Chapter II of \cite{GS85} (see \cite[Exercise II 3.3]{GS85}). Trivially, (ii) and (iii) follow from (i). The proof of (iv) follows trivially from the symmetry of $h$, that is, $h(a,b)$ must be even in $a$ and odd in $b$. \end{proof} \begin{prop} \label{prop:reg_nreg_L_W} Let $G$ be an $n$-cell weighted network with adjacency matrix $W_G$ and Laplacian matrix $L_G$. In (\ref{eq:EDOsystem}), assume $k=1$, that is, assume the cell phase spaces to be $\mbox{$\mathbb{R}$}$. \\ (i) The linear subspace of the linear vector fields in $I_{G,0},\ I_{G, odd},\ I_{G, l}$ is $< \mathrm{id}_n,\, L_G>$ and in $I_{G,eo}$ is $<\mathrm{id}_n,\, W_G>$.\\ (ii) If $G$ is regular, then we have that the linear subspace of the linear vector fields in $I_{G},\, I_{G,0},\ I_{G, odd},\ I_{G, l}, I_{G,eo}$ is $<\mathrm{id}_n,\, W_G> = < \mathrm{id}_n,\, L_G>$. \end{prop} \begin{proof} (i) If $f \in I_{G,0}$ is linear, that is, both $g,h$ are linear and $h(x,x) =0$ for all $x \in \mbox{$\mathbb{R}$}$, we have $g(x) = \alpha x $ and $h(x,y) = \beta (x-y) $ for all $x,y \in \mbox{$\mathbb{R}$}$. This follows trivially from Proposition~\ref{prop:gen_form_h} (iii). Thus $$ \begin{array}{rcl} f_i (x_1, \ldots, x_n) & = & \alpha x_i + \beta \sum_{j\not=i; j=1}^n w_{ij} (x_i -x_j) = \alpha x_i + \beta \sum_{ j=1}^n w_{ij} (x_i -x_j)\\ & = & \alpha x_i + \beta x_i \sum_{j=1}^n w_{ij} - \beta \sum_{j=1}^n w_{ij} x_j \\ & = & \alpha x_i + \beta \left( v(i) x_i - \sum_{j=1}^n w_{ij} x_j \right) = \alpha x_i + \beta (L_Gx)_i\, . \end{array} $$ That is, $f = \alpha \mbox{id}_n + \beta L_G$. Moreover, any linear map on $\mbox{$\mathbb{R}$}^n$ of the type $\alpha \mbox{id}_n + \beta L_G$ belongs to $I_{G,odd}$ and $I_{G,l}$. \\ Let $f \in I_{G,eo}$ be linear, that is, both $g,h$ are linear where $g$ is odd and $h(x,y)$ is even in $x$ and odd in $y$.
Thus $g(x) = \alpha x$ and trivially, from Proposition~\ref{prop:gen_form_h} (iv), it follows that $h(x,y) = \beta y$. It follows that $$ f_i (x_1, \ldots, x_n) = \alpha x_i + \beta \sum_{j=1}^n w_{ij} x_j = \alpha x_i + \beta (W_Gx)_i, $$ that is, $f = \alpha \mbox{id}_n + \beta W_G$. \\ (ii) When $G$ is regular, we have $L_G = v_W \mbox{id}_n - W_G$ and so $<\mathrm{id}_n,\, W_G> = < \mathrm{id}_n,\, L_G>$. \end{proof} \section{Balanced partitions and synchrony in the class of the coupled cell systems with input additive structure} \label{sec_bal} Following ~\cite{GSP03,GST05}, a polydiagonal $\Delta$ is a {\it synchrony subspace} of a weighted network $G$ when it is left invariant under the flow of every $G$-admissible coupled cell system with additive input structure. Recall that, given a weighted network $G$ and a balanced standard partition $\mathcal{P}$ on its set of cells $C$, we denote by $\Delta_{\mathcal{P}}$, the associated polydiagonal subspace, and by $G_{\mathcal{P}}$, the quotient network of $G$ by $\mathcal{P}$. \begin{thm}[\cite{ADF17}] \label{thm:SSBR} Let $G$ be an $n$-cell weighted network. Consider the admissible coupled cell systems for $G$ with additive input structure, for any given choice of total phase space $\left(\mbox{$\mathbb{R}$}^k\right)^n$. Then:\\ (i) A polydiagonal subspace $\Delta_{\mathcal{P}}$ associated with a standard partition $\mathcal{P}$ is a synchrony subspace for $G$ if and only if the partition $\mathcal{P}$ is balanced on the set of cells of $G$. \\ (ii) Let $\mathcal{P}$ be a standard balanced partition on the set of cells of $G$. 
Then:\\ (ii.a) The restriction to $\Delta_{\mathcal{P}}$ of a $G$-admissible coupled cell system with additive input structure is a $G_{\mathcal{P}}$-admissible coupled cell system with additive input structure.\\ (ii.b) Every $G_{\mathcal{P}}$-admissible coupled cell system with additive input structure is the restriction to $\Delta_{\mathcal{P}}$ of a $G$-admissible coupled cell system with additive input structure. \end{thm} Let $G$ be a weighted network and $W_G$ the corresponding weighted adjacency matrix. Let $\mathcal{P}$ be a balanced standard partition of the set of cells of $G$ with parts $P_1, \ldots, P_p$ and consider the corresponding block structure (\ref{eq:oddbf}) of $W_G$. Denote coordinates on $\Delta_{\mathcal{P}}$ by $(y_1, \ldots, y_p)$ where $y_j = x_k$ for (all) $ k \in P_j$, for $j=1, \ldots, p$. The restriction of (\ref{eq:EDOsystem}) to the polydiagonal space $\Delta_{\mathcal{P}}$ is admissible for the quotient $G_{\mathcal{P}}$ with adjacency matrix $[q_{ij}]$ given by: \begin{equation} \dot{y}_j=g(y_j) +\sum_{i=1}^p {q_{ji}h\left(y_j,y_i\right)} \quad \left(j=1, \ldots, p\right)\, . \label{eq:restEDO} \end{equation} \begin{rem}\normalfont In $I_{G,0}$, we have that in (\ref{eq:EDOsystem}) and (\ref{eq:restEDO}), the terms $w_{jj} h(x_j,x_j)$ and $q_{jj} h(y_j,y_j)$ vanish, respectively. Thus $$ I_{G,0} \subsetneqq I_G\, . $$ It follows then that in $I_{G,0}$, for a polydiagonal subspace $\Delta_{\mathcal{P}}$ to be a synchrony subspace for $G$, that is, to be left invariant under the flow of any system of the form (\ref{eq:EDOsystem}) where $f \in I_{G,0}$, we expect fewer restrictions to be imposed on $\mathcal{P}$. In fact, to be precise, we can relax the condition of a partition being balanced by dropping the condition that the diagonal blocks $Q_{jj}$ have constant row sum. That is, the standard partition $\mathcal{P}$ must be exo-balanced, as we show in the next section.
\hfill $\Diamond$ \end{rem} \section{Exo-balanced partitions and synchrony in the class of the exo-input-additive coupled cell systems} \label{sec_exo_bal} In this section we enlarge the set of synchrony subspaces of a network by restricting to coupled cell systems that are exo-input-additive. Let $G$ be an $n$-cell weighted network and $W_G$ the corresponding weighted adjacency matrix. When $f \in I_{G,0}$, equations (\ref{eq:EDOsystem}) for the input additive coupled cell systems admissible by $G$ simplify to: \begin{equation} \dot{x}_j=g(x_j) +\sum_{i=1, i\not= j}^n w_{ji} h\left(x_j,x_i\right) \quad \left(j=1,\ldots,n\right)\, . \label{eq:2EDO} \end{equation} The following result is an extension, to weighted coupled cell networks and input additive coupled cell systems, of Theorem 3.13 in Neuberger {\it et al.}~\cite{NSS19}. \begin{prop} \normalfont Let $G$ be an $n$-cell weighted network and $\mathcal{P}$ a standard partition of its set of cells. The partition $\mathcal{P}$ is exo-balanced for $G$ if and only if $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,0}$, for any given choice of total phase space $\left(\mbox{$\mathbb{R}$}^k\right)^n$. \end{prop} \begin{proof} It follows from Remark~\ref{rmk:bal_G_L} that a partition $\mathcal{P}$ is exo-balanced for $G$ if and only if it is balanced for the network $G_{-L}$ with adjacency matrix $-L_G$, where $L_G=[l_{ij}]$ is the Laplacian matrix of $G$. By Theorem~\ref{thm:SSBR}, this is equivalent to the polydiagonal subspace $\Delta_{\mathcal{P}}$ being a synchrony subspace for $G_{-L}$ which is equivalent to the polydiagonal subspace $\Delta_{\mathcal{P}}$ being left invariant under the flow of every system in $I_{G_{-L}}$. For the systems in $I_{G_{-L}}$, the equation $\dot x_j = f_j(x)$ associated with cell $j$ has the form: \begin{equation} \dot{x}_j=g(x_j) + \sum_{i=1}^n \left(-l_{ji}\right) h\left(x_j,x_i\right) \quad \left(j =1, \ldots, n\right)\, .
\label{eq:sys_Lag} \end{equation} Given that $w_{ji} = -l_{ji}$, for $i \ne j$, when $h(u,u)=0$, we have that the equations in (\ref{eq:2EDO}) and (\ref{eq:sys_Lag}) are the same. That is, $I_{G_{-L},0}$ and $I_{G,0}$ coincide. The result then follows. \end{proof} Let $\mathcal{P}$ be a strict exo-balanced standard partition on the set of cells of $G$ with parts $P_1, \ldots, P_p$ and consider an enumeration of the network set of cells adapted to $\mathcal{P}$ so that $W_G$ has a block structure (\ref{eq:oddbf}). Recall Definition~\ref{def:quo:_exo}, where we describe what we call the quotient network $G_{\footnotesize{\mathcal{P}}}$, which has the $p \times p$ adjacency matrix $W_{G_{\footnotesize{\mathcal{P}}}} = [q_{ij}]$ with $q_{ii} = 0$ and $q_{ij} = v_{Q_{ij}}$, for $i \ne j$. Equations (\ref{eq:2EDO}), when restricted to $\Delta_{\mathcal{P}}$, are given by: \begin{equation} \dot{y}_j=g(y_j) +\sum_{i=1, i\not=j}^p {q_{ji}h\left(y_j,y_i\right)} \quad \left( j=1, \ldots, p\right) \label{eq:2restEDO} \end{equation} which are admissible by the network with adjacency matrix $$\left( \begin{array}{cccc} 0 & q_{12} & \cdots &q_{1p}\\ \vdots& \vdots & \cdots &\vdots\\ q_{p1} & q_{p2} & \cdots &0 \end{array} \right)\, .$$ In particular, these restricted equations are also the restriction to $\Delta_{\mathcal{P}}$ of equations (\ref{eq:2EDO}), for the network with adjacency matrix \begin{equation} \left( \begin{array}{c|c|c|c} 0_{11} & Q_{12} & \cdots &Q_{1p}\\ \hline \vdots& \vdots & \cdots &\vdots\\ \hline Q_{p1} & Q_{p2} & \cdots &0_{pp} \end{array} \right)\, . \label{eq:2zerodiagbf} \end{equation} \begin{exam} \normalfont Recall the three-cell network $G$ at the left of Figure~\ref{fig:odd3cell}. The standard partition $\mathcal{P} = \left\{P_1 = \{1\},\, P_2 = \{2,3\} \right\}$ is balanced and exo-balanced.
A coupled cell system of the form (\ref{eq:EDOsystem}) for $G$ where $f \in I_{G}$ or $f \in I_{G,0}$ takes the form $$ \left\{ \begin{array}{l} \dot{x}_1 = g(x_1) + h(x_1,x_2) + h(x_1, x_3)\\ \dot{x}_2 = g(x_2) + 2h(x_2,x_1)\\ \dot{x}_3 = g(x_3) + 2h(x_3,x_1) \end{array} \right. \, . $$ Restricting any such system to $\Delta_{\mathcal{P}} = \{x:\, x_2 = x_3\}$, we obtain $$ \left\{ \begin{array}{l} \dot{x}_1 = g(x_1) + 2h(x_1,x_2)\\ \dot{x}_2 = g(x_2) + 2h(x_2,x_1) \end{array} \right. \, . $$ This system is admissible for the quotient network $Q_2$ at the right of Figure~\ref{fig:odd3cell}. In fact, if $f \in I_{G,0}$, then this restricted system is in $I_{Q_{2},0} \subset I_{Q_{2}}$. The network $Q_2$ is a two-cell bidirectional ring network where the edges have weight two. \hfill $\Diamond$ \end{exam} \begin{exam} \normalfont Consider the four-cell network $G$ with weighted adjacency matrix $$ W_{G} = \left( \begin{array}{cccc} 0 & -3 & -1& -2 \\ -1 & 0 & -1& -1 \\ -3 & 0 & 0 & -1 \\ -1 & -1 & -1& 0 \end{array} \right) $$ and note that $\mathcal{P} = \left\{P_1 = \{1,2,4\},\, P_2 = \{3\} \right\}$ is a strict exo-balanced standard partition for $G$. An exo-input-additive coupled cell system for $G$, that is, in $I_{G,0}$, takes the form $$ \left\{ \begin{array}{l} \dot{x}_1 = g(x_1) -3 h(x_1,x_2) - h(x_1, x_3) - 2h(x_1, x_4)\\ \dot{x}_2 = g(x_2) - h(x_2,x_1) - h(x_2, x_3) - h(x_2, x_4)\\ \dot{x}_3 = g(x_3) - 3h(x_3,x_1) - h(x_3, x_4)\\ \dot{x}_4 = g(x_4) - h(x_4,x_1) - h(x_4, x_2) - h(x_4, x_3) \end{array} \right. \, . $$ Restricting any such system to $\Delta_{\mathcal{P}} = \{x:\, x_1=x_2 = x_4\}$, given that $h(u,u)=0$, we get the system $$ \left\{ \begin{array}{l} \dot{x}_1 = g(x_1) -h(x_1,x_3)\\ \dot{x}_3 = g(x_3) -4h(x_3,x_1) \end{array} \right. \, . 
$$ This system is admissible for the quotient network of $G$ by the exo-balanced partition $\mathcal{P}$ with adjacency matrix $ \left( \begin{array}{cc} 0 & -1 \\ -4 & 0 \end{array} \right) $ (recall Definition~\ref{def:quo:_exo}). \hfill $\Diamond$ \end{exam} \section{Odd-balanced partitions and anti-synchrony in the class of the odd-input-additive coupled cell systems} \label{sec_odd_bal} A non-standard generalized polydiagonal left invariant under the flow of every odd-input-additive coupled cell system admissible by a weighted network $G$ is an {\it anti-synchrony subspace} of $G$. We show next that these anti-synchrony subspaces of $G$ are the non-standard generalized polydiagonals associated with the odd-balanced tagged partitions of $G$. This result is an extension, to weighted coupled cell networks and input additive coupled cell systems, of Theorem 4.14 in Neuberger {\it et al.}~\cite{NSS19}. \begin{prop} \normalfont \label{prop:odd_iff} Let $G$ be a weighted network and $\mathcal{P}$ a tagged partition of its set of cells which is not standard. The tagged partition $\mathcal{P}$ is odd-balanced for $G$ if and only if the generalized polydiagonal $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,odd}$, for any given choice of total phase space $\left(\mbox{$\mathbb{R}$}^k\right)^n$. \end{prop} \begin{proof} Let $G$ be an $n$-cell weighted network with set of cells $C$ and adjacency matrix $W_G$. Consider a tagged partition $\mathcal{P}$ of $C$ formed by parts $P_1, P_2, \ldots, P_p$, counterparts $\overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q$ and zero part $P_0$. Consider an enumeration of the cells adapted to $\mathcal{P}$ so that $W_G$ has a block form (\ref{eq:oddbf}). 
Equation (\ref{eq:EDOsystem}) for the input additive coupled cell systems admissible by $G$ can be rewritten as {\tiny \begin{equation} \label{eq:EDOsystem_odd} \dot{x}_j=g(x_j) + \sum_{t=1}^q \left( \sum_{i \in P_t} {w_{ji}h\left(x_j,x_i\right)} + \sum_{i \in \overline{P}_t} {w_{ji}h\left(x_j,x_i\right)} \right) + \sum_{t=q+1}^p \left( \sum_{i \in P_t} w_{ji}h \left(x_j,x_i\right) \right) + \sum_{i \in P_0} {w_{ji}h\left(x_j,x_i\right)}, \end{equation}} for $j=1, \ldots, n$. For any given choice of total phase space $\left(\mbox{$\mathbb{R}$}^k\right)^n$, assume that the generalized polydiagonal subspace $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,odd}$. Take $k=1$ and $h(x,y) = x-y$ and note that, by Proposition~\ref{prop:gen_form_h} (ii), in $I_{G,odd}$ we have $h(u,u)=0$. Then, in the restriction to $\Delta_{\mathcal{P}}$, we have the following: \\ \noindent (i) For $j,k \in P_r$, for $r \in \{1, \ldots, p\}$, we have $\dot{x}_j = \dot{x}_k$. Thus, since $h(u,u)=0$, we have $ \sum_{i \in P_t} w_{ji} = \sum_{i \in P_t} w_{ki}$ and $\sum_{i \in \overline{P}_s} w_{ji} = \sum_{i \in \overline{P}_s} w_{ki}$, for $t \ne r$, $1\leq t \leq p$, $1\leq s \leq q$, and $ \sum_{i \in P_0} w_{ji} = \sum_{i \in P_0} w_{ki}$. That is, the block matrices, $Q_{ij}$ if $i \not= j$, $R_{ij}$ and $Z_{i0}$, in (\ref{eq:oddbf}), are regular.\\ \noindent (ii) Analogously, taking $j,k \in \overline{P}_r$, for $r \in \{1, \ldots, q\}$, we conclude that the block matrices, $\overline{Q}_{ij}$ if $i \not= j$, $\overline{R}_{ij}$ and $\overline{Z}_{i0}$, in (\ref{eq:oddbf}), are regular.\\ \noindent (iii) For $j \in P_r$ and $k \in \overline{P}_r$, for $r \in \{1, \ldots, q\}$, we have $\dot{x}_j = - \dot{x}_k$. Thus, since $h$ is odd, we have $\sum_{i \in P_t} w_{ji} = \sum_{i \in \overline{P}_t} w_{ki}$ and $\sum_{i \in \overline{P}_t} w_{ji} = \sum_{i \in P_t} w_{ki}$, for $t \ne r$.
That is, for $i,j \in \{1, \ldots, q \}$, we have ${\mathrm rs}\left( Q_{ij} \right) = {\mathrm rs}\left( \overline{Q}_{ij} \right)$ if $i \ne j$, and ${\mathrm rs}\left( R_{ij} \right) = {\mathrm rs}\left( \overline{R}_{ij} \right)$.\\ \noindent (iv) For $j \in P_0$, we have $\dot{x}_j = 0$. Thus, since $h$ is odd, we have $\sum_{i \in P_t} w_{ji} = \sum_{i \in \overline{P}_t} w_{ji}$, that is, ${\mathrm rs}\left(Z_{0t} \right) = {\mathrm rs}\left( \overline{Z}_{0t} \right)$ for $1\leq t \leq q$, and $\sum_{i \in P_t} w_{ji} = 0$, that is, ${\mathrm rs}\left(Z_{0t} \right) =0$ for $q+1 \leq t \leq p$. We conclude that, if the generalized polydiagonal subspace $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,odd}$, then the tagged partition $\mathcal{P}$ is odd-balanced for $G$. Now, assume that the tagged partition $\mathcal{P}$ is odd-balanced for $G$ and consider the input additive coupled cell systems admissible by $G$ in $I_{G,0}$. We can assume that the equations are in the form given in (\ref{eq:EDOsystem_odd}). We have the following: \\ \noindent Conditions (a)-(b) in Definition~\ref{def:odd} of odd-balanced partition impose the regularity of all the blocks except $Q_{ii}, \overline{Q}_{ii}, Z_{0j}, \overline{Z}_{0j}, Z_{00}$, and that each pair of blocks of the type $Q_{ij},\, \overline{Q}_{ij}$ with $i \ne j$, $R_{ij},\, \overline{R}_{ij}$, and $Z_{i0},\, \overline{Z}_{i0}$ is regular of the same valency. Since $h(u,u)=0$, these conditions imply that, given an initial condition in $\Delta_{\mathcal{P}}$, the equations for cells in the same part $P_r$ for $1\leq r \leq p$, or $\overline{P}_s$ for $1 \leq s \leq q$, are equal.
Moreover, using additionally that $g$ and $h$ are odd, they imply that the equations for cells in a part $P_s$ are symmetric to the equations for cells in its counterpart $\overline{P}_s$, for $1 \leq s \leq q$.\\ \noindent Conditions (c)-(d) in Definition~\ref{def:odd} of odd-balanced partition impose that the blocks $Z_{0j},\, \overline{Z}_{0j}$ satisfy ${\mathrm rs}(Z_{0j}) = {\mathrm rs}\left(\overline{Z}_{0j}\right)$, for $0 < j \leq q$, and ${\mathrm rs}(Z_{0j})=0$ for $q+1 \leq j \leq p$; again using that $g$ and $h$ are odd, they imply that, given an initial condition in $\Delta_{\mathcal{P}}$, the equations for cells in the part $P_0$ are null.\\ We conclude then that, if a tagged partition $\mathcal{P}$ is odd-balanced for $G$, then $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,odd}$. \end{proof} \begin{exam}\normalfont Returning to the network on the left of Figure~\ref{fig:odd3cell}, we have that equations (\ref{eq:EDOsystem}) for $G$ where $f \in I_{G,odd}$ take the form $$ \left\{ \begin{array}{l} \dot{x}_1 = g(x_1) + h(x_1,x_2) + h(x_1, x_3)\\ \dot{x}_2 = g(x_2) + 2h(x_2,x_1)\\ \dot{x}_3 = g(x_3) + 2h(x_3,x_1) \end{array} \right. $$ where $g,h$ are odd and $h(x,x) = 0$. In Example~\ref{ex:simples}, we have seen that the tagged partition $\mathcal{P} = \left\{[1] = P_1 = \{1\},\, -[1] = \overline{P}_1 = \{2,3\} \right\}$ is odd-balanced. Restricting any such system to the generalized polydiagonal $\Delta_{\mathcal{P}} = \{x:\, x_2 = -x_1,\, x_3 = -x_1\}$, we obtain $$ \dot{x}_1 = g(x_1) + 2h(x_1,-x_1)\, . $$ The symbolic network $Q_1$ at the center of Figure~\ref{fig:odd3cell}, as described in Definition~\ref{def:quo_odd}, represents this restricted system where the cell $-[1]$ represents the negative state of the cell $[1]$.
\hfill $\Diamond$ \end{exam} From Proposition~\ref{prop:odd_iff} and using the symbolic quotient defined in Definition~\ref{def:quo_odd} for an odd-balanced tagged partition, the following proposition follows: \begin{prop} Given an $n$-cell network $G$, an odd-balanced tagged partition $\mathcal{P}$ on the network set of cells, and an enumeration of cells adapted to $\mathcal{P}$ providing a block structure (\ref{eq:oddbf}) of the adjacency matrix $W_G$, we have that any coupled cell system in $I_{G,odd}$ restricted to the generalized polydiagonal $\Delta_{\mathcal{P}}$ is consistent with the symbolic quotient defined in Definition~\ref{def:quo_odd} where cells representing the classes $\overline{P}_i \equiv -P_i$ correspond to the negative states of the cells representing the classes $P_i$. Moreover, the cell representing the class $P_0$ corresponds to the zero state. More precisely, it has the following form. Denoting coordinates on $\Delta_{\mathcal{P}}$ by $(y_1, \ldots, y_p)$ where $y_j = x_k$ for (all) $ k \in P_j$, where $j=1, \ldots, p$, the restriction of (\ref{eq:EDOsystem}) to $\Delta_{\mathcal{P}}$ where $f \in I_{G,odd}$ is given by: \begin{equation} \dot{y}_j=g(y_j) +\sum_{i=1, i \not= j}^p q_{ji}h\left(y_j,y_i\right) + \sum_{i=1}^{q} r_{ji}h\left(y_j,-y_i\right) + z_{j0}h\left(y_j,0\right) \quad \left( j=1, \ldots, p\right)\, . \label{eq:oddrestEDO} \end{equation} Here, $q_{ji}$ (resp. $r_{ji}$) represents the valency of the regular matrix $Q_{ji}$ (resp. $R_{ji}$) and $z_{j0}$ the valency of the regular matrix $Z_{j0}$. \end{prop} \section{Linear-balanced partitions and anti-synchrony in the class of the linear-input-additive coupled cell systems}\label{sec_lin_bal} A non-standard generalized polydiagonal left invariant under the flow of every linear-input-additive coupled cell system admissible by a weighted network $G$ is an {\it anti-synchrony subspace} of $G$.
We show next that these anti-synchrony subspaces of $G$ are the non-standard generalized polydiagonals associated with the linear-balanced tagged partitions of $G$. This result is an extension, to weighted coupled cell networks and input additive coupled cell systems, of Theorem 4.21 in Neuberger {\it et al.}~\cite{NSS19}. Recall that in $I_{G,l}$, from Proposition~\ref{prop:gen_form_h} (iii), we have for $a,b \in \mbox{$\mathbb{R}$}^k$, $$ h(a,a) = 0,\quad h(a,-a) = 2 h(a,0),\quad h(a, \pm b) = h(a,0) \mp h(b,0)\, . $$ \begin{prop} \normalfont \label{prop:IGL} Let $G$ be a weighted network and $\mathcal{P}$ a tagged partition of its set of cells which is not standard. The tagged partition $\mathcal{P}$ is linear-balanced for $G$ if and only if the generalized polydiagonal $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,l}$, for any given choice of total phase space $\left(\mbox{$\mathbb{R}$}^k\right)^n$. \end{prop} \begin{proof} Let $G$ be an $n$-cell weighted network with set of cells $C$, adjacency matrix $W_G$ and Laplacian $L_G$. Consider a tagged partition $\mathcal{P}$ of $C$ with parts $P_1, P_2, \ldots, P_p,$ counterparts $\overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q$, zero part $P_0$ and the corresponding generalized polydiagonal $\Delta_{\mathcal{P}}$. Assume $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,l}$. In (\ref{eq:EDOsystem}), assume $k=1$. By Proposition~\ref{prop:reg_nreg_L_W}, the space $\Delta_{\mathcal{P}}$ is left invariant under $L_G$. By Definition~\ref{def:linear}, we have that $\mathcal{P}$ is linear-balanced. Assume now that the tagged partition $\mathcal{P}$ is linear-balanced for $G$ and consider an enumeration of the cells of $G$ adapted to $\mathcal{P}$ so that the adjacency matrix $W_G$ of $G$ has a block structure (\ref{eq:oddbf}).
By Definition~\ref{def:linear}, for $k=1$ the space $\Delta_{\mathcal{P}}$ is left invariant by the matrix $L_G = D_G - W_G$, which is equivalent to the entries of $W_G$ satisfying the conditions in Corollary~\ref{cor:mainLaplacian}. Consider an additive coupled cell system in $I_{G,l}$, with equations \begin{equation} {\tiny \begin{array}{rcl} \dot{x}_i & = & g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij}h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) + \displaystyle \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} {w_{ij}h\left(x_i,x_j\right)}, \end{array}} \label{eq:EDOsystem_Lag_lin} \end{equation} for $i=1,\ldots,n$, where $g$ is odd and $h$ is linear. Consider coordinates $\left(y_1, \ldots, y_p\right)$ in $\Delta_{\mathcal{P}}$ where: for $1 \leq t \leq q$, we take $y_t = x_j = -x_m$ for all $j \in P_t$ and $m \in \overline{P}_t$; for $q+1 \leq t \leq p$, we have $y_t = x_j$ for all $j \in P_t$; also, $x_j = 0$ for all $j \in P_0$. We then have $h(y_t,y_t) = 0$ for all $1 \leq t \leq p$ and $h(y_t,-y_t) = 2 h(y_t,0)$ for $1 \leq t \leq q$; also, if $l\not= t$, we have $h(y_l,\pm y_t) = h(y_l,0) \mp h(y_t,0)$ and $h(-y_l,\pm y_t) = - h(y_l,0) \mp h(y_t,0)$.
In (\ref{eq:EDOsystem_Lag_lin}), if $i \in P_l$ for $1\leq l \leq q$ and $x \in \Delta_{\mathcal{P}}$, using conditions (i)-(ii) in Corollary~\ref{cor:mainLaplacian} and corresponding notation, we obtain:\\ $ {\tiny \begin{array}{rcl} & & g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij}h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) + \displaystyle \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} {w_{ij}h\left(x_i,x_j\right)} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1, t\not= l}^q \left( h\left(y_l,y_t\right) \sum_{j \in P_t} w_{ij}+ h\left(y_l,-y_t\right) \sum_{m \in \overline{P}_t} w_{im} \right) \\ & & \\ & & \quad \displaystyle + h\left(y_l,y_l\right) \sum_{j \in P_l} w_{ij} + h\left(y_l,-y_l\right) \sum_{m \in \overline{P}_l} w_{im} + \displaystyle \sum_{t=q+1}^p h(y_l, y_t) \left( \sum_{j \in P_t} w_{ij} \right) + h\left(y_l,0\right) \sum_{j \in P_0} w_{ij} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1, t\not=l}^q \left( \left( h(y_l,0) - h(y_t,0)\right) \sum_{j \in P_t} w_{ij} + \left( h(y_l,0) + h(y_t,0) \right) \sum_{m \in \overline{P}_t} w_{im} \right) \\ & & \\ & & \quad + \displaystyle 2 h\left(y_l,0\right) \sum_{m \in \overline{P}_l} w_{im} + \displaystyle \sum_{t=q+1}^p \left( h(y_l,0) - h(y_t,0)\right) \left( \sum_{j \in P_t} w_{ij} \right) + h\left(y_l,0\right) \sum_{j \in P_0} w_{ij} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + h(y_l,0) \displaystyle \left( \sum_{t=1, t\not=l}^q \left( \sum_{j \in P_t} w_{ij} + \sum_{m \in \overline{P}_t} w_{im}\right) + 2 \sum_{m \in \overline{P}_l} w_{im} \displaystyle + \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} \right) + \sum_{j \in P_0} w_{ij} \right) \\ & & \\ & & \quad + \displaystyle \sum_{t=1, t\not=l}^q h(y_t,0) \left( - \sum_{j \in P_t} w_{ij} + \sum_{m \in \overline{P}_t} w_{im} \right) + \displaystyle 
\sum_{t=q+1}^p \left( h(y_t,0)\right) \left( - \sum_{j \in P_t} w_{ij} \right) \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + h(y_l,0) \displaystyle \left( \sum_{t=1, t\not= l}^{q} \left[ {\mathrm rs}\left(Q_{lt}\right) + {\mathrm rs}\left(R_{lt}\right) \right] + 2 {\mathrm rs}\left(R_{ll}\right) + \sum_{t=q+1}^{p} {\mathrm rs}\left(Q_{lt}\right) + {\mathrm rs}\left( Z_{l0} \right) \right)_i\\ & & \\ & & \quad + \displaystyle \sum_{t=1, t\not=l}^q h(y_t,0) \left(- {\mathrm rs}(Q_{lt}) + {\mathrm rs}(R_{lt}) \right)_i + \displaystyle \sum_{t=q+1}^p h(y_t,0) \left(- {\mathrm rs}(Q_{lt})\right)_i \\ & & \\ & = & g(y_l) + h(y_l,0) r_l + \displaystyle \sum_{t=1, t\not= l}^p h(y_t,0) q_{lt} \, . \end{array}} $\\ Recall that, for $1\leq l \leq q$, the column matrices $- {\mathrm rs}(Q_{lt}) + {\mathrm rs}(R_{lt})$ for $t=1, \ldots, q$, $t\not= l$ and $-{\mathrm rs}(Q_{lt})$, for $t=q+1, \ldots, p$ are regular of valency $q_{lt}$. Also, $\sum_{t=1, t \not= l}^{p} {\mathrm rs}\left(Q_{lt}\right) + \sum_{t=1, t\not= l}^{q} {\mathrm rs}\left(R_{lt}\right) + 2 {\mathrm rs}\left(R_{ll}\right) + {\mathrm rs}\left( Z_{l0} \right)$ is regular of valency $r_l$. Similarly, in (\ref{eq:EDOsystem_Lag_lin}), if $i \in \overline{P}_l$ for $1\leq l \leq q$ and $x \in \Delta_{\mathcal{P}}$, using conditions (i)-(ii) in Corollary~\ref{cor:mainLaplacian} and corresponding notation, we obtain:\\ $ {\tiny \begin{array}{l} g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij}h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) + \displaystyle \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} {w_{ij}h\left(x_i,x_j\right)}\\ \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & \displaystyle -g(y_l) - h(y_l,0) r_l - \sum_{t=1, t\not= l}^p h(y_t,0) q_{lt}\, . 
\end{array}} $ In (\ref{eq:EDOsystem_Lag_lin}), if $i \in P_l$ for $ l > q$ and $x \in \Delta_{\mathcal{P}}$, using conditions (iii)-(iv) in Corollary~\ref{cor:mainLaplacian} and corresponding notation, we obtain:\\ $ {\tiny \begin{array}{l} g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij}h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) + \displaystyle \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} {w_{ij}h\left(x_i,x_j\right)} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1}^q \left( h\left(y_l,y_t\right) \sum_{j \in P_t} w_{ij}+ h\left(y_l,-y_t\right) \sum_{m \in \overline{P}_t} w_{im} \right) \\ & & \\ & & \quad \displaystyle + h\left(y_l,y_l\right) \sum_{j \in P_l} w_{ij} + \displaystyle \sum_{t=q+1, t\not=l}^p h(y_l, y_t) \left( \sum_{j \in P_t} w_{ij} \right) + h\left(y_l,0\right) \sum_{j \in P_0} w_{ij} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1}^q \left( \left( h(y_l,0) - h(y_t,0)\right) \sum_{j \in P_t} w_{ij} + \left( h(y_l,0) + h(y_t,0) \right) \sum_{m \in \overline{P}_t} w_{im} \right) \\ & & \\ & & \quad + \displaystyle \sum_{t=q+1, t\not= l}^p \left( h(y_l,0) - h(y_t,0)\right) \left( \sum_{j \in P_t} w_{ij} \right) + h\left(y_l,0\right) \sum_{j \in P_0} w_{ij} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + h(y_l,0) \displaystyle \left( \sum_{t=1}^q \left( \sum_{j \in P_t} w_{ij} + \sum_{m \in \overline{P}_t} w_{im}\right) \displaystyle + \sum_{t=q+1, t\not=l}^p \left( \sum_{j \in P_t} w_{ij} \right) + \sum_{j \in P_0} w_{ij} \right) \\ & & \\ & & \quad + \displaystyle \sum_{t=1}^q h(y_t,0) \left( - \sum_{j \in P_t} w_{ij} + \sum_{m \in \overline{P}_t} w_{im} \right) + \displaystyle \sum_{t=q+1, t\not=l}^p \left( h(y_t,0)\right) \left( - \sum_{j \in P_t} w_{ij} \right) \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + h(y_l,0) \displaystyle \left( 
\sum_{t=1}^{q} \left[ {\mathrm rs}\left(Q_{lt}\right) + {\mathrm rs}\left(R_{lt}\right) \right] + \sum_{t=q+1, t\not=l}^{p} {\mathrm rs}\left(Q_{lt}\right) + {\mathrm rs}\left( Z_{l0} \right) \right)_i\\ & & \\ & & \quad + \displaystyle \sum_{t=1}^q h(y_t,0) \left(- {\mathrm rs}(Q_{lt}) + {\mathrm rs}(R_{lt}) \right)_i + \displaystyle \sum_{t=q+1, t\not=l}^p h(y_t,0) \left(- {\mathrm rs}(Q_{lt})\right)_i \\ & & \\ & = & g(y_l) + h(y_l,0) r_l + \displaystyle \sum_{t=1, t\not= l}^p h(y_t,0) q_{lt} \, . \end{array}} $\\ Recall that, for $p \geq l >q$, the column matrices $- {\mathrm rs}(Q_{lt}) + {\mathrm rs}(R_{lt})$ for $t=1, \ldots, q$, and $-{\mathrm rs}(Q_{lt})$, for $t=q+1, \ldots, p$, where $t \not= l$, are regular of valency $q_{lt}$. Also, $\sum_{t=1, t \not= l}^{p} {\mathrm rs}\left(Q_{lt}\right) + \sum_{t=1}^{q} {\mathrm rs}\left(R_{lt}\right) + {\mathrm rs}\left( Z_{l0} \right)$ is regular of valency $r_l$. In (\ref{eq:EDOsystem_Lag_lin}), if $i \in P_0$ and $x \in \Delta_{\mathcal{P}}$, using conditions (v)-(vi) in Corollary~\ref{cor:mainLaplacian} and corresponding notation, recalling that $h(0,0)=0$ and $g(0) =0$, we obtain:\\ $ {\tiny \begin{array}{rcl} & & g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij} h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) \displaystyle + \sum_{t=q+1}^p \left(\sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} w_{ij} h\left(x_i,x_j\right) \end{array}} $ \noindent $ {\tiny \begin{array}{rcl} & = & \displaystyle \sum_{t=1}^q \left( h\left(0,y_t\right) \sum_{j \in P_t} w_{ij}+ h\left(0,-y_t\right) \sum_{m \in \overline{P}_t} w_{im} \right) \displaystyle + \sum_{t=q+1}^p h\left(0,y_t\right) \left(\sum_{j \in P_t} w_{ij} \right) \end{array} } $\\ $ {\tiny \begin{array}{rcl} & = & \displaystyle \sum_{t=1}^q h\left(0,y_t\right) \left( \sum_{j \in P_t} w_{ij}- \sum_{m \in \overline{P}_t} w_{im} \right) + \displaystyle \sum_{t=q+1}^p
h\left(0,y_t\right) \left( \sum_{j \in P_t} w_{ij} \right)\\ \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & \displaystyle \sum_{t=1}^q h\left(0,y_t\right) \left( {\mathrm rs}(Z_{0t}) - {\mathrm rs}\left(\overline{Z}_{0t}\right)\right)_i + \sum_{t=q+1}^p h\left(0,y_t\right) \left( {\mathrm rs}(Z_{0t}) \right)_i = 0\, . \end{array}} $ We conclude that $\Delta_{\mathcal{P}}$ is invariant under the flow of the additive coupled cell system with equations (\ref{eq:EDOsystem_Lag_lin}). That is, $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,l}$. \end{proof} From Proposition~\ref{prop:IGL} and using the notation of the symbolic quotient of $G$ by a linear-balanced tagged partition determined by the matrix (\ref{eq:quolin}) in Definition~\ref{def:quo_odd}, the following proposition follows: \begin{prop} Let $G$ be an $n$-cell network, $\mathcal{P}$ a linear-balanced tagged partition on the set of cells of $G$ with parts $P_1, \ldots, P_p$, counterparts $\overline{P}_1, \ldots, \overline{P}_q$ and zero part $P_0$, and consider an enumeration of the set of cells adapted to $\mathcal{P}$ providing a block structure (\ref{eq:oddbf}) of the adjacency matrix $W_G$. Consider the symbolic quotient of $G$ by $\mathcal{P}$ determined by the matrix (\ref{eq:quolin}) in Definition~\ref{def:quo_odd}. Denoting coordinates on $\Delta_{\mathcal{P}}$ by $(y_1, \ldots, y_p)$ where $y_i = x_k$ for (all) $ k \in P_i$, the restriction of (\ref{eq:EDOsystem}) to $\Delta_{\mathcal{P}}$ where $f \in I_{G,l}$ is given by: \begin{equation} \dot{y}_i=g(y_i) +\sum_{j=1, j \not= i}^p q_{ij}h\left(y_j,0\right) + r_{i}h\left(y_i,0\right) \quad \left( i=1,\ldots, p\right)\, . \label{eq:linrestEDO} \end{equation} \end{prop} \begin{exams} \normalfont Consider the isomorphic six-cell networks in Figure~\ref{f:2ndsixSwift} and the linear-balanced partitions of Examples~\ref{exs:linear}.
For the linear-balanced partition of the network set of cells in Examples~\ref{exs:linear}~(i) with parts $P_1 =\{1,2\}, \, P_2 = \{ 3\}$ and counterparts $\overline{P}_1 =\{4,5\}, \, \overline{P}_2 = \{ 6\}$, we have that any coupled cell system in $I_{G,l}$ restricted to $$ \Delta_{\mathcal{P}} = \{ (y_1,y_1, y_2, -y_1, -y_1, -y_2):\, y_1, y_2 \in \mbox{$\mathbb{R}$}^k\} $$ has the following form: \begin{equation} \left\{ \begin{array}{l} \dot{y}_1 =g(y_1) + 2 h(y_1,0)\\ \\ \dot{y}_2 =g(y_2) + 2 h(y_2,0) \end{array}\, . \right. \label{eq:linexemplo} \end{equation} \\ Consider now the linear-balanced partition of the network set of cells in Examples~\ref{exs:linear}~(ii) with one part $P_1 =\{1,2\}$, one counterpart $ \overline{P}_1 = \{ 3, 4\}$ and the zero part $P_0 =\{5,6\}$. We have that any coupled cell system in $I_{G,l}$ restricted to $$ \Delta_{\mathcal{P}} = \{ (y_1,y_1,- y_1, -y_1, 0,0):\, y_1 \in \mbox{$\mathbb{R}$}^k\} $$ has the following form: \begin{equation} \begin{array}{l} \dot{y}_1 =g(y_1) + 2 h(y_1,0)\, . \end{array} \label{eq:2linexemplo} \end{equation} \hfill $\Diamond$ \end{exams} \section{Even-odd-balanced partitions and anti-synchrony in the class of the even-odd-input-additive coupled cell systems}\label{sec_eo_bal} A non-standard generalized polydiagonal left invariant under the flow of every even-odd-input-additive coupled cell system admissible by a weighted network $G$ is an {\it anti-synchrony subspace} of $G$. We show next that these anti-synchrony subspaces of $G$ are the non-standard generalized polydiagonals associated with the even-odd-balanced partitions of $G$. Recall that in $I_{G,eo}$, $g$ is odd and the coupling function $h(x,y)$ is even in $x$ and odd in $y$. It follows in particular that $h(x,0)=0$ for all $x$. Also, we have that $h(-x,-y) = h(x,-y) = -h(x,y) = -h(-x,y)$ for all $x,y$.
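These identities can be checked on a concrete system. The following minimal Python sketch (assuming \texttt{numpy}; the choices $g(x) = x - x^3$, odd, and $h(x,y) = x^2 \sin y$, even in $x$ and odd in $y$, are sample functions, not taken from the text) integrates an even-odd-input-additive system on the three-cell network on the left of Figure~\ref{fig:odd3cell}; for that network the generalized polydiagonal $x_2 = x_3 = -x_1$ is invariant under $W_G$, so it should be preserved by the flow:

```python
import numpy as np

# Hypothetical sample functions: g odd, h even in x and odd in y.
def g(x):
    return x - x**3

def h(x, y):
    return x**2 * np.sin(y)

# Three-cell network of Figure fig:odd3cell (weights read off its equations):
# cell 1 receives inputs of weight 1 from cells 2 and 3,
# cells 2 and 3 each receive an input of weight 2 from cell 1.
def f(x):
    x1, x2, x3 = x
    return np.array([
        g(x1) + h(x1, x2) + h(x1, x3),
        g(x2) + 2 * h(x2, x1),
        g(x3) + 2 * h(x3, x1),
    ])

def rk4_step(x, dt):
    # Classical fourth-order Runge-Kutta step.
    k1 = f(x)
    k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start on the anti-synchrony subspace x2 = x3 = -x1 and integrate.
x = np.array([0.7, -0.7, -0.7])
for _ in range(2000):
    x = rk4_step(x, 0.01)

# Deviations from the anti-synchrony pattern; both stay at round-off level.
print(abs(x[1] + x[0]), abs(x[2] + x[0]))
```

Up to round-off, the printed deviations remain zero, illustrating that the anti-synchrony pattern $x_2 = x_3 = -x_1$ is preserved under the flow.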
\begin{prop} \normalfont \label{prop:eoia} Let $G$ be a weighted network and $\mathcal{P}$ a tagged partition of its set of cells which is not standard. The partition $\mathcal{P}$ is even-odd-balanced for $G$ if and only if the generalized polydiagonal $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,eo}$, for any given choice of total phase space $\left(\mbox{$\mathbb{R}$}^k\right)^n$. \end{prop} \begin{proof} Let $G$ be an $n$-cell weighted network with set of cells $C$ and adjacency matrix $W_G$. Consider a tagged partition $\mathcal{P}$ of $C$ with parts $P_1, P_2, \ldots, P_p,$ counterparts $\overline{P}_1, \overline{P}_2, \ldots, \overline{P}_q$, zero part $P_0$ and the corresponding generalized polydiagonal $\Delta_{\mathcal{P}}$. Assume $\Delta_{\mathcal{P}}$ is left invariant under the flow of every system in $I_{G,eo}$. In particular, by Proposition~\ref{prop:reg_nreg_L_W}, the coupled cell system $\dot{x} = W_G \, x$, where $x \in \mbox{$\mathbb{R}$}^n$, is even-odd-input-additive. Thus, the space $\Delta_{\mathcal{P}}$ is left invariant under $W_G$. By Definition~\ref{def:linear}, we have that $\mathcal{P}$ is even-odd-balanced. Assume now that the partition $\mathcal{P}$ is even-odd-balanced for $G$ and consider an enumeration of the cells of $G$ adapted to $\mathcal{P}$ so that the adjacency matrix $W_G$ of $G$ has a block structure (\ref{eq:oddbf}). By Definition~\ref{def:linear}, for $k=1$ the space $\Delta_{\mathcal{P}}$ is left invariant by the matrix $W_G$, which is equivalent to the entries of $W_G$ satisfying the conditions in Proposition~\ref{thm:mainLaplacian}.
Consider an additive coupled cell system in $I_{G,eo}$, with equations \begin{equation} {\tiny \begin{array}{rcl} \dot{x}_i & = & g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij}h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) + \displaystyle \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} {w_{ij}h\left(x_i,x_j\right)}, \end{array}} \label{eq:EDOsystem_Lag_eo} \end{equation} for $i=1,\ldots,n$, where $g$ is odd and $h$ is even in $x$ and odd in $y$. Consider coordinates $\left(y_1, \ldots, y_p\right)$ in $\Delta_{\mathcal{P}}$ where: for $1 \leq t \leq q$, we take $y_t = x_j = -x_m$ for all $j \in P_t$ and $m \in \overline{P}_t$; for $q+1 \leq t \leq p$, we have $y_t = x_j$ for all $j \in P_t$; also, $x_j = 0$ for all $j \in P_0$. We have so $h(y_t,0) = 0$. The proof now follows as in the proof of Proposition~\ref{prop:IGL}. As an example, note that in (\ref{eq:EDOsystem_Lag_eo}), if $i \in P_l$ for $1\leq l \leq q$ and $x \in \Delta_{\mathcal{P}}$, we obtain:\\ $ {\tiny \begin{array}{rcl} & & g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij}h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) + \displaystyle \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} {w_{ij}h\left(x_i,x_j\right)} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1}^q \left( h\left(y_l,y_t\right) \sum_{j \in P_t} w_{ij}+ h\left(y_l,-y_t\right) \sum_{m \in \overline{P}_t} w_{im} \right) + \displaystyle \sum_{t=q+1}^p \left( h(y_l, y_t) \sum_{j \in P_t} w_{ij} \right) + h\left(y_l,0\right) \sum_{j \in P_0} w_{ij} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1}^q \left( h\left(y_l,y_t\right) \sum_{j \in P_t} w_{ij} - h\left(y_l,y_t\right) \sum_{m \in \overline{P}_t} w_{im} \right) + \displaystyle \sum_{t=q+1}^p \left( 
h(y_l, y_t) \ \sum_{j \in P_t} w_{ij} \right) \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1}^q h\left(y_l,y_t\right) \left( \sum_{j \in P_t} w_{ij} - \sum_{m \in \overline{P}_t} w_{im} \right) + \displaystyle \sum_{t=q+1}^p \left( h(y_l, y_t) \sum_{j \in P_t} w_{ij} \right) \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & g(y_l) + \displaystyle \sum_{t=1}^q h\left(y_l,y_t\right) \left( {\mathrm rs}\left(Q_{lt}\right) - {\mathrm rs}\left(R_{lt}\right) \right)_i + \displaystyle \sum_{t=q+1}^p h(y_l, y_t) \left( {\mathrm rs}\left(Q_{lt}\right) \right)_i \end{array}}\\ $ Similarly, in (\ref{eq:EDOsystem_Lag_eo}), if $i \in \overline{P}_l$ for $1\leq l \leq q$ and $x \in \Delta_{\mathcal{P}}$, we obtain:\\ $ {\tiny \begin{array}{rcl} & & g(x_i) + \displaystyle \sum_{t=1}^q \left( \sum_{j \in P_t} {w_{ij}h\left(x_i,x_j\right)} + \sum_{m \in \overline{P}_t} {w_{im}h\left(x_i,x_m\right)} \right) + \displaystyle \sum_{t=q+1}^p \left( \sum_{j \in P_t} w_{ij} h\left(x_i,x_j\right)\right) + \sum_{j \in P_0} {w_{ij}h\left(x_i,x_j\right)} \end{array}} $\\ $ {\tiny \begin{array}{rcl} & = & -g(y_l) - \displaystyle \sum_{t=1}^q h\left(y_l,y_t\right) \left( {\mathrm rs}\left(\overline{Q}_{lt}\right) - {\mathrm rs}\left(\overline{R}_{lt}\right) \right)_i - \displaystyle \sum_{t=q+1}^p h(y_l, y_t) \left( -{\mathrm rs}\left(\overline{R}_{lt}\right) \right)_i \end{array}} $\\ The rest of the proof follows in a similar way as in the proof of Proposition \ref{prop:IGL} using the conditions in Proposition~\ref{thm:mainLaplacian} for $W_G$. \end{proof} The following proposition describes the restrictions of coupled cell systems which are even-odd-input-additive to anti-synchrony spaces. 
\begin{prop} Let $G$ be an $n$-cell network, $\mathcal{P}$ an even-odd-balanced partition on the set of cells of $G$ which is not standard, with parts $P_1, \ldots, P_p$, counterparts $\overline{P}_1, \ldots, \overline{P}_q$ and zero part $P_0$, and consider an enumeration of the set of cells adapted to $\mathcal{P}$ providing a block structure (\ref{eq:oddbf}) of the adjacency matrix $W_G$. Consider the symbolic quotient of $G$ by $\mathcal{P}$ determined by the matrix (\ref{eq:quoeo}) in Definition~\ref{def:quo_odd}. Denoting coordinates on $\Delta_{\mathcal{P}}$ by $(y_1, \ldots, y_p)$ where $y_i = x_k$ for (all) $ k \in P_i$, the restriction of (\ref{eq:EDOsystem}) to $\Delta_{\mathcal{P}}$ where $f \in I_{G,eo}$ is given by: \begin{equation} \dot{y}_i=g(y_i) +\sum_{j=1}^p q_{ij}h\left(y_i,y_j\right) \quad \left( i=1,\ldots, p\right)\, . \label{eq:eorestEDO} \end{equation} \end{prop} \section{The set of synchrony and anti-synchrony subspaces of a weighted network} \label{sec:algm} In \cite{AD18}, Aguiar and Dias extend previous results on the coupled cell networks formalism of Golubitsky, Stewart and collaborators to the setup of weighted coupled cell networks considering input additive coupled cell systems. Some of those results concern the polydiagonal subspaces of the network phase space, assuming one-dimensional cell dynamics, that are left invariant by the network weighted adjacency matrix. These correspond to the polydiagonal subspaces that are flow invariant by all the input additive coupled cell systems that are admissible by the network structure. That is, they correspond to the synchrony subspaces of the weighted network that are given by the balanced partitions of the network set of cells.
In \cite{AD18}, taking the results in Stewart~\cite{S07}, Aguiar and Dias conclude that the set of the synchrony subspaces of a weighted coupled cell network, in one-to-one correspondence with the balanced partitions of the set of cells of the network, is a lattice with the partial order given by inclusion and the meet operation given by intersection. Moreover, they conclude that both the characterization and the algorithm obtained in Aguiar and Dias~\cite{AD14}, where the lattice of synchrony subspaces of a network can be obtained using the eigenvalue and eigenvector structure of its adjacency matrix, apply to the weighted setup. Here, we enlarge the set of the polydiagonal subspaces by considering the generalized polydiagonal subspaces that are left invariant by the adjacency matrix and/or the Laplacian matrix of a network. The {\it synchrony subspaces} of a network correspond to the polydiagonal subspaces that are given by the exo-balanced partitions on the network set of cells. These are flow invariant by the exo-input-additive coupled cell systems. The subset of the synchrony subspaces that are given by the balanced partitions are flow invariant by the larger space of the input-additive coupled cell systems. The {\it anti-synchrony subspaces} of a network correspond to the generalized polydiagonal subspaces that are given by the linear-balanced and even-odd-balanced partitions on the network set of cells. These are flow invariant by the linear-input-additive and even-odd-input-additive coupled cell systems, respectively. Recall that the linear-input-additive coupled cell systems are the coupled cell systems with input additive structure where the internal function $g$ is odd and the coupling function $h$ is linear (and odd) and satisfies $h(x,x) \equiv 0$. 
The even-odd-input-additive coupled cell systems are the coupled cell systems with input additive structure where the internal function $g$ is odd and the coupling function $h$ is even in the first variable and odd in the second variable. The subset of the anti-synchrony subspaces that are given by the odd-balanced partitions are flow invariant by the space of the odd-input-additive coupled cell systems. Recall that the space of the odd-input-additive coupled cell systems is larger than the space of the linear-input-additive coupled cell systems as the coupling function does not have to be necessarily linear. \begin{Def}\normalfont Given an $n$-cell weighted network $G$, denote by ${\mathcal L}_{W_G}$ and by ${\mathcal L}_{L_G}$, the set of the generalized polydiagonal subspaces that are left invariant by the adjacency matrix $W_G$ and the Laplacian matrix $L_G$ of $G$, respectively. \hfill $\Diamond$ \end{Def} From the results in the previous sections, ${\mathcal L}_{W_G} \cup {\mathcal L}_{L_G}$ corresponds to the set of the synchrony and anti-synchrony subspaces of a network $G$. Moreover, we have the following result. \begin{thm} \label{thm:final} Let $G$ be a weighted network and consider the set ${\mathcal L}_{W_G} \cup {\mathcal L}_{L_G}$ of the synchrony and anti-synchrony subspaces of $G$. Let $\Delta_{\footnotesize{\mathcal{P}}}$ be a subspace in ${\mathcal L}_{W_G} \cup {\mathcal L}_{L_G}$ and $\mathcal{P}$ the associated tagged partition. We have that, \\ (i) $\Delta_{\footnotesize{\mathcal{P}}}$ is a synchrony subspace for every system in \\ \hspace{0.1in} (i.a) $I_G$ or $I_{G,eo}$ if and only if $\mathcal{P}$ is balanced; \\ \hspace{0.1in}(i.b) $I_{G,0}$, $I_{G,odd}$, or $I_{G,l}$ if and only if $\mathcal{P}$ is exo-balanced.
\\ (ii) $\Delta_{\footnotesize{\mathcal{P}}}$ is an anti-synchrony subspace for every system in \\ (ii.a) $I_{G,odd}$ if and only if $\mathcal{P}$ is odd-balanced; \\ (ii.b) $I_{G,l}$ if and only if $\mathcal{P}$ is linear-balanced; \\ (ii.c) $I_{G,eo}$ if and only if $\mathcal{P}$ is even-odd-balanced. \end{thm} \begin{rem} (i) Given an $n$-cell weighted network $G$, we have that ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ are lattices with the partial order and meet operations given by the inclusion and intersection, respectively, as happens for the lattice of polydiagonal subspaces that are left invariant by the adjacency matrix of a network (synchrony subspaces given by the balanced standard partitions). In fact, observe that the intersection of two synchrony subspaces is a synchrony subspace, and the intersection of a synchrony subspace with an anti-synchrony subspace, or the intersection of two anti-synchrony subspaces, is an anti-synchrony subspace. \\ (ii) The set of subspaces invariant under a linear map forms a complete lattice under the relation of inclusion. Moreover, this lattice can be described using the Jordan subspaces, the irreducible invariant subspaces having a unique eigenvector (up to multiplication by a scalar). In this lattice the meet operation is the intersection and the join operation is the sum. We can apply this to the set of all the spaces that are invariant under the network adjacency matrix and the network Laplacian matrix, to conclude that they are complete lattices where the meet operation is the intersection and the join operation is the sum, and both can be obtained from the corresponding Jordan subspaces. \\ (iii) As the Laplacian matrix $L_G$ is regular, we have that the one-dimensional diagonal space where all cell coordinates are equal belongs to ${\mathcal L}_{L_G}$. However, the bottom of ${\mathcal L}_{L_G}$ is the zero space as it is always invariant under the Laplacian matrix.
The same holds for ${\mathcal L}_{W_G}$. Equivalently, any linear-input-additive coupled cell system and any even-odd-input-additive coupled cell system has the zero equilibrium.\\ (iv) As mentioned in (ii), the join operation for the lattice of the subspaces which are left invariant under the network adjacency matrix or Laplacian matrix is given by the usual sum. However, analogously to what happens for synchrony subspaces, the sum of two synchrony or anti-synchrony subspaces may not be a generalized polydiagonal subspace and so the join operation for the lattices ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ is not the sum. Moreover, there is no explicit way of describing the join of two synchrony or anti-synchrony subspaces. Thus both lattices ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ are subsets of the lattices of the invariant subspaces under the network adjacency matrix and the network Laplacian matrix, respectively, but are not sublattices. \hfill $\Diamond$ \end{rem} From Proposition~\ref{prop:subset}, when $G$ is a regular network, ${\mathcal L}_{W_G} = {\mathcal L}_{L_G}$. If $G$ is not regular then, in general, ${\mathcal L}_{W_G} \not= {\mathcal L}_{L_G}$; moreover, neither of ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ is included in the other. The work in Aguiar and Dias~\cite{AD14} extends in a natural way, by considering generalized polydiagonal subspaces and using the eigenvalue and eigenvector structures of the adjacency matrix $W_G$ and the Laplacian matrix $L_G$, to obtain the lattices ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ and, thus, to obtain the set of the synchrony and anti-synchrony subspaces of a network $G$.
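As a minimal computational sketch of this eigenstructure-based approach (in Python with \texttt{numpy}; the encoding of a tagged partition by an integer vector, with $0$ for the zero part and $\pm k$ for a part/counterpart pair $P_k$, $\overline{P}_k$, is an illustrative choice and not notation from the text), one can test whether a given generalized polydiagonal is left invariant under $W_G$ or $L_G$ by a rank computation; the matrices below are those of the four-cell network of Example~\ref{ex:qutro}:

```python
import numpy as np

# Adjacency and Laplacian matrices of the four-cell network of Example ex:qutro.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W  # L_G = D_G - W_G

def basis(tags):
    """Basis of the generalized polydiagonal encoded by integer tags:
    tags[i] = 0 forces x_i = 0, tags[i] = +k / -k forces x_i = +y_k / -y_k."""
    ks = sorted({abs(t) for t in tags if t != 0})
    B = np.zeros((len(tags), len(ks)))
    for i, t in enumerate(tags):
        if t != 0:
            B[i, ks.index(abs(t))] = np.sign(t)
    return B

def invariant(M, B):
    """M maps col(B) into col(B)  <=>  rank([B | M B]) == rank(B)."""
    return np.linalg.matrix_rank(np.hstack([B, M @ B])) == np.linalg.matrix_rank(B)

# Delta_P1 = {x1 = -x2, x3 = x4 = 0} and Delta_P2 = {x1 = x2, x3 = -x4}
# are left invariant under W_G, hence anti-synchrony subspaces:
print(invariant(W, basis((1, -1, 0, 0))))   # True
print(invariant(W, basis((1, 1, 2, -2))))   # True
# The full diagonal x1 = x2 = x3 = x4 is invariant under L_G but not under W_G:
print(invariant(L, basis((1, 1, 1, 1))))    # True
print(invariant(W, basis((1, 1, 1, 1))))    # False
```

The same rank test, applied over an enumeration of all tagged partitions, recovers the lattices ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ of a small network by brute force, which is a useful cross-check of the eigenvector-based algorithm.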
Although the lattices' join operation is not given by the sum, as in \cite{AD14}, we can conclude that, for each lattice, there is a subset of synchrony and anti-synchrony subspaces, called {\it minimal}, with the property that every remaining synchrony or anti-synchrony subspace in the lattices ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ is a sum of subspaces in that minimal subset. Each minimal synchrony or anti-synchrony subspace of ${\mathcal L}_{W_G}$ (${\mathcal L}_{L_G}$) is associated to an eigenvector or a set of generalized eigenvectors of $W_G$ ($L_G$). We have then that the Algorithm 6.5 in \cite{AD14}, for networks with only one edge-type, can be easily adapted to find the lattices ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ for a weighted network $G$ and, thus, to find the set of synchrony and anti-synchrony subspaces of $G$. In fact, the only step of the algorithm which needs adaptation is the first one where, for each eigenvalue $\lambda$ of the matrix, the table that is constructed, besides the polydiagonal subspaces, must also contain all the generalized polydiagonal subspaces, for the eigenvectors and Jordan chains in the generalized eigenspace for $\lambda$. This step relies on Lemma 6.1 in \cite{AD14}, which generalizes easily for the case where, besides conditions of the form $x_{l_1} = x_{l_2}$, we have also equalities of the form $x_{l_1} = -x_{l_2}$ or $x_{l_1} = 0$. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=1.5cm, thick,node/.style={circle,draw}] \node[node] (1) at (-6cm, 4cm) [fill=white] {$1$}; \node[node] (2) at (-3cm, 4cm) [fill=white] {$2$}; \node[node] (3) at (-6cm, 2.5cm) [fill=white] {$3$}; \node[node] (4) at (-3cm, 2.5cm) [fill=white] {$4$}; \path (1) edge node {} (2) (2) edge node {} (1) (3) edge node {} (4) (4) edge node {} (3) (3) edge node {} (1) (3) edge node {} (2); \end{tikzpicture} \end{center} \caption{A four-cell network.} \label{contra_exemplo} \end{figure} \begin{exam} \label{ex:qutro} Consider the four-cell network $G$ in Figure~\ref{contra_exemplo}. Following the exposition above, we compute the lattices ${\mathcal L}_{W_G}$ and ${\mathcal L}_{L_G}$ of $G$ using the adaptation of Algorithm 6.5 in Aguiar and Dias~\cite{AD14} described above. We have that $$ W_{G} = \left( \begin{array}{cccc} 0 & 1 & 1 & 0 \\ 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) \quad \mbox{ and } \quad L_{G} = \left( \begin{array}{cccc} 2 & -1 & -1 & 0 \\ -1 & 2 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & -1 & 1 \end{array} \right)\, . $$ The eigenvalues of $W_G$ are $-1,1$ and the corresponding eigenspaces and generalized eigenspaces are {\small $$ E_{-1} = <(1,-1,0,0),(1,1,-2,2)>,\ E_1 = <(1,1,0,0)>,\ G_1 = <(1,1,0,0), (1,1,1,1)>\, . $$ } We start by identifying the generalized polydiagonals associated with the (generalized) eigenvectors of $W_G$. That is, for each (generalized) eigenvector we consider the generalized polydiagonal of smallest dimension containing that eigenvector. 
These are, $$ \begin{array}{ll} \Delta_{\footnotesize{\mathcal{P}_1}} = \{ {\bf x}:\ x_1=-x_2, x_3=x_4=0 \}, \qquad &\Delta_{\footnotesize{\mathcal{P}_2}} = \{ {\bf x}:\ x_1=x_2, x_3=-x_4 \}, \\ \\ \Delta_{\footnotesize{\mathcal{P}_3}} = \{ {\bf x}:\ x_2=x_4=-x_3, x_1=0 \}, & \Delta_{\footnotesize{\mathcal{P}_4}} = \{ {\bf x}:\ x_1=x_4=-x_3, x_2=0 \}, \\ \\ \Delta_{\footnotesize{\mathcal{P}_5}} = \{ {\bf x}:\ x_1=x_2, x_3=x_4=0 \}, & \Delta_{\footnotesize{\mathcal{P}_6}} = \{ {\bf x}:\ x_1=x_2, x_3=x_4 \}. \end{array} $$ Next, checking whether or not these generalized polydiagonal subspaces have an eigenvector basis, we conclude that, since $$ \begin{array}{lll} \Delta_{\footnotesize{\mathcal{P}_1}} = <(1,-1,0,0)>, \quad & \Delta_{\footnotesize{\mathcal{P}_2}} = <(1,1,-2,2)> \oplus E_{1}, \quad & \Delta_{\footnotesize{\mathcal{P}_3}} = <(0,-2,2,-2)>, \\ \\ \Delta_{\footnotesize{\mathcal{P}_4}}= <(2,0,-2,2)>, \quad & \Delta_{\footnotesize{\mathcal{P}_5}} = E_1, \quad & \Delta_{\footnotesize{\mathcal{P}_6}} = G_1, \end{array} $$ they are all left invariant by the matrix $W_G$. All these subspaces are anti-synchrony subspaces, with the exception of $\Delta_{\footnotesize{\mathcal{P}_6}}$ that is a synchrony subspace. By the results in \cite{AD14}, they form a sum-dense set for the lattice ${\mathcal L}_{W_G}$. Considering the possible sums of two or more of these subspaces, we get one more synchrony and two more anti-synchrony subspaces for $G$ in ${\mathcal L}_{W_G}$, $$ \begin{array}{ll} \Delta_{\footnotesize{\mathcal{P}_7}} = \Delta_{\footnotesize{\mathcal{P}_1}} \oplus \Delta_{\footnotesize{\mathcal{P}_2}} = \{ {\bf x}:\ x_4=-x_3 \} , \quad & \Delta_{\footnotesize{\mathcal{P}_8}} = \Delta_{\footnotesize{\mathcal{P}_2}} \oplus \Delta_{\footnotesize{\mathcal{P}_6}} = \{ {\bf x}:\ x_1=x_2 \}, \\ \Delta_{\footnotesize{\mathcal{P}_9}} = \Delta_{\footnotesize{\mathcal{P}_1}} \oplus \Delta_{\footnotesize{\mathcal{P}_5}} = \{ {\bf x}:\ x_3=x_4=0 \} . 
\end{array} $$ Considering the intersection of all the subspaces, we get one more element, the anti-synchrony null subspace $\{ 0_{\mbox{$\mathbb{R}$}^4} \}$ which corresponds to the bottom of the lattice ${\mathcal L}_{W_G}$. We have, then, ${\mathcal L}_{W_G} = \{ \Delta_{\footnotesize{\mathcal{P}_i}},\ i=1,\ldots, 9 \} \cup P \cup \{ 0_{\mbox{$\mathbb{R}$}^4} \}$. The eigenvalues of $L_G$ are $0,1,2,3$ and the corresponding eigenspaces are {\small $$ E_0 = <(1,1,1,1)>,\ E_1 = <(1,1,0,0)>,\ E_2 = <(1,1,-1,1)>,\ E_3 = <(1,-1,0,0)>\, . $$ } The generalized polydiagonals associated with the eigenvectors of $L_G$ are, besides $\Delta_{\footnotesize{\mathcal{P}_1}}$ and $\Delta_{\footnotesize{\mathcal{P}_5}}$, $$ \begin{array}{ll} \Delta_{\footnotesize{\mathcal{P}_{10}}} = \{ {\bf x}:\ x_1=x_2=x_3=x_4 \}, \qquad & \Delta_{\footnotesize{\mathcal{P}_{11}}} = \{ {\bf x}:\ x_1=x_2=x_4 =- x_3\}. \end{array} $$ We have that $\Delta_{\footnotesize{\mathcal{P}_{10}}} = E_0$ and $\Delta_{\footnotesize{\mathcal{P}_{11}}}=E_2$ are left invariant by the Laplacian matrix $L_G$. Thus, they are a synchrony and an anti-synchrony subspace for $G$, respectively. By the results in \cite{AD14}, $\Delta_{\footnotesize{\mathcal{P}_1}}$, $\Delta_{\footnotesize{\mathcal{P}_5}}$, $\Delta_{\footnotesize{\mathcal{P}_{10}}}$ and $\Delta_{\footnotesize{\mathcal{P}_{11}}}$ form a sum-dense set for the lattice ${\mathcal L}_{L_G}$. 
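The eigenvalue and eigenvector data used in this example are easy to verify numerically. The following NumPy sketch (purely a sanity check, not part of the algorithm of \cite{AD14}) reproduces the eigenstructure of $W_G$ and $L_G$ stated above and checks the $W_G$-invariance of the anti-synchrony subspace $\Delta_{\mathcal{P}_2}$:

```python
import numpy as np

# Adjacency and Laplacian matrices of the four-cell network G from the example.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.array([[ 2, -1, -1,  0],
              [-1,  2, -1,  0],
              [ 0,  0,  1, -1],
              [ 0,  0, -1,  1]], dtype=float)

# Eigenvector relations for W_G (eigenvalues -1 and 1).
assert np.allclose(W @ [1, -1, 0, 0], [-1, 1, 0, 0])    # lambda = -1
assert np.allclose(W @ [1, 1, -2, 2], [-1, -1, 2, -2])  # lambda = -1
assert np.allclose(W @ [1, 1, 0, 0], [1, 1, 0, 0])      # lambda = 1
# (1,1,1,1) is a generalized eigenvector: (W - I) maps it into E_1.
assert np.allclose((W - np.eye(4)) @ [1, 1, 1, 1], [1, 1, 0, 0])

# Eigenvector relations for L_G (eigenvalues 0, 1, 2, 3).
for v, lam in [([1, 1, 1, 1], 0), ([1, 1, 0, 0], 1),
               ([1, 1, -1, 1], 2), ([1, -1, 0, 0], 3)]:
    assert np.allclose(L @ v, lam * np.array(v, dtype=float))

# Invariance check for Delta_P2 = {x : x1 = x2, x3 = -x4},
# spanned by (1,1,0,0) and (1,1,-2,2):
B = np.array([[1, 1, 0, 0], [1, 1, -2, 2]], dtype=float).T
coeffs, *_ = np.linalg.lstsq(B, W @ B, rcond=None)
assert np.allclose(B @ coeffs, W @ B)  # W_G maps the subspace into itself
```

The same invariance test applied with $L_G$ in place of $W_G$ distinguishes which generalized polydiagonals belong to ${\mathcal L}_{L_G}$.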
Considering the possible sums of two or more of these subspaces, we get the following synchrony and anti-synchrony subspaces for $G$ in ${\mathcal L}_{L_G}$, {\tiny $$ \Delta_{\footnotesize{\mathcal{P}_2}} = \Delta_{\footnotesize{\mathcal{P}_5}} \oplus \Delta_{\footnotesize{\mathcal{P}_{11}}}, \quad \Delta_{\footnotesize{\mathcal{P}_6}} = \Delta_{\footnotesize{\mathcal{P}_5}} \oplus \Delta_{\footnotesize{\mathcal{P}_{10}}}, \quad \Delta_{\footnotesize{\mathcal{P}_9}} = \Delta_{\footnotesize{\mathcal{P}_1}} \oplus \Delta_{\footnotesize{\mathcal{P}_5}}, \quad \Delta_{\footnotesize{\mathcal{P}_{12}}} = \Delta_{\footnotesize{\mathcal{P}_{10}}} \oplus \Delta_{\footnotesize{\mathcal{P}_{11}}} = \{ {\bf x}:\ x_1=x_2=x_4 \}, $$ } {\tiny $$ \Delta_{\footnotesize{\mathcal{P}_{7}}} = \Delta_{\footnotesize{\mathcal{P}_1}} \oplus \Delta_{\footnotesize{\mathcal{P}_{5}}} \oplus \Delta_{\footnotesize{\mathcal{P}_{11}}}, \quad \Delta_{\footnotesize{\mathcal{P}_8}} = \Delta_{\footnotesize{\mathcal{P}_5}} \oplus \Delta_{\footnotesize{\mathcal{P}_{10}}} \oplus \Delta_{\footnotesize{\mathcal{P}_{11}}}, \quad \Delta_{\footnotesize{\mathcal{P}_{13}}} = \Delta_{\footnotesize{\mathcal{P}_1}} \oplus \Delta_{\footnotesize{\mathcal{P}_{5}}} \oplus \Delta_{\footnotesize{\mathcal{P}_{10}}} = \{ {\bf x}:\ x_3=x_4 \}\, . $$ } Considering the intersection of all the subspaces, we get one more element, the anti-synchrony null subspace $\{ 0_{\mbox{$\mathbb{R}$}^4} \}$ which corresponds to the bottom of the lattice ${\mathcal L}_{L_G}$. Thus, ${\mathcal L}_{L_G} = \{ \Delta_{\footnotesize{\mathcal{P}_i}}, i\in \{1,2, 5,\ldots,13\} \} \cup P \cup \{ 0_{\mbox{$\mathbb{R}$}^4} \}$. We have, then, that the set of the synchrony and anti-synchrony subspaces for $G$ is ${\mathcal L}_{W_G} \cup {\mathcal L}_{L_G} = \{ \Delta_{\footnotesize{\mathcal{P}_i}},\ i=1,\ldots, 13 \} \cup P \cup \{ 0_{\mbox{$\mathbb{R}$}^4} \}$. 
The partitions $\mathcal{P}_6$ and $\mathcal{P}_8$ are balanced and the partitions $\mathcal{P}_{10}, \mathcal{P}_{12}, \mathcal{P}_{13}$ are strictly exo-balanced. The generalized partitions $\mathcal{P}_1$, $\mathcal{P}_2$, $\mathcal{P}_5$, $\mathcal{P}_7$ and $\mathcal{P}_9$ are odd-balanced, the generalized partitions $\mathcal{P}_1$, $\mathcal{P}_2$, $\mathcal{P}_5$, $\mathcal{P}_7$, $\mathcal{P}_9$ and $\mathcal{P}_{11}$ are linear-balanced, and the generalized partitions $\mathcal{P}_1$, $\mathcal{P}_2$, $\mathcal{P}_3$, $\mathcal{P}_4$, $\mathcal{P}_5$, $\mathcal{P}_7$ and $\mathcal{P}_{9}$ are even-odd-balanced. Thus, by Theorem~\ref{thm:final}, we have:\\ (i) the synchrony subspaces for the admissible systems in $I_G$, $I_{G,eo}$ are $\Delta_{\footnotesize{\mathcal{P}_{6}}}$ and $\Delta_{\footnotesize{\mathcal{P}_{8}}}$; \\ (ii) the synchrony subspaces for the admissible systems in $I_{G,0}$, $I_{G,odd}$, $I_{G,l}$ are those in (i) together with $\Delta_{\footnotesize{\mathcal{P}_{10}}}$, $\Delta_{\footnotesize{\mathcal{P}_{12}}}$ and $\Delta_{\footnotesize{\mathcal{P}_{13}}}$; \\ (iii) the anti-synchrony subspaces for the admissible systems in $I_{G,odd}$ are $\Delta_{\footnotesize{\mathcal{P}_{1}}}$, $\Delta_{\footnotesize{\mathcal{P}_{2}}}$, $\Delta_{\footnotesize{\mathcal{P}_{5}}}$, $\Delta_{\footnotesize{\mathcal{P}_{7}}}$ and $\Delta_{\footnotesize{\mathcal{P}_{9}}}$; \\ (iv) the anti-synchrony subspaces for the admissible systems in $I_{G,l}$ are those in (iii) together with $\Delta_{\footnotesize{\mathcal{P}_{11}}}$; \\ (v) the anti-synchrony subspaces for the admissible systems in $I_{G,eo}$ are those in (iii) together with $\Delta_{\footnotesize{\mathcal{P}_{3}}}$ and $\Delta_{\footnotesize{\mathcal{P}_{4}}}$. \hfill $\Diamond$ \end{exam} \begin{rem}\normalfont As illustrated by Example~\ref{ex:qutro}, even for a non-regular network $G$ the intersection ${\mathcal L}_{W_G} \cap {\mathcal L}_{L_G}$ can be non-trivial. 
\hfill $\Diamond$ \end{rem} \begin{rem}\normalfont \label{rmk:reg} For the particular case of regular networks, ${\mathcal L}_{W_G} = {\mathcal L}_{L_G}$ and so the network set of synchrony and anti-synchrony subspaces can be obtained using either the eigenvalue and eigenvector structure of the network adjacency matrix or of the Laplacian matrix. \hfill $\Diamond$ \end{rem} \section{Conclusions} \label{sec:conclu} In this paper, we characterize the set ${\mathcal L}_{W_G} \cup {\mathcal L}_{L_G} $ of the synchrony and anti-synchrony subspaces of a general weighted network $G$, which corresponds to the generalized polydiagonals invariant under the adjacency and/or Laplacian matrices of $G$. More precisely, the set ${\mathcal L}_{W_G}$ of synchrony and anti-synchrony subspaces of a general weighted network $G$ corresponds to the generalized polydiagonals that are even-odd-balanced and thus flow-invariant under any coupled cell system with input-additive structure. These are in correspondence with the generalized polydiagonals invariant under the network adjacency matrix. The set ${\mathcal L}_{L_G}$ of synchrony and anti-synchrony subspaces of a general weighted network $G$ corresponds to the generalized polydiagonals that are linear-balanced and thus flow-invariant under any coupled cell system with input-additive structure. These are in correspondence with the generalized polydiagonals invariant under the network Laplacian matrix. Much of this work is motivated by the work presented by Neuberger, Sieben, and Swift in \cite{NSS19}, which we extend in several aspects. In \cite{NSS19}, the authors consider undirected networks without weights on the connections. Here, we consider weighted directed networks. In \cite{NSS19}, the associated admissible systems are difference-coupled vector fields, a special class of the input-additive vector fields that we consider here. 
In our setting we have a more general definition of anti-synchrony subspace in the sense that, for the associated tagged partition, a part and its counterpart may have different numbers of cells. Moreover, there can be parts with no counterparts. Also, contrary to what happens in \cite{NSS19}, an anti-synchrony subspace can correspond to a generalized polydiagonal subspace that is invariant under the adjacency matrix of the network but not under its Laplacian matrix. In \cite{NSS19}, the set of anti-synchrony subspaces corresponds to the matched polydiagonals that are left invariant by the Laplacian matrix of the network. \vspace{5mm} \noindent {\bf Acknowledgments} \\ The authors were partially supported by CMUP, which is financed by national funds through FCT-- Funda\c{c}\~ao para a Ci\^encia e a Tecnologia, I.P., under the project with reference UIDB/00144/2020.
\section{Introduction} Magnetic properties of ferromagnetic metals at varying levels of impurity have previously been investigated using neutron scattering techniques\cite{lynn1975temperature, mook1973neutron, pickart1967spin, loong1984neutron, collins1969critical}. Notably, the introduction of substitutional defects into a magnetic system has been shown to distort the spectrum of the system's characteristic excitations\cite{svensson1969resonant}. This effect has been investigated both theoretically\cite{takeno1963spin, maradudin1965some, izyumov1966peculiarities} and experimentally for lattice vibrations and spin waves. These experiments have provided spin wave dispersion curves and stiffness constants. In particular, the spin wave stiffness parameter \textit{D} has shown sensitivity to variations in the concentration of magnetic defects\cite{antonini1969spin}. Additionally, both spin wave energies and lifetimes decrease as the impurity concentration grows. Computational techniques have long been utilized to investigate collective excitations in systems spanning a broad range of practical interest. Molecular dynamics (MD) methods\cite{Rapaport} have been used to model vibrational behavior in metals, alloys, biological systems, etc., while spin dynamics (SD) simulations have investigated magnetic properties of classical lattice-based spin models\cite{landau1999spin}. SD simulations have replicated experimental findings in simple spin systems such as RbMnF\textsubscript{3}\cite{tsai2000spin} and have successfully predicted the existence of two-spin-wave modes\cite{bunker2000longitudinal}. With the use of coordinate dependent exchange interactions, this method has also had success investigating systems with more complex magnetic properties such as propagating spin wave modes\cite{tao2005spin} and external magnetic field effects in bcc iron\cite{chui2014spin}. 
Each of these methods numerically solves the classical equations of motion which describe the dynamical evolution of the model under consideration. Despite the individual capabilities of MD and SD simulation methods, the coupling of lattice and spin degrees of freedom is inherently neglected by both. In magnetic materials, atomic motion affects magnetic moments which depend non-trivially on the local atomic environment. Likewise, magnetic interactions have been shown to contribute to structural properties of these materials, including the BCC structure of iron\cite{Hasegawa1983, Herper1999}. Therefore, these degrees of freedom, atomic and magnetic, should be considered jointly in any model aiming to investigate the excitations of such a system. The combined molecular dynamics - spin dynamics (MD-SD) method treats the spin subsystem using an extension of the Heisenberg model where the exchange interaction is a coordinate-dependent, pairwise function of inter-atomic distance\cite{Omelyan}. Inter-atomic interactions are handled with a non-magnetic many-body potential. The time integration of the coupled equations of motion is calculated using an algorithm based on the Suzuki-Trotter decomposition of the exponential time evolution operator. This algorithm is time reversible, efficient, and is known to conserve phase-space volume\cite{krech1998fast}. To handle systems of practical interest, such a model must utilize empirical many-body inter-atomic potentials and exchange interactions parameterized by experimental data and first-principles calculations\cite{Ma}. Using this compressible magnetic model parameterized for BCC iron, the interplay between spin waves (magnons) and lattice vibrations (phonons) has previously been investigated, showing spin wave damping as well as magnon-phonon coupling in pure systems\cite{Perera}. This paper will extend the current MD-SD framework to investigate BCC iron with vacancies. 
We show the effect of these impurities on magnon dispersion curves as well as on individual spin wave excitation peaks. The spin waves are damped and broadened in the presence of vacancy defects, in agreement with experimental neutron scattering data. \section{Model} \subsection{MD-SD Algorithm} The MD-SD method extends the traditional MD approach through the addition of a third phase variable - the spin angular momentum $\mathbf{S}_i$ - to the position and velocity degrees of freedom. This spin variable is incorporated into the MD-SD Hamiltonian such that \begin{equation} \mathcal{H} = \sum_{i=1}^{N}{\frac{mv_{i}^{2}}{2}} + U(\{\bf{r}_{\it{i}}\}) - \sum_{\it{i} < \it{j}}{ \textnormal{J}_{\it{ij}}(\{\mathbf{r}_{\it{k}}\})}\bf{S}_{\it{i}} \cdot \bf{S}_{\it{j} } \end{equation} \noindent for a system of $N$ magnetic atoms of mass $m$ with positions $\{\mathbf{r}_{i}\}$, velocities $\{\mathbf{v}_{i}\}$, and atomic spins $\{\mathbf{S}_{i}\}$. The first term in the MD-SD Hamiltonian represents the kinetic energy of the atoms, and $U(\{\mathbf{r}_{i}\})$ is a non-magnetic interatomic potential. The third term describes a Heisenberg-like exchange interaction which includes a coordinate-dependent exchange parameter $J_{ij}(\{\mathbf{r}_k\})$. The introduction of coordinate dependence into $J_{ij}(\{\mathbf{r}_{k}\})$ allows for the exchange interaction to depend on the locations of all atoms in the vicinity of atoms $i$ and $j$. This framework is general and may be utilized for any magnetic material given proper parameterization of the interatomic potential and exchange parameter. To investigate BCC iron, we employ the embedded-atom interatomic potential $U(\{\mathbf{r}_{i}\})$ developed and parameterized for iron by Dudarev and Derlet \cite{Dudarev}. The exchange interaction used in this work is a pairwise function $J(r_{ij})$ parameterized by first principles calculations for iron \cite{Ma}. 
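As a toy numerical illustration of the Hamiltonian in Eq. (1), the sketch below evaluates the kinetic and Heisenberg exchange terms for a pair of atoms. The exchange function \texttt{exchange\_J} is a hypothetical exponential stand-in, not the first-principles $J(r_{ij})$ of Ref. \cite{Ma}, and the non-magnetic many-body potential $U(\{\mathbf{r}_{i}\})$ is omitted for brevity:

```python
import numpy as np

def exchange_J(r, J0=0.9, r0=1.0, rc=3.5):
    """Toy pairwise exchange J(r): exponential decay with cutoff rc.
    (Illustrative stand-in for the first-principles parameterization.)"""
    return np.where(r < rc, J0 * np.exp(-(r - r0)), 0.0)

def mdsd_energy(pos, vel, spins, mass=1.0):
    """Kinetic plus Heisenberg exchange terms of the MD-SD Hamiltonian;
    the many-body potential U({r_i}) is left out of this sketch."""
    kinetic = 0.5 * mass * np.sum(vel**2)
    exchange = 0.0
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = np.linalg.norm(pos[i] - pos[j])
            exchange -= exchange_J(rij) * np.dot(spins[i], spins[j])
    return kinetic + exchange

# Two aligned unit spins one length unit apart, at rest:
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
spins = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(mdsd_energy(pos, vel, spins))  # -J(1) * (S_1 . S_2) = -0.9
```

Flipping one spin changes the sign of the exchange term, which is the energy scale driving the spin precession in Eq. (2c).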
The true dynamics of this MD-SD Hamiltonian is given by the classical equations of motion \begin{subequations}\label{eom} \begin{align} &\frac{d\bf{r_{\it{i}}}}{dt} = \bf{v}_{\it{i}}\\[5pt] &\frac{d\bf{v_{\it{i}}}}{dt} = \frac{\bf{f_{\it{i}}}}{m} \\[5pt] &\frac{d\bf{S_{\it{i}}}}{dt} = \frac{1}{\hbar}\bf{H}_{\it{i}}^{\it{eff}} \times \bf{S}_{\it{i}} \end{align} \end{subequations} \noindent where $\mathbf{f}_i$ and $\mathbf{H}^{eff}_{i}$ represent the interatomic force and effective field acting on the $i^{th}$ atom, respectively. In order to investigate the collective excitations of the simulated system, space-displaced, time-displaced correlation functions are computed. By performing Fourier transformations of these correlation functions, we obtain information about the frequency and lifetimes of these excitations. During a simulation run, the spatial Fourier transform of the space-displaced, time-displaced spin-spin correlation function, or the intermediate scattering function, is calculated on the fly \begin{equation} F_{ss}^{k}(\mathbf{q},t)=\frac{1}{N}\langle \rho_{s}^{k}(\mathbf{q},t)\rho_{s}^{k}(-\mathbf{q},0) \rangle, \end{equation} \noindent where $k$ represents the real-space directions $\{x, y, z\}$ and $\rho_{s}(\mathbf{q}, t)$ represents the microscopic spin density, defined as \begin{equation} \rho_{s}(\mathbf{q}, t) = \sum_{i}\mathbf{S}_{i}(t)e^{-i\mathbf{q} \cdot \mathbf{r}_{i}(t)}. \end{equation} During a microcanonical simulation of a ferromagnetic material, the total magnetization vector is a constant of motion. This property allows for the differentiation of spin excitations which propagate parallel and perpendicular to the magnetic symmetry axis through the choice of a Cartesian coordinate system such that the $z$-axis is parallel to the magnetization vector. 
The components of $F_{ss}(\mathbf{q}, t)$ are then regrouped into a longitudinal component \begin{equation} F_{ss}^{L}(\mathbf{q}, t) = F_{ss}^{z}(\mathbf{q}, t) \end{equation} \noindent and a transverse component \begin{equation} F_{ss}^{T}(\mathbf{q}, t) = \frac{1}{2} [ F_{ss}^{x}(\mathbf{q}, t) + F_{ss}^{y}(\mathbf{q}, t) ]. \end{equation} \noindent A temporal Fourier transform of the intermediate scattering function yields the spin-spin dynamic structure factor \begin{equation} S_{ss}^{L,T}(\mathbf{q}, \omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}F_{ss}^{L,T}(\mathbf{q}, t)e^{-i\omega t}dt \end{equation} \noindent for momentum transfer $\mathbf{q}$ and frequency $\omega$. The dynamic structure factor obtained from MD-SD simulations may be compared to the measurements made by inelastic neutron scattering experiments\cite{lovesey1984theory}. \subsection{Simulation Details} Time integration of the equations of motion shown in Eq. (\ref{eom}) is performed using a second-order Suzuki-Trotter decomposition of the time evolution operator \cite{Omelyan, tsai2005molecular}. We equilibrate the position and spin subsystems of multiple initial configurations to the desired temperature $T$ using the Metropolis Monte Carlo method. Initial velocities are then drawn from the Maxwell-Boltzmann distribution at this temperature. A short microcanonical MD-SD run is performed in order to equilibrate the three sub-systems. Once properly equilibrated, these configurations are independently time-evolved using the microcanonical MD-SD integration scheme. For this time integration, a time-step of $\delta t = 1$ fs was found to adequately conserve energy and magnetization. Simulation data shown in this paper are from MD-SD runs of 1 ns ($10^{6}$ time-steps) in duration. During the time integration, the intermediate scattering function is recorded on the fly for each independent configuration. 
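The on-the-fly quantities of Eqs. (3)-(6) can be sketched as follows. This is a minimal single-trajectory illustration with toy arrays (the actual simulations average over many independent configurations to estimate the canonical ensemble average):

```python
import numpy as np

def spin_density(q, positions, spins):
    """Microscopic spin density rho_s(q) = sum_i S_i exp(-i q . r_i), Eq. (4)."""
    phases = np.exp(-1j * positions @ q)   # shape (N,)
    return spins.T @ phases                # shape (3,): x, y, z components

def intermediate_scattering(q, positions_t, spins_t):
    """Single-trajectory estimate of F_ss^k(q, t), Eq. (3).
    positions_t and spins_t have shape (T, N, 3)."""
    N = positions_t.shape[1]
    rho0 = spin_density(-q, positions_t[0], spins_t[0])
    F = np.array([spin_density(q, r, s) * rho0
                  for r, s in zip(positions_t, spins_t)])
    return F / N                           # shape (T, 3)

# Sanity check: N static spins aligned along z give F^z(q=0, t) = N for all t.
N, T = 8, 4
positions_t = np.broadcast_to(
    np.arange(N)[:, None] * np.array([1.0, 0.0, 0.0]), (T, N, 3)).copy()
spins_t = np.broadcast_to(np.array([0.0, 0.0, 1.0]), (T, N, 3)).copy()
F = intermediate_scattering(np.zeros(3), positions_t, spins_t)
# Transverse component, Eq. (6): (F^x + F^y) / 2, which vanishes here.
F_T = 0.5 * (F[:, 0] + F[:, 1])
```

With the $z$-axis chosen along the magnetization, `F[:, 2]` plays the role of the longitudinal component of Eq. (5) and `F_T` the transverse component of Eq. (6).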
Once all runs have completed, we average these data, yielding an estimate of the canonical ensemble average at temperature $T$. \begin{figure} \includegraphics[width=0.5\textwidth]{3_peaks} \caption{\label{Spin Wave Peaks} Transverse spin-spin dynamic structure factor obtained from MD-SD simulations for L = 16 at $T$ = 300 K for both the pure lattice and the system with 5\% of lattice sites left vacant. (a) $\mathbf{q} = (0.137 \textup{\AA}^{-1}, 0, 0)$, (b) $\mathbf{q} = (0.581 \textup{\AA}^{-1}, 0.581 \textup{\AA}^{-1}, 0)$, (c) $\mathbf{q} = (3.32 \textup{\AA}^{-1}, 3.32 \textup{\AA}^{-1}, 3.32 \textup{\AA}^{-1})$. Symbols represent simulation data while solid lines show multi-peak Lorentzian curve-fitting. Dashed lines indicate the constituent peaks which make up the solid curve for the systems with 5\% vacancy.} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{L6_234_dist_vac_SssT_100} \caption{\label{Spin Wave Peaks L6} Transverse spin-spin dynamic structure factor obtained from MD-SD simulations for L = 6 at $T$ = 300 K for $\mathbf{q} = (0.365 \textup{\AA}^{-1}, 0, 0)$. Curves with symbols represent configurations which have 2 or 4 atoms removed from the system. } \end{figure} In order to include vacancy type defects, we leave randomly selected sites devoid of an atom in each initial configuration. For the data presented here, simulations have been performed for a body-centered cubic system of edge length $L$ = 16 (8192 sites) with periodic boundary conditions at $T$ = 300 K. For a 5\% vacancy concentration, 7783 sites contain an atom while the remaining 409 sites remain vacant. Each independent configuration included in the overall simulation set has a unique vacancy distribution. Using the canonical ensemble average estimate for the spin-spin intermediate scattering function obtained from these simulations, we calculate the dynamic structure factor. 
Spin wave excitations are observed through the transverse spin-spin dynamic structure factor $S^{T}_{ss}(\mathbf{q}, \omega)$ as peaks of Lorentzian form, \begin{equation}\label{lorentzian} S^{T}_{ss}(\mathbf{q}, \omega) = \frac{I_{0}\Gamma^{2}}{(\omega - \omega_{0})^{2}+\Gamma^{2}} \end{equation} \noindent where $\omega_{0}$ is the characteristic frequency of the excitation, $I_{0}$ is the amplitude, and $\Gamma$ is the half width at half maximum. $\Gamma$ is inversely proportional to the lifetime of the associated excitation. Fitting simulation data to Eq. (\ref{lorentzian}) allows us to observe the impact of vacancy defects on spin wave dispersion curves, as well as on individual spin wave line shapes, excitation energies, and lifetimes. \section{Results} \subsection{Transverse Spin Waves} \subsubsection{Spin Wave Energies} \begin{figure} \includegraphics[width=0.5\textwidth]{disp_merge} \caption{\label{Magnon Shift} (a) Spin wave dispersion curve at $T=300$ K obtained from MD-SD simulations with $L=16$. Results are shown for the pure lattice as well as the system with 5\% vacancy concentration. (b) The fractional shift in transverse spin wave frequencies due to a 5\% concentration of vacancy defects. Results obtained from MD-SD simulations of systems of size $L = 16$ at $T=300$ K.} \end{figure} In order to demonstrate our approach, we performed calculations with a 5\% vacancy concentration, a figure consistent with the nonmagnetic defect concentration found in experimental studies. Figure 1 shows the effect of this 5\% vacancy concentration on three different transverse spin waves in a system of size $L=16$. The vacancy defects cause a damping of the peaks in $S_{ss}^{T}(\mathbf{q}, \omega)$ as well as a shift to lower frequency. These effects on spin wave peaks are observed for all wave vectors, though we only show a few selected line shapes here. Visible in Fig. 
1(a), the low-$q$ peak region of the impure system displays a more rugged spectral structure than the pure peaks or any of the higher $q$ peaks from impure systems. While curve fitting these rough peaks to the form of Eq. (\ref{lorentzian}) yields estimates of peak locations and half-widths, applying a fit utilizing the sum of multiple Lorentzian forms more accurately reproduces the asymmetries and additional structure in these peaks. Included in Fig. 1 are fits to two-peak (Figs. 1(a), 1(b)) and four-peak (Fig. 1(c)) Lorentzian functions, describing the distorted signals obtained from the impure system. Multiple peaks are also visible in the small-$q$ region in the $(q, q, 0)$ and $(q, q, q)$ symmetry directions, though we only show the $(q, 0, 0)$ peak in Fig. 1. The distortion of these spin wave line shapes is due to the presence of localized spin wave modes near the defect sites. While these multi-peak Lorentzian functions provide reasonable fits to our data, the possibility remains of additional structure within the limits of our resolution. In order to investigate the downward shift of spin wave frequencies seen in Fig. 1, we simulated a small system ($L = 6$) from which atoms are removed one by one. The decrease in spin wave frequency due to this removal of atoms is observed in Figure 2. The atoms removed from the configurations in Fig. 2 are chosen such that the sites are separated by a distance greater than the cutoff distance of the interatomic potential. Multiple vacancy configurations were generated, though only one configuration is shown for each data set in Fig. 2. As the number of vacancy sites grows, the frequency of the characteristic spin wave spectrum decreases. The full spin wave dispersion curve is shown in Figure 3(a), constructed using the peak locations obtained through Lorentzian curve fitting. 
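The multi-peak Lorentzian fitting can be illustrated with SciPy on synthetic data. The peak parameters and noise level below are illustrative only and do not come from the simulations:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, I0, w0, G):
    """Spin-wave peak of Eq. (8): amplitude I0, position w0, HWHM G."""
    return I0 * G**2 / ((w - w0)**2 + G**2)

def two_lorentzians(w, I1, w1, G1, I2, w2, G2):
    """Two-peak form, as used for the distorted low-q line shapes."""
    return lorentzian(w, I1, w1, G1) + lorentzian(w, I2, w2, G2)

# Synthetic line shape standing in for S^T_ss(q, omega):
# two overlapping peaks plus weak noise.
rng = np.random.default_rng(1)
w = np.linspace(0.0, 40.0, 400)
true_params = (1.0, 12.0, 1.5, 0.4, 16.0, 2.5)
S = two_lorentzians(w, *true_params) + rng.normal(0.0, 0.003, w.size)

# Fit; w1, w2 estimate the peak positions and G1, G2 the HWHMs,
# which are inversely proportional to the spin-wave lifetimes.
p0 = (0.8, 11.0, 1.0, 0.5, 17.0, 2.0)
popt, _ = curve_fit(two_lorentzians, w, S, p0=p0)
```

A reasonable initial guess `p0` matters here: with strongly overlapping peaks, the least-squares fit can otherwise merge the two components or swap their roles.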
To observe the effect of vacancy defects on spin wave excitations more carefully, we compare the characteristic spin wave frequencies obtained from MD-SD simulations for the pure system with those from systems with a 5\% vacancy concentration. We calculate the fractional shift in spin wave frequency, $(\omega_{\textrm{\scriptsize{Pure}}} - \omega_{\textrm{\scriptsize{5\%}}})/\omega_{\textrm{\scriptsize{Pure}}}$, shown in Fig. 3(b) directly below the dispersion curve. All observed spin wave excitations display a shift to lower frequency, with this effect being most significant near the zone center. The shaded areas in Fig. 3(b) indicate the regions in the small-\textit{q} $S_{ss}^{T}(\mathbf{q}, \omega)$ spectra where we observe multiple peaks. While multi-peak Lorentzian forms fit the high-\textit{q} signals more accurately than the single peak form, identification of the less intense constituent peaks is unreliable. Therefore we include only the most intense peak signal from our curve-fitting procedure for high-\textit{q} excitations in Fig. 3(b). \begin{figure} \includegraphics[width=0.45\textwidth]{L16_pure_5_low_q_mag_disp} \caption{\label{Low-q Magnon dispersion} Comparison of low $|\mathbf{q}|$ magnon dispersion curves obtained from MD-SD simulation (L=16) for the pure system and that with 5\% vacancy defects. Lines in the figure represent a two-parameter curve fit to Eq. 9 with $D$ shown in Table I.} \end{figure} In ferromagnetic materials below the critical temperature $T_{c}$, spin waves with small-$q$ are isotropic and are expected to approximate a quadratic dispersion relation\cite{shirane1968spin} of the form \begin{equation}\label{dispersion} \hbar\omega = D|\mathbf{q}|^2(1-\beta |\mathbf{q}|^2) \end{equation} \noindent where $D$ represents the magnetic stiffness constant. The dispersion curve for low-$q$ is shown in Figure 4 for both the pure system and that with 5\% vacancy concentration. The dispersion parameters $D$ and $\beta$ in Eq. 
(\ref{dispersion}) are quantities accessible through neutron scattering using diffraction methods (DM), triple axis spectrometry (TAS), chopper spectrometry (CS), or other methods. While the dispersion relation shown in Eq. (\ref{dispersion}) provides an accurate fit to both experimental and computational data, the behavior of the parameter $\beta$ as observed in experiment is inconsistent. However, previous investigations of BCC iron under varying levels of impurity have shown systematic behavior in the stiffness parameter $D$. The measurements most relevant to this study involve the substitutional inclusion of Si, a nonmagnetic atom, into the crystal. Due to the lack of a magnetic moment and lower mass in comparison to Fe, the Si atom mimics a magnetic vacancy. The lines in Fig. 4 represent curve fits to Eq. (\ref{dispersion}), and the values of the stiffness fit parameter $D$ are shown along with previous experimental data in Table \ref{stiffness_table}. While our results overestimate the value of the stiffness parameter, they capture the trend of a decrease in $D$ caused by nonmagnetic defects observed in experimental data. \subsubsection{Spin Wave Lifetimes} \begin{table}[] \renewcommand{\arraystretch}{1.5} \centering \caption{Experimental and computational comparison of spin wave dispersion parameters in impure Fe systems at room temperature. } \label{stiffness_table} \begin{tabular*}{\columnwidth}{@{\extracolsep{3mm} }ccccc} & D (meV \textup{\AA} \textsuperscript{2}) & Method & Ref. \\ \Xhline{2.5\arrayrulewidth} Fe & 307 $\pm$ 15 & CS & \citet{loong1984neutron}\\ \hline Fe & 281 $\pm$ 10 & \multirow{2}{*}{ TAS} & \multirow{2}{*}{ \citet{shirane1968spin}}\\ Fe (4\% Si) & 270 $\pm$ 10 & \\ \hline Fe & 315.1 $\pm$ 0.1 & \multirow{2}{*}{ MD-SD} & \multirow{2}{*}{ This work}\\ Fe (5\% Vac.) & 290.2 $\pm$ 0.2 \\\hline Fe (7\% Si) & 273 $\pm$ 3 & \multirow{2}{*}{ DM} &\multirow{2}{*}{ \citet{antonini1969spin}}\\ Fe (15\% Si) & 233 $\pm$ 11 \end{tabular*} \end{table} \begin{figure} \includegraphics[width=0.5\textwidth]{L16_5vac_magnon_width_frac_shift_3_directions} \caption{\label{Magnon HWHM 3 Directions} Fractional shift in half width at half maximum ($\Gamma$) of transverse spin waves at $T$ = 300 K obtained from MD-SD simulations for $L=16$ in the [1 0 0], [1 1 0], and [1 1 1] symmetry directions. 
} \end{figure} \begin{figure} \includegraphics[width=0.45\textwidth]{L16_pure_5_SssL_100_NEW} \caption{\label{Longitudinal} Longitudinal spin-spin dynamic structure factor obtained from MD-SD simulations for $\mathbf{q} = (0.137 \textup{\AA}^{-1}, 0, 0)$ at $T$ = 300 K with $L = 16$ for both the pure system and that with 5\% of sites left vacant. Solid lines represent the predicted peak locations in the pure system, and dashed lines represent the predicted locations of the $(2, 0, 0) - (1, 0, 0)$ peaks in the system with vacancies. For clarity, only the $(2, 0, 0) - (1, 0, 0)$ excitations are labeled.} \end{figure} Spin wave excitation lifetimes are inversely proportional to the half width at half maximum of the characteristic peak in $S^{T}_{ss}(\mathbf{q}, \omega)$, which may be obtained through curve fitting to Eq. (\ref{lorentzian}). The effect of vacancy defects on spin wave lifetime is presented in Fig. 5, which shows the fractional change in the half width at half maximum due to impurities. Significant broadening of the spin wave peaks occurs for all $q$ values, indicating a decrease in lifetime for all spin wave excitations. However, the effect is most significant for low-$q$ excitations, and the broadening decreases as $q$ grows larger in all symmetry directions. This decreased lifetime at low $q$ is due to increased magnon-magnon scattering caused by the existence of additional spin wave modes, as evidenced by the rugged structure of the low-$q$ peaks in $S^{T}_{ss}(\mathbf{q}, \omega)$, as in Fig. 1(a). \subsection{Longitudinal Spin Waves} For classical Heisenberg spin models, Bunker \textit{et al.} have previously shown that the excitation peaks in the longitudinal component of $S_{ss}(\mathbf{q}, \omega)$ represent creation and/or annihilation processes resulting from the interaction of multiple transverse spin waves\cite{bunker2000longitudinal}.
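The Lorentzian fitting step used above to extract half widths (and hence lifetimes) from the transverse spectra can be sketched as follows. This is a minimal illustration on synthetic data, assuming a single-peak Lorentzian line shape; it is not the actual fitting code used for the MD-SD spectra, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, amp, w0, gamma):
    # Single-peak Lorentzian line shape; gamma is the half width at half maximum (HWHM).
    return amp * gamma**2 / ((w - w0)**2 + gamma**2)

rng = np.random.default_rng(0)
w = np.linspace(0.0, 100.0, 2000)                 # frequency grid (arbitrary units)
signal = lorentzian(w, 1.0, 40.0, 2.5) + 0.01 * rng.normal(size=w.size)

# Fit the synthetic "spin wave" peak; the lifetime is inversely proportional to gamma.
popt, _ = curve_fit(lorentzian, w, signal, p0=[0.5, 35.0, 1.0],
                    bounds=([0.0, 0.0, 0.0], [np.inf, np.inf, np.inf]))
amp, w0, gamma = popt
print(f"peak position = {w0:.2f}, HWHM = {gamma:.2f}, relative lifetime = {1.0/gamma:.3f}")
```

A fractional-shift plot such as Fig. 5 would then compare the fitted $\Gamma$ of the defective system against that of the pure system at each $\mathbf{q}$.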
Previous MD-SD simulations of pure BCC iron measured the longitudinal component of $S_{ss}(\mathbf{q}, \omega)$, and the frequencies of these two-spin-wave peaks were identified using the difference of transverse spin wave frequencies\cite{Perera}: \begin{equation} \omega_{ij}^{-}(\mathbf{q}_{i}\pm \mathbf{q}_{j}) = \omega(\mathbf{q}_{i}) - \omega(\mathbf{q}_{j}) \end{equation} \noindent where $\mathbf{q}_{i}$ and $\mathbf{q}_{j}$ are the wave-vectors of the constituent spin waves, and $\omega$ is the characteristic frequency of each. Figure 6 shows a portion of the $S_{ss}^{L}(\mathbf{q}, \omega)$ spectrum for $\mathbf{q} = \frac{2\pi}{La}(1, 0, 0)$ in the $L = 16$ system. Since the large set of available wave vectors $\{\mathbf{q}_{i}\}$ in an $L = 16$ system leads to many two-spin-wave interaction processes, only a small section of $S_{ss}^{L}(\mathbf{q}, \omega)$ is presented here. Individual peaks in the pure lattice spectrum are well defined, and we have identified the two transverse spin waves that combine to produce each peak in the longitudinal component. An example of this identification is shown by the solid vertical line in Fig. 6, which represents the difference in frequencies of the $\frac{2\pi}{La}(1, 0, 0)$ and $\frac{2\pi}{La}(2, 0, 0)$ transverse spin waves, calculated using Eq. 10. While we have identified the other peaks in the pure lattice data of Fig. 6, only one is shown in the figure for clarity. Fig. 6 also shows the effect of 5\% vacancy defects on $S_{ss}^{L}(\mathbf{q}, \omega)$. Qualitatively, the longitudinal spectrum is less well-defined for the impure system, as the individual excitation peaks seen in the pure system shift to lower frequency and merge into a broad distribution. As shown previously in Fig. 1(a), small-$q$ spin waves such as those that contribute to the solid line in Fig. 6 display multiple transverse excitation peaks.
Therefore, to identify the locations of longitudinal excitations in the impure system, we must consider each combination of the constituent transverse spin waves. Both the $\frac{2\pi}{La}(1, 0, 0)$ and $\frac{2\pi}{La}(2, 0, 0)$ transverse spin wave spectra show a dual-peak structure; Eq. 10 therefore yields at least four unique combinations of spin wave interactions. Each of these resultant frequencies is shown as a dashed vertical line in Fig. 6, indicating that the loss of individual peak resolution in this region of $S_{ss}^{L}(\mathbf{q}, \omega)$ is due to the increase in multiple spin wave annihilation processes. \section{Conclusions} The effect of vacancy defects on magnetic excitations in BCC iron was investigated using combined molecular and spin dynamics simulations (MD-SD) at room temperature. We calculated the intermediate scattering function on-the-fly during our simulation runs, and used these data to obtain the dynamic structure factor $S_{ss}(\mathbf{q},\omega)$, which contains information regarding spin wave energies and lifetimes. We obtained spin wave energies and half widths at half maximum by fitting the characteristic spin wave peaks to a Lorentzian function. For all observed spin waves, the introduction of vacancy defects was shown to decrease the energy of the excitation, with the effect being most significant near the zone center. The $S_{ss}(\mathbf{q},\omega)$ line shape of low-$q$ spin waves also showed rugged structure, indicating the propagation of multiple excitations with different characteristic energies. The presence of multiple spin wave modes increases magnon-magnon scattering, as evidenced by the decrease in observed spin wave lifetime for low-$q$ excitations. The low-$q$ region of the magnon dispersion curve has been shown experimentally to obey a quadratic dispersion relation of the form of Eq. (\ref{dispersion}).
From this quadratic form, the magnetic stiffness constant has been measured experimentally for BCC iron under varying conditions of magnetic impurity. Our findings from MD-SD simulations are consistent with the observed decrease in the stiffness constant upon the introduction of defects, as shown in Fig. 4 and Table I. We investigated longitudinal spin wave modes, which represent two-spin-wave interaction processes. The introduction of defects into the system resulted in the loss of the clearly defined peaks observed in the pure system. We calculated the resultant frequencies of interacting transverse spin waves, showing that this effect is due to an increase in available spin wave scattering processes. Further quantitative studies of magnetic materials containing impurity atoms (e.g. Fe\textsubscript{$1-x$}Si\textsubscript{$x$}) will require reliable interatomic potentials for those alloys. \vspace{6mm} \section*{Acknowledgments} Part of this work (M.E.) was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division. This work was sponsored in part by the “Center for Defect Physics”, an Energy Frontier Research Center of the Office of Basic Energy Sciences (BES), U.S. Department of Energy (DOE). This study was supported in part by resources and technical expertise from the Georgia Advanced Computing Resource Center, a partnership between the University of Georgia's Office of the Vice President for Research and Office of the Vice President for Information Technology. This research also used resources of the Oak Ridge Leadership Computing Facility at ORNL, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
\section{Introduction} Despite the long history of the study of flux compactifications, many interesting questions remain. Of particular interest are questions related to supersymmetry breaking and the construction of trustworthy lower-dimensional effective descriptions of flux vacua. In this paper we consider the problem of constructing effective actions for tree-level flux compactifications that involve spacetime-filling sources, such as D-branes and O-planes. A motivation for such setups is phenomenology, because orientifold planes seem to be necessary ingredients for the construction of flux vacua that are genuinely lower-dimensional \cite{Tsimpis:2012tu, Petrini:2013ika} (in the sense that there is a separation between the KK scale and the vacuum energy) and, at tree level, orientifolds are necessary for having Minkowski or de Sitter vacua \cite{deWit:1986xg, Maldacena:2000mw}. The presence of such sources necessarily induces a warp factor in front of the lower-dimensional metric. This is in contrast with ordinary KK reduction, where all dependence on internal coordinates is neglected. Nonetheless, the warping can affect the low-energy physics; most notably, it can soften the hierarchy problem \cite{Giddings:2001yu}. Hence we are naturally led to investigate how ordinary KK reduction is extended to warped compactifications. This framework can be called ``Warped Effective Field Theory'' (WEFT); see for instance \cite{Shiu:2008ry, Frey:2008xw, Martucci:2009sf, Frey:2013bha}. Typically, questions about WEFT are asked in the context of compactifications to $\mathcal{N}=1, D=4$ Minkowski vacua, with the standard example being O$3$/O$7$ compactifications on conformal Calabi-Yau spaces with three-form fluxes \cite{Giddings:2001yu,Dasgupta:1999ss}.
When warping is neglected there is a standard procedure to write down the K\"ahler potential and superpotential that define the $\mathcal{N}=1, D=4$ supergravity that is supposed to capture the low energy physics of fluctuations around the vacuum. Technically speaking, the absence of warping\footnote{By warping we mean everything that is sourced by the O-planes and D-branes, such as a dilaton that depends on internal coordinates, the warping in front of the four-dimensional metric and the conformal factor for the internal metric, and a nonzero profile for the RR form that couples to the brane.} implies that one solves the ten-dimensional equations of motion for which the sources are smeared, i.e. the delta function is replaced with a constant \cite{Grana:2006kf, Blaback:2010sj, Blaback:2012mu}. This is in the spirit of ordinary KK reduction, where fields are Fourier expanded on the internal compact space and only the zero mode is kept, since zero modes have the smallest mass. However, if the warping is relevant at low energies, higher order Fourier modes have masses low enough to be physically relevant and ordinary KK reduction needs to be revised. If the supersymmetry-breaking scale is below the KK scale one expects the Wilsonian effective action (the WEFT) to be supergravity. This must imply that the low energy effective theory can still be written in terms of a K\"ahler potential and a superpotential, but now they will get corrected by warping terms. Hence we expect two supergravity theories to exist that relate to the same flux compactification: the one obtained from smearing the source (which is an ordinary KK reduction) and the WEFT in which warping is somehow taken into account. The motivation for this work is that this seems problematic for theories with extended supersymmetry, since such theories are usually very constrained.
In minimal supergravity one could indeed think that warping corrects the K\"ahler potential and superpotential, but in the case of maximal or half-maximal supergravity the gauge group almost completely fixes the theory. One explanation for this could be that compactifications which preserve supersymmetry have a restricted topology and hence a restricted amount of orientifold tension. Since the tension controls the size of the warping, it could be that the warping corrections are not relevant at low energies. With this in mind we consider a particular compactification of massive IIA supergravity to $\mathrm{AdS}_7$ space. Besides the Romans mass, the other ingredients are spacetime-filling D6-branes and $H$ flux filling the internal space. In the smeared limit this solution was first found in \cite{Blaback:2010sj}, where the internal space was found to be an $S^3$. The stability with respect to the left-invariant moduli was verified in \cite{Blaback:2011nz}. Stability was not believed to be guaranteed because the solution was claimed to be non-supersymmetric. The question of whether a sensible localized solution exists was studied in \cite{Blaback:2011nz, Blaback:2011pn}, where it was shown that the localized solution must have three-form flux divergences identical to the infamous ones encountered when uplifting anti-D3-branes \cite{McGuirk:2009xx, Bena:2009xk, Bena:2012bk, Gautason:2013zw}.\footnote{See \cite{Junghans:2013xza} for an overview and \cite{Blaback:2012nf} for an interpretation of the singularity.} Recently a very interesting twist to the story was given in reference \cite{Apruzzi:2013yva}: the localized solution was found to preserve half of the supersymmetry of the ten-dimensional theory and, because of that, a first-order integration was found that simplified the numerical study of the solution and the understanding of its global properties.
This raises a few questions: `Does the smeared solution preserve supersymmetry?'; `Is there a $D=7$ supergravity describing the fluctuations around the vacuum (both for the smeared and localized solution)?'. The answers we find in this paper are both negative, and to our knowledge this is the first example where these phenomena occur, namely: 1) a flux vacuum that is only found to be supersymmetric when the sources are properly localized and 2) despite the very high amount of preserved supersymmetry there is no lower-dimensional supergravity description. In the discussion we give some clues as to why this happens and when it is expected to happen. The rest of this paper is organized as follows. In section \ref{AdS7} we review the construction of both the smeared and localized solution and show that the smeared solution indeed breaks supersymmetry, which is rather straightforward when relying upon the results of \cite{Apruzzi:2013yva}. In section \ref{gauged} we describe the half-maximal and maximal supergravities in $D=7$ using the embedding tensor formalism \cite{deWit:2002vt, Schon:2006kz}. By constructing the dictionary between geometric fluxes and the embedding tensor components we can rule out the existence of a seven-dimensional gauged supergravity that has the aforementioned $\mathrm{AdS}_7$ vacuum as its ground state.\footnote{This holds if the Romans mass is nonzero. If the Romans mass vanishes, the solution has a lift to eleven-dimensional supergravity and the $\mathrm{AdS}_7$ vacuum can be understood as the standard Freund-Rubin vacuum describing the near horizon limit of an M5-brane.} We conclude with a discussion in section \ref{Discussion} in which we speculate about the meaning of our results.
We have included two appendices, the first of which contains some technical details concerning the group-theoretical calculation used to derive the dictionary between the embedding tensor and the fluxes, whereas in the second appendix the example of the no-scale Minkowski vacuum is worked out explicitly. In this example the smearing gives the gauged supergravity description and the localization is understood. \section{$\mathrm{AdS}_7$ vacua in massive IIA supergravity}\label{AdS7} We first review some of the results of \cite{Apruzzi:2013yva}, where all supersymmetric $\mathrm{AdS}_7 \times M_3$ solutions obtainable from type II supergravity were found. Solutions exist only in (massive) IIA supergravity and are supported by $H$ flux filling the internal space and spacetime-filling D6-branes whose backreaction also switches on a nontrivial profile for the dilaton and the $F_2$ flux. When the Romans mass is put to zero, the resulting solution (with D6-branes and anti-D6-branes at the poles of $M_3$, which is topologically an $S^3$) can be lifted to the well-known $\mathrm{AdS}_7 \times S^4$ Freund-Rubin solution of eleven-dimensional supergravity. Here, we will only address the features of the solutions that are necessary for this paper. The $\mathrm{AdS}_7$ vacua are intriguing for several other reasons, such as the appearance of diverging $H$ flux and the possible appearance of D8 stacks (that carry D6 charge). The local properties around the $H$ flux singularities of the massive $\mathrm{AdS}_7$ solution with D6-branes of \cite{Apruzzi:2013yva} had appeared earlier in \cite{Blaback:2011nz, Blaback:2011pn}. The solutions of the latter references have extra integration constants.
However, $\mathrm{AdS}_7$ solutions with extra integration constants are not expected to be globally well-defined, but they can serve as local solutions to which some geometry can be glued.\footnote{In the noncompact limit, this is the usual procedure, for which the solutions are a warped product of seven-dimensional Minkowski space times conformal $\mathbb{R}^3$. It is in this limit that the connection with supersymmetry-breaking branes can be made and that the appearance of extra integration constants is physical.} The globally well-defined solutions are supersymmetric and have no integration constants. As it turns out, the solutions do break supersymmetry when the D6 sources are smeared. In the smeared limit the internal space can be taken to be a round $S^3$, and with respect to the left-invariant modes on $S^3$ the solution was found to be stable in \cite{Blaback:2011nz}. \subsection{Supersymmetric $\mathrm{AdS}_7$ vacua from localized D6-branes} \subsubsection*{System of differential equations} The pure spinor approach allows one to rewrite the system of supersymmetry equations of type IIA supergravity on the background $\mathrm{AdS}_7 \times M_3$, plus the Bianchi identities for the fluxes, as a system of differential equations involving two differential forms $\psi^1$ and $\psi^2$ (associated with the internal geometry), and the fluxes. According to the Ansatz \begin{equation} \label{eq:ads7m3} \d s_{10}^2 = \textrm{e}^{2A} \d s_{\mathrm{AdS}_7}^2 + \d s^2_{M_3}\, , \end{equation} the supersymmetry parameters $\epsilon_1$, $\epsilon_2$ (two ten-dimensional Majorana-Weyl spinors with opposite chirality) are decomposed into an external spinor times an internal one: $\epsilon_i = (\zeta \otimes \chi_i \pm \zeta^c \otimes \chi_i^c ) \otimes v_{\pm}$, $i=1,2$, where the superscript $c$ denotes Majorana conjugation and the last factor $v_{\pm}$ is introduced to give $\epsilon_i$ the correct chirality in ten dimensions.
The polyforms $\psi^{1,2}$ are defined by the internal spinors $\chi_{1,2}$ via the Clifford map:\footnote{The map allows one to identify forms with bispinors: $dx^{m_1} \wedge \ldots \wedge dx^{m_p} \mapsto \gamma_{\alpha\beta}^{m_1 \ldots m_p}$. A slash $\cancel{\phantom{j}}$ over a form denotes its image under the Clifford map, i.e. the associated bispinor.} \begin{equation} \label{eq:psi12} \cancel{\psi^1} = \chi_1 \otimes \chi_2^\dagger\, ,\quad \quad \cancel{\psi^2} = \chi_1 \otimes {\chi_2^c}^\dagger\,. \end{equation} Together with the total RR flux $F=F_0 + F_2$ allowed by the background, they satisfy the equations \begin{subequations}\label{eq:sys73} \begin{align} &\d_H {\rm Im} (\textrm{e}^{3A-\phi} \, \psi^1_+) = -2 \textrm{e}^{2A-\phi} \, {\rm Re} \psi^1_-\ , \label{eq:73I1}\\ &\d_H {\rm Re} (\textrm{e}^{5A-\phi} \, \psi^1_+) = 4 \textrm{e}^{4A-\phi} \, {\rm Im} \psi^1_-\ , \label{eq:73R1}\\ &\d_H (\textrm{e}^{5A-\phi} \,\psi^2_+) = -4i \textrm{e}^{4A-\phi} \, \psi^2_-\ , \label{eq:732}\\ & \frac{1}{8} \textrm{e}^{\phi} \star_3 \lambda F = \d A \wedge {\rm Im} \psi^1_+ + \textrm{e}^{-A} \,{\rm Re} \psi^1_-\ , \label{eq:73f}\\ &\d A \wedge {\rm Re} \psi^1_- = 0 \label{eq:73dAR}\ , \\ &(\psi^{1,2}_+,\overline{\psi^{1,2}_-})=-\frac i2 \label{eq:73norm}\ . \end{align} \end{subequations} Here, $\phi$ is the dilaton and $A$ is the warping factor appearing in \eqref{eq:ads7m3}; $H$ is the NSNS flux, $\d_H = \d - H \wedge$ is the twisted exterior derivative and $\lambda$ acts on a $p$-form as $\lambda \alpha_p = (-)^{\lfloor \frac{p}{2} \rfloor} \alpha_p$. Finally, the subscript $\pm$ on $\psi^{1,2}$ indicates the even (odd) part of the polyform and $\left(\,,\right)$ is the usual Chevalley-Mukai pairing between forms (in particular \eqref{eq:73norm} fixes the norms of the internal spinors to one). To obtain genuine vacuum solutions, every physical field should depend only on $M_3$.
The system given above is equivalent to $\mathcal{N}=1$ supersymmetry on $\mathrm{AdS}_7 \times M_3$; any of its solutions is by construction a supersymmetric $\mathrm{AdS}_7$ vacuum.\footnote{It can be shown that solutions to \eqref{eq:sys73} are a subclass of solutions of the form $\mathrm{Mink}_6 \times M_4$ (by considering AdS as a warped product of Mink by a line). The system of type II supersymmetry equations on $\mathrm{Mink}_6 \times M_4$ in terms of pure spinors first appeared in \cite{Lust:2010by}.} \subsubsection*{Parametrization of $\psi^{1,2}$} In order to solve this system one proceeds by parametrizing the polyforms $\psi^{1,2}$ defined in \eqref{eq:psi12}. The most general parametrization of these forms is obtained (following the lines of \cite{Halmagyi:2007ft}) by noticing that the two internal spinors $\chi_1$ and $\chi_2$ define an identity$\times$identity structure on $T_{M_3} \oplus T^*_{M_3}$, since a norm-1 spinor $\chi$ in three dimensions is able to define a Dreibein $\left\lbrace e_a \right\rbrace_{a=1}^3$ (i.e. an identity structure). The spinors $\chi_{1,2}$ are then expanded on the basis $\left\lbrace \chi, \chi^c \right\rbrace$, the coefficients of this expansion being trigonometric functions of some angles $\theta_1$, $\theta_2$ and $\theta_3$, and the expansion is plugged into \eqref{eq:psi12}, in order to parametrize $\psi^{1,2}$. 
Employing this parametrization, the one-form parts of \eqref{eq:73I1}, \eqref{eq:73R1}, \eqref{eq:732} directly {\it determine} the Dreibein on $M_3$ (its components turn out to be combinations of derivatives of the angles), that is they {\it give} its metric $\d s^2_{M_3} = e_a e_a$, \eqref{eq:73f} gives the total RR flux, while the three-form part of the system determines $H$.\footnote{The equation of motion for $F_2$ turns out to be automatically satisfied, as the equation of motion for $H$, whereas the Bianchi identity for $F_2$ is a consequence of the explicit expressions of all fluxes, as determined by the system \eqref{eq:sys73}.} One is left with two genuine ODEs: one is the condition that $F_0$ should be piecewise constant (which is the content of its Bianchi identity) and the other one reads \begin{equation}\label{eq:413} x\,\d x = (1+ x^2) \d \phi - (5+x^2) \d A\, , \end{equation} with $x \equiv \cos(\theta_1) \sin(\theta_2)$. Finally, the two-form part of the system imposes $\phi$ to be functionally dependent on $A$ (i.e. $\d A \wedge \d \phi=0$); hence $x$ depends on $A$ too, as imposed by \eqref{eq:413}. In particular, the metric determined by the system has the form of a {\it fibration of a round $S^2$ over an interval parametrized by $A$}. However neither $A$ nor the other scalar parameters (the angles) were a priori intended as coordinates on the internal manifold: nevertheless, since the analysis of the system\footnote{That is, the explicit expressions of the metric and of the RR and NSNS fluxes obtained from the system.} has been so far only local, the angles can be promoted to coordinates on $M_3$, while it is wiser to introduce a new coordinate $r$ (defined by $\d r = 4\textrm{e}^A \frac{\sqrt{1-x^2}}{4x-F_0\, \textrm{e}^{A+\phi}}\d A$), to parametrize the base of the fibration. 
In these new coordinates the metric reads \begin{equation} \d s^2_{M_3} = \d r^2 + \frac1{16}\textrm{e}^{2A}\,(1-x^2)\d s^2_{S^2} \ ,\label{eq:met-r} \end{equation} and the system of residual ODEs is: \begin{subequations}\label{eq:oder} \begin{align} &\partial_r \phi = \frac{\textrm{e}^{-A}}{4\sqrt{1-x^2}} (12 x + (2x^2-5)F_0 \,\textrm{e}^{A+\phi}) \ ,\\ &\partial_r x = -\frac{1}{2} \textrm{e}^{-A}\sqrt{1-x^2} (4+x F_0 \, \textrm{e}^{A+\phi}) \ ,\\ &\partial_r A = \frac{\textrm{e}^{-A}}{4\sqrt{1-x^2}} (4x - F_0\, \textrm{e}^{A+\phi})\ . \end{align} \end{subequations} Now $r$ has become a coordinate on the base and $A$, $x$, $\phi$ have become functions of $r$. Moreover, to make $M_3$ compact, the $S^2$ fiber is required to shrink at two distinct values of $r$ (this is accomplished if $x$ goes to $\pm 1$ at the extrema of the base interval, see \eqref{eq:met-r}): thus, $M_3$ is topologically an $S^3$ and these two values are interpreted as its north, $r_{\mathrm{N}}$, and south, $r_{\mathrm{S}}$, poles.\footnote{The other possible compact $M_3 = S^1 \times S^2$ (topologically), which is obtained by compactifying the base interval, is excluded since it is incompatible with the system \eqref{eq:oder}.} In turn, the compact topology of the internal space imposes boundary conditions on the system that must be satisfied by the fields. A way of understanding the appearance of this $S^2$ is by considering the $\mathrm{Sp}(1) \cong \mathrm{SU}(2)$ R-symmetry group of a six-dimensional $(1,0)$ CFT, dual to any possible solution of the system; furthermore, its presence elucidates why the even- and odd-form parts of the bispinors $\psi^{1,2}$ can be organized as singlets and triplets of $\mathrm{SU}(2)$ \cite[Sec.~4.5]{Apruzzi:2013yva}. \subsubsection*{Massive solutions with localized D6-branes} Assuming $F_0 \neq 0$ everywhere on $M_3$, it is possible to construct a compact solution to \eqref{eq:oder} with nonzero D6 charge.
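As an illustration, the system \eqref{eq:oder} can be integrated numerically away from the poles. The sketch below uses arbitrary interior initial data and an arbitrary (non-quantized) value of $F_0$, so it does not construct an actual global vacuum; the right-hand side for $A$ is taken in the form $\partial_r A = (4x - F_0\,\textrm{e}^{A+\phi})/(4\textrm{e}^{A}\sqrt{1-x^2})$, as follows from the definition of the coordinate $r$ given above.

```python
import numpy as np
from scipy.integrate import solve_ivp

F0 = 0.2  # arbitrary illustrative value of the Romans mass, not a physical choice

def rhs(r, y):
    # y = (A, x, phi); right-hand sides of the ODE system, valid away from x = +-1.
    A, x, phi = y
    s = np.sqrt(1.0 - x * x)
    m = F0 * np.exp(A + phi)
    dA = np.exp(-A) * (4.0 * x - m) / (4.0 * s)
    dx = -0.5 * np.exp(-A) * s * (4.0 + x * m)
    dphi = 0.25 * np.exp(-A) * (12.0 * x + (2.0 * x * x - 5.0) * m) / s
    return [dA, dx, dphi]

# Integrate over a short interior range starting from generic regular data.
sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
A, x, phi = sol.y[:, -1]
print(f"r = {sol.t[-1]:.2f}: A = {A:.4f}, x = {x:.4f}, phi = {phi:.4f}")
```

A global solution would instead impose the boundary behavior $x \to \pm 1$ at the two poles, which is where the shooting analysis discussed below becomes relevant.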
Such a fully backreacted solution contains a stack of spacetime-filling D6-branes localized at the south pole (the stack being calibrated there). Around this pole the metric is singular, since it has the behavior one expects near a D6-brane, and the $H$ flux is divergent; nontrivially, the solution exhibits peculiar global properties: the north pole is a regular point for all fields, but one can also replace it by an anti-D6 stack or an O6-plane, still obtaining a globally well-defined solution (in these cases too, $H$ diverges near the sources). \subsection{Smearing breaks supersymmetry} We now prove that the system \eqref{eq:sys73} does not allow for any solution with smeared D6 charge. Smearing the system practically means that we will enforce the following conditions: \begin{equation} \label{eq:smear} \phi = \mathrm{const} \equiv \phi_0,\quad A=\mathrm{const},\quad F_2 = 0\ . \end{equation} Since for constant $A$ the warping factor $e^{2A}$ appearing in \eqref{eq:ads7m3} can be reabsorbed in the $\mathrm{AdS}_7$ metric, we will set $A=0$. Thus, the ten-dimensional metric takes the form of a direct product: \begin{equation}\label{eq:smemet} \d s_{10}^2 = \d s^2_{\mathrm{AdS}_7} + \d s^2_{M_3}\ . \end{equation} No condition is imposed on $H$ and $F_0$, so that, a priori, they are not identically zero. Under these assumptions the system \eqref{eq:sys73} simplifies, while \eqref{eq:73norm} holds unchanged. From \eqref{eq:413} and imposing \eqref{eq:smear}, we find \begin{equation} \label{eq:reldif} \tan(\theta_1)\, \d \theta_1 = \cot(\theta_2)\, \d \theta_2\ .
\end{equation} This is a nonlinear relation between two of the differentials (derivatives of the angles): integrating it gives $\cos(\theta_1)\sin(\theta_2) \equiv x = \mathrm{const}$, i.e. $\d x = 0$, consistent with \eqref{eq:413} for constant $A$ and $\phi$. It also induces a relation between two of the components of the Dreibein. Therefore, if we assume that $\left\lbrace e_a\right\rbrace$ is a well-defined Dreibein on $M_3$, the one-form equations cannot be solved together. We have thus shown that smearing breaks supersymmetry, since the smeared system does not allow for any supersymmetric solution. Nevertheless, it is possible to define the smeared limit of the massive solution with D6-branes of \cite{Apruzzi:2013yva} as a bona fide solution to the ten-dimensional equations of motion where delta sources are replaced by constants \cite{Blaback:2011nz, Blaback:2011pn}; it just breaks supersymmetry. \subsection{Non-supersymmetric $\mathrm{AdS}_7$ vacua and vacuum stability} Before we move towards the gauged supergravity analysis we want to address a few issues related to the existence of possible non-supersymmetric extensions of these $\mathrm{AdS}_7$ vacua and comment briefly on their stability. Our aim is to connect \cite{Apruzzi:2013yva} to \cite{Blaback:2011pn, Blaback:2012nf, Bena:2012tx}. Concerning non-supersymmetric extensions, one is required to solve the general second order differential equations for an Ansatz that has rotational symmetry \cite{Blaback:2011pn}. In Einstein frame the Ansatz is given by: \begin{align} & \d s^2_{10} = \textrm{e}^{2A(\theta)} \,\d s_{\mathrm{AdS}_7}^2 + \textrm{e}^{2B(\theta)}\left(\d \theta^2 + \sin^2(\theta) \,\d \Omega^2\right)\ ,\\ & H=\lambda F_0\, \textrm{e}^{\tfrac{7}{4}\phi}\star_3 1\ , \label{eq:Hflux}\\ & F_2 = \textrm{e}^{-\tfrac{3}{2}\phi-7A}\star_3\d\alpha\ , \end{align} where $\phi, \lambda$ and $\alpha$ are now functions depending on $\theta$, $\star_3$ contains the conformal factor and we take $F_0$ to be constant.
The equation of motion for $H$ enables one to eliminate $\alpha$ in terms of $\lambda$, \begin{equation}\label{alphaversuslambda} \alpha=\textrm{e}^{\tfrac{3}{4}\phi +7A}\, \lambda \ , \end{equation} where we have set the integration constant to zero by shifting $\alpha$. The problem is then reduced to finding a set of four unknown functions $A, B, \phi, \lambda$ depending on $\theta$ and obeying coupled second order differential equations. Around $\theta =0$ the general solution is given by \cite{Blaback:2011pn}: \begin{align} & \textrm{e}^{-A} = \theta^{-\frac{1}{16}}\Bigl(a_0 + a_1 \theta + \ldots \Bigr)\ ,\\ & \textrm{e}^{-2B}= \theta^{\frac{7}{8}}\Bigl(b_0 + b_1 \theta + \ldots \Bigr)\ ,\\ & \textrm{e}^{-\frac{1}{4}\phi}= \theta^{-\frac{3}{16}}\Bigl(f_0 + f_1 \theta + \ldots \Bigr)\ ,\\ & \lambda = \theta^{-1}\Bigl(\lambda_0 + \lambda_1 \theta + \ldots \Bigr)\ . \end{align} To determine the general integration constants, one investigates which of the Taylor expansion coefficients can be chosen freely. It turns out that there are five such constants and the rest can be determined in terms of these five \cite{Blaback:2011pn}: \begin{equation}\label{five} a_0,\ b_0,\ f_0,\ \lambda_0,\ \lambda_1\ . \end{equation} The reason can easily be understood. The ten-dimensional equations of motion can be interpreted as four coupled differential equations plus a Hamiltonian constraint. This would give seven integration constants. However, $A(0)$ and $B(0)$ can be understood as rescaling the $\mathrm{AdS}_7$ and $S^3$ radii, such that we can take them equal to zero. By fixing the D6 charge at the origin $\theta=0$ we enforce one algebraic condition among the constants in (\ref{five}), such that one is left with four independent integration constants. What was not done in \cite{Blaback:2011pn} is to check which of these local solutions can be extended consistently all the way down to the south pole. The way this should proceed is via the shooting method.
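Schematically, a shooting method tunes the free data at one boundary until the integrated solution satisfies the condition at the other boundary. A toy version for the simple test problem $y'' = -y$, $y(0)=0$, $y(1)=1/2$ (chosen purely for illustration; this is not the actual $\mathrm{AdS}_7$ system) reads:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shoot(slope):
    # Integrate y'' = -y from the left boundary with trial data y(0) = 0, y'(0) = slope.
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, slope],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]  # value of y at the right boundary

target = 0.5
# Adjust the trial slope until the right boundary condition y(1) = 1/2 is met.
slope = brentq(lambda s: shoot(s) - target, 0.0, 2.0)
print(f"matched slope y'(0) = {slope:.6f}")  # analytic answer: 0.5/sin(1)
```

In the problem at hand one would instead shoot from both poles, imposing the local expansions above at $r_{\mathrm{N}}$ and $r_{\mathrm{S}}$, and match all fields at the equator.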
One constructs the solutions near the north and south poles and evolves them towards the equator, where they have to connect smoothly. We expect that this introduces four extra constraints on the above integration constants, one for each degree of freedom ($A$, $B$, $\alpha$, $\phi$). This then fixes the solution \emph{uniquely} and implies that all the solutions to the second order equations must be supersymmetric, when one demands them to be globally well-defined.\footnote{This means that the BPS conditions of \cite{Apruzzi:2013yva} are required for a globally well-defined solution. This fixes for instance $\lambda_0 = \frac{a_0 f_0^5}{F_0}$, which can also be seen by comparing with the expression $H=-(6\textrm{e}^{-A}+xF_0\,\textrm{e}^{\phi})\mathrm{vol}_3$ given in \cite{Apruzzi:2013yva}.} This shows that the solutions have no moduli, as already noticed in \cite{Apruzzi:2013yva}, and that the solution is completely fixed by discrete topological data: the Romans mass $F_0$, the total flux integer $h=\int H$ and the way the D6 charges are distributed over the north and south poles of the $S^3$. The total D6 charge $Q_6$ is determined by the RR tadpole condition: \begin{equation} Q_6 = Q_{\text{south}} + Q_{\text{north}} = h F_0\ . \end{equation} Finally we address the issue of stability of the supersymmetric $\mathrm{AdS}_7$ solutions. Despite supersymmetry, one needs to worry about stability because of the $H$ singularity. As long as this singularity is not resolved the solution is not physical, and one possible interpretation is that such a solution simply does not exist and one should really consider a time-dependent solution that describes flux decaying against the D6 charges \cite{Blaback:2012nf}.\footnote{Supersymmetry is not a guarantee for existence and stability when singularities are present. Well-known examples are multi-centered BPS black holes \cite{Denef:2000nb}. When all centers are brought together one finds a BPS spherical solution with a naked singularity.
This solution is not physical and wants to evolve in time, separating the centers until they relax to their equilibrium position of the multi-centered solution.} The essential ingredient that decides on the fate of the solution is the resolution of the singularity. If the D6-branes can be replaced with spherical D8 shells that carry the same D6 charge then the singularity disappears \cite{Bena:2012tx}. This has been successful for those $\mathrm{AdS}_7$ solutions which do not carry net D6 charge ($Q_{\text{south}} =- Q_{\text{north}}$) \cite{Apruzzi:2013yva}. For the other solutions, a probe analysis in the noncompact limit has revealed that the polarization does not occur \cite{Bena:2012tx}. However this does not exclude that in the compact case, at some far enough distance from the pole, the formation of spherical D8-branes could occur. In fact, from a holographic point of view one is tempted to conclude that the class of $\mathrm{AdS}_7$ solutions with nonzero Romans mass could be dual to $(1,0)$ CFTs in $D=6$ \cite{Hanany:1997gh}. The existence of such CFTs would rule out the possibility of having an unstable vacuum and hence one expects D8-branes to polarize and resolve the singularity on the gravity side. However it is essential to understand that this does not relate to the fate of supersymmetry-breaking anti-branes in warped throats. It is only in the noncompact version of these compact $\mathrm{AdS}_7$ solutions, for which the worldvolume is Minkowski instead of AdS, that the D6 sources can be interpreted in terms of supersymmetry-breaking branes, which decay into noncompact ten-dimensional Minkowski spacetime \cite{Blaback:2012nf}. \section{Gauged supergravity in $D=7$ and IIA compactifications}\label{gauged} In this section we attempt to interpret a class of massive type IIA compactifications with smeared six-branes as gauged supergravities in $D=7$, possibly up to some explicit supersymmetry-breaking effects in the corresponding scalar potential. 
At first glance, since we are including half-BPS objects, one might think that half-maximal supergravity theories are the correct framework to analyze this problem. However, since the $\mathbb{Z}_{2}$ truncation worked out in \cite{Dibitetto:2012rk} relating the maximal theory to the half-maximal one can be interpreted as an O6 involution, only orientifold-allowed fluxes can be described by using the embedding tensor of the half-maximal theory. In particular, within such a framework, we will only be able to describe a compactification carrying nonzero Romans mass and NSNS three-form flux, but no ``metric flux''. In order to include this, we will need the full embedding tensor of the maximal theory. The reduction Ansatz, in string frame, that produces the result which we will be comparing our supergravity potentials with, reads \begin{equation} \d s^2_{10} \, = \, \tau^{-2} \d s_7^2 \, + \, \rho \, M_{ij}e^{i} \otimes e^{j} \ , \end{equation} where $\rho$ and $\tau$ are suitable combinations of the internal volume and ten-dimensional dilaton guaranteeing that the seven-dimensional Lagrangian is in the Einstein frame, whereas $M_{ij}$ parametrizes the $\textrm{SL}(3)/\textrm{SO}(3)$ coset, where $i, \, j \, = \, 1, \, 2, \, 3$ denote $\textrm{SL}(3)$ fundamental indices. The $e^i$'s are Maurer--Cartan one-forms and the structure constants of their algebra are denoted by $\omega$. 
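To make the geometric data explicit (with a sign convention for the Maurer--Cartan equations that may differ from other references by an overall orientation choice), the $e^i$ satisfy

```latex
\mathrm{d}e^{k} \, = \, -\frac{1}{2}\, {\omega_{ij}}^{k} \, e^{i}\wedge e^{j}\ ,
\qquad
{\omega_{ij}}^{k} \, = \, \epsilon_{ijl}\, q^{lk}\ ,
```

so that a symmetric $q^{ij}\,\propto\,\delta^{ij}$ describes the round $S^{3}$, while $q^{ij}\,=\,0$ corresponds to $\mathbb{T}^{3}$; it is this matrix $q$ that plays the role of the ``metric flux'' in what follows.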
By performing the reduction, one finds the following scaling properties for the different fluxes \begin{equation} \begin{array}{lclclc} V_{F_{0}} \, \sim \, f_{0}^{2} \, \rho^{3/2} \, \tau^{-7} & , & V_{H} \, \sim \, h^{2} \, \rho^{-3} \, \tau^{-2} & , & V_{\omega} \, \sim \, \omega^{2} \, \rho^{-1} \, \tau^{-2} & , \end{array} \end{equation} which imply that the above fluxes naturally transform in the following irreps of $\mathbb{R}^{+}_{\rho} \, \times \, \mathbb{R}^{+}_{\tau} \, \times \, \textrm{SL}(3)$: \begin{equation} \label{Flux_irreps} \hspace{-3mm} \begin{array}{lclclc} F_{0} \, = \, f_{0} \, \in \, \textbf{1}_{(-\frac{3}{4};\,+\frac{7}{2})} & , & H_{ijk} \, = \, h \, \epsilon_{ijk} \, \in \, \textbf{1}_{(+\frac{3}{2};\,+1)} & , & {\omega_{ij}}^{k} \, = \, \epsilon_{ijl} \, q^{lk} \, \in \, \textbf{6}^{\prime}_{(+\frac{1}{2};\,+1)} & , \end{array} \end{equation} where $q^{(ij)} \, = \, q^{ij}$. \subsection{Half-maximal gauged supergravity} Seven-dimensional half-maximal supergravity enjoys $\mathbb{R}^{+} \, \times \, \textrm{SL}(4)$ global symmetry. Note that $\mathrm{SL}(4) \cong \mathrm{SO}(3,3)$ and hence we are here only considering the theory coupled to three vector multiplets that are included in the closed-string sector. The extra $N$ arbitrary vector multiplets which contain open-string degrees of freedom (see appendix \ref{IIACOMP}) do not play any essential role in this analysis and hence including them would not change the corresponding result. The consistent deformations of this theory transform as \begin{equation} \label{ET_half} \begin{array}{cccccc} \Theta & \in & \underbrace{\textbf{1}_{(-4)}}_{p=3} & \oplus & \underbrace{\textbf{6}_{(+1)}\,\oplus\,\textbf{10}_{(+1)}\,\oplus\,\textbf{10}^{\prime}_{(+1)}}_{p=1} & , \end{array} \end{equation} where the subscripts on the different $\textrm{SL}(4)$ irreps denote $\mathbb{R}^{+}$ charges. 
This is in agreement with what was predicted in \cite{Bergshoeff:2007vb} by using the Kac-Moody approach, where it is also shown that the $\textbf{1}_{(-4)}$ corresponds to a ``$p=3$-type'' deformation, i.e. a St\"uckelberg-like massive deformation for the three-form field, thus not associated with any gauging. The other irreps instead correspond to gaugings; the $\textbf{6}_{(+1)}$ gauges the $\mathbb{R}^{+}$ factor and some subgroup of $\textrm{SL}(4)$, whereas gaugings in the $\textbf{10}_{(+1)}\,\oplus\,\textbf{10}^{\prime}_{(+1)}$ are purely within $\textrm{SL}(4)$. In what follows we will show how only the Romans mass and $H$ flux coming from compactifications of massive IIA supergravity with six-branes sit inside the deformations $\Theta$ introduced in \eqref{ET_half}. To this end, we will restrict ourselves to the relevant case\footnote{\label{foot_xi}The only known ten-dimensional construction giving rise to $\mathbb{R}^{+}$ gaugings parametrized by an embedding tensor in the $\textbf{6}_{(+1)}$ is introducing some dilaton flux $H_{i} \, \equiv \, \partial_{i}\phi$. This would turn on at most half of its components, but it goes beyond our present scope.} of purely $\textrm{SL}(4)$ gaugings combined with a massive deformation. Let us denote by $\theta \, \in \, \textbf{1}_{(-4)}$ the mass parameter and by $Q_{(mn)} \, \in \, \textbf{10}_{(+1)}$ and $\tilde{Q}^{(mn)} \, \in \, \textbf{10}^{\prime}_{(+1)}$ the embedding tensor components, where $m, n$ are fundamental $\textrm{SL}(4)$ indices. In this case, the non-vanishing Quadratic Constraints (QC) needed for the closure of the gauge algebra reduce to \cite{Dibitetto:2012rk} \begin{equation} \label{QC_half} \begin{array}{cccc} \tilde{Q}^{mp} \, Q_{pn} \, - \, \dfrac{1}{4} \, \left(\tilde{Q}^{pq} \, Q_{pq}\right) \, \delta_{n}^{m} & = & 0 & , \end{array} \end{equation} transforming in the $\textbf{15}_{(+2)}$ of $\textrm{SL}(4)$. 
In terms of the scalars of the theory, which span \begin{equation} \begin{array}{cccc} \underbrace{\mathbb{R}^{+}}_{\Sigma} & \times & \underbrace{\frac{\textrm{SL}(4)}{\textrm{SO}(4)}}_{\mathcal{M}_{mn}} & , \end{array} \end{equation} the scalar potential induced by the above deformation parameters can be written as\footnote{Please note that we have chosen the normalization of the mass parameter $\theta$ such that the gauge coupling $g$ factorizes the whole potential given in \eqref{V_Half_Max}.} \begin{equation} \label{V_Half_Max} \begin{array}{ccrlc} V & = & \dfrac{g^{2}}{64} & \bigg[\theta^{2} \, \Sigma^{8} \, + \, \dfrac{1}{4} \, Q_{mn} \, Q_{pq} \, \Sigma^{-2} \, \left(2 \, \mathcal{M}^{mp} \, \mathcal{M}^{nq} \, - \, \mathcal{M}^{mn} \, \mathcal{M}^{pq} \right) & + \\[3mm] & & + & \dfrac{1}{4} \, \tilde{Q}^{mn} \, \tilde{Q}^{pq} \, \Sigma^{-2} \, \left(2 \, \mathcal{M}_{mp} \, \mathcal{M}_{nq} \, - \, \mathcal{M}_{mn} \, \mathcal{M}_{pq} \right) & + \\[3mm] & & - & \theta \, \left(\tilde{Q}^{mn} \, \Sigma^{3} \, \mathcal{M}_{mn} \, + \, Q_{mn} \, \Sigma^{3} \, \mathcal{M}^{mn}\right) \, + \, Q_{mn} \, \tilde{Q}^{mn} \, \Sigma^{-2}\bigg] & , \end{array} \end{equation} where $\mathcal{M}^{mn}$ denotes the inverse of $\mathcal{M}_{mn}$. The first step to obtain such an expression for the scalar potential $V$ is $\mathbb{Z}_{2}$ truncating the maximal theory \cite{Samtleben:2005bp} as described in \cite{Dibitetto:2012rk}. This gives rise to a particularly constrained half-maximal theory where the following extra QC are satisfied \begin{equation} \label{QC_extra} \begin{array}{ccccccccccc} \tilde{Q}^{pq} \, Q_{pq} & = & 0 & & \textrm{and} & & & \theta \, \tilde{Q}^{mn} & = & 0 & . \end{array} \end{equation} Subsequently one can observe that the general scalar potential of the half-maximal theory will contain the two above terms with some coefficients. 
Finally, one can fix those coefficients by performing a $\mathbb{T}^{2}$ reduction down to $D=5$ and comparing the result with the corresponding terms in the scalar potential of \cite{Schon:2006kz}. The relation between embedding tensor components and fluxes reads:\footnote{See appendix~\ref{app:dictionary} for some details concerning the derivation.} \begin{equation} \label{dictionary_fluxes_half} \begin{array}{lclc} \theta \, = \, - \dfrac{1}{\sqrt{2}} \, h & , & \tilde{Q}^{00} \, = \, \sqrt{2} \, f_{0} & . \end{array} \end{equation} Concerning the scalar sector, the $\textrm{SL}(3)$ sector parametrized by $M_{ij}$, which is naturally obtained by dimensional reduction, is embedded inside $\mathcal{M}_{mn}$ in the following way \begin{equation} \label{SL4_scalars} \mathcal{M}_{mn} \, = \, \left( \begin{array}{c | c} \Phi^{3} & \\ \hline & \\[-2mm] & \Phi^{-1} \, M_{ij} \end{array}\right) \ . \end{equation} Now, by inserting the parametrization of the $\textrm{SL}(4)$ scalars given in \eqref{SL4_scalars} and the dictionary \eqref{dictionary_fluxes_half} in the expression of the scalar potential \eqref{V_Half_Max}, one finds \begin{equation} V \, = \, \frac{g^{2}}{128} \, \left(\frac{h \, \Sigma^{5} \, + \, f_{0} \, \Phi^{3}}{\Sigma}\right)^{2} \ . \end{equation} This coincides with the expression obtained in \cite{Blaback:2011nz} adapted to the case with no metric flux,\footnote{Please note that we have adopted different conventions w.r.t. 
\cite{Blaback:2011nz}, where $V$ is obtained from a reduction in the string frame, thus directly being a function of the ten-dimensional dilaton $\phi$ and the volume modulus $v$.} \begin{equation} \label{V_Fluxes_half} V \, = \, \frac{\left(h \, \tau^{5/2} \, + \, f_{0} \, \rho^{9/4}\right)^{2}}{2 \, \rho^{3} \, \tau^{7}} \ , \end{equation} by choosing $g \, = \, 8$, $h \, f_{0}$ equal to the D6 (or anti-D6, depending on its sign) tension and upon using the following mapping between the $\mathbb{R}^{+}$ scalars \begin{equation} \label{dictionary_scalars} \begin{array}{lclccclclc} \Sigma & \equiv & \rho^{-3/8} \, \tau^{-1/4} & , & & \Phi & \equiv & \rho^{1/8} \, \tau^{-5/4} & \end{array} \end{equation} (which can be derived as a consequence of \eqref{dictionary_weights}). When the D6 tension is negative, as is the case for O6-planes, then there is a stable Minkowski vacuum for those values of the fields such that $h \, \tau^{5/2} \, + \, f_{0} \, \rho^{9/4}=0$. At this Minkowski point a certain combination of $\rho$ and $\tau$ remains massless. This seven-dimensional gauged supergravity with a no-scale structure is discussed in detail in appendix \ref{IIACOMP}. This Minkowski solution, as a solution to seven-dimensional gauged supergravity, solves the ten-dimensional equations of motion in the smeared O6 case. But the warped version, with fully localized O6-planes, is known as well \cite{Blaback:2010sj}.\footnote{It would be useful to use this vacuum solution as an explicit background to investigate some of the issues raised in \cite{McOrist:2012yc} and \cite{Saracco:2012wc}.} Even more, it is possible to map the BPS domain wall flows in that gauged supergravity to ten-dimensional solutions with localized O6-planes \cite{Blaback:2012mu, toappear}. 
The analysis in \cite{Blaback:2012mu, toappear} shows a perfect match between the conditions in the seven-dimensional gauged supergravity from smeared O6-planes and the ten-dimensional supersymmetry conditions with localized O6-planes. This matching is generally to be expected and this is why we consider it worth emphasizing that the $\mathrm{AdS}_7$ solutions in massive IIA display the opposite behaviour. \subsection{Maximal gauged supergravity} Now let us move to the maximal theory, which will allow us to include metric flux in our discussion. Maximal supergravity in $D=7$ enjoys $\textrm{SL}(5)$ global symmetry. The consistent deformations of this theory (all corresponding to gaugings) transform as \cite{Samtleben:2005bp} \begin{equation} \label{ET} \begin{array}{cccccc} \Theta & \in & \underbrace{\textbf{15}}_{Y_{MN}} & \oplus & \underbrace{\textbf{40}^{\prime}}_{Z^{MN,P}} & , \end{array} \end{equation} where the Linear Constraint (LC) implies $Y_{(MN)} \, = \, Y_{MN}$ and $Z^{[MN],P} \, = \, Z^{MN,P}$ with $Z^{[MN,P]} \, = \, 0$, where $M, \, N, \, P$ denote fundamental $\textrm{SL}(5)$ indices. The following Quadratic Constraints (QC) are needed for the closure of the gauge algebra \begin{equation} \label{QC_max} \begin{array}{cccc} Y_{MQ} \, Z^{QN,P} \, + \, 2 \, \epsilon_{MRSTU} \, Z^{RS,N} \, Z^{TU,P} & = & 0 & , \end{array} \end{equation} transforming in the $\textbf{5}^{\prime} \, \oplus \, \textbf{45}^{\prime} \, \oplus \, \textbf{70}^{\prime}$ of $\textrm{SL}(5)$. The scalars of the theory describe fourteen propagating degrees of freedom and are parametrized by an element $\mathcal{M}_{MN}$ of the coset $\textrm{SL}(5)/\textrm{SO}(5)$. 
The embedding tensor deformations introduced in \eqref{ET} induce the following scalar potential \begin{equation} \label{V_Max} \begin{array}{ccrlc} V & = & \dfrac{g^{2}}{64} & \bigg[Y_{MN} \, Y_{PQ} \, \left(2 \, \mathcal{M}^{MQ} \, \mathcal{M}^{NP} \, - \, \mathcal{M}^{MN} \, \mathcal{M}^{PQ} \right) & + \\[3mm] & & + & 64 \, Z^{MN,P} \, Z^{QR,S} \, \mathcal{M}_{MQ} \, \left(\mathcal{M}_{NR} \, \mathcal{M}_{PS} \, - \, \mathcal{M}_{NP} \, \mathcal{M}_{RS} \right)\bigg] & . \end{array} \end{equation} In what follows we will construct the dictionary between the above deformation parameters $\Theta$ and fluxes in compactifications of massive IIA supergravity with D6-branes. To this end, we will restrict ourselves to those embedding tensor components which have some ten-dimensional origin in this duality frame. The explicit form of the dictionary embedding tensor/fluxes reads: \begin{equation} \label{dictionary_fluxes_max} \begin{array}{lclclc} Y_{++} \, = \, 4\sqrt{2} \, h & , & Z^{-+,-} \, = \, - Z^{+-,-}\, = \, \dfrac{1}{\sqrt{2}} \, f_{0} & , & Z^{i-,j} \, = \, - Z^{-i,j} \, = \dfrac{1}{\sqrt{2}} \, q^{ij} & . \end{array} \end{equation} Concerning the scalar sector, $\Sigma$, $\Phi$ and $M_{ij}$ are embedded in the following way inside the element $\mathcal{M}_{MN}$ of the $\textrm{SL}(5)/\textrm{SO}(5)$ coset: \begin{equation} \label{SL5_scalars} \mathcal{M}_{MN} \, = \, \left( \begin{array}{c | c | c} \Sigma^{-4} & & \\ \hline & \\[-2mm] & \Sigma \, \Phi^{3} & \\ \hline & \\[-2mm] & & \Sigma \, \Phi^{-1} \, M_{ij} \end{array}\right) \ . 
\end{equation} Now, by inserting the parametrization of the $\textrm{SL}(5)$ scalars given in \eqref{SL5_scalars} and the dictionary \eqref{dictionary_fluxes_max} in the expression of the scalar potential \eqref{V_Max}, one finds: \begin{equation} V \, = \, \frac{g^{2}}{2} \, \left[h^{2} \, \Sigma^{5} \, + \, f_{0}^{2} \, \Sigma^{-2} \, \Phi^{6} \, + \, \Sigma^{3} \, \Phi \, \left(2\, \textrm{Tr}(q\,M\,q\,M) \, - \, \textrm{Tr}(q\,M)^{2}\right)\right] \ . \end{equation} By making use of the dictionary \eqref{dictionary_scalars} for the $\mathbb{R}^{+}$ scalars to compare the above expression with\footnote{The following expression was obtained in \cite{Blaback:2011nz} by means of a reduction of massive IIA supergravity with smeared six-branes.} \begin{equation} \label{V_Fluxes_max} V \, = \, \frac{\left(h \, \tau^{5/2} \, + \, f_{0} \, \rho^{9/4}\right)^{2}}{2 \, \rho^{3} \, \tau^{7}} \, + \, \rho^{-1} \, \tau^{-2} \, \left(\textrm{Tr}(q\,M\,q\,M) \, - \, \frac{1}{2} \, \textrm{Tr}(q\,M)^{2}\right)\ , \end{equation} one finds that they only coincide when the $h \, f_{0}$ term corresponding to the tadpole generated by the smeared sources is absent. \subsection*{Summarizing} The scalar potential given in \eqref{V_Fluxes_max} coming from reductions of massive IIA supergravity with smeared D6 charge can be written in two different ways: \begin{equation} \begin{array}{lclclc} V_{(\textrm{IIA})} & = & V_{(\textrm{half-max.})} \, + \, \omega^{2} \, \rho^{-1} \, \tau^{-2} & = & V_{(\textrm{max.})} \, + \, T_{6} \, \rho^{-3/4} \, \tau^{-9/2} & , \end{array} \end{equation} where $\omega^{2} \, \propto \, \left(\textrm{Tr}(q\,M\,q\,M) \, - \, \frac{1}{2} \, \textrm{Tr}(q\,M)^{2}\right)$ and $T_{6} \, \propto \, h \, f_{0}$. The maximal and half-maximal gauged supergravities thus each miss a term in the scalar potential $V_{(\textrm{IIA})}$. 
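The missing piece can be made fully explicit: expanding the square in \eqref{V_Fluxes_max}, the cross term between the two fluxes reads

```latex
\frac{2 \, h \, f_{0} \, \tau^{5/2} \, \rho^{9/4}}{2 \, \rho^{3} \, \tau^{7}}
\, = \, h \, f_{0} \, \rho^{-3/4} \, \tau^{-9/2} \ ,
```

which is precisely the tadpole contribution $T_{6} \, \rho^{-3/4} \, \tau^{-9/2}$ with $T_{6} \, \propto \, h \, f_{0}$ that the maximal theory fails to reproduce, while the half-maximal theory instead misses the metric-flux term $\omega^{2} \, \rho^{-1} \, \tau^{-2}$.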
Since these are the only existing consistent supergravity theories in $D=7$, we conclude that the scalar potentials coming from this class of compactifications do not admit any gauged supergravity description in general. \section{Discussion} \label{Discussion} The main message of this paper is twofold. First we observe that there exists an $\mathrm{AdS}_7$ flux vacuum with sixteen supercharges that has no gauged supergravity description in seven dimensions. Second, this supersymmetric vacuum is such that supersymmetry is broken when the spacetime-filling branes, required for the existence of the vacuum, are smeared over the internal manifold. These two observations are related since gauged supergravity descriptions are typically obtained from smearing branes and orientifold planes. However this would still allow the possibility that the flux vacuum be a $\mathcal{N}=0$ solution in a seven-dimensional gauged supergravity. We have verified that this cannot be the case. The practical obstacle for the smeared $\mathrm{AdS}_7 \times S^3$ solution to be part of a seven-dimensional gauged supergravity is the $S^3$ geometry. Although $S^3$ is a group manifold and the smeared $\mathrm{AdS}_7$ solution has only left-invariant modes excited, it turns out that calibrated smeared D6-branes are only allowed in flat internal geometries. We have shown this using the embedding tensor formalism. The essential mechanism behind this is the observation that compactifications with spacetime-filling sources and extended supersymmetry require the O$p$ involutions \cite{Dibitetto:2011eu, Dibitetto:2012rk}, even when the only sources would be D$p$-branes. In our case O6 involutions would project out the ``metric flux'' of the $S^3$ and only allow $\mathbb{T}^3$ as an internal geometry. The simple observations made in this note illustrate that gauged supergravity is a restrictive tool when it comes to classifying flux vacua, even when these vacua preserve many supercharges. 
It is natural to wonder about the existence of the effective field theory description of the low-energy fluctuations around the $\mathrm{AdS}_7$ vacuum, since one would naively expect this to be given by a half-maximal gauged supergravity. The reason this is not the case is the absence of a parametric separation between the $\mathrm{AdS}_7$ curvature radius and the KK scale. This absence implies that there is no lower-dimensional effective field theory. An observer in this spacetime will always see all of the ten spacetime dimensions. This is similar to the standard Freund-Rubin vacua, although they admit a gauged supergravity description. But these gauged supergravities should not be regarded as effective field theories. They are rather consistent truncations of ten-dimensional degrees of freedom that combine into lower-dimensional supergravity multiplets, since FR vacua are always perceived as higher-dimensional by an observer. Therefore we conjecture that our observations cannot occur for compactifications that are genuinely lower-dimensional in the sense of a parametric scale separation between the AdS curvature radius and the KK radius. Such supersymmetric AdS vacua should always be obtainable from lower-dimensional supergravities and smearing should not break supersymmetry. \subsection*{Acknowledgements} We have benefited from useful correspondence with Frederik Denef, Adolfo Guarino, Joe Minahan, Alessandro Tomasiello and Marco Zagermann. The work of U.D. and G.D. is supported by the Swedish Research Council (VR), and the G\"oran Gustafsson Foundation. The work of M.F.~was partially supported by the ERC Advanced Grant ``SyDuGraM'', by IISN-Belgium (convention 4.4514.08) and by the ``Communaut\'e Fran\c{c}aise de Belgique" through the ARC program. M.F.~is a Research Fellow of the Belgian FNRS-FRS. The work of T.V.R. is supported by a Pegasus Marie Curie fellowship of the FWO.
\section{Introduction} Lattice regularizations provide a definition of quantum field theories beyond perturbation theory. Evaluating the associated path integral by Monte Carlo also constitutes a non-perturbative calculational method to derive predictions from the theory. One of the systematic effects that have to be taken into account is the dependence of results on the lattice spacing $a$ (we assume a hyper-cubic lattice throughout) or in other words the size of discretization errors, \begin{equation} \label{eq:cutoffeffect} \Delta_\mathcal{P}(a) = \mathcal{P}(a) - \mathcal{P}(0)\,, \end{equation} associated with a dimensionless observable $\mathcal{P}$ of the theory. As a start, one may consider the classical field theory. One then has smooth fields, and the lattice-Lagrangian can simply be Taylor expanded. It is the continuum one up to terms suppressed by powers of~$a$. One may therefore think that also in the full, quantized, theory the small-$a$ behavior of the discretization errors is $\Delta_\mathcal{P}(a) = p_1 a^{{n_\mathrm{min}}} + p_2 a^{{n_\mathrm{min}}+1} + \ldots $ with the integer ${n_\mathrm{min}}$ given by the first non-zero power in the classical Taylor expansion of the Lagrangian. However, the divergences of quantum field theories spoil this behavior. Still, precise statements can be made about the small-$a$ expansion, based on Symanzik's effective theory (SymEFT) ~\cite{Symanzik:1979ph,Symanzik:1981hc,Symanzik:1983dc,Symanzik:1983gh}, see also~\cite[p.~39ff.]{Weisz:2010nr}. It describes the small-$a$ behavior by an effective field theory with a local Lagrangian \begin{equation}\label{eq:effLagrangian} \L_\text{eff}(x)=\L+a\dlatt[1]{\L}(x)+a^2\dlatt[2]{\L}(x)+\ldots\,. \end{equation} The effective theory can be thought of as a continuum effective theory, regularized e.g. by dimensional regularization. 
The first term is the continuum Lagrangian $\L$ of the fundamental field theory and $\dlatt[d]{\L}(x)$ are local Lagrangians of higher mass dimension. The leading term in \eq{eq:cutoffeffect} is then given by the one\footnote{We will be more precise below.} with the lowest mass dimension in \eq{eq:effLagrangian}, i.e. $\dlatt[1]{\L}(x)$, unless it vanishes. The corrections $\dlatt[d]{\L}(x)$ can be written as a linear combination of basis operators $\mathcal{B}_i(x)$ with the appropriate canonical mass dimensions. Renormalization of the effective theory introduces anomalous dimensions for the operators $\mathcal{B}_i$. It may therefore modify the small-$a$ expansion to $\Delta_\mathcal{P}(a) = p_1 a^{{n_\mathrm{min}} + \eta} + \ldots $ with, in general, non-integer $\eta$. The (leading) anomalous dimension $\eta$ is in general a non-perturbative quantity, but it may sometimes be estimated by perturbation theory in the $\epsilon$-expansion, see \cite{ZinnJustin:2002ru}. We now turn to asymptotically free theories such as QCD. There, small $a$ means weak coupling at the scale of the lattice cutoff and the anomalous dimension 1) can be computed in perturbation theory and 2) leads to a modification of $a^n$ by logs \cite{Symanzik:1979ph,Symanzik:1981hc,Balog:2009yj,Balog:2009np}, \begin{equation} \Delta_\mathcal{P}(a) = p_1 [-\log(a\Lambda)]^{-\hat{\gamma}} \,a^{{n_\mathrm{min}}} + \ldots \label{eq:logcorr} \end{equation} and not by fractional powers. The intrinsic scale of the theory, $\Lambda$, is a renormalization group invariant and the exponent $\hat{\gamma}$ is proportional to a one-loop anomalous dimension. Since the work of \cite{Luscher:1991wu}, continuum extrapolations are routinely performed in order to obtain quantitative numbers for continuum field theory observables. They have been carried out with just powers\footnote{Sometimes an additional power of $\gbar^2(a^{-1}) \sim [-\log(a\Lambda)]^{-1}$ has been used when a tree-level improved action is used. 
Here $\gbar^2$ is the running coupling in some scheme.} of $a$, thus implicitly {\em assuming that $\hat{\gamma}$ is small}. Of course this cannot really be taken for granted until $\hat{\gamma}$ is known from a computation. We here start to fill this gap. Note that the logarithmic corrections in \eq{eq:logcorr} can be very relevant. An explicit example is provided by the seminal work of Balog, Niedermayer and Weisz \cite{Balog:2009yj,Balog:2009np}. It concerns the 2-d O(3) sigma model where the leading term is $\hat{\gamma} =-3$ and the logarithmic corrections change the naive $a^2$ behavior to a shape which numerically looks like $a$ in a broad range of $a\Lambda$ \cite{Balog:2009yj,Balog:2009np}. This numerical behavior led to quite some concern \cite{Knechtli:2005jh} and the computation of the logarithmic corrections by Balog, Niedermayer and Weisz was essential to confirm that the SymEFT description holds and put continuum extrapolations on a solid ground. In lattice QCD, knowledge of the leading power of the logarithms (and partly even awareness of the issue) is still missing; in particular it is important to have a confirmation that $\hat\gamma$ is small as is usually assumed. Let us cite Peter Weisz \cite{Weisz:2010nr}: \\[1ex] {\it The program should be carried out for lattice actions used for large scale simulations of QCD, when technically possible, in order to check if potentially large logarithmic corrections to lattice artifacts predicted by perturbative analysis appear. } \\[1ex] Ten years later, as a first step, we do carry out the program in the pure Yang-Mills (YM) theory as well as in Wilson's lattice QCD without non-perturbative $\mathrm{O}(a)$ improvement. The latter case is rather simple and basically given by results in the literature. We will therefore discuss only the YM theory in detail and just mention the difference and results in Wilson's QCD in \sect{s:Wils}. 
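To make the impact of such a log enhancement concrete: for $\Delta_\mathcal{P}(a) \propto a^2\,[-\log(a\Lambda)]^{3}$ (the $\hat\gamma=-3$ case just mentioned) the local slope in a double-logarithmic plot is $2-3/[-\log(a\Lambda)]$, which equals $1$ at $a\Lambda=\mathrm{e}^{-3}\approx 0.05$. A few lines of purely illustrative code, not tied to any particular lattice data, confirm this:

```python
# Local log-log slope of Delta(a) = a^2 * (-log(a*Lambda))^3,
# i.e. the gamma-hat = -3 case of the 2-d O(3) model: the naive a^2
# approach to the continuum masquerades as a much smaller power.
import math

def delta(a, Lam=1.0):
    """Model lattice artifact a^2 [-log(a Lambda)]^3, valid for a*Lam < 1."""
    return a**2 * (-math.log(a * Lam))**3

def local_slope(a, eps=1e-5):
    """d log Delta / d log a, by a symmetric finite difference."""
    up, dn = a * math.exp(eps), a * math.exp(-eps)
    return (math.log(delta(up)) - math.log(delta(dn))) / (2 * eps)

# Analytically the slope is 2 - 3/(-log(a*Lambda)); at a*Lambda = e^{-3}
# it equals 1: in this range the artifacts shrink like a, not a^2.
slope = local_slope(math.exp(-3.0))
```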
\subsection*{Scope}\addcontentsline{toc}{subsection}{Scope} In addition to the discretization effects due to the terms $\dlatt[d]{\L}$ in the effective Lagrangian, correlation functions of local fields $\Phi(x)$ also get $a$-effects from corrections to the fields $\Phi(x)$ represented in the SymEFT \cite{Heatlie:1990kg, Luscher:1996sc}. Apart from mostly restricting ourselves to the YM theory, we also do not discuss these additional discretization effects. They are absent in quantities which are independent of details of the local fields. We call those spectral quantities, since the spectrum of the Hamiltonian is the important application. In the YM theory, correlation functions themselves have so far not played a relevant role, apart from one notable exception. The exception is the new sector of Gradient flow observables \cite{Narayanan:2006rf,Luscher:2010iy}. We leave its treatment to future work. \section{Symanzik effective theory and logarithmic corrections to $a^n$ behavior} \label{s:sym} We consider YM theory in 4 dimensions defined by the action \bes S_\mathrm{lat} &=& \frac2{g_0^2}\, \sum_{x,\mu>\nu=0}^3\, p(x,\mu,\nu)\,, \nonumber \\[-1ex]\label{eq:Slat}\\[-1ex] \nonumber && p(x,\mu,\nu)=\Re\,\tr\,(1-U(x,\mu)U(x+a\hat\mu,\nu)U^{-1}(x+a\hat\nu) U^{-1}(x,\nu) ) \ees in terms of the link variables $U(x,\mu) \in $~SU(N), connecting $x+a\hat\mu$ and $x$. We assume a lattice with periodic boundary conditions in space and infinite (or arbitrarily large) time extent.\footnote{ In practice, finite lattices are of course needed for the Monte Carlo evaluation. 
The appropriate modifications of equations such as \eq{eq:mass} are standard.} As an example of a simple observable, $\mathcal{P}$, take a ratio of glue-ball masses, which may be defined as ($\partial_\mu^\mathrm{lat} f(x) =[f(x+a\hat\mu)-f(x)]/a$ and $x=(x_0,\vecx)$) \bes \label{eq:mass} \mathcal{P}=\mh_i/\mh_j,\quad \mh_i = -\lim_{x_0\to\infty} \partial_{0}^\mathrm{lat} \log\left( a^3\sum_\vecx C_i(x) \right) \,, \ees in terms of a two-point function \bes \label{eq:Cx} C_i(x-y) &=& \langle\, \Phi_i(x) \Phi_i(y) \,\rangle_\lat^\con \ees The gauge invariant fields $\Phi_i(x)$ are formed out of small (with a maximal extent $r_\mathrm{w}$ with $r_\mathrm{w}/a$ fixed) spatial Wilson loops, combined in such a way as to have a definite transformation under the lattice cubic group. A very simple example is the scalar field $\Phi_1(x)=Z_{F^2}\sum_{k,l\in\{1,2,3\}}p(x,k,l)$. For simplicity we assume in the following that the renormalization factors, such as $Z_{F^2}$ are determined such that they do not introduce any cutoff effects. In perturbation theory minimal (lattice) subtraction has this property. Expectation values are defined by the lattice path integral \bes \label{eq:obslat} \langle F(U) \rangle_\lat = \frac1{\cal Z}\int \prod_{x,\mu}\rmd U(x,\mu) \rme^{-S_\lat(U)} F(U) \,, \quad \ees where $\cal Z$ normalizes such that $\langle 1 \rangle_\lat=1$, $F(U)$ stands for a function of any number of link variables $U(x,\mu)$ and $\rmd U(x,\mu)$ is the invariant Haar measure. The label ``con'' stands for connected correlation functions, namely the subtraction of $[\langle\, \Phi_i(x)\,\rangle_\lat]^2$ in \eq{eq:Cx}. Note that while $C_i(x)$ depend on the details of the definition of $\Phi_i(x)$, the masses $\mh_i$ only depend on the quantum numbers of the field $\Phi(x)$. Masses or more generally energies are spectral quantities. SymEFT gives the small-$a$ expansion of correlation functions such as $C_i(x)$ in the form of a continuum effective field theory. 
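As a quick sanity check of the definition \eq{eq:mass}: for a synthetic correlator $C(x_0)=A\,\mathrm{e}^{-m x_0}\,(1+r\,\mathrm{e}^{-\Delta E\, x_0})$ the lattice derivative of $\log C$ recovers $m$ exactly once the excited-state admixture has decayed. A minimal sketch, with all numbers invented for illustration:

```python
# Effective mass from the lattice derivative of log C on a synthetic
# correlator C(x0) = A exp(-m x0) * (1 + r exp(-dE x0)): the plateau at
# large x0 recovers m, independently of the amplitudes A and r.
import math

a, m, dE, A, r = 0.1, 0.7, 1.5, 2.3, 0.4   # spacing, masses, amplitudes (invented)

def C(x0):
    return A * math.exp(-m * x0) * (1.0 + r * math.exp(-dE * x0))

def m_eff(x0):
    """-d_0^lat log C = -[log C(x0 + a) - log C(x0)] / a."""
    return -(math.log(C(x0 + a)) - math.log(C(x0))) / a

m_small = m_eff(0.5)    # still contaminated by the excited state
m_large = m_eff(12.0)   # plateau: agrees with m to high accuracy
```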
The central statement is \bes \label{eq:Cxexp} C(x) = C^\mathrm{cont}(x) + a^{n_\mathrm{min}} \delta C(x) + \mathrm{O}(a^{{n_\mathrm{min}}+1}) \ees and the expansion on the r.h.s. can be obtained from the effective continuum field theory with effective Lagrangian \eq{eq:effLagrangian} supplemented by contributions due to the correction terms of the fields \cite{Heatlie:1990kg,Luscher:1996sc} \begin{equation}\label{eq:discretisedQuantity} \eff[]{\Phi}(x)=\Phi(x)+a\dlatt[1]{\Phi}(x)+a^2\dlatt[2]{\Phi}(x)+\ldots\,. \end{equation} Let us mention right away that ${n_\mathrm{min}}=2$ in the considered YM theory. For precise statements we need to specify \begin{enumerate} \item the rules of the EFT, i.e. how precisely are $\delta C(x)$ defined in terms of $\dlatt{\L}(x), \dlatt{\Phi}(x)$, \label{it:rules} \item which local operators contribute to $\dlatt{\L}(x)$, $\dlatt{\Phi}(x)$, \label{it:fields} \item how are the parameters of the EFT determined, in other words how are the coefficients of those operators contributing to $\dlatt{\L}(x), \dlatt{\Phi}(x)$ determined. \label{it:match} \end{enumerate} We discuss these items in turn. \noindent {\bf \ref{it:rules}.} The correction terms $\dlatt{\L}(x)$ etc. have canonical mass dimension $4+d$. A path integral with weight $\rme^{-\int\rmd^4x \L_\text{eff}(x)}$ is thus not renormalizable. Path integral expectation values are {\em defined} by expanding in the parameter $a$ before integrating over the fields. 
For our example, \eq{eq:Cxexp}, we then have as a definition of $\delta C(x)$ \bes \label{eq:C1eff} \delta C(x) &=& \delta C^\L(x) + \delta C^\Phi(x) \,, \\ \label{eq:C1S} \delta C^\L(x-y) &=&-\int\rmd^4 z \, \langle\, \Phi(x) \Phi(y)\, \dlatt[2]{\L}(z)\,\rangle_\contrs^\con \\ \label{eq:C1Phi} \delta C^\Phi(x-y) &=&\langle\, \delta \Phi(x) \Phi(y)\,\rangle_\contrs^\con + \langle\, \Phi(x) \delta \Phi(y)\,\rangle_\contrs^\con \ees where $\langle\, X \,\rangle_\contrs^\con$ is given by the standard continuum connected correlation function with continuum Lagrangian \bes \label{eq:scont} \L_\contrs(A) = -\frac{1}{2g_0^2}\,\sum_{\mu,\nu}\tr(F_{\mu\nu}(A) F_{\mu\nu}(A))\,, \quad F_{\mu\nu}(A) = [D_\mu(A),D_\nu(A)]\,, \ees written in terms of the covariant derivative \bes \label{eq:Dmu} D_\mu(A) = \partial_\mu + A_\mu \,. \ees We have already anticipated that \ref{it:fields}. leads to the vanishing of $\dlatt[1]{\L}, \dlatt[1]{\Phi}$ and used a shorthand $\delta \Phi=\dlatt[2]{\Phi}$. \noindent {\bf \ref{it:fields}.} The correction Lagrangians $\dlatt{\L}$ are linear combinations \bes \dlatt{\L}(x)= \sum_i \omegasym_i(g_0^2) \,\mathcal{O}_i(x) \ees of local operators $\mathcal{O}_i(x) $ which comply with the symmetries of the underlying lattice theory and have a mass dimension $4+d$. Gauge invariance is one of the symmetries (gauge fixing is needed only in \sect{s:AD} where we report on the perturbative computation). One may further drop all combinations of fields which vanish by the continuum equation of motion, $[D_\mu, F_{\mu\nu}(x)] =0$, (such as $\mathcal{O}=\tr([D_\mu, F_{\mu\nu}]\,[D_\rho F_{\rho\nu}])$) \cite{Luscher:1996sc} as well as all operators which can be written as total derivatives of the form $\slashed{\mathcal{O}} = \partial_\mu K_\mu$. After doing that, we have a so called ``on-shell'' basis. 
For YM it consists of two operators, which we may choose as \bes \mathcal{O}_{1}=\frac{1}{g_0^2}\sum\limits_{\mu,\nu,\rho}\tr([D_\mu, F_{\nu\rho}]\,[D_\mu, F_{\nu\rho}]) \,,\quad \mathcal{O}_{2}=\frac{1}{g_0^2}\sum\limits_{\mu,\nu}\tr([D_\mu, F_{\mu\nu}]\,[D_\mu, F_{\mu\nu}])\,, \label{eq:ops} \ees already known from Refs.~\cite{Weisz:1982zw,Luscher:1985}\footnote{The latter reference discusses the construction of a lattice improved action such that the $a^2$ terms in the SymEFT are absent. The basis of operators is the same.}. Note that $\mathcal{O}_{2}$ breaks the O(4) rotational invariance of the continuum Lagrangian \eq{eq:scont} down to 90$^{\circ}$ rotations around the lattice axes. Dropping it, one has the general effective Lagrangian of a low energy theory with just gauge fields and O(4) invariance. This is a (tiny) sector of the Lagrangian considered for beyond the standard model phenomenology in Ref.~\cite{Alonso:2013hga}. The operator $\frac{1}{g_0^3}\tr(F_{\mu\nu}F_{\nu\rho}F_{\rho\mu})$ considered there is seen to be on-shell equivalent to \bes \mathcal{O}_1=\frac{2}{g_0^2}\sum\limits_{\mu,\nu,\rho}\left(\tr([D_\mu,F_{\mu\nu}][D_\rho,F_{\rho\nu}])-\tr(F_{\mu\nu}F_{\nu\rho}F_{\rho\mu})\right)+\text{(total divergences)} \ees using integration by parts and the Bianchi identity. Gauge invariant dimension five operators do not exist and thus YM theory has ${n_\mathrm{min}}=2$. The corrections to the continuum fields $\Phi_i$ will not be needed. Now we consider the $a$ expansion of our observable, \bes \mathcal{P} = \mathcal{P}^\mathrm{cont} + a^2 [\delta\mathcal{P}^{\L} + \delta\mathcal{P}^{\Phi}] + \mathrm{O}(a^3) \,. \ees Inserting the spectral representations into the ratios $C^\L_i/C^\mathrm{cont}_i$ which appear as one expands the r.h.s. of \eq{eq:mass} in $a$, one sees\footnote{For intermediate steps in the derivation, see \cite{Sommer:2010ic}, sect.~9.4.1. 
In quantum mechanics the relation given is the Feynman-Hellmann theorem.} \bes \label{eq:S2matel} \delta\mathcal{P}^\L = - \frac12\, [\langle i| \dlatt[2]{\L}(0) | i\rangle\, -\langle j| \dlatt[2]{\L}(0) | j \rangle]\,, \quad \delta\mathcal{P}^\mathrm{\Phi} = 0\,. \ees The states $|i\rangle$ with $\langle i|i\rangle =2L^3$ are the ground states of the Hamiltonian of the finite volume theory with spatial volume $L^3$ in the zero momentum sector of the Hilbert space with the quantum numbers of $\Phi_i$. The vanishing of $\delta\mathcal{P}^\mathrm{\Phi}$ was to be expected as the energy of a physical state should not depend on the interpolating field used to create it, including its renormalization. Since physical quantities which do depend on $\delta \Phi$ have so far not been the focus of lattice computations, and also because each field appearing in the correlation functions has to be considered separately, we will ignore the contribution of $\delta \Phi$ from now on. We concentrate on spectral quantities. \noindent {\bf \ref{it:match}.} The coefficients $\omegasym_i$ are needed, in particular their dependence on the parameters of the theory. \Eq{eq:S2matel} makes it clear that we actually first have to renormalize the operators $\mathcal{O}_i$ and then determine their coefficients by matching, which will be discussed in \sect{s:match}. Renormalization introduces a dependence on the renormalization scale $\mu$ (and scheme). By renormalization group improvement we turn it into a dependence on the lattice spacing, which we are seeking. In the 2-d O(N) sigma model, all this has been done to next-to-leading order in the coupling~\cite{Balog:2009yj}. Here we are content with the leading order since it predicts the asymptotic behavior of $\Delta_\mathcal{P}$. Before proceeding it is convenient to switch to a basis of operators, with elements $\mathcal{B}_i=\sum_jv_{ij}\mathcal{O}_j$ which do not mix at one-loop order, i.e. 
\bes \base^\mathrm{R}_i(\mu) = [1+g^2 Z^{(1)}_i + \mathrm{O}(g^4)]\, \mathcal{B}_i\,, \ees where $\base^\mathrm{R}_i(\mu)$ denote the renormalized operators in some scheme at renormalization scale~$\mu$. One may think of the $\msbar$ scheme. In general, we then have $ \Delta_\mathcal{P} = -a^2\sum_i \csym_i \mathcal{M}^\mathrm{R}_{\mathcal{P},i}, $ where at leading order in the coupling $\omegasym_j=\omegasym_j^{(n)}g_0^{2n} + \rmO(g_0^{2n+2})\,,\; \omegasym_j^{(n)}=\sum_i \csym_i^{(n)} v_{ij} $ and $\mathcal{M}^\mathrm{R}_{\mathcal{P},i}$ are matrix elements of the operators $\mathcal{B}_i$ in the continuum field theory. The renormalized matrix elements are denoted \bes \label{eq:melR} \mathcal{M}^\mathrm{R}_{\mathcal{P},i}(\mu) = \langle \psi_\mathcal{P}| \base^\mathrm{R}_i(\mu) |\psi_\mathcal{P}\rangle \,, \ees with some physical state $|\psi_\mathcal{P}\rangle$, analogous to the states $|i\rangle$ in \eq{eq:S2matel}. We have suppressed the spacetime argument of $\mathcal{B}_i$. The coefficients $\csym_i$ depend on the renormalization scheme adopted for $\mathcal{B}_i^\mathrm{R}$ as well as on $\mu$ and $a$. We may thus write (dropping higher powers of $a$ without notice) \bes \label{eq:deltap1} \Delta_\mathcal{P}(a) = -a^2 \sum_i \csym_i(\gbar(\mu),a\mu)\, \mathcal{M}^\mathrm{R}_{\mathcal{P},i}(\mu)\,, \ees where the dependence of $\csym_i$ on $\mu$ cancels the one of $\mathcal{M}^\mathrm{R}_{\mathcal{P},i}(\mu)$. In order to systematically learn about the behavior for small $a$ we use renormalization group improvement, namely we set $\mu=1/a$, and introduce the renormalization group invariant matrix elements \bes \mathcal{M}^\mathrm{RGI}_{\mathcal{P},i} = \sum_j \varphi_{ij}(\gbar(\mu))\, \mathcal{M}^\mathrm{R}_{\mathcal{P},j}(\mu) = \langle \psi_\mathcal{P}| \base^\mathrm{RGI}_i |\psi_\mathcal{P}\rangle \,. 
\ees The matrix valued function ($\mathop{\mathrm{Pexp}}$ denotes path ordering: terms with smallest $x$ appear to the left) \bes \label{eq:phifct} \varphi(\gbar) &=&\left[\,2b_0 \gbar^2\,\right]^{-\gamma^{(0)}/2b_0} \mathop{\mathrm{Pexp}}\left\{-\int_0^{\gbar} \rmd x \left[\,{ \gamma(x) \over\beta(x)} -{\gamma^{(0)}\over b_0 x}\,\right] \right\}\,, \\ &=& \left[\,2b_0 \gbar^2\,\right]^{-\gamma^{(0)}/2b_0} \times [1+\rmO(\gbar^2)] \label{eq:phifct2} \ees involves the anomalous dimension matrix $\gamma$ defined by \bes \mu \frac{\rmd}{\rmd\mu} \base^\mathrm{R}_i(\mu) = \sum_j \gamma_{ij}(\gbar(\mu)) \,\base^\mathrm{R}_j(\mu) \,. \ees It has the expansion \bes \gamma(\gbar) = -\gbar^2 \,[\gamma^{(0)} +\gamma^{(1)} \gbar^2 + \ldots ]\,, \ees where by our choice of basis $\gamma^{(0)}$ is diagonal, \bes \label{eq:gammahat} \frac1{2b_0}\gamma^{(0)} =\diag(\hat\gamma_1,\hat\gamma_2)\,. \ees Our convention for the $\beta$-function is $\beta(\gbar(\mu)) = \mu \frac{\rmd}{\rmd\mu} \gbar(\mu)$ with expansion $\beta(\gbar)=-\gbar^3\,(b_0 +b_1 \gbar^2 +\ldots) $. Asymptotic freedom means that perturbation theory is applicable at small $a$. The asymptotic behavior of \eq{eq:deltap1} can thus be inferred from (renormalized) perturbation theory. The $\mathrm{O}(g^2)$ term in \eq{eq:phifct2} is then subdominant and further we may expand \bes \label{eq:ciexp} \csym_i(\gbar(a^{-1}),1) = \csym_i^{(0)} + \csym_i^{(1)} \gbar^2(a^{-1}) + \ldots \,. \ees Putting everything together and concentrating on the leading term we arrive at \bes \Delta_\mathcal{P}(a) &=& -a^2 \sum_{i} \csym_i^{(0)} \left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma_i} \mathcal{M}^\mathrm{RGI}_{\mathcal{P},i}\,[1 +\mathrm{O}(\gbar^2(a^{-1}))]+\mathrm{O}(a^4)\,. 
\ees Ordering $\hat\gamma_1 < \hat\gamma_2$, the leading asymptotics is \bes \Delta_\mathcal{P}(a) \sim a^2 \left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma_1} \sim a^2 \left[\frac{1}{-\log(a\Lambda)}\right]^{\hat\gamma_1} \,, \ees unless $\csym_1^{(0)}$ or the matrix element $\mathcal{M}^\mathrm{RGI}_{\mathcal{P},1}$ vanishes. Generically, there is no reason for the latter to do so. A positive/negative $\hat\gamma_1$ leads to an accelerated/decelerated asymptotic convergence as compared to naive $a^2$ behavior. \section{One-loop computation of the anomalous dimension matrix} \label{s:AD} We now turn to the anomalous dimension matrix $\gamma^{(0)}$. Although the renormalization of composite pure gauge theory operators has been discussed extensively \cite{Gracey:2002he,Alonso:2013hga}, a new computation is necessary because of the rotation symmetry violating operator $\mathcal{O}_2$, \eq{eq:ops}, which has not been treated in the literature. We thus employed dimensional regularization and computed the renormalization matrix, \begin{equation} \R{\begin{pmatrix} \mathcal{O}_1\\[3pt] \mathcal{O}_2 \end{pmatrix}}= \begin{pmatrix} Z_{11}& 0\\[3pt] Z_{21}&Z_{22} \end{pmatrix} \begin{pmatrix} \mathcal{O}_1\\[3pt] \mathcal{O}_2 \end{pmatrix}\,,\label{eq:Zop} \end{equation} to one-loop order. Here $Z_{12}$ vanishes because dimensional regularization preserves rotational symmetry and thus $\R{(\mathcal{O}_1)}$ cannot have a rotationally non-invariant piece $Z_{12} \mathcal{O}_2$. The $Z$-matrix is obtained from a perturbative computation of a sufficient number of expectation values \bes C^\op_{ik} = \langle \mathcal{O}_i \mathcal{O}_{k}^\mathrm{probe} \rangle \ees of the operators $\mathcal{O}_i$ together with suitable multi-local, renormalized, operators $\mathcal{O}_{k}^\mathrm{probe}$. We may choose $\mathcal{O}_{k}^\mathrm{probe}$ including their kinematics to simplify the computation. Unfortunately, just choosing them to be composed of local gauge invariant operators, e.g. 
$\tr F_{\mu\nu}F_{\mu\nu}$, one quickly discovers that one-loop computations are insufficient, since the tree-level correlation functions vanish. As one option, we thus relaxed the requirement of manifest gauge invariance of $C_{ik}$ and considered gauge dependent Green's functions with \bes \mathcal{O}_{1}^\mathrm{probe} = \tilde{A}^a(p_1)\cdot\eta_1\; \tilde{A}^b(p_2)\cdot\eta_2\,,\quad \mathcal{O}_{2}^\mathrm{probe} = \tilde{A}^a(p_1)\cdot\eta_1\; \tilde{A}^b(p_2)\cdot\eta_2\; \tilde{A}^c(p_3)\cdot\eta_3\,, \ees in terms of the momentum space fields $\tilde A_\mu(p) =\int \rmd^4x\, \rme^{-ipx} A_\mu(x)$. We have $\sum_i p_i=-q$ as indicated in \fig{f:diagrams} and choose $[(p_i)_0]^2=-(\vecp_i)^2$, $p_i\cdot\eta_i=0$ for all $i$ with the Euclidean scalar product $p\cdot\eta=\sum_\mu p_\mu\eta_\mu$. In principle mixing of $\mathcal{O}_i$ with gauge-non-invariant operators then has to be taken into account \cite{Joglekar:1975nu,Collins:1994ee}. However, those do not contribute to the on-shell Green's functions selected by our choice of kinematics. Since we want to restrict ourselves to the two and three gluon $\mathcal{O}^\mathrm{probe}$ from above, we need to have a non-zero momentum $q$ of the operators $\mathcal{O}_i$. Otherwise the Green's functions vanish by kinematics. The price to pay is that $\mathcal{O}_i$ mix with the ``total divergence operators'', \bes \slashed\mathcal{O}_1 = \frac{1}{g_0^2}\sum\limits_{\mu,\nu,\rho}\partial_\mu\tr(F_{\rho\nu}\, [D_\mu, F_{\rho\nu}])\,, \quad \slashed\mathcal{O}_2 = \frac{1}{g_0^2} \sum\limits_{\mu,\nu}\partial_\mu\tr(F_{\mu\nu}\,[D_\mu, F_{\mu\nu}])\,, \ees as \begin{equation} \R{\begin{pmatrix} \mathcal{O}\\[3pt] \slashed\mathcal{O} \end{pmatrix}}= \begin{pmatrix} Z& A^{\mathcal{O}\slashed{\mathcal{O}}}\\[3pt] 0&Z^{\slashed\mathcal{O}} \end{pmatrix} \begin{pmatrix} \mathcal{O}\\[3pt] \slashed\mathcal{O} \end{pmatrix}\,,\label{eq:Zopgaugevariant} \end{equation} with a block-triangular structure. 
As a second option, we considered the background field method~\cite{DeWitt:1967ub,KlubergStern:1974xv,Abbott:1980hw,Luscher:1995vs}. It consists of introducing a smooth classical background field, $B_\mu(x)$. The gauge field, \bes A_\mu= B_\mu + g_0 Q_\mu \,, \ees is split into the background field and the quantum fluctuations $Q_\mu$. Note that the background field is {\em not} required to satisfy the equation of motion. In addition to the Lagrangian \bes \L_\mathrm{bf}(B,Q) = \L_\contrs(B+g_0Q)\,, \ees one chooses the background field gauge with gauge-fixing term \bes \L_\mathrm{gf}(B,Q) = -\lambda_0\,\sum_{\mu,\nu}\tr([D_\mu(B),Q_\mu] [D_\nu(B),Q_\nu]) \ees instead of the standard $-\lambda_0\,\tr((\partial_\mu A_\mu) (\partial_\nu A_\nu)) $ and adds a Faddeev-Popov term~\cite{Faddeev:1967}. \begin{figure}\centering \subfloat[Two-point function.]{\includegraphics[scale=1.3]{A2O.pdf}}\qquad \subfloat[Three-point function.]{\includegraphics[scale=1.3]{A3O.pdf}} \caption{Schematic representation of the needed two-point and three-point functions with insertion of an operator $\mathcal{O}_i$. The ``blob'' represents all possible connected tree-level and one-loop graphs with a given number of external legs.} \label{f:diagrams} \end{figure} In this case, we can form \bes \mathcal{O}_{1}^\mathrm{probe} = \tilde{B}^a_\mu(p_1)\; \tilde{B}^b_\nu(p_2)\,,\quad \mathcal{O}_{2}^\mathrm{probe} = \tilde{B}^a_\mu(p_1)\; \tilde{B}^b_\nu(p_2)\; \tilde{B}^c_\rho(p_3)\,, \ees just in terms of the background field, and obtain gauge invariant $C_{ik}$ by construction. We can remain with Euclidean momenta and do not need a nonzero momentum to flow into the operator $\mathcal{O}_i$. Thus the mixing with total divergence operators does not contribute any more. The downside is that here the equations of motion do not hold. 
Therefore, we have to consider the mixing structure \begin{equation} \R{\begin{pmatrix} \mathcal{O}\\[3pt] \mathcal{E} \end{pmatrix}}= \begin{pmatrix} Z&A^{\mathcal{O}\mathcal{E}}\\[3pt] 0 & Z^{\mathcal{E}} \end{pmatrix} \begin{pmatrix} \mathcal{O}\\[3pt] \mathcal{E} \end{pmatrix}\,,\label{eq:opMixingNoGaugeFix} \end{equation} with the extra operator \bes \mathcal{E} = \frac{1}{g_0^2}\sum\limits_{\mu,\nu,\rho} \tr([D_\mu, F_{\mu\nu}]\,[D_\rho, F_{\rho\nu}])\,. \ees Since we are just interested in the renormalization matrix $Z$, it suffices to consider only $\R{\mathcal{O}}$, the first block row of the above equations. Those define the renormalized $\R{(C^\op_{ik})}$, replacing $\mathcal{O}$ with $\R{\mathcal{O}}$. We write the resulting equations as \bes \label{eq:CikR} \R{(C^\op_{ik})} = \sum_{j=1}^2 Z_{ij} {C^\op_{jk}} + \sum_{l} A_{il} {\Cred_{lk}}\,, \ees where $\Cred_{lk}$ is formed of the needed redundant operators which mix into $\mathcal{O}$. Without background field, it is the set of $\slashed{\mathcal{O}}$. With background field there is just the operator $\mathcal{E}$. Expanding \bes Z_{ij}&=& \delta_{ij} + \bar Z_{ij} \frac{\gr^2}{\eps}+ \mathrm{O}(\eps^0, \gr^4)\,, \quad A \;=\; \bar \Zoffd \frac{\gr^2}{\eps} + \mathrm{O}(\eps^0, \gr^4)\,, \\ C^\op_{ik} &=& (C^\op_{ik})^{(0)} + \overline{C^\op_{ik}}\frac{\gr^2}{\eps} + \mathrm{O}(\eps^0, \gr^4)\,, \quad C^{\mathrm{red}} \;=\; ({C^{\mathrm{red}}})^{(0)} + \mathrm{O}(\gr^2)\,, \ees and requiring the finiteness of $\R{(C^\op_{ik})}$, the desired $\bar Z_{ij}$ (as well as $\bar \Zoffd$) are obtained as the solution of the linear system of equations (each $i=1,2$ and all $k$ yield an equation), \bes \sum_{j=1}^2 \bar Z_{ij} (C^\op_{jk})^{(0)} + \sum_{l} \bar \Zoffd_{il} ({\Cred_{lk}})^{(0)} = -\overline{C^\op_{ik}}\,. \ees There is one subtlety in applying the above. The equations assume that the observables $C^\op_{jk}$ are infrared finite. 
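The structure of this determination can be sketched as a linear solve (all numbers below are mock data standing in for the actual correlators; only the shape of the system is illustrated):

```python
import numpy as np

# Schematic of eq. (CikR): for fixed i, the unknowns (Zbar_i1, Zbar_i2,
# Abar_i) multiply tree-level correlators and must cancel the 1/eps part
# of the one-loop correlators. Everything here is invented mock data.
rng = np.random.default_rng(0)
n_probe = 5                              # number of probe kinematics k
C0 = rng.normal(size=(2, n_probe))       # (C^O_jk)^(0), tree level
Cred0 = rng.normal(size=(1, n_probe))    # (C^red_lk)^(0), e.g. operator E

Zbar_true = np.array([0.3, -0.1])        # pretend UV-pole coefficients
Abar_true = np.array([0.7])
Cbar = -(Zbar_true @ C0 + Abar_true @ Cred0)   # fake one-loop 1/eps data

# solve the (overdetermined) linear system for one row of (Zbar, Abar)
M = np.vstack([C0, Cred0]).T             # n_probe x 3 coefficient matrix
sol, *_ = np.linalg.lstsq(M, -Cbar, rcond=None)
print(sol)
```

With more probe kinematics than unknowns the system is overdetermined, which provides a consistency check on the computed $1/\eps$ poles.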
With the chosen on-shell kinematics in the first case, this is, however, not true and the $1/\eps$ terms contain in principle a mix of ultraviolet and infrared divergences. Therefore we use the following, by now common, trick called {\it infrared rearrangement}~\cite{Misiak:1994zw,Chetyrkin:1997fm,Luthe:2017ttc}. For each loop integral, we rewrite the denominators in the form \begin{equation} \label{eq:irreaar} \frac{1}{(k+p)^2}=\frac{1}{k^2+\Omega}-\frac{2\,k\cdot p+p^2-\Omega}{(k^2+\Omega)(k+p)^2}\,, \end{equation} where $k$ is the loop momentum and $\Omega$ is an arbitrary positive constant. The second term on the r.h.s.~is one power less ultraviolet divergent and the first one has no source of infrared divergence. We can usually restrict ourselves to the first one since we are just interested in the ultraviolet divergences which determine the renormalization. If necessary, one can apply the transformation repeatedly. While for many integrals this trick is not necessary, we carry it out in all cases, since all integrals are then brought to the standard form $\int \rmd^D k\left[k^2 +\Omega\right]^{-n} {k_{\mu_1}\ldots k_{\mu_l}}$ up to the finite and infrared divergent parts which we just drop. Note that the $Z$-factors are independent of $\Omega$. We have used this throughout as a check on our results. The computation was carried out with the help of computer algebra packages. Feynman graphs were generated by \QGRAF/~\cite{Nogueira:1993,Nogueira:2006pq}, formally treating the operator insertions with the help of additional non-propagating scalar fields, $\varphi_i(x)$, called ``anchor'', through additional terms $\sum_i\varphi_i(x)\mathcal{O}_i(x)$ in the Lagrangian. The Feynman rules were generated using \FORM/~\cite{Vermaseren:2000nd}, which we also used for tricks such as \eq{eq:irreaar}, to reduce the Feynman graphs to standard one-loop integrals, and to isolate the $1/\eps$ poles. 
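Since \eq{eq:irreaar} is an exact algebraic identity, it can be checked directly for arbitrary momenta (the numbers below are of course arbitrary):

```python
# Numerical check of the infrared rearrangement identity eq. (irreaar):
# 1/(k+p)^2 = 1/(k^2+W) - (2 k.p + p^2 - W)/((k^2+W)(k+p)^2),
# valid for any Euclidean vectors k, p and any constant W.
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

k = [0.3, -1.2, 0.7, 2.1]     # loop momentum (arbitrary numbers)
p = [1.1, 0.4, -0.6, 0.9]     # external momentum (arbitrary numbers)
W = 1.7                       # the arbitrary positive constant Omega
kp = [x + y for x, y in zip(k, p)]

lhs = 1.0 / dot(kp, kp)
rhs = (1.0 / (dot(k, k) + W)
       - (2 * dot(k, p) + dot(p, p) - W) / ((dot(k, k) + W) * dot(kp, kp)))
print(lhs, rhs)
```

The identity holds term by term: the numerator of the second fraction is exactly $(k+p)^2-(k^2+\Omega)$.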
The computed two-point and three-point functions with operator insertions are shown schematically in \fig{f:diagrams}. We checked explicitly that the results for both cases, non-zero $q$ vs. background field, agree. They read \bes \bar Z &=& \frac{C_\text{A}}{(4\pi)^2} \begin{pmatrix} 7/3 & 0 \\ -7/15 & 21/5 \end{pmatrix}\,. \ees The element $\bar Z_{11}$ agrees with the value found in the literature~\cite{Narison:1983}. For completeness we also report the mixing terms~($C_\text{A}=\mathrm{N}$ for gauge group SU(N)) \bes \bar \Zoffd^{\mathcal{O}\slashed{\mathcal{O}}} &=& \frac{C_\text{A}}{(4\pi)^2} \begin{pmatrix} -6 & 0 \\ -21/20 & -9/5 \end{pmatrix} \,,\quad \bar \Zoffd^{\mathcal{O}\mathcal{E}} = \frac{C_\text{A}}{(4\pi)^2} \begin{pmatrix} \frac{23}{6}-\frac{3}{2\lambdar} \\ \frac{7}{15}-\frac{1}{2\lambdar} \end{pmatrix} \,,\\ Z^{\mathcal{E}} &=& 1+ \frac{C_\text{A}}{(4\pi)^2} \left(\frac{5}{4}-\frac{3}{4\lambdar} \right)\frac{\gr^2}{\eps}\,. \ees We read off that the choice of basis, \bes \mathcal{B}_1= \mathcal{O}_1\,, \quad \mathcal{B}_2=-\frac14 \mathcal{O}_1+\mathcal{O}_2\,, \ees renormalizes without mixing at one-loop order, \bes \mathcal{B}_i^\mathrm{R}&=&[1+\bar Z^\mathcal{B}_i \frac{\gr^2}{\eps} ] \,\mathcal{B}_i +\rmO(\gr^4)\,, \quad \bar Z^\mathcal{B}_1=\frac73\frac{C_\text{A}}{(4\pi)^2}\,,\quad \bar Z^\mathcal{B}_2=\frac{21}5 \frac{C_\text{A}}{(4\pi)^2}\,. \ees The anomalous dimensions of \eq{eq:gammahat} are\footnote{ At one-loop order we have $\gamma^{(0)}_i = 2b_0\hat\gamma_i=2\bar Z^\mathcal{B}_i$. } \bes \label{eq:gammahatYM} \hat\gamma_1=7/11\approx 0.636\,,\quad \hat\gamma_2=63/55\approx 1.145 \,, \ees independent of the number of colors. 
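These numbers can be cross-checked with exact rational arithmetic. The sketch below assumes the convention in which $\gamma^{(0)}_i$ equals twice the $1/\eps$ coefficient $\bar Z^\mathcal{B}_i$, as required to reproduce the quoted $\hat\gamma_i$; the factors $C_\text{A}/(4\pi)^2$ (with $C_\text{A}=\mathrm{N}$ and $2b_0=22\mathrm{N}/[3(4\pi)^2]$) cancel in the ratio and are stripped throughout:

```python
from fractions import Fraction as F

# Check that the basis change B1 = O1, B2 = -1/4 O1 + O2 diagonalizes the
# one-loop mixing matrix Zbar, and reproduce the quoted gamma-hats.
# All matrices are in units of C_A/(4pi)^2; 2 b0 is in units of N/(4pi)^2.
Zbar = [[F(7, 3), F(0)],
        [F(-7, 15), F(21, 5)]]
v = [[F(1), F(0)], [F(-1, 4), F(1)]]    # B_i = sum_j v_ij O_j
vinv = [[F(1), F(0)], [F(1, 4), F(1)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ZB = matmul(matmul(v, Zbar), vinv)      # mixing matrix in the B basis
two_b0 = F(22, 3)                       # C_A = N drops out of the ratio
gamma_hat = [2 * ZB[0][0] / two_b0, 2 * ZB[1][1] / two_b0]
print(ZB, gamma_hat)
```

The off-diagonal entries of the transformed matrix vanish exactly, and the ratios come out as $7/11$ and $63/55$, independent of the number of colors.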
\begin{figure}\centering \includegraphics[width=0.4\textwidth]{six_linksBB.pdf} \caption{Graphical representations~\cite{Weisz:2010nr} of the loop geometries contributing to common lattice gauge actions.}\label{f:gaugeActionTerms} \end{figure} \pagebreak \section{Matching to lattice actions} \label{s:match} The final ingredient needed to predict the form of the cutoff effects is the set of coefficients of the higher dimensional operators in the effective Lagrangian, step ``\ref{it:match}.'' in \sect{s:sym}. At the leading order of perturbation theory considered here, we just need the lowest order coefficients $\csym_i^{(0)}$ of the functions $\csym_i$, \eq{eq:ciexp}. At tree-level, no divergences occur in the path integral. One may therefore perform a naive classical expansion of the lattice action in $a$, setting $U(x,\mu) = \rme^{a A_\mu(x)}$ with a smooth continuum gauge field $A_\mu$. This expansion has been carried out by L\"uscher and Weisz~\cite{Luscher:1985} for a set of gauge actions, in particular for those consisting of the lattice loops depicted in \fig{f:gaugeActionTerms}. For each of these loops one sums over all lattice points corresponding to the lower left corners in the graph and over all orientations on the lattice, e.g. for the plaquette term (0) one sums over $\mu>\nu$, for the rectangle (1) over $\mu\ne\nu$ etc. There are 6, 12, 16, 48 orientations for the loops (0), (1), (2), (3). Apart from the overall pre-factor $2/g_0^2$, we denote their coefficients at $g_0 \to 0$ as $e_i,\, i=0,1,2,3$ (in Ref.~\cite{Luscher:1985} they are denoted $c_i(0)$). With \bes e_0+8e_1+8e_2+16e_3 = 1\,, \ees the leading term in the $a$-expansion, \bes S_\lat^\mathrm{class} &=& \int\rmd^4x \left\{ \L_\contrs(x) + a^2 \sum_{i=1}^2 \omegasym_i^{(0)} \mathcal{O}_i(x) + \ldots \right\} \,, \ees has the conventional normalization. The ellipses summarize terms that vanish upon the use of the equation of motion and higher orders in $a$. 
From Table~2 of \cite{Luscher:1985} we find \bes \csym_1^{(0)} &=& \omegasym_1^{(0)}+\frac14 \omegasym_2^{(0)}=\frac1{48} +\frac14 e_1 +\frac13 e_2 -\frac14 e_3 \,, \\ \csym_2^{(0)} &=& \omegasym_2^{(0)} = \frac1{12} + e_1 - e_3 \,. \ees \begin{table}[h] \centering \begin{tabular}{cccccc} \toprule action & $e_1$ & $e_2$ & $e_3$ & $\csym_1^{(0)}$ & $\csym_2^{(0)}$ \\ \midrule Wilson, \eq{eq:Slat} & 0 & 0 & 0 & $\frac1{48}$ & $\frac1{12}$ \\ Symanzik improved & $-\frac1{12}$ & 0 & 0 & 0 & 0 \\ Iwasaki \cite{Iwasaki:2011np} & $-0.331$ & 0 & 0 & $-0.0619$ & $-0.2477$\\ DBW2 \cite{deForcrand:1996bx,Takaishi:1996xj} & $-1.4088$ & 0 & 0 & $-0.3314$ & $-1.3255$\\ \bottomrule \end{tabular} \caption{Commonly used gauge actions and their coefficients of the operators $\mathcal{B}_1, \mathcal{B}_2$ in the SymEFT. The row ``Symanzik improved'' applies to all actions whose leading order (in $g_0^2$) coefficients are as specified there. } \label{t:ci0} \end{table} The standard Wilson plaquette action, \eq{eq:Slat}, has $e_0=1$, $e_1=e_2=e_3=0$ and both $\mathcal{B}_1$ and $\mathcal{B}_2$ contribute to the order $a^2$. Symanzik improved actions have $\csym_i^{(0)}=0$ by design. Other actions such as the Iwasaki action and the ``DBW2'' action lead to quite large coefficients. We show a summary in \tab{t:ci0}. All considered lattice actions just have the plaquette and the rectangle terms. The coefficients $e_2,e_3$ therefore vanish and in the classical $a^2$ expansion only $\mathcal{O}_2$ contributes in the $\mathcal{O}_i$ basis~\cite{Luscher:1985}. As discussed before we have to go to the basis $\mathcal{B}_i$ with diagonal renormalization at one-loop. The relevant coefficients for the asymptotics are then related, $\csym_2^{(0)}=4\csym_1^{(0)}$. \section{Examples for the asymptotic behavior} For convenience we combine here the main results of the previous two sections and discuss some interesting sample applications. 
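The entries of \tab{t:ci0} follow directly from the formulas for $\csym_1^{(0)}$ and $\csym_2^{(0)}$ above; a quick numerical cross-check (with the quoted rectangle coefficients $e_1$ and $e_2=e_3=0$):

```python
# Recompute the coefficients of Table (t:ci0) from
# c1 = 1/48 + e1/4 + e2/3 - e3/4  and  c2 = 1/12 + e1 - e3.
def c_coeffs(e1, e2=0.0, e3=0.0):
    c1 = 1.0 / 48 + e1 / 4 + e2 / 3 - e3 / 4
    c2 = 1.0 / 12 + e1 - e3
    return c1, c2

actions = {                       # rectangle coefficients e1 as quoted
    "Wilson": 0.0,
    "Symanzik": -1.0 / 12,
    "Iwasaki": -0.331,
    "DBW2": -1.4088,
}
table = {name: c_coeffs(e1) for name, e1 in actions.items()}
print(table)
```

For $e_2=e_3=0$ the relation $\csym_2^{(0)}=4\csym_1^{(0)}$ holds identically, as stated in the text.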
\subsection{Generic form for spectral quantities} The cases considered in \tab{t:ci0} are probably the most relevant for the Yang-Mills theory. Since they all satisfy $ \csym_2^{(0)}=4\csym_1^{(0)}\,, $ we have the form \bes \Delta_\mathcal{P}(a) &=& -a^2 \csym_1^{(0)} \left\{ \left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma_1} \mathcal{M}^\mathrm{RGI}_{\mathcal{P},1} + 4 \left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma_2} \mathcal{M}^\mathrm{RGI}_{\mathcal{P},2} \right\} \times \nonumber \\ && \times \,[1 +\mathrm{O}(\gbar^2(a^{-1}))]\, \text{ for Wilson, Iwasaki, DBW2 actions.} \label{eq:DeltaPleadgen} \ees The entire computed leading behavior only depends on the coefficient $\csym_1^{(0)}$. While we cannot predict the relative contribution of the two powers $\hat\gamma_{1},\hat\gamma_2$ because they depend on the non-perturbative matrix elements $\mathcal{M}^\mathrm{RGI}$, their mixture is the same for any of the three different actions. The only action dependence is in the coefficient of the rectangle term (geometry (1) of \fig{f:gaugeActionTerms}) and thus the leading cutoff effects have a relative size \bes \text{ Wilson : Iwasaki : DBW2} &\approx& 1\;:\;(-3)\;:\;(-16)\;. \ees For a Symanzik improved action, the property $\csym_2^{(0)}=\csym_1^{(0)}=0$ and additionally for a one-loop improved action $\csym_2^{(1)}=\csym_1^{(1)}=0$ means \bes \label{eq:DeltaPlead} \Delta_\mathcal{P}(a) &=& -a^2 \sum_{i} \csym_i^{(n)} \left[\gbar^2(a^{-1})\right]^n \left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma_i} \mathcal{M}^\mathrm{RGI}_{\mathcal{P},i}\,\times \\ && \times [1 +\mathrm{O}(\gbar^2(a^{-1}))]\,, \ees where $n=0$ without perturbative improvement, $n=1$ for a tree-level improved action, and $n=2$ for a one-loop improved action. We illustrate the $a$-behavior in \fig{f:Delta_lead}. 
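The qualitative effect of the factors $[2b_0\gbar^2(a^{-1})]^{\hat\gamma_i}$ and $\gbar^{2n}(a^{-1})$ can be sketched assuming one-loop running of the coupling; the value of $\Lambda$ below is hypothetical and serves only to fix a scale:

```python
import math

# Sketch of the a-dependence beyond the naive a^2 prefactor in
# eq. (DeltaPlead), assuming one-loop running
# gbar^2(1/a) = 1/(2 b0 log(1/(a*Lambda))). Lambda is a made-up number.
N = 3
two_b0 = 22.0 * N / (3.0 * (4.0 * math.pi) ** 2)
Lambda = 0.66                      # hypothetical, in fm^-1

def gbar2(a):
    return 1.0 / (two_b0 * math.log(1.0 / (a * Lambda)))

def log_factor(a, gamma_hat, n):
    # everything in Delta_P(a) beyond the naive a^2 prefactor
    return gbar2(a) ** n * (two_b0 * gbar2(a)) ** gamma_hat

gh1 = 7.0 / 11.0
for a in (0.1, 0.07, 0.04):        # lattice spacings in fm
    print(a, log_factor(a, gh1, n=0), log_factor(a, gh1, n=1))
```

With $\hat\gamma_1>0$ the logarithmic factor shrinks as $a\to 0$, i.e. the convergence is accelerated relative to naive $a^2$, and each extra power of $\gbar^2$ from perturbative improvement accelerates it further.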
One notices that over a typical range of $a$ from $a=0.1\,\fm$ to $a=0.04\,\fm$, the reduction of $\Delta_\mathcal{P}(a)$ is 20, 40, 60\% (for $n=0,1,2$) faster than the naive $a^2$ scaling. We remind the reader that gradient flow observables are excluded and that we have restricted ourselves to energy levels. \begin{figure} \centering \includegraphics[width=0.495\textwidth]{logs} \includegraphics[width=0.495\textwidth]{logs2} \caption{Illustration of discretization errors, $\Delta_\mathcal{P}(a)$, \eq{eq:DeltaPlead} compared to naive $a^2$ behavior. We use $\alpha(5/r_0)=\gbar^2(5/r_0)/(4\pi)=0.25$, where $r_0\approx0.5\,\fm$~\cite{Sommer:1993ce}, set the matrix elements to one in units of $r_0$ and set $c_1^{(i)}=1$. On the right, we drop the overall naive power of $a^2/r_0^2$ and normalize at $a/r_0=1/5$ such that the shape is clearly visible. } \label{f:Delta_lead} \end{figure} \subsection{Short distance observables} Let us now consider the special case of a dimensionless short distance observable depending on a single physical length scale $r$. A simple example is $ \mathcal{P}_\mathrm{F} = \frac{4\pi}{C_\mathrm{F}} r^2 F(r)\,, $ with $F(r)$ the force between static quarks assumed here to be defined in terms of a discrete derivative of the potential which is correct up to order $a^4$ errors.\footnote{Otherwise, if $\mathrm{O}(a^2)$ errors are associated with the definition of the lattice derivative, these can be taken into account explicitly.} In particular, we are interested in the region of small $r$, which has two consequences. First, the ratio $a/r$, which determines the discretization errors, is not as small as in the large distance region; the discussion of discretization errors is thus particularly important. Second, not only the continuum $\mathcal{P}(\Lambda r,0)$ can be expanded in perturbation theory, but also the quantity at finite $a/r$, both in the lattice theory and in SymEFT. We want to summarize what one can learn from this. 
The perturbative expansion in the lattice theory is expected to be of the form \cite{Symanzik:1982dy,Symanzik:1979ph} \bes \Delta_\mathcal{P}(\Lambda r,a/r) &=& \mathcal{P}(\Lambda r,a/r) - \mathcal{P}(\Lambda r,0) \nonumber \\ &=& \label{eq:DeltaPl} \mathcal{P}(\Lambda r,0) \; [ \delta_0(a/r) + \delta_1(a/r) \,\glat^2(r^{-1}) +\ldots ] \\ &&\delta_l(a/r) =\frac{a^2}{r^2} \sum_{k=0}^l p_{lk} [\log(a/r)]^k +\mathrm{O}((a/r)^4) \,. \ees On the other hand in SymEFT with renormalization group improvement, dropping the $\mathrm{O}(\glat^2(a^{-1}))$ corrections, we have \bes \label{eq:DeltaPshortd} \Delta_\mathcal{P}(\Lambda r,a/r) &=& -\frac{a^2}{r^2} \sum_{i} \csym_i^{(0)} \left[\,2b_0 \glat^2(a^{-1})\right]^{\hat\gamma_i} [r^2 \mathcal{M}^\mathrm{RGI}_{\mathcal{P},i}(r)]\,\, \\ &=& -\frac{a^2}{r^2} \mathcal{P}(\Lambda r,0)\sum_{i} \csym_i^{(0)} \left[\frac{\glat^2(a^{-1})}{\glat^2(r^{-1})}\right]^{\hat\gamma_i} \,K_i(r) \,, \quad \\ \nonumber && K_i(r)=\frac{r^2 \mathcal{M}^\mathrm{R}_{\mathcal{P},i}(r;\mu)}{\mathcal{P}(\Lambda r,0)}\,, \quad \mu=r^{-1}\,, \ees where the second argument $\mu$ in $\mathcal{M}^\mathrm{R}$ is the renormalization scale of the operator $\mathcal{B}_i^\mathrm{R}$. For comparison to the fixed order perturbation theory form \eq{eq:DeltaPl} we expand (remember $\hat\gamma_i=\gamma_i^{(0)}/(2b_0)$) \bes \label{eq:fop} \left[\frac{\glat^2(a^{-1})}{\glat^2(r^{-1})}\right]^{\hat\gamma_i} &=& 1 + \gamma_i^{(0)} \log(a/r)\,\glat^2(r^{-1})\,+\,\mathrm{O}(\glat^4)\,, \\ K_i(r) &=& [K_i^{(0)} + K_i^{(1)} \glat^2(r^{-1}) + \mathrm{O}(\glat^4)]\,, \ees and find \bes \label{eq:p0} p_{00} &=& -\sum_i \csym_i^{(0)}K_i^{(0)} \,, \\ \label{eq:p1} p_{10} &=& -\sum_i \csym_i^{(0)}K_i^{(1)} - \sum_i \csym_i^{(1)}K_i^{(0)} \,, \quad p_{11} = -\sum_i \csym_i^{(0)}K_i^{(0)}\gamma_i^{(0)} \,. \ees This demonstrates the standard use of EFT in the perturbative domain. 
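The first line of \eq{eq:fop} can be verified numerically. The sketch assumes one-loop running, $1/\glat^2(a^{-1})-1/\glat^2(r^{-1})=2b_0\log(r/a)$, and a deliberately small coupling so that the $\mathrm{O}(\glat^4)$ remainder is visible:

```python
import math

# Check of eq. (fop): with one-loop running, the RG-improved ratio
# [g^2(1/a)/g^2(1/r)]^gamma_hat equals 1 + gamma0*log(a/r)*g^2(1/r)
# up to O(g^4), where gamma0 = 2 b0 gamma_hat.
two_b0 = 22.0 * 3.0 / (3.0 * (4.0 * math.pi) ** 2)
gamma_hat = 7.0 / 11.0
gamma0 = two_b0 * gamma_hat

g2_r = 0.05                        # deliberately small coupling at mu = 1/r
log_ar = math.log(0.25)            # a/r = 1/4
g2_a = g2_r / (1.0 - two_b0 * g2_r * log_ar)   # one-loop running to mu = 1/a

exact = (g2_a / g2_r) ** gamma_hat
first_order = 1.0 + gamma0 * log_ar * g2_r
print(exact, first_order)
```

The difference between the resummed factor and its fixed-order truncation is of order $\glat^4$, which is the origin of the coefficients $p_{11}$ in \eq{eq:p1}.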
The EFT description and computation is more efficient since, first of all, it provides renormalization group improvement (l.h.s. of \eq{eq:fop}) and, second, even the computation of the coefficients $p_{lk}$ may be simplified. Apart from the one-loop matching coefficients of the action, $\csym_i^{(1)}$, which can be computed by matching any convenient set of observables, only continuum perturbation theory quantities appear on the r.h.s. of \eq{eq:p0},~\eq{eq:p1}. \subsubsection*{Improved observables}\addcontentsline{toc}{subsubsection}{Improved observables} \label{s:improbs} For short distance observables it is rather common to attempt a reduction of lattice spacing effects at the level of the expectation values instead of at the level of the action. For the static potential or $\mathcal{P}_\mathrm{F}$, we refer the reader to~\cite{Sommer:1993ce,Necco:2001xg}. Examples with higher orders in perturbation theory and with a combination of improvement of action and observable are found in \cite{DeDivitiis:1994yz,Bode:1999sm,Alexandrou:2015sea,DallaBrida:2018rfy}. To illustrate what is gained by considering SymEFT, it is sufficient to define a tree-level improved short distance observable, \bes \label{eq:Pimpr} \mathcal{P}^\mathrm{impr}(\Lambda r,a/r) &=& \frac {\mathcal{P}(\Lambda r,a/r)}{1+\delta_0(a/r)} =\frac {\mathcal{P}(\Lambda r,a/r)}{1-\frac{a^2}{r^2} \sum_i\csym_i^{(0)}K_i^{(0)}} +\mathrm{O}(a^4/r^4)\,. \ees By construction, cutoff effects in fixed order perturbation theory are then suppressed by one power of $\glat^2$ (all orders in $a/r$) and therefore also the coefficient $p_{00}$ of $a^2/r^2$ vanishes irrespective of the action. However, this neither means that the leading term ($i=1$) in \eq{eq:DeltaPshortd} vanishes nor that the sum of the two $\mathrm{O}(a^2)$ terms does. The sum of the two terms vanishes only for $a=r$, which is not at all where the $a^2$ expansion is applicable. 
In fact, inserting the denominator in \eq{eq:Pimpr} into \eq{eq:DeltaPshortd} one obtains \bes \label{eq:DeltaPshortdTLI} \Delta_{\mathcal{P}^\mathrm{impr}}(\Lambda r,a/r) &=& - \frac{a^2}{r^2} \,\mathcal{P}(\Lambda r,0)\,\sum_i \left\{ \left[\frac{\glat^2(a^{-1})}{\glat^2(r^{-1})}\right]^{\hat\gamma_i} -1 \right\} \, K_i^{(0)}\csym_i^{(0)} \, \,. \ees The effect of tree level improvement is the subtraction of the $1$ in the curly bracket. For intermediate $a/r$, this will reduce the magnitude (and change the sign) of each term in the sum over $i$. However, asymptotically, for very small $a/r$, the tree level improvement leads to an increase of the $a^2$ effects. This behavior is tied to the sign of the $\hat\gamma_i$. For negative $\hat\gamma_i$, we would always have a reduction of the magnitude of the terms. Usually the terms $K_i^{(0)}\csym_i^{(0)}$ are known individually and one can divide out the complete leading order term, \bes \label{eq:PRGimpr} \mathcal{P}^\mathrm{RG-impr}&=& \frac{\mathcal{P}}{1- \frac{a^2}{r^2}\,\sum_i \left[\frac{\glat^2(a^{-1})}{\glat^2(r^{-1})}\right]^{\hat\gamma_i} \, K_i^{(0)}\csym_i^{(0)} } \,, \ees and have a renormalization group and tree level improved observable. It then has leading corrections which are truly of order $ \Delta_\mathcal{P}/\mathcal{P} \sim \frac{a^2}{r^2} \glat^2(r^{-1})\left[\frac{\glat^2(a^{-1})}{\glat^2(r^{-1})}\right]^{\hat\gamma_1} $ as the name tree level improvement suggests. We return to $\mathcal{P}_\mathrm{F}$. In this special case, the O(4) invariant operator $\mathcal{O}_1=\mathcal{B}_1$ does not contribute at tree level, $K_1^{(0)}=0$. Specializing to the Wilson plaquette action and the force along a lattice axis, we have $\csym_2^{(0)}=1/12$ and $ K_2^{(0)}=-9$. If one chooses a different direction on the lattice, e.g. a body-diagonal, the matrix element $K_2^{(0)}$ is smaller, but the finite difference defining the force on the lattice has a larger discretization length.
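The interplay between \eq{eq:DeltaPshortd} and \eq{eq:DeltaPshortdTLI} can be made concrete with a small numerical sketch. It uses a one-loop running coupling and the values $\alpha(1/r)=0.2$ and $K_2^{(0)}\csym_2^{(0)}=-3/4$ quoted for the static force; the value of $\hat\gamma$ is a placeholder, since the actual exponents are quoted elsewhere in the paper.

```python
import math

# Sketch of eq. (DeltaPshortd) vs. the tree-level improved remainder,
# eq. (DeltaPshortdTLI), for the static force: alpha(1/r) = 0.2 and
# K_2^(0)*c_2^(0) = -3/4 as in the text; gamma_hat is a placeholder,
# the actual exponent is quoted elsewhere in the paper.
b0 = 11.0 / (16.0 * math.pi ** 2)
g2_r = 0.2 * 4.0 * math.pi            # glat^2(1/r) = 4*pi*alpha(1/r)
Kc = -0.75                            # K_2^{(0)} * csym_2^{(0)}
gamma_hat = 0.8                       # assumed value, illustration only

def g2_a(a_over_r):
    """One-loop coupling at scale 1/a, run up from glat^2(1/r)."""
    return g2_r / (1.0 - 2.0 * b0 * g2_r * math.log(a_over_r))

def delta_unimpr(a_over_r):
    """Leading relative a^2 effect, eq. (DeltaPshortd)."""
    return -(a_over_r ** 2) * Kc * (g2_a(a_over_r) / g2_r) ** gamma_hat

def delta_tli(a_over_r):
    """Remainder after tree-level improvement, eq. (DeltaPshortdTLI)."""
    return -(a_over_r ** 2) * Kc * ((g2_a(a_over_r) / g2_r) ** gamma_hat - 1.0)

for x in (0.5, 0.25, 0.125):
    print(x, delta_unimpr(x), delta_tli(x))
```

The remainder vanishes at $a=r$ and, for intermediate $a/r$, is smaller in magnitude and of opposite sign than the unimproved term, in line with the discussion above.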
The various terms are illustrated in \fig{f:Delta_lead_force}. The dotted line is the fixed order perturbation theory for $\Delta_{\mathcal{P}_\mathrm{F}}/\mathcal{P}_\mathrm{F}$ and the full curve the remainder, \eq{eq:DeltaPshortdTLI}. The dashed line shows a rough linear approximation to the latter at larger $a$. It extrapolates to a small value of $-0.6\%$ at $a=0$. We may think of this as an example for the relative error one makes by approximating the cutoff effects of the tree-level improved observable as linear in $a^2$.\footnote{Usually the tree-level improved force is defined through an improved distance \cite{Sommer:1993ce} $r_\mathrm{I}$. At the level of $a^2$ terms this is equivalent to \eq{eq:Pimpr}.} Interpreting $\mathcal{P}_\mathrm{F}$ as a running coupling as explained for example in \cite{Necco:2001gh}, this intercept represents a systematic (relative) uncertainty on the coupling. It translates into an error of about 1.5\% in the $\Lambda$-parameter of the theory, which is not entirely irrelevant given today's precision of results for it. Needless to say, the full logarithmic term \eq{eq:DeltaPshortdTLI} is better eliminated by use of \eq{eq:PRGimpr}. \begin{figure} \centering \includegraphics[width=0.695\textwidth]{logs3} \caption{Leading order discretization errors, $\Delta_{\mathcal{P}_\mathrm{F}}(a)/\mathcal{P}_\mathrm{F}$ of the static force, see the text. We use $\alpha(1/r)=\glat^2(1/r)/(4\pi)=0.2$, and $K_1^{(0)}=0$, $K_2^{(0)}\csym_2^{(0)} = -3/4$, corresponding to the Wilson plaquette action and the force along a lattice axis.
The dotted line represents fixed order perturbation theory, the full line the remainder (on top of fixed order) predicted by SymEFT, and the dashed line shows a rough approximation, linear in $a^2$, to the latter.} \label{f:Delta_lead_force} \end{figure} \section{Schr\"odinger functional} \label{s:SF} Short distance observables of particular interest can be defined in the Schr\"odinger functional \cite{Luscher:1992an}. Fixed order perturbation theory has been used extensively to study discretization errors in this environment. Here we consider their renormalization group improvement through SymEFT. We just consider the pure gauge theory and the Schr\"odinger functional\ with an abelian background field, where, as we will see, we do not have to deal with operator mixing. In the lattice regularization, the Schr\"odinger functional\ can be defined by the path integral with the action, \bes \label{eq:SlatSF} S^\mathrm{SF}_\mathrm{lattice} &=& \frac2{g_0^2}\, \sum_{0\leq x_0\leq T-a} \sum_\vecx \sum_{\mu>\nu=0}^3\, p(x,\mu,\nu) \\ && + a\, (\ct(g_0)-1) \,a^3 \sum_\vecx \, [ \mathcal{O}_\mathrm{b}^\mathrm{l}(0,\vecx) + \mathcal{O}_\mathrm{b}^\mathrm{l}(T-a,\vecx)] \,, \ees with \bes \mathcal{O}_\mathrm{b}^\mathrm{l}(x_0,\vecx) = \frac2{g_0^2}\,\frac1{a^4}\, \sum_{k=1}^3 p(x,k,0) \,.
\ees Space-time is a cylinder in the sense that we have periodic boundary conditions in space with period $L$ and Dirichlet boundary conditions on the time-slices $x_0=0$ and $x_0=T$, \bes \left. U(x,k)\right|_{x_0=0} = \rme^{a C_k(L,\eta)}\,, \quad \left. U(x,k)\right|_{x_0=T} = \rme^{a C'_k(L,\eta)}\,. \ees For details we refer to \cite{Luscher:1992an}, but we note that the dimensionless $L C_k(L,\eta)$ is just a function of the dimensionless parameter $\eta$ (and a second parameter $\nu$, irrelevant here) and that the field strength $F_{kl}$ vanishes at the two boundaries. Under these conditions, which have been imposed for all numerical applications so far, the SymEFT for the Yang-Mills Schr\"odinger functional is given by the formal continuum action \bes \label{eq:SSymSF} \Seff^\mathrm{SF} &=& \int\rmd^3 \vecx \left\{ \int_0^T\rmd x_0 \L_\contrs(x) + a\, \omega_\mathrm{b}\, [ \mathcal{O}_\mathrm{b}(0,\vecx) + \mathcal{O}_\mathrm{b}(T,\vecx)] \right\} +\mathrm{O}(a^2) \ees with \bes \label{eq:Bb} \mathcal{O}_\mathrm{b}(x) &=& -\frac{1}{g_0^2}\tr(F_{0k}(x) F_{0k}(x))\,. \ees The presence of the boundary terms in \eq{eq:SSymSF} is the reason for including the corresponding extra term proportional to $\ct$ in the lattice formulation: the coefficients $\ct^{(i)}$ in \bes \label{eq:ctterm} \ct(g_0) =\ct^{(0)} + \ct^{(1)} g_0^2 + \mathrm{O}(g_0^4) \,, \ees can be chosen such that $\omega_\mathrm{b}$ vanishes and there are no linear terms in $a$ in the lattice Schr\"odinger functional at the corresponding order in perturbation theory~\cite{Luscher:1992an}. A prominent observable in the Schr\"odinger functional is the running coupling \bes \gbar^{-2}(L^{-1} ) = \frac1{k} \langle S' \rangle \,,\;\text{ with } S' = \left. \frac{\partial S}{ \partial \eta}\right|_{\eta=0} \,, \ees with $k$ such that $\gbar^2 = g_0^2+\mathrm{O}(g_0^4)$. We want to discuss its $a$-effects as an example. The definition of the $a$-effects requires that we first renormalize.
We here do this by lattice minimal subtraction, \bes \label{eq:glat} \glat^2(\mu) &=& Z_\mathrm{g}(\glat,a\mu) g_0^2\,, \quad Z_\mathrm{g}(\glat,a\mu) = 1 -2b_0 \log(a\mu)\glat^2(\mu) +\mathrm{O}(g^4)\,. \ees We can then define the function \bes K(\glat^2(\frac1L ), \frac{a}L ) = \gbar^{-2} \,, \ees which relates the renormalized couplings of the two schemes. It has a continuum limit and discretization errors \bes \Delta K(\glat^2,\frac{a}L )= K(\glat^2,\frac{a}L ) -K(\glat^2, 0) \,. \ees They have the expansion \bes \frac{\Delta K(\glat^2,\frac{a}L )}{K(\glat^2,0)} =\frac{a}{L} \,[p_{00} + (p_{10} + p_{11}\log(\frac{a}L ) )\glat^2(\frac1L ) +\mathrm{O}(g^4) ] + \mathrm{O}((a/L)^2)\,, \ees where, analogously to before, SymEFT predicts \bes p_{11} = \gamma^{(0)}_\mathrm{b} \, p_{00} \,. \ees An explicit one-loop computation showed that~\cite{Luscher:1993gh} \bes p_{00}&=&2\,(\ct^{(0)}-1)\,, \\ p_{10}&=& 2 \times (\ct^{(1)} + 0.0890(2)) \,, \text{ for } \ct^{(0)}=1\,. \ees Thus $\ct^{(0)}=1,\, \ct^{(1)} = - 0.0890(2)$ leads to the absence of linear $a$-effects at one-loop. For this reason the perturbative computations have been carried out with $\ct^{(0)} = 1$ and from the published one-loop computation we do not have access to $\gamma^{(0)}_\mathrm{b}$. As in \sect{s:AD}, the standard way to obtain $\gamma^{(0)}_\mathrm{b}$ is to compute the one-loop renormalization of $\mathcal{O}_\mathrm{b}$. Here we extract it indirectly from the results of the two-loop computation of~\cite{Bode:1998hd,BodeThesis}. In contrast to \sect{s:AD} the computation thus relies entirely on the lattice regularization. Consider \eq{eq:SlatSF} with a lattice spacing $a\to a_\mathrm{f}$ and then replace \bes \ct(g_0)-1 \to \zeta \,. \ees In this way $\zeta$ acts as a source for the lattice regularized operator $\mathcal{O}_\mathrm{b}$.
The continuum function $ K(\glat^2,0) $ is given by \bes \label{eq:A} K(\glat^2,0) = \lim_{a_\mathrm{f}\to 0} \left[ \langle S' \rangle_{a_\mathrm{f}}\right]_{\zeta=0} \ees and the first order correction in $a$ by \bes \label{eq:Aa} \Delta K(\glat^2,\frac{a}L)= a\,\lim_{a_\mathrm{f}\to 0} \, \left[ \frac{1}{a_\mathrm{f}} \frac{\partial}{\partial \zeta} \langle S' \rangle_{a_\mathrm{f}}^\mathrm{R}\right]_{\zeta=0} + \mathrm{O}((a/L)^2)\,. \ees The right hand side of \eq{eq:Aa} is the SymEFT prediction written as the continuum limit of the lattice regularized theory (with spacing $a_\mathrm{f}$ to distinguish it from $a$). Renormalization is indicated by the superscript $\mathrm{R}$. In addition to \eq{eq:glat} it affects the boundary operator $\mathcal{O}_\mathrm{b}$, \bes \mathcal{O}_\mathrm{b}^\mathrm{lat} &=& Z_\mathrm{b}(\glat,a_\mathrm{f}\mu) \mathcal{O}_\mathrm{b}\,, \quad Z_\mathrm{b}(\glat,a_\mathrm{f}\mu) = 1-\gamma^{(0)}_\mathrm{b} \log(a_\mathrm{f}\mu) \glat^2(\mu) +\ldots\,. \ees We are now ready to extract $\gamma^{(0)}_\mathrm{b}$ from the two-loop expansion, \bes \gbar^{-2} &=& g_0^{-2} \, [1 + k_1 g_0^2 + k_2 g_0^4 + \mathrm{O}(g_0^6)] \\ k_1 &=& - m_1^a + \ct^{(1)} \frac{2a_\mathrm{f}}{L} \,,\quad \\ k_2 &=& - m_2^a - \ct^{(1)} m_2^b - (\ct^{(1)})^2 m_2^c - \ct^{(2)} m_2^d \,,\quad \ees derived in~\cite{Bode:1998hd,BodeThesis} for $\ct^{(0)}=1$. We use the asymptotic expansion of the coefficients $m_i^k$ in powers of $\frac{a_\mathrm{f}}L $ and $\log(\frac{a_\mathrm{f}}L )$ given in Refs.~\cite{Bode:1998hd,BodeThesis}.
But first we note that with $\langle S' \rangle = k/\gbar^{2}$ we have \bes \label{eq:gsq1lp} \frac{1}{a_\mathrm{f}}\left[ \frac{\partial}{\partial\zeta} \langle S' \rangle_{a_\mathrm{f}}\right]_{\zeta=0}^\mathrm{R} &=& \frac{1}{a_\mathrm{f}} Z_\mathrm{b}(\glat,a_\mathrm{f}\mu) \left[ \frac{\partial}{\partial \zeta} \langle S' \rangle_{a_\mathrm{f}}\right]_{\zeta=0} \\ \nonumber &=& Z_\mathrm{b}(\glat,a_\mathrm{f}\mu) \frac k{g_0^2}[\frac2L - \frac{1}{a_\mathrm{f}} m_2^b(\frac{a_\mathrm{f}}L) \,g_0^2 +\mathrm{O}(g_0^4) ] \\ &=& \nonumber \frac k{\glat^2(\mu)}\left[\frac2L -\frac2L \, (\gamma^{(0)}_\mathrm{b}+2b_0) \log(a_\mathrm{f}\mu) \glat^2(\mu) - \frac{1}{a_\mathrm{f}} m_2^b(\frac{a_\mathrm{f}}L) \,\glat^2(\mu) \right] \\&& +\mathrm{O}(\glat^2) \nonumber \ees since the computation~\cite{Bode:1998hd,BodeThesis} corresponds to $\zeta= \ct^{(1)} g_0^2 +\mathrm{O}(g_0^4)$. Finally, requiring finiteness of \eq{eq:gsq1lp} after inserting \bes \frac{1}{a_\mathrm{f}} m_2^b(a_\mathrm{f}/L) &=& \frac 1L [r_2^b + s_2^b \log(L/a_\mathrm{f}) + \mathrm{O}(a_\mathrm{f}/L) ]\,, \ees with~\cite{BodeThesis} $r_2^b=0.1683(8)\,,\; s_2^b = 0.2785(4)$ we obtain $\gamma^{(0)}_\mathrm{b} = s_2^b /2 -2b_0$ and \bes \label{eq:gammabres} \hat\gamma_\mathrm{b} = 0.000(2) \,. \ees Note that this is the anomalous dimension of a boundary operator. Assuming that $\hat\gamma_\mathrm{b} = 0$, exactly, \Eq{eq:Aa} can now be written in the form (see also \eq{eq:DeltaPshortdTLI}) \bes \label{eq:DKasy} \Delta K &=& \frac{a}L [\gbar^2(a^{-1})]^{n_I+1}\,2\,\csym_\mathrm{b}^{(n_I+1)}\,[1 +\mathrm{O}(g^2)] \,, \ees where $\csym_\mathrm{b}^{(n_I+1)} = -\ct^{(n_I+1)}$ is the leading coefficient in \bes \csym_\mathrm{b} = \csym_\mathrm{b}^{(n_I+1)}[\gbar^2(a^{-1})]^{n_I+1} + \mathrm{O}([\gbar^2(a^{-1})]^{n_I+2})\,, \label{eq:cbexp} \ees namely we are considering a theory where $\ct$ is chosen to achieve $\mathrm{O}(a)$ improvement in perturbation theory, up to and including the terms $g_0^{2n_I}$.
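The arithmetic behind \eq{eq:gammabres} is easily checked: with $b_0=11/(16\pi^2)$ for the pure SU(3) gauge theory, $\gamma^{(0)}_\mathrm{b}=s_2^b/2-2b_0$ is compatible with zero. A minimal check, propagating only the quoted error of $s_2^b$:

```python
import math

# Check of gamma_b^(0) = s_2^b/2 - 2*b0, eq. (gammabres), with
# b0 = 11/(16*pi^2) for the pure SU(3) gauge theory and the quoted
# two-loop coefficient s_2^b = 0.2785(4); only the error of s_2^b
# is propagated here.
b0 = 11.0 / (16.0 * math.pi ** 2)
s2b, s2b_err = 0.2785, 0.0004

gamma0_b = s2b / 2.0 - 2.0 * b0
gamma_hat_b = gamma0_b / (2.0 * b0)
gamma_hat_b_err = (s2b_err / 2.0) / (2.0 * b0)

print(gamma_hat_b, gamma_hat_b_err)   # compatible with the quoted 0.000(2)
```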
The $\mathrm{O}(\gbar^2(\frac1L ))$ term in the SymEFT matrix element is given by $r_2^b \gbar^2/2$, but it comes together with the two-loop anomalous dimension of the boundary operator and the next order correction in \eq{eq:cbexp}. Since these are presently unknown, we only show the leading order in $g^2$ in \eq{eq:DKasy}. In order to compute the non-perturbative running of the coupling, one considers the step scaling function, \bes \Sigma(u,\frac{a}L ) = \left.\gbar^2(1/(2L))\right|_{\gbar^2(1/L)=u}\,, \ees where the choice of intermediate renormalization scheme (we chose ``lat'') disappears. Its leading discretization errors are (see also \cite{DallaBrida:2018rfy}, App. A) \bes \Delta \Sigma(u,\frac{a}L ) &=& \Sigma(u,\frac{a}L )-\Sigma(u,0) \\ &=& u \frac{a}{L} \csym_\mathrm{b}^{(n_I+1)}[\gbar^2(a^{-1})]^{n_I+1}[1 +\mathrm{O}(u)] \,. \label{eq:sffinal} \ees Since we have seen that the one-loop anomalous dimension of $\mathcal{O}_\mathrm{b}$ vanishes, this is equivalent to the form used by the ALPHA collaboration recently \cite{Bruno:2017gxd,DallaBrida:2018rfy}. \section{Wilson-QCD} \label{s:Wils} Let us now briefly discuss the case of the original Wilson action for QCD including the Wilson term in the fermion action~\cite{Wilson:1974}. While this action is hardly used any more in the original form, it is still of interest because there are results in the literature. More importantly, some large scale computations use the $\mathrm{O}(a)$-improved version with an approximate coefficient of the clover improvement term. One can gain information on the scaling of $\delta\mathcal{P}^\L$, \eq{eq:S2matel}, in that case. The Wilson quark action breaks chiral symmetry and thus allows for the dimension five Sheikholeslami-Wohlert term~\cite{Sheikholeslami:1985ij} \bes \dlatt[1]{\L}(x)= - \omegaswsym \frac18 \,\psibar(x)[\gamma_\mu,\gamma_\nu] F_{\mu\nu}(x)\psi(x) \ees in the SymEFT, \eq{eq:effLagrangian}.
In principle there are additional terms proportional to quark masses, but these ``only'' affect quark-mass dependences \cite{Luscher:1996sc} and are absent when one takes the continuum limit along a physical scaling trajectory defined by, for example, fixed ratios of $\nf$ pseudo-scalar masses in the $\nf$-flavour theory. We here neglect those $\mathrm{O}(a\mq)$ effects; we set the quark masses to zero. There are no operators which violate rotational symmetry. Therefore, there is no mixing at $\mathrm{O}(a)$ at all. The prediction for the asymptotic $a$ dependence can then immediately be written down, \bes \Delta_\mathcal{P}(a) = -a \, \cswsym^{(0)} \left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma^\mathrm{sw}} \mathcal{M}^\mathrm{RGI} \times [1 + \mathrm{O}(\gbar^2(a^{-1}))] \sim a \left[\frac{1}{-\log(a\Lambda)}\right]^{\hat\gamma^\mathrm{sw}} \,. \ees For the standard Wilson action, we have $\cswsym = \cswsym^{(0)} +\mathrm{O}(g^2)$ with $\cswsym^{(0)}=-1$. As in \eq{eq:sffinal}, there are additional powers of $\gbar^2(a^{-1})$ when the theory is perturbatively $\mathrm{O}(a)$ improved \cite{Sheikholeslami:1985ij,Wohlert:1987rf,Luscher:1996sc,Aoki:2003sj}. We find~\cite{H:inprep} ($C_\mathrm{A}=\mathrm{N},\; C_\mathrm{F}=(\mathrm{N}^2-1)/(2\mathrm{N})$), \bes \hat\gamma^\mathrm{sw} = \frac{15C_\mathrm{F}-6C_\mathrm{A}}{11C_\mathrm{A}-2\nf} \ees for the anomalous dimension. It is rather small. For $\mathrm{N}=3$ this is in agreement with~\cite{Narison:1983}. For the considered case of Wilson fermions, one may also easily discuss the relevant contributions from corrections to the vector and axial vector, non-singlet, flavor currents. In SymEFT, they are represented by~\cite{Luscher:1996sc} \bes V^{r,s}_\mu(x)&=&\psibar_r(x) \gamma_\mu \psi_s(x) + a \,\omegavsym \,\partial_\nu T^{r,s}_{\mu\nu}(x) \, , \\ A^{r,s}_\mu(x)&=&\psibar_r(x) \gamma_\mu \gamma_5 \psi_s(x) + a \, \omegaasym \,\partial_\mu P^{r,s}(x)\,. 
\ees Matrix elements of interest of the corresponding lattice currents are, e.g., leptonic decay constants and semi-leptonic form factors. Using the anomalous dimensions of the non-singlet pseudo-scalar density and the tensor current \cite{Larin:1993tq,Broadhurst:1994se}, their lattice artifacts receive contributions \bes \Delta_\mathcal{P}^\mathrm{V}(a) &=& a \, \gbar^2(a^{-1})\left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma^\mathrm{T}} \mathcal{M}^\mathrm{RGI}_\mathrm{T} \,[\cvSym^{(1)} +\mathrm{O}(\gbar^2(a^{-1}))]\,, \quad \hat\gamma^\mathrm{T} =\frac{3C_\mathrm{F}}{11C_\mathrm{A}-2\nf}\,, \nonumber \\ \label{eq:DeltaVA}\\[-2ex] \nonumber \Delta_\mathcal{P}^\mathrm{A}(a) &=& a \, \gbar^2(a^{-1})\left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma^\mathrm{P}} \mathcal{M}^\mathrm{RGI}_\mathrm{P} \,[\caSym^{(1)} +\mathrm{O}(\gbar^2(a^{-1}))]\,,\quad \hat\gamma^\mathrm{P} = -3\, \hat\gamma^\mathrm{T}\,, \ees where $\mathcal{M}^\mathrm{RGI}_\mathrm{T}$ is the RGI matrix element of $\partial_\nu T^{r,s}_{\mu\nu}$ of the considered transition and $\mathcal{M}^\mathrm{RGI}_\mathrm{P}$ the RGI matrix element of $\partial_\mu P$. There is an extra factor $\gbar^2$, as compared to previous expressions, since the $\mathrm{O}(a)$ term in the classical expansion of the currents vanishes. The $\omega_{\rm V/A}^{(1)}$ factors are the one-loop matching coefficients between SymEFT and the considered lattice theory. An extended list of references with results for improvement coefficients $c_\mathrm{V/A}^{(1)}$ for various actions is given in Table 1 of \cite{Sommer:2006sj}. The case of unimproved lattice currents, e.g. $ V^{r,s}_{\mu,\mathrm{latt}}(x)=\psibar_r(x) \gamma_\mu \psi_s(x)$, can be obtained by setting $\csym_\mathrm{V/A}^{(1)}=-c_\mathrm{V/A}^{(1)}$ in \eq{eq:DeltaVA}. These coefficients are rather small. \section{Summary} We have investigated the form of the leading discretization errors in lattice gauge theory in a few specific cases.
The starting point is the leading contribution to the Symanzik effective Lagrangian in the form \bes \label{eq:effLagrangian2} \L_\text{eff}(x)=\L(x)+a^{{n_\mathrm{min}}}\sum_i \csym_i^{(n_i)}g^{2n_i}\mathcal{B}_i(x)+\ldots\,, \quad {n_\mathrm{min}} \geq 1\,,\quad n_i\geq0\,, \ees where the ellipsis denotes higher powers in $g^2$ for each term $i$ as well as higher powers in $a$. The basis operators are chosen such that they do not mix at one-loop order and have one-loop anomalous dimensions $\gamma_i^{(0)}g^2, \; \gamma_1^{(0)} \leq \gamma_2^{(0)} \leq \ldots$. Once ${n_\mathrm{min}},\, c_i,\, n_i,\, \gamma_i^{(0)}$ are known, the leading correction to the continuum limit of spectral quantities is \bes \label{eq:DeltaPconcl} \Delta_\mathcal{P}(a) &=& a^{n_\mathrm{min}} \left[\gbar^2(a^{-1})\right]^{n_1} \left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma_1} \, \csym_1^{(n_1)} \mathcal{M}^\mathrm{RGI}_{\mathcal{P},1}\,\, [1 +\mathrm{O}([\gbar^{2}(a^{-1})]^{\Delta\hat\gamma},\gbar^2(a^{-1}))]\nonumber \\ && +\rmO(a^{{n_\mathrm{min}}+1})\,, \ees with $\hat\gamma_i=\gamma_i^{(0)}/(2b_0)\,,\; \Delta\hat\gamma=\hat\gamma_2-\hat\gamma_1$. The only unknown is the $a$-independent renormalization group invariant matrix element $\mathcal{M}^\mathrm{RGI}_{\mathcal{P},1}$ of the operator $\mathcal{B}_1$. The most important ingredient in the formula is the leading $\hat\gamma_1$. In almost all considered cases, we find that $\hat\gamma_1\geq 0$ in stark contrast to the case of the 2d O(3) model \cite{Balog:2009yj}. This is good news, as the leading corrections accelerate the approach to the continuum limit compared to the naive classical argumentation which neglects the overall $\left[\gbar^2(a^{-1})\right]^{n+\hat\gamma_1}$ factor. Let us briefly summarize the results for the individual cases considered. \begin{itemize} \item Yang-Mills theory.\\ Discretization effects of order $a^2$ are due to two operators. 
Their anomalous dimensions, $\hat\gamma_i$, computed in \sect{s:AD}, are of order one, see \eq{eq:gammahatYM}. In eqs.~(\ref{eq:effLagrangian2})--(\ref{eq:DeltaPconcl}), the original Wilson action and the tree-level and one-loop Symanzik improved actions have $n_i=0,1,2$, respectively. \item Yang-Mills theory with a boundary: Schr\"odinger functional.\\ As discussed in \sect{s:SF}, there are discretization errors linear in $a$ due to a single boundary operator. Using the literature on perturbation theory for the Schr\"odinger functional, we extracted its anomalous dimension and found that it vanishes within uncertainties, $\hat\gamma_\mathrm{b}=0.000(2)$. This means that the fixed order perturbation theory analysis of discretization errors carried out by the ALPHA collaboration \cite{Bruno:2017gxd} receives no log-corrections at leading order. \item Wilson $\mathrm{O}(a)$ effects due to the fermion action. \\ Here our analysis concerns $\mathrm{O}(a)$ effects which come from an action with perturbative improvement, i.e. an improvement coefficient $\csw$ determined at $n$-loop perturbation theory. The Pauli term, found to be the only contributing operator by Sheikholeslami and Wohlert, has $n_1=n+1$ in \eq{eq:DeltaPconcl}. Its anomalous dimension, $\hat\gamma_1=\hat\gamma^\mathrm{sw} = \frac{15C_\mathrm{F}-6C_\mathrm{A}}{11C_\mathrm{A}-2\nf} $, could be taken from the literature~\cite{Narison:1983}. It is rather small. Interestingly, as one approaches the conformal window \cite{Nogradi:2016qek} by increasing $\nf$, the anomalous dimension $\hat\gamma^\mathrm{sw}$ grows. \item Wilson $\mathrm{O}(a)$ effects due to the flavor currents. \\ Weak decay (and other) matrix elements receive {\em additional} discretization errors from correction terms in the effective weak Hamiltonian. We just considered the flavor currents with perturbative $\rmO(a)$ improvement in \sect{s:Wils}. For the axial current, the (derivative of the) pseudo-scalar field governs the correction term.
Its $\hat\gamma^\mathrm{P}$ is negative, but relatively small in magnitude. Since the coefficient of the correction operator starts at order $g^2$ in perturbation theory, the total logarithmic modification, $\left[\gbar^2(a^{-1})\right]^n\left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma^\mathrm{P}}$, again accelerates convergence due to $n\geq 1$ and $n+\hat\gamma^\mathrm{P}>0$. For the vector current the $\rmO(a)$ correction involves the tensor current with $\hat\gamma^\mathrm{T}$, which is positive and rather small. This leads to an even better $a$-dependence. Note that this analysis holds also for a non-perturbatively improved action but only perturbatively improved currents. \end{itemize} Short distance observables $\mathcal{P}(r\Lambda)$ with $r\Lambda\ll 1$ are special. Their matrix elements $\mathcal{M}^\mathrm{RGI}_{\mathcal{P},i}(r\Lambda)$ are computable in renormalized perturbation theory in terms of the coupling at scale $\mu=1/r$ and one can make parameter free predictions for the leading corrections. As discussed in \sect{s:improbs}, the usual tree-level {\em improved observables} do not always lead to a reduction of the asymptotic cutoff effects, but this is easy to rectify such that cutoff-effects are suppressed by one power of $\gbar^2(r^{-1})$ at short distances. As a broad conclusion, our results are very positive because the so-far known logarithmic corrections are relatively weak. This lends support to some of the continuum extrapolations performed in the literature. For example, the BMW collaboration has performed continuum extrapolations of data obtained with the tree-level coefficient $\csw=1$ of the Sheikholeslami-Wohlert term~\cite{Durr:2010vn}. In principle, the asymptotic behavior is then $\csym_\mathrm{sw}^{(1)} \gbar^2(a^{-1})\left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma^\mathrm{sw}}$. In one of their continuum extrapolations they used this form but with $\hat\gamma^\mathrm{sw}\to 0$, which we now see is a rather good approximation.
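To get a feeling for the numbers, one can evaluate $\hat\gamma^\mathrm{sw}$ and estimate the bias a naive linear-in-$a$ extrapolation incurs when the $[\gbar^2(a^{-1})]^{1+\hat\gamma^\mathrm{sw}}$ factor is dropped. The following sketch uses a one-loop coupling; the spacings, the continuum value and the coefficient are illustrative placeholders, not taken from any actual data set.

```python
import math

# Evaluate gamma_hat^sw and sketch the bias of a naive linear-in-a
# extrapolation that drops the [g^2(1/a)]^(1+gamma_hat^sw) factor.
# The spacings a*Lambda, the continuum value and the coefficient c
# are illustrative placeholders; one-loop coupling for nf = 3.
def gamma_hat_sw(N, nf):
    CF, CA = (N * N - 1) / (2.0 * N), float(N)
    return (15.0 * CF - 6.0 * CA) / (11.0 * CA - 2.0 * nf)

b0 = (11.0 - 2.0) / (16.0 * math.pi ** 2)   # b0 = (11 - 2*nf/3)/(16*pi^2), nf = 3

def g2(a_lambda):
    return 1.0 / (2.0 * b0 * math.log(1.0 / a_lambda))  # one-loop coupling

def observable(a_lambda, P0=1.0, c=1.0):
    """Continuum value plus the a*[g^2(1/a)]^(1+gamma_hat^sw) cutoff effect."""
    return P0 + c * a_lambda * g2(a_lambda) ** (1.0 + gamma_hat_sw(3, 3))

a1, a2 = 0.05, 0.025
P1, P2 = observable(a1), observable(a2)
slope = (P1 - P2) / (a1 - a2)
intercept = P1 - slope * a1                 # naive linear-in-a extrapolation
print(gamma_hat_sw(3, 3), intercept - 1.0)
```

For $\mathrm{N}=3$ and $\nf=3$ one finds $\hat\gamma^\mathrm{sw}=2/27$, small but not zero, and the naive two-point extrapolation misses the continuum value by a few percent in this toy setting.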
Of course, the difficult question in such extrapolations is whether one is in the region where the asymptotics dominates. For this reason they also used alternative extrapolation functions. Despite the small values of $\hat\gamma$ that we found, with tree-level or one-loop Symanzik improved action, the $\left[\gbar^2(a^{-1})\right]^n\left[\,2b_0 \gbar^2(a^{-1})\right]^{\hat\gamma_1}$ effects are non-negligible when MC results are precise, see the right part of \fig{f:Delta_lead}. In any case, when the leading behavior is known, it should be incorporated into the fit function. Still, we emphasize that it is the asymptotically leading behavior which can be predicted, but not the region where exactly this behavior dominates over formally suppressed terms. Of course, the most interesting application of SymEFT is lattice QCD with ${n_\mathrm{min}}=2$ in \eq{eq:logcorr}. In that case the basis of contributing operators is considerably larger. Work on determining their anomalous dimensions is in progress~\cite{H:inprep}. Gradient flow observables are also of high interest. Their discretization errors are surprisingly large~\cite{Sommer:2014mea,Ramos:2014kka,DallaBrida:2016kgh}. Now that it is known that standard pure gauge theory operators are not the source of this behavior, since they have positive $\hat\gamma_i$, a natural suspicion is that there is an unusually large and negative anomalous dimension $\hat\gamma$ of the additional dimension six operator at $t\to0$, present in the 5-d formulation of the Gradient Flow, see \cite{Ramos:2014kka} for more details. We also plan to investigate this issue. \textbf{Acknowledgements.} We thank Hubert Simma and Kay Sch\"onwald for many discussions. \vskip 0.3cm \noindent \input{SymanzikYMxpd.bbl} \end{document}
\section{Introduction} Exceptionally deep observations of the distant universe with the Hubble Space Telescope ({\it HST}) have consistently pushed the frontiers of human knowledge. A succession of observing programs with each generation of {\it HST} detectors, in concert with the other {\it NASA} Great Observatories ({\it Spitzer Space Telescope} and {\it Chandra X-ray Observatory}), have probed the star-formation and assembly histories of galaxies through $>$ 95\% of the universe's lifetime. These observations have been made publicly available to the greater astronomy community, enabling a wide range of science and ancillary observing programs. The study of {\it HST} deep fields has established a number of techniques now standard in extra-galactic astronomy, including the Lyman break selection of distant star-forming galaxies; photometric redshift determinations; stellar population fitting to multi-band photometry; quantitative morphological analysis; and the detection of high-redshift transient phenomena. Here we present the new Frontier Fields, an {\it HST} and {\it Spitzer} director's discretionary time campaign to observe six massive strong-lensing clusters and six parallel fields, designed to simultaneously detect the faintest galaxies ever observed and provide a statistical picture of galaxy evolution at early times. The first Hubble Deep Field (HDF) observations with {\it HST} {\it WFPC2} revealed thousands of galaxies to 30th magnitude, fainter than any seen before (Williams et al. 1996; Ferguson, Dickinson, \& Williams 2000). Utilizing the Lyman break technique (Songaila, Cowie, \& Lilly 1990; Guhathakurta, Tyson, \& Majewski 1990), the HDF and subsequent HDF-South (HDF-S; Casertano et al. 2000; Williams et al. 2000; Ferguson, Dickinson, \& Williams 2000) detected significant numbers of distant star-forming galaxies visible in the optical out to redshifts $z \sim 5$ (e.g. Madau et al. 1996).
{\it HST}'s deep and high spatial resolution images showed that many of these distant galaxies were smaller, with higher surface brightnesses and more irregular structures than local galaxy populations (e.g. Abraham et al. 1996). Follow-up observations of the HDF and HDF-S in the infrared with {\it HST}'s {\it NICMOS} camera (Dickinson 1999; Thompson et al. 1999; Franx 2003) enabled studies of the stellar mass of the $z < 5$ populations (e.g. Papovich et al. 2001; Dickinson et al. 2003; Fontana et al. 2003) as well as the detection of higher redshift galaxies at $5 < z < 7$ (Thompson 2003; Bouwens et al. 2003) and intrinsically redder populations (Labbe et al. 2003; Fern{\'a}ndez-Soto, Lanzetta, \& Yahil 1999; Stiavelli et al. 1999). Combined with the spectroscopic confirmation of many of these faint galaxies (e.g. Lowenthal et al. 1997; Steidel et al. 1996), it became possible to track the cosmic star-formation (Madau et al. 1996; Lanzetta et al. 2002; Bouwens et al. 2003) and assembly history of stellar mass (Dickinson et al. 2003) over the majority of the universe's lifetime. {\it HST} {\it NICMOS} observations of the HDF in 1997 discovered the highest redshift Type Ia supernova known at that time ($z=1.7$), confirming the acceleration of the universe (Riess et al. 2001). After the original HDFs, synergistic multi-wavelength deep observations with the Great Observatories and new capabilities on Hubble further expanded the boundaries of our understanding. The installation of the Advanced Camera for Surveys Wide Field Camera ({\it ACS/WFC}; Ford et al. 1998) on {\it HST} in 2002 greatly improved the depth and area of optical imaging possible within a fixed exposure time. The fields for the Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004) were chosen to overlap with existing X-ray deep fields from {\it Chandra} (HDF/Chandra Deep Field North and the new Chandra Deep Field South; Hornschemeier et al. 2000; Giacconi et al. 2001).
New {\it HST} and {\it Spitzer} imaging produced high-quality and deep multi-wavelength photometry, revealed new distant galaxy populations, measured photometric redshifts, improved stellar mass estimates, and could be matched to faint X-ray sources in the Chandra Deep Fields (e.g. Pope et al. 2006; Mobasher et al. 2004; Grazian et al. 2006; Fontana et al. 2006; Treister et al. 2004; Barger et al. 2005). The cadence of the {\it HST} GOODS observations was designed to perform a systematic search for high-redshift supernovae (Riess et al. 2004a, b). The {\it HST} Ultra Deep Field (HUDF; Beckwith et al. 2006) location within GOODS-S/CDFS was chosen to leverage this existing data with an additional 400 orbits (268 hours) to reach optical depths fainter than the original HDF {\it WFPC2} limits. The resulting ``wedding cake'' survey of the combined GOODS and HUDF observations proved to be an important strategy for spanning the depth and area needed to constrain both the bright and faint ends of the luminosity function of galaxies approaching the epoch of reionization (e.g. Bouwens et al. 2007). With the success of {\it HST} Servicing Mission 4 (SM4) in 2009 and the installation of the Wide Field Camera 3 ({\it WFC3}; MacKenty et al. 2008) with its IR channel, {\it HST} greatly improved the efficiency of its high-spatial resolution near-infrared imaging. The {\it WFC3} Early Release Science near-infrared observations of GOODS-S (Windhorst et al. 2011) and deep imaging in the HUDF and parallels revealed new populations of galaxies at $z \sim 8$ (Illingworth \& Bouwens 2010; Bouwens et al. 2010). Additional {\it WFC3} observations of the HUDF (Ellis et al. 2013) added the F140W filter and deeper observations in the F105W and F160W filters to increase the detection efficiency of the highest redshift candidates ($8.5 < z < 12$). (See also Illingworth et al. 2013 for a separate reduction of all HUDF data.)
Wider field near-infrared imaging with the {\it HST} Multi-Cycle Treasury Cosmic Assembly Near-infrared Deep Extra-galactic Legacy Survey (CANDELS; Grogin et al. 2012; Koekemoer et al. 2012) built upon the previous {\it HST} {\it ACS/WFC} and {\it Spitzer} observations of the GOODS, GEMS (Rix et al. 2004), COSMOS (Scoville et al. 2007), EGS (Davis et al. 2007), and UDS (Lawrence et al. 2007) extragalactic legacy fields. Thanks to {\it WFC3}, detections of $z \sim 8$ candidates are now relatively commonplace (e.g. Labbe et al. 2010; Finkelstein et al. 2010; Yan et al. 2011; McLure et al. 2011; Bradley et al. 2012). The current measurement of the cosmic star-formation history extends to less than 500 Myr after the Big Bang (e.g. Ellis et al. 2013; Finkelstein et al. 2015; Oesch et al. 2013; Oesch et al. 2016; but see Pirzkal et al. 2013; Brammer et al. 2013), albeit with very small numbers of candidates at $z > 9$. {\it HST}'s observations of high redshift galaxies have placed important constraints on cosmological measures of reionization (e.g. Robertson et al. 2015; Finkelstein et al. 2015). With the launch of the {\it James Webb Space Telescope} ({\it JWST}) still several years away, and no new servicing missions to {\it HST} planned, significant progress on understanding the first billion years of the universe with the remaining {\it HST} years poses a major challenge. The {\it HST} and {\it Spitzer} projects proposed a new joint ``Deep Fields'' program supported with director's discretionary time in their 2012 NASA Senior Review proposals. The Hubble Deep Fields Initiative science working group (HDFI SWG) was convened by {\it STScI} Director M. Mountain in 2012. They recommended a new strategy to ``go deep'': use massive clusters of galaxies as cosmic telescopes, combined with very deep {\it HST} and {\it Spitzer} observations\footnote{\url{www.stsci.edu/hst/campaigns/frontier-fields/documents/HDFI_SWGReport2012.pdf}}. 
Massive clusters of galaxies, the largest gravitationally-bound structures in the universe, bend space-time to create efficient gravitational lenses (e.g. Kneib \& Natarajan 2011). The light from galaxies behind these natural telescopes experiences magnification factors of a few within a few arc-minutes of the cluster cores, and magnifications of $\sim$10 or greater within smaller windows along the critical curves. Therefore, {\it HST} observations of these strongly-lensed fields can probe galaxies intrinsically as faint as or fainter than those detected in the HUDF in a much shorter exposure time -- provided those galaxies fall within the high magnification windows. The advantages of this strategy had already been demonstrated by the Cluster Lensing and Supernova Survey (CLASH; Postman et al. 2012), a 524-orbit {\it HST} Multi-Cycle Treasury Program to study the gravitational lensing properties of 25 galaxy clusters. CLASH targeted each cluster with shallow observations in 16 ultra-violet -- near-infrared {\it HST} bandpasses, in order to obtain precise photometric redshift constraints on background lensed galaxies. Within only a few orbits of {\it HST} time in the reddest filters, CLASH discovered several $z>9$ galaxy candidates highly magnified by intervening massive clusters at $z \sim 0.5$ (Coe et al. 2013; Zheng et al. 2012; Bouwens et al. 2014). The Frontier Fields program is an ambitious multi-cycle director's discretionary time observing campaign with {\it HST} and the {\it Spitzer Space Telescope} to peer deeper into the universe than ever before. The Frontier Fields combine the power of {\it HST} with the natural gravitational telescopes of six high-magnification clusters of galaxies to produce the deepest observations of clusters and their lensed galaxies ever obtained. 
The {\it HST} cluster images are obtained in parallel with six `blank' field images; the parallel field images are the second deepest images ever obtained, and triple the blank field area imaged to 29th ABmag depths. The {\it Spitzer Space Telescope} is also dedicating $>1000$ hours of Director's discretionary time to obtain $IRAC$ 3.6 and 4.5 micron imaging to 26.5 and 26.0 ABmag depths in the six cluster and six parallel Frontier Fields. In this paper, we describe the primary science goals in \S2; the field selection criteria in \S3; the Frontier Field clusters and parallel fields in \S4; the {\it HST} and {\it Spitzer} observations in \S5; and the public Frontier Fields lens modeling effort in \S6. Further details, the latest {\it HST} data releases, and Frontier Fields updates may be found at \url{www.stsci.edu/hst/campaigns/frontier-fields/}. Details describing the {\it Spitzer} observations will be presented in Capak et al. 2016 (in prep) and more information is available at \url{ssc.spitzer.caltech.edu/warmmission/scheduling/approvedprograms/ddt/frontier/}. \section{Science Goals \& Strategy} The primary science goals of the Frontier Fields are to explore the high-redshift universe accessible only with deep {\it HST} observations, and to set the scene for {\it JWST} studies of the early universe. High-redshift quasar absorption line studies have found that the epoch of reionization was completed by $z \sim 6$ (Fan et al. 2006), while cosmic microwave background observations place the start of reionization before $z \sim 10$ (e.g. Spergel et al. 2003; Hinshaw et al. 2013; Planck Collaboration 2015). Including recent estimates of the optical depth from PLANCK data, the era between $z \sim 11$ and $z \sim 6$ probed by the deepest and reddest {\it HST} observations marks a critical transition in the universe's history (e.g. Planck Collaboration 2015; Robertson et al. 2015). 
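For orientation, the AB magnitude depths quoted above translate directly into flux densities through $m_{AB} = 23.9 - 2.5 \log_{10}(f_{\nu}/\mu{\rm Jy})$. The helper below is an illustrative sketch (the function name is ours, not from any survey pipeline):

```python
import math  # not strictly needed here, kept for consistency with later sketches

def ab_to_microjansky(m_ab):
    """Convert an AB magnitude to a flux density in microJansky,
    using the standard zero point m_AB = 23.9 for 1 microJy."""
    return 10.0 ** (-0.4 * (m_ab - 23.9))

# The 29 ABmag parallel-field depths correspond to ~9 nJy sources;
# the Spitzer/IRAC 26.5 and 26.0 ABmag depths to ~0.09 and ~0.14 microJy.
f_parallel = ab_to_microjansky(29.0)   # ~0.009 microJy
f_irac = ab_to_microjansky(26.0)       # ~0.14 microJy
```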
The installation of the {\it HST} {\it WFC3} camera with its near-infrared channel dramatically increased the number of galaxy candidates detected at $z > 6$. However, prior to the start of the Frontier Fields in 2013, astronomers' understanding of the galaxy populations during the epoch of reionization was based largely on those detected in direct {\it HST} {\it WFC3/IR} imaging surveys (HUDF, CANDELS, BORG) and handfuls of lensed objects in shallow {\it HST} observations from CLASH. The detected unlensed galaxies are the most luminous objects of their era, and thus significantly more massive and rare than the progenitors of today's Milky Way-like galaxies (e.g. Behroozi, Conroy, \& Wechsler 2013; Boylan-Kolchin, Bullock, \& Garrison-Kimmel 2014). High redshift galaxies are barely resolved by {\it HST} (Oesch et al. 2010; Ono et al. 2013), with lensed $z > 8$ galaxies yielding intrinsic sizes less than a few hundred pc across (Coe et al. 2013). Because such high-redshift galaxies are often only observable in the reddest {\it HST} bandpasses, limited information about their rest-frame ultraviolet slopes, stellar populations, and dust content can be inferred from their observed colors (e.g. Finkelstein et al. 2012). Unseen $z > 6$ dwarf galaxies well below {\it HST}'s nominal direct detection limit are needed to produce the number of ionizing photons required to ionize the universe's reservoir of intergalactic neutral hydrogen (e.g. Finkelstein et al. 2015; Robertson et al. 2015). Very few candidates at $z \sim 9$ and above had been identified (Ellis et al. 2013; Oesch et al. 2013; Zheng et al. 2012; Coe et al. 2013), resulting in vigorous debate about how quickly the first star-formation proceeded and how many $z > 9$ objects {\it JWST} might see (Oesch et al. 2012). (The contribution of early accreting black holes to the reionization budget is presently unknown, and will be probed by {\it JWST}.) 
In order to address many of these unknowns, the Frontier Fields program was designed with the following science aims: 1. To reveal populations of $z=5-10$ galaxies that are $>10$ times fainter than any presently known, the key building blocks of $\sim L^*$ galaxies in the local universe. 2. To characterize the stellar populations of faint galaxies at high redshift and solidify our understanding of the stellar mass function at the earliest times. 3. To provide, for the first time, a statistical morphological characterization of star forming galaxies at $z > 5$. 4. To find $z > 8$ galaxies stretched out enough by foreground clusters to measure sizes and internal structure and/or magnified enough for spectroscopic follow-up. The Frontier Fields combine several previous high-redshift galaxy observing strategies to achieve these aims: very deep multiband {\it HST} imaging to identify very faint distant galaxy candidates by their colors; and strong gravitational lensing by massive clusters of galaxies to probe galaxies fainter than those accessible with direct `blank' field {\it HST} imaging. Deep imaging with the {\it Spitzer} $IRAC$ 3.6 and 4.5 micron bands is also required to improve photometric redshifts, measure stellar masses and specific star-formation rates, and rule out low-redshift interlopers (e.g. Labb\'{e} et al. 2013; Brada\v{c} et al. 2014). The clusters and their exact pointings were selected to optimize the number of detectable $z \sim 10$ objects within the {\it HST} {\it WFC3}/$IR$ field of view, magnified by factors of $\sim 1.5-100$ depending on their positions relative to the critical curves of the clusters. The {\it HST} exposure times were chosen to probe intrinsic depths $> 10 \times$ fainter than the HUDF in the highest magnification regions of the lensed fields, but with significantly less time than blank field observations. 
The volumes probed at the highest magnifications are very small (see Coe, Bradley, \& Zitrin 2015), thus the program observes multiple clusters to improve the statistical likelihood of capturing the light from the faintest and most distant galaxies. While color, redshift, and other relative measures such as specific star-formation rates and emission-line equivalent widths are immune to errors in the magnification estimates, measurements of the intrinsic luminosities and sizes of individual objects depend directly on the inferred lensing magnifications. (Integrated quantities such as galaxy luminosity functions are less susceptible to magnification uncertainties.) In concert with the DD observing campaigns, a unified effort to create high fidelity public maps of the lensing properties of each FF cluster is an integral part of the FF (see \S6). Because each cluster is observed at a fixed {\it HST} roll angle for an extended period, we also obtain simultaneous deep parallel field observations at a single pointing centered $\sim 6$ arcmins from the cluster core ($> 1.8$ projected co-moving Mpc for a $z > 0.3$ lensing cluster). These six new `blank fields' are comparable in depth to the HUDF parallel fields (Oesch et al. 2007), and triple the area of unlensed fields observed by {\it HST} to depths of $\sim 29$ ABmag. The background volumes lensed by the clusters are much smaller than those probed by unlensed fields. Thus, while the cluster pointings allow us to see intrinsically fainter objects than the HUDF within small volumes, the parallel fields provide a dramatic improvement in the volume and statistical counting of distant galaxies brighter than 29th magnitude. This is particularly important for understanding the biases associated with cosmic variance -- i.e. the fact that every single sightline through the universe is unique (e.g. Robertson et al. 2014). 
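The dependence of inferred intrinsic properties on the magnification can be made explicit. As an illustrative sketch (the function names are ours): lensing boosts the observed flux by the magnification $\mu$ and, since surface brightness is conserved, stretches areas by $\mu$, so intrinsic magnitudes and (roughly isotropic) sizes follow from dividing out the magnification:

```python
import math

def intrinsic_magnitude(m_obs, mu):
    """Undo the lensing flux boost: the source is intrinsically
    fainter than observed by 2.5*log10(mu) magnitudes."""
    return m_obs + 2.5 * math.log10(mu)

def intrinsic_size(r_obs_arcsec, mu):
    """Lensing conserves surface brightness and magnifies area by mu;
    for roughly isotropic magnification the linear size shrinks by sqrt(mu)."""
    return r_obs_arcsec / math.sqrt(mu)

# A mu ~ 10 image observed at 27 ABmag corresponds to an intrinsic
# ~29.5 ABmag source -- comparable to or fainter than the HUDF limits.
print(intrinsic_magnitude(27.0, 10.0))   # 29.5
```

An error in the adopted $\mu$ therefore propagates directly into the inferred luminosity and size, which is why the public lens modeling effort (\S6) is integral to the program.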
The Frontier Fields will set the stage for the {\it James Webb Space Telescope} to study first light galaxies at $z > 10$ and to understand the assembly of galaxies over cosmic time. {\it JWST} is a 6.5m cold telescope sensitive at $0.7-27$ microns, to be launched at the end of 2018 with a limited lifetime requirement of 5 years and a goal of 10 years. Because {\it JWST}'s lifetime is short relative to {\it HST}'s, it is important for the astronomical community to be prepared for {\it JWST} observations early on. The high-redshift galaxy candidates detected by the Frontier Fields are likely to be among the first spectroscopic targets for {\it JWST}, and current studies will produce a better understanding of the high-redshift galaxy luminosity functions, spectral-energy distributions, and sizes needed to effectively plan for {\it JWST} surveys. The {\it HST} Frontier Fields high-resolution optical imaging shortward of 0.7 micron in $ACS$ F435W and F606W reaches depths comparable to those achievable by {\it JWST} {\it NIRCam} within 1--2 hours, and hence provides an important legacy dataset for future {\it JWST} extragalactic work. Finally, direct observations of the faintest first galaxies and the dwarf galaxies and early accreting black holes expected to be responsible for reionization will be challenging even with {\it JWST}. Development of cluster lens modeling techniques now will enable future {\it JWST} studies of strong-lensing clusters and their lensed galaxies. The Frontier Fields data offer the opportunity to do ground-breaking science in a number of fields other than the highest redshift universe. Several complementary {\it HST} GO observing programs have been awarded to obtain deep WFC3/UV imaging (GO 13389, 14209; B. Siana), WFC3/IR grism spectroscopy (GO 13459; T. Treu), and target-of-opportunity follow-up of transient events (GO 13386, 13790, 14208; S. Rodney). 
Hundreds of multiply-imaged background galaxies at all redshifts have permitted the construction of dark matter maps of the clusters at unprecedented resolution to probe cluster substructure (e.g. Jauzac et al. 2014, 2015; Wang et al. 2015; Hoag et al. 2016; Limousin et al. 2016; Mohammed et al. 2016; Natarajan et al. in prep), and will enable new cosmological constraints via angular scaling relations (e.g. Kneib \& Natarajan 2011). At the recommendation of the HFF review committee, an exercise comparing the fidelity of the various independent lens modeling methodologies has been on-going, with the first results, involving more than 10 independent research groups, in preparation (Meneghetti et al. 2016). Detailed studies of intermediate redshift galaxies observed both at high magnification and in deep parallel imaging will probe their internal structures, stellar populations, and luminosity functions (e.g. Alavi et al. 2014; Jones et al. 2015; Livermore et al. 2012; Castellano et al. 2016; Pope et al. 2016). These deepest-ever images of massive galaxy clusters have detected intracluster light, ram-pressure stripping, and tidal streams at $z > 0.3$ (e.g. Montes \& Trujillo 2014; McPartland et al. 2016), probing the dynamic processes impacting galaxy evolution within these unique environments. The new {\it HST} Frontier Fields observations have detected a number of transients (e.g. Rodney et al. 2015), including the light-curves from the first multiply-imaged supernova (Kelly et al. 2015; discovered in GLASS). \section{Field Selection} The six Frontier Field clusters and parallel fields (Table \ref{table1}) were selected to meet the primary scientific goals outlined in the HDFI SWG recommendations, as well as to optimize the {\it HST} and {\it Spitzer} observing campaigns. A list of 25 cluster candidates was suggested by the HDFI SWG, and additional candidates were suggested by the community during the selection process. 
Each cluster was evaluated using the following criteria. {\it Lensing properties:} The primary consideration for selecting each of the Frontier Fields was the lensing strength of the cluster. Each cluster's lensing strength was evaluated by calculating the likelihood of observing a $z=9.6$ galaxy magnified to $H_{F160W} \leq 27$ ABmag within the {\it HST WFC3/IR} field of view, ignoring corrections for incompleteness or sky brightness (Table 2). Preliminary lensing models were provided by two independent modelers, J. Richard and A. Zitrin, and lensing probabilities were calculated assuming a luminosity function with $\phi^{*} = 4.27 \times 10^{-4}$, $M_{UV}^{*} = -19.5$, and $\alpha = -1.98$, extrapolated from $z \sim 8$ (Bradley et al. 2012) by assuming $dM^{*}/dz = 0.46$ (Coe et al. 2015). We excluded several lower-redshift $z<0.3$ strong-lensing clusters (e.g. Abell 1689) because we could not adequately sample the low-redshift cluster critical curves within a single {\it WFC3/IR} $2.2\arcmin \times 2.0\arcmin$ pointing. However, although the $z=0.308$ merging cluster Abell 2744's critical curves could not be covered by a single {\it WFC3/IR} pointing, the probability of observing a $z=9.6$ galaxy near its core was among the highest of all cluster candidates. Because we based our selection upon the results of the lensing model predictions, our selection was biased towards better studied clusters with existing imaging and spectroscopic data from which lensing models could be constructed. Some otherwise promising clusters (e.g. El Gordo; Menanteau et al. 2012) could not be evaluated, as insufficient lensing model constraints were available at the time of selection. {\it Sky brightness and Galactic extinction:} Observations of the very faint extra-galactic universe are limited by the brightness of the sky and by foreground Galactic extinction. Zodiacal light can have a significant impact on the depths obtained by {\it HST} and {\it Spitzer} imaging within a given exposure time. 
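The luminosity function assumed in these probability estimates can be sketched as follows. The code is illustrative only (the actual calculation also folds in each cluster's magnification map and source-plane volume, which we do not reproduce here); the Schechter parameters are those quoted above, with $M^{*}_{UV}$ extrapolated linearly from $z \sim 8$:

```python
import math

# Schechter UV luminosity function parameters quoted in the text:
# Bradley et al. (2012) at z ~ 8, with M* evolved per Coe et al. (2015).
PHI_STAR = 4.27e-4    # normalization, Mpc^-3 mag^-1
M_STAR_Z8 = -19.5     # characteristic magnitude at z ~ 8 (AB)
ALPHA = -1.98         # faint-end slope
DMSTAR_DZ = 0.46      # assumed fading of M* per unit redshift

def schechter(M, z):
    """Comoving number density per unit magnitude, phi(M), at redshift z,
    with M* extrapolated linearly from z = 8."""
    m_star = M_STAR_Z8 + DMSTAR_DZ * (z - 8.0)
    x = 10.0 ** (-0.4 * (M - m_star))
    return 0.4 * math.log(10.0) * PHI_STAR * x ** (ALPHA + 1.0) * math.exp(-x)

def number_density_brighter(M_lim, z, dM=0.01, n_steps=1000):
    """Midpoint integration of phi(M) over magnitudes brighter than M_lim."""
    return sum(schechter(M_lim - (i + 0.5) * dM, z) * dM
               for i in range(n_steps))

# e.g. the comoving density of z = 9.6 galaxies brighter than M_UV = -18,
# before any magnification or survey-volume weighting is applied.
n = number_density_brighter(-18.0, 9.6)
```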
The zodiacal background depends upon the angular distance of the target from the Sun and the ecliptic. Targets observed with high zodiacal backgrounds have near-infrared sky brightnesses several magnitudes brighter than the lowest zodiacal backgrounds, resulting in significantly lower signal-to-noise images within a given exposure time. Given the highly constrained roll-angles required to obtain observations of a fixed parallel field with both the {\it WFC3} and {\it ACS} cameras, we have a limited ability to mitigate the impact of the zodiacal background by constraining the solar avoidance angle. Therefore, strong preference was given to clusters at high ecliptic latitudes. This selection criterion excluded a number of strong-lensing clusters at low ecliptic latitudes. Additionally, clusters at high Galactic latitude with low extinction were strongly preferred. MACS0717.5+3745 has relatively high Galactic extinction, with $E_{B-V} = 0.068$ (Schlafly \& Finkbeiner 2011). However, this cluster was the second strongest potential lens on our list of candidates (Table 2). Estimates of the $H_{F160W}$ zodiacal background at the epoch of observation and Galactic extinction for each cluster are given in Table 1. \begin{figure*} \plotone{f1_sm.eps} \caption{The location of the six Frontier Field cluster $+$ parallel field pairs, relative to the ecliptic and Galactic plane. The Galactic extinction map is from Schlegel, Finkbeiner, \& Davis (1998). Deep extra-galactic legacy fields HDF-N, HDF-S, UDF, COSMOS, EGS, and UDS are shown for reference. } \end{figure*} {\it Suitability of available parallel fields:} The {\it HST} observing strategy requires the simultaneous observation of the cluster field and a blank parallel field with the {\it WFC3/IR} and {\it ACS} cameras. As we discuss below, this observing requirement limits the range of available roll angles, and hence the locations for the parallel fields. 
The potential parallel field locations were selected to avoid bright stars and extended cluster structures when possible. The weak lensing signal for each of the parallel fields was also examined where possible (private communication, J. Merten, E. Medezinski, K. Umetsu). The weak lensing signal within the parallel fields has median magnification factors between 1.02 and 1.30 for background galaxies at $1 < z < 9$; see the discussion of each cluster for detailed estimates. {\it Suitability for ground-based follow-up:} Follow-up of interesting objects detected in the Frontier Fields requires access to those fields from the major ground-based facilities. {\it ALMA} in particular has the potential to spectroscopically confirm the redshifts of very high redshift ($z > 6$) galaxy candidates via the [CII] 158 micron and other atomic emission lines (e.g. da Cunha et al. 2013). Additionally, spectroscopic redshifts of multiply imaged galaxies add strong constraints to the lensing models for the clusters. Thus, access to the telescopes on Maunakea, in addition to southern facilities like {\it ALMA} and the {\it VLT}, was a major consideration. Five out of the six selected clusters are visible from {\it ALMA}, with MACS0717.5+3745 as the exception (Tables 1, 2). Five out of the six clusters are visible from Maunakea, with Abell S1063 as the exception. {\it Existing ancillary data:} Supporting data was a key consideration recommended by the HDFI SWG. Many of the candidate clusters have been studied previously by space missions, including {\it HST}, the {\it Spitzer} cryo-mission with {\it MIPS} and {\it IRAC} (including the 5.8 and 8 micron channels); {\it Herschel}, {\it XMM}, and {\it Chandra} (see the discussion of each cluster for details). Additionally, ground-based spectroscopic and wide-field imaging survey data were evaluated from the literature. Four of the chosen clusters were drawn from the CLASH survey (Postman et al. 
2012), with supporting multi-band shallow {\it HST} imaging, wide-field ground-based imaging ({\it Subaru}), spectroscopy ({\it VLT}), as well as archival {\it Herschel} and {\it Chandra} data. Since the announcement of the Frontier Field selection, the community has responded with additional observations with {\it Chandra} (PI S. Murray; C. Jones-Forman), {\it VLA} (PI E. Murphy), {\it XMM} (PI J.P. Kneib, Eckert et al. 2015), {\it ALMA} (PI F. Bauer), {\it LMT} (PI A. Pope), {\it Gemini} GeMS/GSAOI $K_s$ imaging (e.g. Schirmer et al. 2014), {\it VLT} Hawk-I $K_s$ imaging (PI D. Marchesini \& G. Brammer), {\it VLT} MUSE spectroscopy (PIs Caputi \& Cl\'{e}ment, Bauer, Richard, Grillo; e.g. Karman et al. 2015, Grillo et al. 2016), as well as the release of previously unpublished data on these fields (e.g. Ebeling et al. 2014; Gruen et al. 2014). We continue to maintain a clearing-house website for public data links and Frontier Fields-related publications: \url{www.stsci.edu/hst/campaigns/frontier-fields/FF-Data}. In addition to the science-driven considerations given above, we optimized the cluster selection for a number of practical issues. {\it HST observability:} The Frontier Fields are observed with {\it HST} at a fixed roll angle and its 180 degree offset in order to obtain deep observations in the cluster field and parallel field with both {\it WFC3/IR} and {\it ACS}. These observations are 70 orbits at each orientation. Each field was evaluated to determine the ability to hold a fixed roll angle for more than 30 days and the availability of guide stars at these orientations. For optimal stability, {\it HST} requires two guide stars with magnitudes brighter than 15th magnitude. Our initial evaluation of MACSJ1149.5+2223 found only one acceptable guide star; however, a second guide star with a magnitude slightly fainter than the nominal limit was available. This new guide star was tested in early observations and found to be suitable. 
{\it Spitzer observability:} Each cluster and parallel field was evaluated by the {\it Spitzer} implementation team. {\it Spitzer} observations are sensitive to bright stars in the field, as saturation above $\sim$35,000 DN can result in ``column pull-down'' impacting the data quality along the affected column. MACSJ0647.7+7015 (e.g. Coe et al. 2013) in particular was found to have unacceptably bright stars in the vicinity, and was excluded. {\it Schedulability:} Each set of cluster/parallel field observations constitutes a considerable investment of {\it HST} time, with 70 orbits at each orientation and 140 orbits total per field. The optimal scheduling of these observations is a challenge. We also anticipated that the Frontier Fields would be popular fields for ancillary {\it HST} observing programs. Therefore, to avoid schedule collisions with the main Frontier Field program, supporting Frontier Field programs, and other popular {\it HST} fields (e.g. the UDF/GOODS-S), the Frontier Fields were selected to span a range in right ascension. The order in which the fields are observed was determined primarily by the desire to prevent overlapping epochs of {\it HST} observations. {\it JWST observability:} Each of the selected Frontier Fields positions was run through preliminary {\it JWST} scheduling software and confirmed to have extended {\it JWST} visibility periods. 
\\ \\ \begin{deluxetable*}{llllccccl} \label{table1} \tabletypesize{\footnotesize} \tablecolumns{9} \tablewidth{0pt} \tablecaption{The Frontier Fields Locations } \tablehead{ \colhead{Cluster} &\multicolumn{2}{c}{Cluster Center (J2000)} & \multicolumn{2}{c}{Parallel Center (J2000)} &\colhead{Epoch1} &\colhead{Epoch2} &\colhead{zodiacal $H_{F160W}$ \tablenotemark{a}} &\colhead{$E_{(B-V)}$ \tablenotemark{b}} \\ &\colhead{$\alpha$} & \colhead{$\delta$} &\colhead{$\alpha$} & \colhead{$\delta$} & {\it HST} & {\it HST} &\colhead{(AB mag/ $\sq\arcsec$)} & } \startdata Abell 2744 & 00:14:21.2 & -30:23:50.1 &00:13:53.6 & -30:22:54.3 &10/2013-12/2013 &5/2014-7/2014 & 22.2/21.9 & 0.012 \\ MACSJ0416.1$-$2403 & 04:16:08.9 & -24:04:28.7 &04:16:33.1 & -24:06:48.7 &1/2014-2/2014 &7/2014-9/2014 & 22.4/22.3 & 0.036 \\ MACSJ0717.5$+$3745 & 07:17:34.0 & +37:44:49.0 &07:17:17.0 & +37:49:47.3 &9/2014-12/2014 &2/2015-3/2015 & 21.8/22.0 & 0.068 \\ MACSJ1149.5$+$2223 & 11:49:36.3 & +22:23:58.1 &11:49:40.5 & +22:18:02.3 &11/2014-1/2015 &4/2015-5/2015 & 21.9/22.0 & 0.020 \\ Abell S1063 & 22:48:44.4 & -44:31:48.5 &22:49:17.7 & -44:32:43.8 &10/2015-11/2015 &4/2016-6/2016 & 22.2/20.6 & 0.010 \\ Abell 370 & 02:39:52.9 & -01:34:36.5 &02:40:13.4 & -01:37:32.8 &12/2015-2/2016 &7/2016-9/2016 & 21.8/21.9 & 0.028 \\ \enddata \tablenotetext{a}{Typical zodiacal background in $H_{F160W}$ for {\it HST} Epoch1 and Epoch 2 observations respectively; computed using {\it HST} exposure time calculator \\ and median observing date.} \tablenotetext{b}{Schlafly \& Finkbeiner 2011, courtesy of the NASA/IPAC Extragalactic Database} \end{deluxetable*} \vspace{0.1 in} \section{The Frontier Field Clusters\\ and Parallel Fields} In February 2013, the six Frontier Field clusters and their parallel fields locations were finalized and announced prior to the {\it HST} Cycle 21 proposal deadline. 
The Frontier Fields clusters are Abell 2744, MACSJ0416.1-2403, MACSJ0717.5+3745, MACSJ1149.5+2223, Abell S1063 (also known as RXCJ2248.7-4431), and Abell 370 (Table 1). These clusters are at redshifts between 0.3 and 0.55, and are among the most massive known clusters at these redshifts (Table 2). All of the clusters had previous (shallow) {\it HST} imaging, with four clusters previously observed as part of the CLASH {\it HST} MCT survey (MACSJ0416.1-2403, MACSJ0717.5+3745, MACSJ1149.5+2223, and Abell S1063), and all but Abell 370 were part of the MAssive Cluster Survey (Ebeling, Edge, \& Henry 2001). \subsection{Abell 2744} Abell 2744 is a massive X-ray luminous merging cluster at $z=0.308$ (Couch \& Newell 1982; Abell, Corwin, \& Olowin 1989), also known as AC118 or ``Pandora's Cluster''. It has a total X-ray luminosity of $L_X = 3.1 \times 10^{45}$ erg s$^{-1}$ in the $2-10$ keV band (Allen 1998), with X-ray emission concentrated on the southern compact core and extending to the northwest (Owers et al. 2011; Eckert et al. 2015). Its virial mass within the central 1.3 Mpc is $\sim 1.8 \times 10^{15} M_{\odot}$ (Merten et al. 2011). The velocity dispersion is $\sigma = 1497 \pm 47$ km s$^{-1}$ (Owers et al. 2011), but shows two distinct structures, with the northern substructure offset in velocity by $-1600$ km s$^{-1}$ and with $\sigma \sim 800$ km s$^{-1}$ (Boschin et al. 2006; Braglia et al. 2007). Abell 2744's complicated velocity structure and lensing properties suggest that it is a merging system with at least three separate sub-structures (Cypriano et al. 2004; Braglia et al. 2007; Merten et al. 2011). Weak lensing analysis by Merten et al. (2011) identified four mass concentrations (core, N, NW, and W) of 2.2, 0.8, 1.1, and 1.1 $\times 10^{14}$ $M_{\odot}$ respectively, with the NW structure showing evidence for spatially separated dark matter, gas, and galaxies. 
Abell 2744 is also host to a powerful extended radio halo with $P_{1.4 GHz} = 1.5 \times 10^{25}$ W Hz$^{-1}$ (Giovannini, Tordi, \& Feretti 1999). Despite its obviously complicated geometry, Abell 2744 was one of the strongest Frontier Field cluster candidates based on its lensing strength, sky location, and pre-existing ancillary data. The pre-FF lensing model by Merten et al. (2011; using the Zitrin et al. 2009 Light-Traces-Mass modeling method) found 34 strong-lensed images of 11 galaxies in {\it HST} F814W imaging of the core of Abell 2744 (HST GO 11689, P.I.: R. Dupke), giving a core mass $\sim 2 \times 10^{14} M_{\odot}$. This core region is $\sim 100\arcsec \times 100\arcsec$, and therefore fits within the {\it HST} {\it WFC3/IR} FOV of $2.2\arcmin \times 2.1\arcmin$. Analysis of preliminary models constructed by Zitrin and Richard separately suggested a very high probability of magnifying a $z \sim 10$ galaxy to $H=27$ ABmag within the {\it WFC3/IR} field of view. This high lensing probability has been confirmed by subsequent models provided by the lensing map effort and independent teams (e.g. Coe et al. 2015; Atek et al. 2014; Zitrin et al. 2014; Johnson et al. 2014; Lam et al. 2014; Richard et al. 2014; Ishigaki et al. 2015; Wang et al. 2015; Jauzac et al. 2015; Table 2). Abell 2744 has one of the darkest skies and lowest Galactic extinctions ($E_{(B-V)} = 0.012$; Schlafly \& Finkbeiner 2011) of all the cluster candidates. The typical zodiacal backgrounds in $H_{F160W}$ during the cluster IR epoch (10/2013-12/2013) and the parallel IR epoch (5/2014-7/2014) are $\sim 22.2$ and 21.9 ABmag per $\sq\arcsec$ respectively. At a declination of $-30\arcdeg$, it is easily observable with {\it ALMA} and the {\it VLT} but also within reach of Maunakea and the {\it Very Large Array}. It has been extensively studied by the {\it Chandra X-ray Observatory} (e.g. Kempner \& David 2004; Owers et al. 2011; Merten et al. 2011). 
Abell 2744 was also observed during the {\it Spitzer} cryo-mission, with MIPS 24 micron and IRAC 3.6 - 8 micron observations (PI G. Rieke). This cluster is part of the Herschel Lensing Survey (Egami et al. 2010), with deep {\it Herschel Space Observatory} PACS 100/160 micron and SPIRE 250/350/500 micron imaging. The choice of parallel field was particularly challenging in this case. {\it HST} roll-angles with $>$ 30 day observing windows at both orientations placed the observable parallel field either 6\arcmin\ east or west of the Abell 2744 core. However, the eastern parallel field location was undesirable because of the presence of an unavoidable bright star. Therefore the western parallel field location ($\alpha$ = 00:13:53.6, $\delta$ = -30:22:54.3, J2000) was chosen. The parallel field is $\sim 1-2\arcmin$ west of the NW and W sub-structures identified in Merten et al. (2011). The weak-lensing magnification boost from the cluster is therefore predicted to be significant, with median magnification factors $\sim 1.14-1.21$ and maximum magnification factors $1.5-1.85$ for $1 < z < 9$ within the WFC3/IR pointing, based on the pre-HFF v1.0 Merten model (Table 2). \subsection{MACSJ0416.1-2403} MACSJ0416.1-2403 is a massive elongated X-ray luminous cluster at $z=0.397$ (Ebeling et al. 2007; Ebeling et al. 2014)\footnote{This cluster's redshift is often incorrectly quoted as 0.42, based on preliminary analysis by Postman et al. 2012.}. Its bolometric X-ray luminosity is $L_X = 1.02 \times 10^{45}$ erg s$^{-1}$, with a double-peaked profile suggestive of a merging cluster (Mann \& Ebeling 2012). The velocity dispersions for each of these components are $\sigma = 779^{+22}_{-20}$ and $955^{+17}_{-22}$ km s$^{-1}$ (Jauzac et al. 2014; Ebeling et al. 2014), and the total mass enclosed within 950 kpc is $\sim 1.2 \times 10^{15} M_{\odot}$ (Jauzac et al. 2014; Grillo et al. 2015). 
MACSJ0416.1-2403 was selected as one of five strong-lensing clusters for the {\it HST} MCT CLASH survey (Postman et al. 2012) based on its large Einstein radius ($\theta_E > 35\arcsec$ at $z=2$). Prior to the Frontier Fields observations, Zitrin et al. (2013) found a high number of multiple images relative to its critical area in the CLASH {\it HST} images, likely due to its highly elongated and irregular structure. Preliminary evaluation of MACSJ0416.1-2403's lensing models yielded moderate to high probabilities of detecting a $z \sim 10$ $H \leq 27$ mag galaxy within the {\it WFC3/IR} field of view (Table 2). MACS0416.1-2403 is at a high ecliptic latitude with a Galactic extinction E(B-V) = 0.036 (Schlafly \& Finkbeiner 2011). The typical zodiacal backgrounds in $H_{F160W}$ during the cluster IR epoch (7/2014-9/2014) and the parallel IR epoch (1/2014-2/2014) are $\sim 22.3$ and 22.4 ABmag per $\sq\arcsec$ respectively. At declination $\sim$ -24, this field is easily observable with {\it ALMA}, and also available to Maunakea. A significant amount of data was collected on this cluster as part of MACS and CLASH, including shallow multi-band {\it HST} data, {\it Chandra} imaging, {\it Spitzer} warm-mission IRAC (PI Bouwens), and VLT spectroscopy (e.g. Grillo et al. 2015). Additional Chandra imaging has since been obtained by C. Jones-Forman and S. Murray (Ogrean et al. 2015). However, there are no legacy {\it Spitzer} cryogenic observations. MACSJ0416.1-2403 is notable for having a $J=10$, $V=13$ magnitude star within 1\arcmin of the cluster core. This star has a high proper motion, with DSS and 2MASS imaging from the mid-1990s showing a position a few arc-seconds north of its current (2014) {\it HST} {\it ACS} position. This star is included in the Frontier Fields {\it ACS} pointing, and lies just off the {\it WFC3/IR} pointing, resulting in scattered light and saturated diffraction spikes in the Frontier Field images. 
However, this star is bright enough to act as an adaptive optics guide star, and therefore provides a unique opportunity to obtain AO imaging (e.g. Schirmer et al. 2015, Gemini-GEMS) and spectroscopy of the critical curves surrounding a strong-lensing cluster. The MACSJ0416.1-2403 parallel field was chosen to lie westward of the cluster pointing in order to avoid the bright eastern stars in the {\it Spitzer} Frontier Field observations. This orientation is perpendicular to the elongation of the cluster on the sky, and therefore we expect minimal contamination of the parallel field from the cluster. The parallel field is predicted to have median magnification factors $\sim 1.09-1.16$ and maximum magnification factors $1.2-1.4$ for $1 < z < 9$ within the WFC3/IR pointing based on the pre-HFF v1.0 Merten model (Table 2). \subsection{MACS0717.5+3745} MACSJ0717.5+3745 is an extremely massive X-ray luminous merging cluster at $z=0.545$ (Edge et al. 2003). The X-ray luminosity between 0.1-2.4 keV is $3.3 \pm 0.2 \times 10^{45}$ erg s$^{-1}$ (Edge et al. 2003). The cluster's velocity dispersion is $1660^{+120}_{-130}$ km s$^{-1}$ (Ebeling et al. 2007). Its optical and X-ray morphology shows a double peak and the lack of a central cluster core, with a filament extending towards the southeast (Ebeling et al. 2004; Kartaltepe et al. 2008). This cluster also hosts the most powerful known radio source ($P(1.4\,GHz) \sim 5 \times 10^{25}$ W Hz$^{-1}$) with a radio relic significantly offset from the cluster center to the north (van Weeren et al. 2009). MACSJ0717.5+3745 was also chosen as one of the CLASH strong-lensing clusters (Postman et al. 2012). It has the largest known Einstein radius ($\sim$ 350 kpc, Zitrin et al. 2009) and an estimated virial mass $\ge 2-3 \times 10^{15} M_{\odot}$ (Zitrin et al. 2009; Limousin et al. 2012). Several pointings of {\it HST} {\it ACS} imaging were obtained previously by Ebeling in Cycle 12 (GO 9722). 
Weak-lensing analyses of the pre-Frontier Fields {\it HST} imaging and ground-based Subaru imaging have confirmed the presence of the southeast filament, with a projected length $\sim$ 4.5 Mpc and true length of $\sim$ 18 Mpc (Jauzac et al. 2012; Medezinski et al. 2013). Independent preliminary lensing models from Zitrin and Richard ranked MACS0717.5+3745 as the strongest lenser of all the considered clusters (see Table 2). However, MACSJ0717.5+3745 has the highest zodiacal background of all the Frontier Fields, as well as a relatively high Galactic extinction $E_{(B-V)} = 0.068$ (Schlafly \& Finkbeiner 2011). It has an ecliptic latitude of 15.4 degrees, with a typical zodiacal background in $H_{F160W}$ during the cluster IR epoch (2/2015-3/2015) and the parallel IR epoch (9/2014-12/2014) of $\sim 22.0$ and 21.8 ABmag per $\sq\arcsec$ respectively. It is also our northern-most cluster at declination $> +30$, placing it just out of reach of ALMA and other southern observatories. As a CLASH cluster, significant shallow {\it HST} imaging, ancillary wide-field Subaru imaging, and a photometric redshift catalog are available. This cluster was also observed during the {\it Spitzer} cryogenic mission with both IRAC and MIPS (PI Kocevski) and by the {\it Spitzer} warm-mission SURFSUP program (PI Bradac; Bradac et al. 2014), as well as with the {\it Herschel Space Observatory} (Egami et al. 2010). A spectroscopic redshift catalog was recently published by Ebeling et al. (2014). The MACSJ0717.5+3745 parallel field was chosen to lie north-west of the cluster pointing in order to avoid the long cluster filament extending to the south-east. The parallel field is predicted to have median magnification factors $\sim 1.07-1.15$ and maximum magnification factors $1.17-1.42$ for $1 < z < 9$ within the WFC3/IR pointing based on the pre-HFF v1.0 Merten model (Table 2). 
\subsection{MACS1149.5+2223} MACSJ1149.5+2223 at $z=0.543$ was discovered as part of the MACS survey as one of the most X-ray luminous clusters known at $z>0.5$ (Ebeling et al. 2001; Ebeling et al. 2007). Its 0.1-2.4 keV X-ray luminosity is $L_x = 1.76 \pm 0.04 \times 10^{45}$ erg s$^{-1}$ and it has a velocity dispersion $1840^{+120}_{-170}$ km s$^{-1}$ (Ebeling et al. 2007). Its optically selected galaxy population and X-ray morphology are elongated within the cluster core, but do not show evidence of extended filaments (Kartaltepe et al. 2008). Spectroscopic studies and lensing analysis of previous {\it HST} {\it ACS} imaging (PI Ebeling; GO 9722) suggest four or more large-scale dark matter sub-haloes and a complex merger history (Zitrin \& Broadhurst 2009; see also Smith et al. 2009). A CLASH strong-lensing cluster (Postman et al. 2012), it has a large Einstein radius ($\sim$ 170 kpc, Zitrin \& Broadhurst 2009) and an estimated total mass $\sim 2.5 \times 10^{15}$ $M_{\odot}$ (Zheng et al. 2012). Based on the CLASH imaging, Zheng et al. (2012) reported a singly imaged z=9.6 galaxy candidate with a magnification $\sim 14.5$ and observed $F160W$ magnitude $\sim 26.5$. Preliminary lensing models from Zitrin and Richard ranked MACSJ1149.5+2223 as a moderate lenser (Table 2). Its Galactic extinction is fairly low, $E(B-V) = 0.020$ (Schlafly \& Finkbeiner 2011), and the zodiacal background is $\sim 22$ $H_{F160W}$ ABmag per $\sq\arcsec$ during the epochs of observation (cluster IR: 11/2014-1/2015; parallel IR: 4/2015-5/2015). Initially, this cluster was not considered an ideal {\it HST} target as only one bright guide star was known at the required orients. However, further investigation revealed a second guide star slightly fainter than the nominal magnitude cut-off, and early observations of MACSJ1149.5+2223 in Cycle 21 confirmed the suitability of this guide star pair. 
At declination +22, this cluster is barely observable with ALMA but easily observed from Maunakea and other northern observatories like the Very Large Array. This cluster is part of the Herschel Lensing Survey (Egami et al. 2010) and a GT Cycle 1 program (PI D. Lutz), and was targeted by the {\it Spitzer} warm-mission SURFSUP IRAC imaging program (Bradac et al. 2014). The southern position for the MACSJ1149.5+2223 parallel field was chosen to avoid a particularly bright star at the northern position. The parallel field is predicted to have median magnification factors $\sim 1.02-1.07$ and maximum magnification factors $1.1-1.3$ for $1 < z < 9$ within the WFC3/IR pointing based on the pre-HFF v1.0 Merten lensing model (Table 2). \subsection{Abell S1063} Abell S1063 (also known as RXC J2248.7-4431 and SPT-CL J2248-4431) is the southern-most Frontier Fields cluster, with $z=0.3461$ (Abell, Corwin, \& Olowin 1989; B\"{o}hringer et al. 2004; G\'{o}mez et al. 2012). Abell S1063 is a massive cluster with a large velocity dispersion $1840^{+230}_{-150}$ km s$^{-1}$. Its X-ray luminosity between 0.5-2.0 keV is $1.8\pm0.2 \times 10^{45}$ erg s$^{-1}$ (Williamson et al. 2011), and the cluster has one of the hottest known X-ray temperatures ($> 11.5$ keV) (G\'{o}mez et al. 2012). It is also among the strongest Sunyaev-Zel'dovich (SZ) detected clusters in the South Pole Telescope survey (Williamson et al. 2011), with a SZ-derived mass $M_{500} \sim 1.4 \times 10^{15}$ $M_{\odot}$. Like the other Frontier Field clusters, the cluster galaxy density map shows significant substructure, with an X-ray peak offset from the primary galaxy density peak (G\'{o}mez et al. 2012). Weak lensing analysis also identified multiple substructures, and gives a mass of the central cluster in agreement with X-ray and SZ calculations (Gruen et al. 2013). Selected as a CLASH cluster, the {\it HST} imaging revealed a quintuply lensed $z \sim 6$ galaxy (Monna et al. 2013; Balestra et al. 2013). 
The Herschel Lensing Survey (Egami et al. 2010) images show an associated 870 micron source, one of the highest redshift lensed sub-mm galaxies known (Boone et al. 2013). Abell S1063 is one of the less powerful lensers (Table 2) and most relaxed of the selected Frontier Fields clusters. However, it is located in one of the darkest regions of the sky, with a Galactic extinction of $E_{(B-V)} = 0.010$ (Schlafly \& Finkbeiner 2011). The typical zodiacal background is 20.6 and 22.2 $H_{F160W}$ AB mag per $\sq\arcsec$ during the cluster IR epoch (4/2016-6/2016) and the parallel IR epoch (10/2015-11/2015) respectively. It is inaccessible from Maunakea but easily observed by ALMA and the VLT. As a SPT and CLASH cluster, it already had extensive spectroscopic and ancillary data, including shallow Chandra imaging (PI Romer), {\it Herschel} (Egami et al. 2010; also Open Time Cycle 2 program, PI T. Rawle), SZ, {\it Spitzer} cryo-mission MIPS and IRAC (PI G. Rieke), and VLT spectroscopy (e.g. Balestra et al. 2013). Recently, Abell S1063 has been targeted by the VLT MUSE integral field spectrograph (Karman et al. 2015). The Abell S1063 parallel field was chosen to lie east of the cluster, to avoid scattered light from the western bright stars in the {\it Spitzer} and {\it HST} observations. We note that Gruen et al. (2013) report an east-north-east cluster substructure which lies northward of the Abell S1063 parallel field location. The parallel field is predicted to have median magnification factors $\sim 1.02$ and maximum magnification factors $1.27-1.43$ for $1 < z < 9$ within the WFC3/IR pointing based on the pre-HFF v1.0 Merten lensing model. \subsection{Abell 370} Abell 370 (Abell 1958) at $z=0.375$ (Struble \& Rood 1999) is the host of the first known gravitational Einstein ring (Soucail et al. 1987; Paczynski 1987) and thus one of the best studied strong-lensing clusters (e.g. Kneib et al. 1993; Smail et al. 1996; Bezecourt 1999a, b; Broadhurst et al. 2008; Richard et al. 
2010; Medezinski et al. 2010; Umetsu et al. 2011). Its total velocity dispersion is $\sim$ 1170 km s$^{-1}$ (Dressler et al. 1999), with the two main sub-structures showing internal velocity dispersions $\sim$ 850 km s$^{-1}$ (Kneib et al. 1993). Abell 370's total bolometric X-ray luminosity is $L_x = 1.1 \times 10^{45}$ erg s$^{-1}$ (Morandi, Ettori, \& Moscardini 2007). X-ray, SZ, and lensing analyses of Abell 370 consistently yield a virial mass $\sim 1 \times 10^{15} M_{\odot}$ (e.g. Umetsu et al. 2011; Richard et al. 2010; Morandi et al. 2007). With {\it HST} {\it ACS} images taken shortly after the last {\it HST} refurbishment, Richard et al. (2010) found significant offsets between the peak X-ray emission and peaks of the lensing mass distribution, and concluded that Abell 370 is likely the recent merger of two equal mass clusters along the line of sight. Like Abell 2744, Abell 370 was not part of the CLASH {\it HST} MCT survey (Postman et al. 2012). Abell 370 is one of the stronger lensers among the selected Frontier Fields clusters, with current models predicting $P(z=9.6) \sim 0.9$ (Table 2). The typical zodiacal background is 21.9 and 21.8 $H_{F160W}$ AB mag per $\sq\arcsec$ during the cluster IR epoch (7/2016-9/2016) and the parallel IR epoch (12/2015-2/2016) respectively. It has a Galactic foreground extinction $E_{(B-V)} = 0.028$, and is accessible with both Northern and Southern telescopes. Abell 370 also has a rich legacy of archival data, including Chandra imaging (PI Garmire), {\it Herschel} data from the PACS Evolutionary Probe (Lutz et al. 2011) and Herschel Multi-tiered Extragalactic Survey (Oliver et al. 2012), and cryogenic {\it Spitzer} data in the four IRAC channels, IRS, and MIPS (PIs Fazio; Rieke; Houck; Lutz; Dowell). For Abell 370, we chose the south-eastern parallel position in order to avoid multiple bright stars north-west of the cluster and a possible extension of cluster members to the north (Broadhurst et al. 2008). 
The parallel field is predicted to have the strongest weak-lensing boost, with median magnification factors $\sim 1.2-1.32$ and maximum magnification factors $1.35-1.63$ for $1 < z < 9$ within the WFC3/IR pointing based on the pre-HFF v1.0 Merten lensing model. \subsection{Other Cluster Candidates} We considered a number of potential Frontier Field clusters, many of which are known to be exceptional lensers. We excluded Abell 1689, Abell 1703, and the Bullet Cluster because of their low redshifts and the large angular sizes of their critical curves relative to the {\it WFC3/IR} field of view. Abell 2537, MACSJ1206.2-0747, MACSJ2129.4-0741, MACSJ2214.9-1359, RCS2-2327.4-04, and RXJ1347.5-1144 all have low ecliptic latitudes, and therefore have unacceptably high zodiacal backgrounds. MACSJ0329.6-0211, MACSJ451.0+0006, MACSJ0520.7-1328, and MACSJ0744.9+3927 have high Galactic extinctions (E(B-V) $> 0.05$). MACSJ0647.7+7015 and MACSJ0744.9+3927 have numerous unavoidable bright stars in the field. MACSJ0647.7+7015, MACSJ0744.9+3927, and MACSJ1423.8+2404 are unsuitable for deep ALMA observations. MACSJ0358.8-2995 has a foreground z=0.17 Abell cluster and very limited {\it HST} visibility. MACSJ0454.1-0300 is a weaker lenser with a moderate zodiacal background. MACSJ0257-2325 had limited public ancillary data at the time of selection. Additionally, these last three clusters are close in right ascension to each other, to the UDF/GOODS-South field, and to MACSJ0416.1-2403, and therefore would have posed scheduling issues for {\it HST} over the course of the next several {\it HST} cycles. 
\begin{deluxetable*}{lcccccccc} \label{table2} \tabletypesize{\footnotesize} \tablecolumns{9} \tablewidth{0pt} \tablecaption{Frontier Fields: Cluster Properties and Ancillary Data} \tablehead{ \colhead{Cluster} & \colhead{$z$ \tablenotemark{a}} & \colhead{$M_{vir}$ \tablenotemark{a}} &\colhead{$L_X$ \tablenotemark{a}} &\colhead{P($z=9.6$) \tablenotemark{b}} & \colhead {Parallel $\mu$ \tablenotemark{c}} &\colhead{\it{Spitzer}} &\colhead{ \it{Herschel} \tablenotemark{d}} & \colhead{ALMA \tablenotemark{e}}\\ & & \colhead{$M_{\odot}$} &\colhead{erg s$^{-1}$} & $H \leq 27$ & &\colhead{MIPS 24$\mu$m} & \colhead{PACS/SPIRE} & } \startdata Abell 2744 & 0.308 & $1.8 \times 10^{15}$ & $3.1 \times 10^{45}$ & 0.69 $\pm$ 0.07 &1.14-1.21 & yes & 100/250/350/500 & yes\\ MACSJ0416.1$-$2403 & 0.396 & $1.2 \times 10^{15}$ & $1.0 \times 10^{45}$ & 0.63 $\pm$ 0.12 &1.09-1.16 & no & 100/250/350/500 & yes \\ MACSJ0717.5$+$3745 & 0.545 & $2-3 \times 10^{15}$ & $3.3 \times 10^{45}$ & 0.84 $\pm$ 0.05 &1.07-1.42 & yes &100/250/350/500 &no\\ MACSJ1149.5$+$2223 & 0.543 & $2.5 \times 10^{15} $ & $1.8 \times 10^{45}$ & 0.60 $\pm$ 0.10 &1.02-1.07 &no &70/100/250/350/500 & yes\\ Abell S1063 & 0.348 & $1.4 \times 10^{15}$ & $1.8 \times 10^{45}$ & 0.69 $\pm$ 0.08 &1.02 &yes &70/100/250/350/500 &yes \\ Abell 370 & 0.375 & $\sim 1 \times 10^{15}$ & $1.1 \times 10^{45}$ & 0.90 $\pm$ 0.08 &1.2-1.3 &yes &100/250/350/500 &yes\\ \enddata \tablenotetext{a}{See text for references for each cluster.} \tablenotetext{b}{Median probability of lensing a $z=9.6$ background galaxy to apparent $H_{F160W}$ ABmag $\leq$ 27 within the WFC3/IR FOV, calculated using the pre-HFF v1.0 lensing models. } \tablenotetext{c}{Median magnification factor $\mu$ in the parallel fields within the WFC3/IR FOV; based on the weak-lensing estimates from pre-HFF v1.0 Merten models. Note that magnification factors may be larger at locations closer to the cluster.} \tablenotetext{d}{See Rawle et al. 
2016 for summary of {\it Herschel} and {\it Spitzer} cryogenic observations. Note that the Herschel SPIRE 250/350/500 $\mu$m field of view covers both cluster and parallel fields for all but MACSJ0416.1-2403.} \tablenotetext{e}{Visibility from ALMA} \end{deluxetable*} \begin{deluxetable}{lcc} \label{table3} \tabletypesize{\footnotesize} \tablecolumns{3} \tablewidth{0pt} \tablecaption{Frontier Fields Observational Depths} \tablehead{ \colhead{Camera/Filter} & \colhead{Exposure Time \tablenotemark{a}} & \colhead{5 $\sigma$ Depth \tablenotemark{b}} } \startdata {\it HST} {\it ACS/WFC} F435W & 45 ks & 28.8 \\ {\it HST} {\it ACS/WFC} F606W & 25 ks & 28.8 \\ {\it HST} {\it ACS/WFC} F814W & 105 ks & 29.1 \\ {\it HST} {\it WFC3/IR} F105W & 60 ks & 28.9 \\ {\it HST} {\it WFC3/IR} F125W & 30 ks & 28.6 \\ {\it HST} {\it WFC3/IR} F140W & 25 ks & 28.6 \\ {\it HST} {\it WFC3/IR} F160W & 60 ks & 28.7 \\ {\it Spitzer IRAC} 3.6$\mu$m & 50 ks & 26.5 \\ {\it Spitzer IRAC} 4.5$\mu$m & 50 ks & 26.0\\ \enddata \tablenotetext{a}{Assuming 2500s per {\it HST} orbit. {\it Spitzer} depths include previous archival observations.} \tablenotetext{b}{Calculated for a point source within a 0.4\arcsec diameter aperture for {\it HST}.} \end{deluxetable} \begin{figure*} \plotone{f2_sm.eps} \caption{{\it HST} full-depth image of Abell 2744, the first Frontier Field strong-lensing cluster. The central 1.5\arcmin $\times$ 1.5\arcmin is shown. 
} \end{figure*} \begin{figure*} \plottwo{f3a_sm.eps}{f3b_sm.eps} \caption{{\it HST} $H_{F160W}$ $+$ {\it Spitzer IRAC} 3.6 and 4.5 micron image of Abell 2744 (left) and the {\it HST} full-depth image of the Abell 2744 parallel field (central 1.5\arcmin $\times$ 1.5\arcmin).} \end{figure*} \begin{figure*} \plottwo{f4a_sm.eps}{f4b_sm.eps} \caption{{\it HST} full-depth image of MACSJ0416.1-2403 and its parallel field (central 1.5\arcmin $\times$ 1.5\arcmin)} \end{figure*} \begin{figure*} \plottwo{f5a_sm.eps}{f5b_sm.eps} \caption{{\it HST} full-depth image of MACSJ0717.5+3745 and its parallel field (central 1.5\arcmin $\times$ 1.5\arcmin)} \end{figure*} \begin{figure*} \plottwo{f6a_sm.eps}{f6b_sm.eps} \caption{{\it HST} full-depth image of MACSJ1149.5+2223 and its parallel field (central 1.5\arcmin $\times$ 1.5\arcmin)} \end{figure*} \section{Observations} Deep optical and near-infrared imaging achieving $\sim$ 29th AB magnitude 5$\sigma$ depths in seven {\it HST} bandpasses (ACS/WFC $B_{F435W}$, $V_{F606W}$, $I_{F814W}$, WFC3/IR $Y_{F105W}$, $J_{F125W}$, $JH_{F140W}$, $H_{F160W}$), spanning 0.4-1.6 microns, is used to identify high-redshift galaxies ($z > 4$) using the Lyman break drop-out technique (Table 3). Deep {\it Spitzer} IRAC imaging at 3.6 and 4.5 microns places additional constraints on galaxy redshifts (Table 3). Spectral energy distribution fitting of the multi-wavelength photometry from the combined {\it HST} and {\it Spitzer} imaging (e.g. Merlin et al. 2016) provides photometric redshifts, and estimates of the galaxy stellar masses and recent star-formation histories (e.g. Castellano et al. 2016). The Frontier Field cluster observations have the same exposure times as the parallel fields, and similar observed depths. However, the intrinsic depths for background galaxies lensed by the clusters are deeper than the parallel fields (modulo the contribution to the foreground by the cluster ICL and galaxies; see Livermore, Finkelstein, \& Lotz 2016; Merlin et al. 
2016 for ICL subtraction strategies), with typical magnifications across the cluster pointings $\sim 1.5-2$ and small areas magnified by factors as large as $>10-100$. \subsection{HST Observing Strategy} Both the {\it Wide Field Camera 3} and {\it Advanced Camera for Surveys} are used in concert at fixed {\it HST} roll-angles to probe each Frontier Field cluster and a parallel `blank' field pair. Based upon the recommended depths and filter sets from the HDFI SWG report, we obtain 70 orbits per camera at a given roll angle, for a total of 140 orbits per pointing for both the cluster and parallel field. The first four sets of Frontier Fields were awarded DD time in Cycles 21 and 22 for a total of 560 orbits. Two more Frontier Fields were approved for {\it Spitzer} DD observations in {\it Spitzer} Cycle 11 and were awarded an additional 280 DD orbits in {\it HST} Cycle 23 after an external mid-term review of the program. \footnote{\url{www.stsci.edu/hst/campaigns/frontierfields/documents/FF_MidTermReview.pdf}} {\it Filter Selection and Depths:} The {\it ACS/WFC} observations are taken in the $B_{F435W}$, $V_{F606W}$, and $I_{F814W}$ filters, and the {\it WFC3/IR} observations are obtained in $Y_{F105W}$, $J_{F125W}$, $JH_{F140W}$, and $H_{F160W}$ for both the parallel and cluster fields. The HDFI SWG recommended the $JH_{F140W}$ filter only for the cluster pointings. This filter is most needed for discriminating between $z \sim 9$ and higher redshift candidates, and it was felt that these would be unlikely to be detected in the parallel fields. However, subsequent input from the community and the discovery of bright $z \geq 9$ candidates resulted in the addition of $JH_{F140W}$ to the parallel field observations. The number of orbits per filter/camera and estimated depths for a 5 $\sigma$ point source measured within a 0.4$\arcsec$ diameter aperture are given in Table 3. 
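The gain in effective depth from lensing magnification quoted above can be made explicit: a source magnified by a factor $\mu$ is detected as if the imaging were $2.5\log_{10}\mu$ mag deeper. A minimal sketch of this conversion (the sample $\mu$ values below are illustrative, drawn from the typical ranges quoted in the text):

```python
import math

def depth_gain(mu):
    """Effective depth gain in magnitudes for a lensing magnification mu."""
    return 2.5 * math.log10(mu)

# Typical cluster-pointing magnifications quoted in the text:
# mu ~ 1.5-2 over most of the field, >10-100 in small areas.
for mu in (1.5, 2.0, 10.0, 100.0):
    print(f"mu = {mu:6.1f} -> {depth_gain(mu):.2f} mag deeper")
```

For example, a modest $\mu = 2$ already buys $\sim 0.75$ mag of depth, while the rare $\mu = 100$ regions probe 5 mag below the nominal point-source limits of Table 3.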
{\it Observational Cadence:} Given the large number of orbits required for a given field and orient, we selected clusters with {\it HST} observing windows of at least 30 days at fixed orients and suitable guide stars available at both orients. The HDFI SWG did not recommend dividing the observations of a given field over multiple epochs to search for supernovae or other transient objects. Therefore, the majority of data for each field was obtained in two epochs of $\sim$30-60 days (one for each camera/orient) separated by six months. However, for those fields for which no pre-existing {\it HST} data was available, we obtained 1 advance visit with $ACS$/$I_{F814W}$ and/or 1 advance visit with WFC3/$H_{F160W}$ to provide a template for transients and preliminary catalogs for ground-based and {\it Spitzer} ancillary observations. During each main epoch of observations, the {\it HST} {\it WFC3/IR} filter complement was initially rotated through $Y_{F105W}$/$J_{F125W}$/$JH_{F140W}$/$H_{F160W}$ with a single filter per two-orbit visit to facilitate the detection of high redshift supernovae. Our first set of observations of Abell 2744 was impacted by time-variable background in the {\it WFC3/IR} $Y_{F105W}$. This is due to a known HeI emission line at 10830\AA\ from the Earth's atmosphere, which is detected by {\it HST} when it observes at low limb angles at the start or end of an orbit and {\it HST} is not in Earth's shadow. During the course of our observations, it was determined that we could predict the times of highly variable sky based upon the observational ephemeris (Brammer et al. 2014). Therefore, a subset of our visits for MACSJ0416.1-2403 were changed to four-orbit visits, with four half-orbit $Y_{F105W}$ exposures paired with four half-orbit $H_{F160W}$ exposures taken at the start (or end) of each orbit when HeI emission was expected to have the largest impact. 
We found this strategy to work well for mitigating the impact of time-variable sky on the $Y_{F105W}$; remaining signatures of this effect, as well as time-variability in all IR filters when observing close to the bright Earth limb, are removed from our reduced data using a modified IR ramp fitting algorithm (Robberto 2014; Hilbert 2014). Initially, the {\it ACS/WFC} filter complement was rotated through $B_{F435W}$/$V_{F606W}$/$I_{F814W}$ throughout each observing epoch as well. At low sky backgrounds, {\it ACS/WFC} images are degraded by charge transfer efficiency (CTE) trails. While CTE trails from sources and hot pixels are now corrected in the standard pipeline, this correction is never perfect and results in residual noise above the ETC estimates. However, in reducing the observations for the first epoch of Abell 2744, we found that the final combined images were greatly enhanced when ``self-calibrated'' to remove the signature of trails in the darks and other detector-related sources of noise\footnote{\url{www.stsci.edu/hst/acs/software/Selfcal}} \footnote{\url{blogs.stsci.edu/hstff/2013/05/24/calibration-is-in-the-works/}} (also Ogaz, Avila, \& Hilbert 2015). Transient hot pixels in the darks are the major source of this noise. The imperfectly corrected hot pixels end up generating the same pattern of residuals in all the images. With multiple exposures ($> 8$), it is possible to self-calibrate out this pattern and regain $\sim$ 20\% in $B_{F435W}$ depth. {\it ACS} undergoes a monthly annealing process in order to reduce the population of hot pixels. The structure of hot pixels in the darks is reset after the anneal, making the ``self-calibration'' software procedure less effective. Therefore, for later epochs of observations, we grouped the {\it ACS/WFC} $B_{F435W}$ and $V_{F606W}$ exposures in order to straddle the planned {\it ACS} anneals. 
The total number of $I_{F814W}$ exposures is large enough to be self-calibrated with the number of images taken on either side of the {\it ACS} anneals, and so they are interlaced with the $B_{F435W}$ and $V_{F606W}$ observations. {\it Dither Pattern:} To maximize the sensitivity of the HST Frontier Fields, especially toward the edges where strong magnification is predicted, each epoch of observations is constrained to a fixed HST roll angle with small dithers between exposures. The fixed roll-angle requirement means that every HST visit within an epoch is fine-guiding on the same pair of stars, and therefore inter-visit dithering is highly effective. To mitigate self-persistence between visits, we used an inter-visit dither pattern that displaced any given two visits by $>1$ {\it WFC3/IR} pixel ($\sim$ 0.13\arcsec) while still retaining overall compactness. This was achieved by generating 35 pseudo-random dither locations from a 2D Sobol sequence covering a 6-pixel square. At the same time, pixel-phase dithering was achieved by modulating this 6-pixel pattern by a secondary 35-element 2D Sobol sequence sampling over pixel phase. Pairings of ACS and WFC3 filters were carefully matched to visit-specific dither locations such that no filter had a pile-up of exposures in either absolute location or in pixel phase. The {\it HST} dithering within the Frontier Fields visits, comprising four half-orbit exposures per filter, used the standard WFC3/IR ``IR-DITHER-BLOB'' pattern. This intra-visit dither pattern had several attractive features, including: good intra-visit subpixel phase sampling for {\it WFC3/IR}; stepping across {\it WFC3/IR} ``blobs'' of reduced detector sensitivity; and stepping across the {\it ACS/WFC} CCD gap. This intra-visit dither pattern is also sufficient to reject cosmic ray impacts marring the four half-orbit {\it ACS} exposures. 
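The two-level Sobol inter-visit dither scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual pattern-generation code: it uses SciPy's quasi-Monte Carlo Sobol generator with the 35-visit and 6-pixel-box parameters from the text, and the particular way the pixel-phase offsets are combined with the primary pattern is a simplifying assumption.

```python
import numpy as np
from scipy.stats import qmc

N_VISITS = 35   # one dither position per visit
BOX_PIX = 6.0   # inter-visit pattern spans a 6-pixel square

# Primary 2D Sobol sequence: well-spread quasi-random positions over the
# 6-pixel box, chosen so that visits are displaced by >1 WFC3/IR pixel
# while keeping the overall pattern compact.
primary = qmc.Sobol(d=2, scramble=False).random(N_VISITS) * BOX_PIX

# Secondary 2D Sobol sequence: sub-pixel phase offsets in [0, 1) pixel,
# modulating the primary pattern to sample pixel phase uniformly.
phase = qmc.Sobol(d=2, scramble=True, seed=1).random(N_VISITS)

# Combine integer pixel steps with sub-pixel phases (simplified here).
dithers = np.floor(primary) + phase
```

A Sobol sequence is attractive for this purpose because, unlike independent random draws, its low-discrepancy points avoid both clumping (which would risk self-persistence) and large gaps in either absolute position or pixel phase.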
Because of the compactness of IR-DITHER-BLOB, we do not completely fill in the WFC3/IR ``deathstar'' - a $\sim$ 6'' circular region of bad pixels - nor do we dither over the {\it WFC3/IR} ``wagon-wheel'', an extended region on the right edge of the detector with low quantum efficiency and color-dependent structure which is not corrected by the existing flat fields. {\it {\it WFC3/IR} Persistence:} Like other sensitive {\it HST} {\it WFC3/IR} programs, the Frontier Field observations are scheduled to minimize the impact of IR detector persistence from bright objects previously observed by other {\it HST} programs (e.g. ``bad-actors''; Long, Baggett \& MacKenty 2013). Every Frontier Field exposure is visually inspected for data quality issues including persistence, and additional checks for persistence are done \footnote{\url{archive.stsci.edu/prepds/persist}}. Most persistence impacts small regions of the detector and decays rapidly enough to affect only a few exposures, and therefore can be effectively masked out in the final stacked {\it WFC3/IR} images. However, early {\it WFC3/IR} observations of the MACS0416.1-2403 parallel field were severely impacted by scanned {\it WFC3/IR} grism observations of a bright star, for which persistence over $\sim$ 30\% of the {\it WFC3/IR} detector was visible for $> 24$ hours after the grism observations (Long et al. 2014). {\it HST} schedulers quickly responded to change the following week's schedule to prevent repeating this sequence of programs. We triggered an {\it HST} Observation Problem Report (HOPR) to re-observe 10 orbits, and our input resulted in a change in the {\it HST} scheduling systems for the time buffer after such bad actors. Additional HOPRs were filed in Cycle 23 to repeat persistence-affected observations for Abell S1063 (8 orbits) and Abell 370 (6 orbits). \subsection{HST Data Reduction} We briefly describe here the Frontier Fields {\it HST} data pipeline and resulting high-level science products. 
For more details about the {\it HST} Frontier Fields data reduction, please see Koekemoer et al. 2016 (in prep) and the data release readme files associated with each {\it HST} dataset. Every incoming exposure is visually inspected and flagged for artifacts, including satellite trails and asteroids, IR persistence, and IR time-variable sky within a few days of acquisition. Intermediate v0.5 stacked and drizzled image products are produced with standard archival retrievals, at 30 and 60 mas pixel scales, with major artifacts masked. The images are aligned with astrometric solutions based on previous {\it HST} and ground-based catalogs, initially compiled during the construction of the public Frontier Fields lensing models in summer 2013. Thus all MAST-hosted Frontier Fields lensing models and {\it HST} data products are aligned to the same astrometric grid. The v1.0 ``best effort'' image products are released within several weeks of the completion of the observing epoch for each cluster/parallel field pair at a given orient and camera configuration. These best effort image products include the following improvements over the v0.5 releases: $\bullet$ reprocessing of all exposures using the most recent {\it ACS} and {\it WFC3} calibration files (darks, flats, biases) $\bullet$ improved astrometric alignment between filters and cameras $\bullet$ improved treatment of {\it ACS/WFC} bias destriping $\bullet$ ``self-calibration'' applied to the {\it ACS/WFC} images to remove residual detector noise/artifacts, including correction for CTE in the darks $\bullet$ masking of any new {\it WFC3/IR} ``blobs'' and additional persistence sources $\bullet$ correction for {\it WFC3/IR} time-variable sky in the ramp-fitting, which most strongly affects the F105W observations due to the HeI emission but also impacts all IR filters when observing close to the bright Earth limb. 
$\bullet$ inclusion of {\it HST} imaging from other programs in the same filters in the stacked images to achieve maximum depths. \subsection{Spitzer Observations} In {\it Spitzer} Cycles 9, 10, and 11, all six Frontier Fields clusters were observed with IRAC channels 1 and 2 (3.6 and 4.5 micron) with Director's Discretionary time. Combined with archival data, the final images are expected to have nominal 5-sigma point source sensitivities of 26.6 AB mag at 3.6 microns and 26.0 AB mag at 4.5 microns. However, contributions from confusion and the intra-cluster light may mean the observations are less sensitive at the cluster core. Two of the clusters (MACS0717.5+3745 and MACS1149.5+2223) are in a previously approved {\it Spitzer} Cycle-9 program SURFSUP (PI M. Brada\v{c}, 90009), and two of the clusters (MACS0416.1-2403 and MACS0717.5+3745) were observed by the Cycle-8 program iCLASH (PI R. Bouwens, 80168). Due to conflicting roll angle constraints between {\it HST} and {\it Spitzer}, the IRAC and {\it HST} fields of view could not be matched in position angle. Furthermore, to maximize the depth of these observations the observing windows were constrained to the epochs with the lowest background. As a result there are significant ``flanking field'' areas covered by IRAC to 25-hour depth around the main {\it HST} fields. For the reduced {\it Spitzer} data products, readme files, and additional information, please see Capak et al. (in prep) and \url{irsa.ipac.caltech.edu/data/SPITZER/Frontier/} \section{Lensing Models \& Predictions} In order to enable the study of background lensed galaxies by a broad cross-section of the extragalactic community, the {\it HST} Frontier Fields team has also supported the development and public release of lensing maps for each selected cluster. The initial lensing models were based on data taken {\it before} the Frontier Fields observing campaign to ensure that the community could make use of the Frontier Fields data as soon as possible (Table 4).
Five independent teams (Brada\v{c}; Clusters As TelescopeS, PI Kneib \& Natarajan; Zitrin \& Merten; Sharon; Williams), using a diversity of approaches (Brada\v{c} et al. 2005; LENSTOOL: Jullo \& Kneib 2009; Zitrin et al. 2009; Merten et al. 2009; GRALE: Mohammed et al. 2014), coordinated to adopt the same input archival {\it HST} and ground-based datasets, the same redshifts, and multiple image identifications. These models were made public on MAST prior to the {\it HST} Frontier Fields observations in autumn 2013\footnote{\url{www.archive.stsci.edu/prepds/frontier/lensmodels/}}. The initial pre-FF model predictions for the galaxy numbers and volumes probed at high redshift are described in Coe, Bradley, \& Zitrin (2015). However, these first pre-FF models have been rapidly superseded. The deep {\it HST} data have resulted in an unprecedented set of strongly lensed arcs and multiple images for constraining the cluster potentials (e.g. Lam et al. 2014; Jauzac et al. 2015; Wang et al. 2015; Kawamata et al. 2016; Jauzac et al. 2014; Diego et al. 2015). Subsequent observations with the GLASS {\it HST} {\it WFC3/IR} grism GO program (Treu et al. 2015; Schmidt et al. 2014; see \url{archive.stsci.edu/prepds/glass/}) and new ground-based spectroscopic campaigns have greatly increased the number and accuracy of the redshifts for the background lensed FF galaxies (Wang et al. 2015; Hoag et al. 2015; Johnson et al. 2014; Richard et al. 2014; Ebeling et al. 2014; Grillo et al. 2015; Balestra et al. 2015). The detection of a lensed SNIa in MACSJ0416.1-2403 has also provided a strong constraint on its true magnification (Rodney et al. 2015). The discovery of the multiply-imaged SN Refsdal in MACSJ1149.5+2223 (Kelly et al. 2014) sparked an independent coordinated effort to predict the time delays and re-appearance of this supernova in another image of the host galaxy (Treu et al. 2016; Kelly et al. 2016; also Rodney et al. 2016).
Additional programs have sought to understand and improve the systematics inherent in the different modeling approaches (e.g. Zitrin et al. 2015; Mohammed et al. 2016; Harvey, Kneib, \& Jauzac 2016; Meneghetti et al., in prep.). The Frontier Fields lensing models will continue to be refined, as the {\it HST} Frontier Fields observing program proceeds through September 2016, new ancillary spectroscopic and weak-lensing datasets are acquired, and the modeling methods improve. This investment is critical for ensuring the Frontier Fields' legacy for {\it JWST} studies. To continue to provide the best models to the broader community, a renewed effort to update the existing lensing models and incorporate new FF and ancillary data began in May 2015 for Abell 2744 and MACS0416.1-2403 (Table 4). The resulting models were publicly released in autumn 2015. A second round of lensing coordination is set to begin in summer 2016, and will encompass the last four clusters. The delivery of the MACSJ1149.5+2223 and MACSJ0717.5+3745 models is due in February 2017, with final delivery of the Abell S1063 and Abell 370 models due in February 2018. \section{Summary} We present the motivation and survey design for the Frontier Fields, a Director's Discretionary time program with {\it HST} and {\it Spitzer} to see deeper into the distant universe than ever before. Six strong-lensing clusters and six parallel fields are observed, probing galaxies to observed optical/near-infrared magnitudes of $\sim$ 29 AB mag and $10-100$ times fainter in regions of high magnification. We explain the primary scientific goals of the Frontier Fields, the selection criteria for the fields, and the detailed properties of each Frontier Field cluster and parallel field. We describe the {\it HST} and {\it Spitzer} observing programs, and the coordinated Frontier Fields lensing model effort.
The {\it HST} Frontier Fields observations of the last cluster (Abell 370) and its parallel field will complete in September 2016, and the coordinated lensing models will be updated in 2017-2018. The full {\it Spitzer} Frontier Fields observations are complete and were publicly released in early 2016. The first Frontier Fields observations have already probed galaxies during the epoch of reionization to intrinsic luminosities fainter than any previously seen (e.g. Livermore, Finkelstein, \& Lotz 2016; Castellano et al. 2016; Atek et al. 2015; Laporte et al. 2015; Zitrin et al. 2014). The full dataset will place strong statistical constraints on the faint end of the luminosity function during this era (Robertson et al. 2015). At the time of publication of this article, over 70 refereed publications and 3 conferences have been devoted to, or based in part on, the Frontier Fields. These works include studies of high-redshift galaxies in the cluster and parallel fields; new cluster lensing models and dark matter maps; supernovae/transient studies; intra-cluster light and cluster evolution studies; and ancillary observations probing highly-lensed background sources with major ground-based facilities. These data and associated models will provide a unique legacy for future high-redshift universe studies with the {\it James Webb Space Telescope}. The Frontier Fields program was initiated by STScI Director Dr. Matt Mountain using Director's Discretionary Time on the Hubble Space Telescope. We wish to acknowledge the Hubble Deep Fields Initiative science working group members for conceiving and recommending the Frontier Fields program: J. Bullock (chair), M. Dickinson, S. Finkelstein, A. Fontana, A. Hornschemeier Cardiff, J. Lotz, P. Natarajan, A. Pope, B. Robertson, B. Siana, J. Tumlinson, and M. Wood-Vasey. We also thank the mid-term Frontier Fields review committee for their service: J. Bullock, M. Dickinson, R. Ellis, M. Kriek, S. Oey, S. Seitz, S. A. Stanford, and J.
Tumlinson. We recognize the contributors to the current Frontier Field lensing models: M. Brada\v{c}, S. Allen, D. Applegate, B. Cain, A. Hoag, P. Kelly, P. Schneider, T. Schrabback, T. Treu, A. von der Linden, J.-P. Kneib, P. Natarajan, H. Ebeling, J. Richard, B. Clement, M. Jauzac, E. Jullo, M. Limousin, E. Egami, J. Merten, A. Zitrin, I. Balestra, M. Bartelmann, N. Benitez, A. Biviano, T. Broadhurst, M. Carrasco, D. Coe, N. Czakon, M. Donahue, T. Eichner, R. Ellis, C. Giocoli, S. Golwala, C. Grillo, O. Host, L. Infante, S. Jouvel, D. Lemze, A. Mercurio, E. Medezinski, P. Melchior, A. Molino, M. Meneghetti, A. Monna, J. Moustakas, L. Moustakas, T. Mroczkowski, M. Nonino, M. Okabe, M. Postman, J. Rhodes, P. Rosati, J. Sayers, S. Seitz, K. Umetsu, K. Sharon, T. Johnson, M. Bayliss, L. Williams, I. Mohammed, P. Saha, J. Liesenborgs, K. Sebesta, M. Ishigaki, R. Kawamata, M. Oguri, J. M. Diego, D. Lam, and J. Lim. Finally, we thank David Adler, George Chapman, Bill Workman, Ian Jordan, Alan Welty, Karen Levay, Scott Fleming, Brandon Lawton, Carol Christian, Tony Darnell, Frank Summers, Kathy Cordes, Bonnie Eisenhamer, Lisa Frattare, Ann Jenkins, Hussein Jirdeh, John Maple, Holly Ryer, Ray Villard, Tracy Vogel, and Donna Weaver for their contributions to the {\it HST} Frontier Fields effort. Based on observations obtained with the NASA/ESA {\it Hubble Space Telescope}, retrieved from the {\it Mikulski Archive for Space Telescopes} (MAST) at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. This work is based in part on observations made with the {\it Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This work utilizes gravitational lensing models produced by PIs Brada\v{c}, Natarajan \& Kneib (CATS), Merten \& Zitrin, Sharon, Williams, and the GLAFIC and Diego groups.
This lens modeling was partially funded by the {\it HST} Frontier Fields program conducted by STScI.
\section{Introduction} Over the past decade, the characterization and classification of topological insulator and superconductor (symmetry protected) phases of matter has played a central role in condensed matter research \cite{Hasan2010,Qi2011}. These phases cannot be adiabatically connected to a trivial atomic limit, and they typically exhibit gapless excitations at the sample boundary. Furthermore, there are topologically ordered phases with long-range entanglement that go beyond the symmetry-breaking classification of states of matter. Topologically ordered states are a particular type of topological phase that arises in two or more dimensions and is characterized by a topology-dependent ground state degeneracy and long-range entanglement \cite{Wen2007,Wen1990}. The Kitaev honeycomb lattice model is a paradigmatic model for the study of topologically ordered states \cite{Kitaev2006}. Its discovery was an important milestone because it is one of the first exactly solvable models that exhibits topological order and phase transitions between states with abelian and non-abelian excitations. Various extensions have been devised in two and three dimensions \cite{Wu2009, Ryu2012,Ryu2009,Chern2010}. In the three-dimensional case, long-range entanglement does not necessarily imply a topologically ordered state, although there can be contributions to long-range entanglement that are of topological origin \cite{Grover2011}, and point-like excitations always obey bosonic or fermionic statistics. There is nevertheless an interest in understanding generalizations of the Kitaev model to three dimensions, because their study can lead to insights into the topological nature of certain interacting bosonic systems. One fundamental way to characterize topological phases of matter is with spatial entanglement.
Entanglement is an important tool used in condensed matter research to study properties of the phases of a system and the phase transitions that separate such phases \cite{Amico2008,Vidal2003, Song2012,Chiara2012, Hsu2009,Li2008}. In particular, it has provided numerous insights into both non-interacting topological insulators and superconductors \cite{Turner2010, Fidkowski2010,Pollman2010, Prodan2010, Hughes2011} and interacting topological phases \cite{KitaevPreskill2006, Thomale2010,Qi2012, Regnault2009, Flammia2009}. Recently, it was shown by Yao and Qi \cite{Yao2010} that the entanglement of the 2D Kitaev model can be understood as arising from two contributions: one describing an emergent static $Z_2$ gauge field, and the other from non-interacting Majorana fermions hopping on a lattice. This insight revealed the origin of the topological entanglement entropy of the Kitaev model, and clarified the difference between its abelian and non-abelian phases in terms of its entanglement properties. In this work we explore the extension of these results to three dimensions. We show that the same property of the entanglement found in the Kitaev model holds for a three-dimensional generalization proposed by Ryu \cite{Ryu2009}. We explore the entanglement properties of this model in terms of signatures identifying the various phases of the system. We point out that introducing vortex defects in the $Z_2$ gauge field does not affect the factorization property of the density matrix, so that the entanglement contribution arising from these defects is determined by gapless Majorana degrees of freedom that are trapped by vortex configurations in the $Z_2$ gauge field. We show examples of the effect of such vortex lines on the entanglement of the system. \section{Entanglement properties of the Kitaev model} \subsection{Kitaev's honeycomb model} As a warm-up, we review the 2D Kitaev model and its entanglement properties in this section.
Consider a honeycomb lattice with a spin-$1/2$ degree of freedom represented by Pauli matrices $\sigma^a$ ($a=1,2,3$) at each lattice site. Because of the geometry of the honeycomb lattice, each site has three nearest neighbors. We label the three possible vectors connecting a lattice site to its nearest neighbors as $x$-, $y$-, and $z$-links. The Kitaev model is obtained by assigning anisotropic exchange couplings between nearest-neighboring spins according to the type of link that connects them \begin{equation} H=-\sum_{x-\text{link}}J_x \sigma^x_i \sigma^x_j-\sum_{y-\text{link}}J_y \sigma^y_i \sigma^y_j-\sum_{z-\text{link}}J_z \sigma^z_i \sigma^z_j. \end{equation} This particular form of exchange interaction makes this model exactly soluble. In particular, its eigenstates can be obtained explicitly by writing the spin degrees of freedom in terms of Majorana fermion operators. This is the method we will follow in this work, although one can also obtain the eigenstates through a Jordan-Wigner type transformation \cite{Feng2007}. The main idea is to describe the two-dimensional Hilbert space of a spin-$1/2$ degree of freedom using a set of four Majorana fermions $\{b^{x}_i,b^{y}_i,b^{z}_i, c_i \}$ which are defined in an enlarged four-dimensional Hilbert space. These Majorana operators satisfy $(b^{\alpha}_i)^2=1$, $c_i^2=1$, $\{b^{\alpha}_i,b^{\beta}_j\}=2\delta_{ij}\delta_{\alpha, \beta}$ and $b^{\alpha}_i c_j=-c_j b^{\alpha}_i$. If one defines $\tilde{\sigma}^\alpha_{i} = ib^{\alpha}_i c_i$, then this operator is a consistent representation of $\sigma^{\alpha}_i$ if we impose a constraint that restricts $\tilde{\sigma}_i^{\alpha}$ to a two-dimensional Hilbert space. This constraint is found by noting that the $\tilde{\sigma}^{\alpha}_i$ operators commute with the product $D_i=b^x_i b^y_i b^z_i c_i$. Since $D^2_i=1$, we can impose the constraint $D_i=1$ to restrict $\tilde{\sigma}^{\alpha}_i$ to the desired two-dimensional Hilbert space.
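This construction is easy to verify explicitly in a small matrix representation. The sketch below is our own illustration (the particular $4\times 4$ Majorana representation is one convenient choice, not unique): it checks the Majorana algebra, that $D_i$ squares to one, and that the projected operators obey the Pauli algebra on the $D_i=1$ subspace.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# One convenient 4x4 representation of the four Majorana operators
bx, by, bz, c = np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)

# Majorana algebra: hermitian, square to one, mutually anticommuting
ops = [bx, by, bz, c]
for i, g in enumerate(ops):
    assert np.allclose(g, g.conj().T) and np.allclose(g @ g, np.eye(4))
    for h in ops[i + 1:]:
        assert np.allclose(g @ h + h @ g, 0)

# D = b^x b^y b^z c squares to one; P projects onto the physical sector D = +1
D = bx @ by @ bz @ c
assert np.allclose(D @ D, np.eye(4))
P = (np.eye(4) + D) / 2

# sigma^alpha = i b^alpha c commutes with D, hence preserves the physical sector
sx, sy, sz = 1j * bx @ c, 1j * by @ c, 1j * bz @ c
for s in (sx, sy, sz):
    assert np.allclose(D @ s, s @ D)

# Within the physical sector the Pauli algebra holds: sx sy = i sz, etc.
assert np.allclose(P @ sx @ sy @ P, P @ (1j * sz) @ P)
assert np.allclose(P @ (sx @ sy + sy @ sx) @ P, 0)
assert np.allclose(P @ sx @ sx @ P, P)
```

Outside the $D_i=1$ sector the products $\tilde\sigma^x\tilde\sigma^y$ and $i\tilde\sigma^z$ differ, which is why the constraint is essential.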
One can check that $\tilde{\sigma}^{\alpha}_i$ defined with this constraint satisfies the same algebra as the original spin operators. Hence, $\tilde{\sigma}_i^{\alpha}$ consistently describes the original spin-$1/2$ degree of freedom. In terms of these new operators, the Kitaev model takes the form \begin{equation} \tilde{H}=\frac{i}{2}\sum_{\langle j,k \rangle}J_{\alpha_{jk}}\hat{u}_{jk}c_j c_k, \end{equation} where $\hat{u}_{jk}=i b^{\alpha_{jk}}_j b^{\alpha_{jk}}_k$ are referred to as link operators, with $\alpha_{jk}=x,y,z$ depending on whether the $j$ and $k$ indices form a $x$, $y,$ or $z$-link. A consistent sign convention is to choose the $j$ index to label a site in the $\mathcal{A}$ sublattice, and correspondingly $k$ in the $\mathcal{B}$ sublattice. The fundamental advantage that is gained from using the Majorana fermion language is made apparent by noting that the link operators satisfy \begin{equation} \left[\tilde{H},\hat{u}_{jk}\right]=0 \quad \text{and} \quad \left[\hat{u}_{jk},\hat{u}_{lm}\right]=0. \end{equation} We can thus diagonalize the Hamiltonian and the $\hat{u}_{jk}$ operators simultaneously. An eigenstate of the Kitaev model can then be labeled by a fixed configuration of eigenvalues of the link operators. Since $\hat{u}^2_{jk}=1$, the link operators only have two eigenvalues $\pm1$. Once a configuration of eigenvalues is chosen, what remains is a Hamiltonian of free Majorana fermions $c_i$ hopping on a lattice, which can be solved straightforwardly. The effect of the link operators will be at most to change the signs of the hopping elements. This means the link operators effectively act like a static $Z_2$ gauge field that couples to the $c_i$ Majorana fermions. To obtain the ground state, we need to know what configuration of the $Z_2$ gauge field leads to the lowest overall energy. Lieb showed that this configuration corresponds to all $u_{jk}=1$ \cite{Lieb1994}. 
Using this configuration, and solving for the corresponding Majorana fermion ground state $\ket{\phi(u)}$, one can then calculate the physical state $\ket{\Psi}$ by projecting into the sector in which $D_i=1$ for all $i$. This amounts to averaging over all possible gauge transformations of the $Z_2$ gauge field: \begin{equation} \ket{\Psi}=\frac{1}{\sqrt{2^{N+1}}}\sum_g D_g \ket{u}\otimes \ket{\phi(u)}. \end{equation} Here, $N$ is the total number of sites, $D_g=\prod_{i \in g} D_i$ with $g$ being a subset of lattice sites, and the sum runs over all possible subsets of sites. Other energy eigenstates can be obtained by the same gauge averaging procedure with some initial configuration of $Z_2$ fluxes. Since the ground state has constant phases on the links, the Hamiltonian in this sector is translationally invariant. A change of basis to momentum space leads to the following two-by-two single-particle Hamiltonian \begin{equation} h(\mathbf{k})=-\text{Re}\phi(\mathbf{k}) \tau^y-\text{Im}\phi(\mathbf{k}) \tau^x, \end{equation} where $\tau^{a}$ ($a=0,x,y,z$) are Pauli matrices that act on the sublattice index, and $\phi(\mathbf{k})=J_x e^{i\mathbf{k}\cdot \mathbf{a}_1}+J_y e^{i\mathbf{k}\cdot \mathbf{a}_2}+J_z$ with $\mathbf{a}_{1,2}$ the primitive vectors that generate the triangular $\mathcal{A}$ sublattice. The energy spectrum of this Hamiltonian is $\epsilon_{\pm}(\mathbf{k})=\pm \vert\phi(\mathbf{k})\vert$. Due to the form of this spectrum, one can divide the space of parameters into two regions. Whenever the couplings satisfy the triangle inequalities $\vert J_x \vert\le \vert J_y \vert+\vert J_z \vert$, $\vert J_y \vert\le \vert J_z \vert+\vert J_x \vert,$ and $\vert J_z \vert\le \vert J_x \vert+\vert J_y \vert$, the spectrum is gapless, and the gapless points are protected by the time-reversal invariance of the Majorana fermion Hamiltonian. The spectrum in this parameter regime can thus be gapped out by the addition of three-spin interactions that break time-reversal symmetry.
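The role of the triangle inequalities can be checked numerically: since $\mathbf{k}\cdot\mathbf{a}_1$ and $\mathbf{k}\cdot\mathbf{a}_2$ sweep $[0,2\pi)$ independently over the Brillouin zone, minimizing $\vert\phi(\mathbf{k})\vert$ over these two phases locates the gap. A minimal sketch of our own (the coupling values are arbitrary illustrations):

```python
import numpy as np

def min_gap(Jx, Jy, Jz, n=300):
    """Minimum of |phi(k)| = |Jx e^{i k.a1} + Jy e^{i k.a2} + Jz| over the BZ.

    k.a1 and k.a2 range independently over [0, 2*pi), so we scan both phases.
    """
    th1, th2 = np.meshgrid(np.linspace(0, 2*np.pi, n, endpoint=False),
                           np.linspace(0, 2*np.pi, n, endpoint=False))
    phi = Jx*np.exp(1j*th1) + Jy*np.exp(1j*th2) + Jz
    return np.abs(phi).min()

# Isotropic point: triangle inequalities satisfied -> gapless (Dirac points)
assert min_gap(1.0, 1.0, 1.0) < 1e-12

# |Jz| > |Jx| + |Jy|: inequalities violated -> gapped, with gap Jz - Jx - Jy = 1
assert abs(min_gap(1.0, 1.0, 3.0) - 1.0) < 1e-12
```

The grid size $n=300$ is chosen so that the special phases $2\pi/3$ and $\pi$ are sampled exactly; for generic couplings a finer grid simply tightens the estimate.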
The resulting gapped ground state has quasiparticle excitations that obey non-abelian statistics, and so this phase is referred to as the non-abelian phase of the Kitaev model. By contrast, if the triangle inequalities for the $J_\alpha$ are not satisfied then the system is gapped without the need of any additional terms. In this case the excitations satisfy abelian statistics, and so the system realizes an abelian phase. \subsection{Entanglement of quantum states} Let us now briefly review how to quantify the entanglement of a state $\ket{\Omega}$. One starts by choosing a partition of the Hilbert space into two complementary subspaces, say $A$ and $B$. The entanglement between these two parts of the Hilbert space can be quantified by the von Neumann entropy, defined as \begin{equation} S_A=-\text{Tr}_A\left(\rho_A \log \rho_A\right).\label{SA} \end{equation} The reduced density matrix of region $A$ is given by $\rho_A=\text{Tr}_B\left[\ket{\Omega}\bra{\Omega}\right]$, where $\text{Tr}_B$ denotes the trace over the degrees of freedom in $B$. We will refer to the partitioning of the Hilbert space as an entanglement cut that is performed on the system. A generalization of the entanglement entropy that has also been useful in characterizing condensed matter systems, namely the Renyi entropy, is given by \begin{equation} S^{(n)}_A=\frac{1}{1-n} \log\text{Tr}_A\left[\rho_A^n\right]. \end{equation} We can recover the von Neumann entropy by taking the limit $S_A =\lim_{n\rightarrow 1} S^{(n)}_A$. This form of the entanglement entropy in terms of the quantity $\text{Tr}_A\left[\rho_A^n\right]$ will be useful for calculating the entanglement of the Kitaev model and its 3D generalization. As we will discuss in the following sections, the computation of the entanglement of Kitaev-type models can be reduced to that of quadratic fermionic Hamiltonians.
In such cases, the entanglement entropy is completely determined by the eigenvalues $\{\zeta_i\}$ of the correlation matrix $\left[C\right]_{ij}=\bra{\Omega}c^{\dagger}_i c_j\ket{\Omega}$~\cite{peschel2003,peschel2009}, where the $i,j$ indices are restricted to the $A$ subspace. The entanglement entropy $S_A$ in terms of this set of eigenvalues is then given by \begin{equation} S_A=\sum_i \left\{ -\zeta_i \ln \zeta_i-(1-\zeta_i)\ln(1-\zeta_i)\right\}.\label{Sspect} \end{equation} The set $\{\zeta_i\}$ is called the single-particle entanglement spectrum. The entanglement entropy, and indeed all entanglement quantities of a free-fermion ground state $\ket{\Omega}$, can thus be understood by analyzing the $\zeta_i$. The $\zeta_i$ lie between $0$ and $1$, and the closer these modes are to $1/2$, the larger the entanglement of the subsystem. The distribution of the $\zeta_i$ is what we will keep track of in the discussion that follows. \subsection{Entanglement in Kitaev's honeycomb model} The phases of the Kitaev model were characterized in \cite{Yao2010} using entanglement. In general, computing the entanglement of an interacting spin model can be challenging both analytically and numerically. However, because of the special structure of the Kitaev model, this task is dramatically simplified. The entanglement of an eigenstate $\ket{\psi}$ of the Kitaev model can be obtained by separately calculating the entanglement of the $Z_2$ gauge field and the Majorana fermions.
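Before turning to that calculation, we illustrate the correlation-matrix method on a generic free-fermion example of our own choosing (a half-filled open tight-binding chain, not the Kitaev model itself):

```python
import numpy as np

L, LA = 60, 30  # chain length and subsystem size

# Single-particle Hamiltonian of an open tight-binding chain
h = np.zeros((L, L))
for i in range(L - 1):
    h[i, i + 1] = h[i + 1, i] = -1.0

# Ground state at half filling: occupy the L/2 lowest single-particle modes
eps, V = np.linalg.eigh(h)
Vocc = V[:, :L // 2]

# Correlation matrix C_ij = <c_i^dag c_j>, restricted to subsystem A
C = Vocc @ Vocc.conj().T
CA = C[:LA, :LA]

# Single-particle entanglement spectrum and von Neumann entropy
zeta = np.linalg.eigvalsh(CA)
assert np.all(zeta > -1e-9) and np.all(zeta < 1 + 1e-9)
z = np.clip(zeta, 1e-15, 1 - 1e-15)  # regularize the 0*log(0) limits
S = float(np.sum(-z * np.log(z) - (1 - z) * np.log(1 - z)))
print(f"S_A = {S:.4f}")  # grows logarithmically with LA for this gapless chain
```

Only the modes with $\zeta_i$ near $1/2$ contribute appreciably to $S_A$; modes pinned at $0$ or $1$ contribute nothing, which is the picture we use below.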
More specifically, Yao and Qi showed that the following relation holds \begin{equation} \text{Tr}_A\left[\rho_{A}^n\right]=\text{Tr}_{A,G}\left[\rho_{A,G}^n\right]\cdot \text{Tr}_{A,F}\left[\rho_{A,F}^n\right] \end{equation} where $\rho_{A}=\text{Tr}_B \left[\ket{\psi}\bra{\psi}\right]$, the reduced density matrix $\rho_{A,F}$ ($\rho_{A,G}$) describes the Majorana fermions (a pure $Z_2$ gauge field) in region $A$, and the trace $\text{Tr}_{A,F(G)}$ runs over the fermion (gauge) degrees of freedom in region $A$. The factorization of $\text{Tr}_A\left[\rho_{A}^n\right]$ is useful because, by taking the limit $n \rightarrow 1$, one finds that the entanglement entropy can be written as \begin{equation} S_{A}=S_{A,G}+S_{A,F}, \end{equation} where $S_{A,F(G)}$ is the entanglement entropy of the fermions (gauge field). Using this insight, Yao and Qi found that in both abelian and non-abelian phases the entanglement entropy of the Kitaev model can generically be written as $S_A=\left(\alpha +\log 2\right)L-\log 2$, where $\alpha$ is a non-universal constant and $L$ is the length of the boundary separating regions $A$ and $B$. The term proportional to $L$ is the well-known area (perimeter in 2D) law for gapped states. The term that is independent of the boundary size is thus identified as the topological entanglement entropy. It is the same for all phases of the Kitaev model, and it arises exclusively due to the presence of the $Z_2$ gauge field. It was further argued that, although the topological entanglement entropy is the same for both abelian and non-abelian phases, there is nevertheless a way in which their entanglement properties can distinguish these phases. Specifically, in the non-abelian phase the Majorana fermion ground state acquires a nonzero Chern number that leads to the presence of gapless states at the boundary. 
The presence of these boundary states leads to spectral flow in the entanglement spectrum and further contributes to the entanglement of the system. Such edge states do not generically arise in the abelian phase, so there are no additional entanglement contributions in this phase. This distinction was argued by Yao and Qi to be related to the nature of the quasiparticles in the non-abelian phase. Hence, the intrinsic difference between the abelian and non-abelian phases is manifested in their entanglement properties. Although we will not make connections to the statistics of excitations in Ryu's 3D model, we will nevertheless find analogous behavior concerning the entanglement properties of its eigenstates. In particular, the entanglement entropy is also separable into gauge field and Majorana fermion components, and this insight allows one to understand and distinguish the topological phases of the system depending on the surface states (or absence thereof) in each phase, as we will see in later sections. \begin{figure} \begin{center} \includegraphics[trim =0cm 2cm 0cm 0cm,scale=0.4]{diamond.jpg} \caption{Conventional cell of the diamond lattice. The large yellow (small red) spheres correspond to the $\mathcal{A}$ ($\mathcal{B}$) sublattice. The green arrows emanating from one of the $\mathcal{B}$ sites correspond to the $\mathbf{s}_i$ vectors. Corresponding to each of the bonds connecting an $\mathcal{A}$ and a $\mathcal{B}$ site there is a link operator $\hat{u}_{r_A r_B}$ with a specific value of the $Z_2$ gauge field.}\label{lattice} \end{center} \end{figure} \section{Generalization of the Kitaev model to 3D} In this section we discuss Ryu's model and its entanglement properties \cite{Ryu2009}; we will refer to it as the Ryu-Kitaev diamond (RKD) model. The overall structure of the RKD Hamiltonian is analogous to that of the Kitaev honeycomb model, namely one considers anisotropic exchange couplings between nearest-neighboring spin degrees of freedom.
Upon introducing Majorana operators, the Hamiltonian reduces to a problem of free Majorana fermions hopping in the presence of a $Z_2$ gauge field; in this case in three dimensions. There is, however, a fundamental difference with respect to the Kitaev model, namely the RKD model is designed to preserve time-reversal symmetry in all of its phases. This will have an important impact on the topological classification of the ground state which is manifested in its entanglement properties. \subsection{Hamiltonian and Majorana fermion description} The RKD model is realized on the diamond lattice. The diamond lattice is formed by two fcc sublattices $\mathcal{A}$ and $\mathcal{B}$ that are shifted by a vector $\frac{a}{4}(-1\,1\,-1)^T$ ($a$ is the lattice constant of the fcc conventional cell). We choose the primitive vectors $\mathbf{a}_1=\frac{a}{2}\left(1\, 1\, 0 \right)^T$, $\mathbf{a}_2=\frac{a}{2}\left(0\, 1\, -1 \right)^T$ and $\mathbf{a}_3=\frac{a}{2}\left(1\, 0\, -1 \right)^T$. Each lattice site in the diamond lattice has four nearest neighbors, and the vectors connecting a site in sublattice $\mathcal{A}$ to its nearest neighbors are: $\mathbf{s}_1=\frac{a}{4}\left(1\, 1\, \,1 \right)^T$, $\mathbf{s}_2=\frac{a}{4}\left(-1\, -1\, 1\right)^T$, $ \mathbf{s}_3=\frac{a}{4}\left(1\,1\,-1\right)^T$, $ \mathbf{s}_0=\frac{a}{4}\left(-1\,1\,-1\right)^T$. We illustrate the structure of the diamond lattice together with the $\mathbf{s}_i$ vectors in Fig.~\ref{lattice}. On each lattice site we place two spin-$1/2$ degrees of freedom $\sigma^a$ and $\tau^a$ ($a=0,1,2,3$). For convenience, we define $\alpha^{1,2,3}_j=\sigma^{1,2,3}_j \tau^x_j,$ $\alpha^0_j=\sigma^0_j \tau^x_j,$ $\zeta^{1,2,3}_j=\sigma^{1,2,3}_j \tau^z_j,$ and $\zeta^0_j=\sigma^0_j \tau^z_j.$ We then couple nearest-neighboring pairs of spins in the following anisotropic way \begin{equation} H=-\sum_{\mu=0}^{3} \sum_{\mu-\text{links}} J_\mu\left(\alpha_j^\mu\alpha_k^\mu+\zeta_j^\mu\zeta_k^\mu\right).
\end{equation} Here, the values $\mu=0,\ldots,3$ label the four possible nearest neighbors determined by the $\mathbf{s}_\mu$. The key feature of this model is again the anisotropic nature of the exchange interactions. Similar to the Kitaev model, the eigenstates of the Hamiltonian are obtained by introducing Majorana degrees of freedom at each site. In the present case, since there is a four-dimensional Hilbert space at each site, we can consider an enlarged eight-dimensional Hilbert space with six Majorana fermions $\lambda^p_i$ ($p=0,\ldots,5$). The eigenstates are constrained to be in the subspace where $D_i=i\prod_{p=0}^5 \lambda_i^p=1$. By making the identification $\alpha_i^{\mu}=i\lambda_i^{\mu}\lambda_i^4$ and $\zeta_i^{\mu}=i\lambda_i^{\mu}\lambda_i^5$, the Hamiltonian becomes \begin{equation} H=i\sum_{\mu=0}^{3} J_\mu \sum_{\mu-\text{links}} \hat{u}_{jk}\left(\lambda_j^4\lambda_k^4+\lambda_j^5\lambda_k^5\right), \end{equation} where the link operators are given by $\hat{u}_{jk}=i \lambda^{\mu_{jk}}_j \lambda^{\mu_{jk}}_k$. Here, the link operators are again defined to go from sublattice $\mathcal{A}$ to sublattice $\mathcal{B}$. These link operators commute with the Hamiltonian, so we can replace them by a specific choice of eigenvalues. What remains is then a hopping model of two flavors of Majorana fermions that couple to the same $Z_2$ gauge field. The RKD model includes additional interactions between spins on three neighboring sites which are introduced in order to remove non-generic degeneracies in the energy spectrum.
This effectively leads to the following second-nearest neighbor hoppings in the Majorana fermion language: \begin{eqnarray} H_z&=&\sum_{r_A}\left[iK^z\left(\hat{u}_{r_A\,r_A-s_1}\hat{u}_{r_A\, r_A-s_3}\right)\lambda^T_{r_A-s_1} s^z \lambda_{r_A-s_3}\right] \nonumber\\ &+&\sum_{r_B}\left[iK^z\left(\hat{u}_{r_B+s_1\, r_B}\hat{u}_{r_B+s_3\,r_B}\right)\lambda^T_{r_B+s_1} s^z \lambda_{r_B+s_3}\right],\nonumber\\ H_x&=&\sum_{(i,j)\in \Lambda}\left\{\sum_{r_A}\left[iK^x\left(\hat{u}_{r_A\,r_A-s_i}\hat{u}_{r_A\, r_A-s_j}\right)\lambda^T_{r_A-s_i} s^x \lambda_{r_A-s_j}\right] \right. \nonumber\\ &+&\left.\sum_{r_B}\left[iK^x\left(\hat{u}_{r_B+s_i\, r_B}\hat{u}_{r_B+s_j\,r_B}\right)\lambda^T_{r_B+s_i} s^x \lambda_{r_B+s_j}\right]\right\},\nonumber \end{eqnarray} where the Pauli matrices $s^a$ ($a=0,x,y,z$) act on the $4,5$ indices and $\lambda^T=(\lambda^4, \,\, \lambda^5)$. Note the distinction of indices for the lattice vectors ${\bf{s}}_a$ and the Pauli matrices $s^a.$ The pair of indices $(i,j)$ runs over the set $\Lambda=\{(0,2),\,(2,3),\, (3,0)\}$. Since the plaquettes of the diamond lattice are also hexagons, the ground state continues to occur when $\hat{u}_{jk}=1$ for all $j,k$, so the ground state is translationally invariant. With periodic boundary conditions, the single-particle momentum space Bloch Hamiltonian is \begin{eqnarray} h(\mathbf{k})&=&\Theta^x(\mathbf{k})c^z s^x+\Theta^z(\mathbf{k}) c^z s^z-\text{Re}\Phi(\mathbf{k}) c^y s^0-\text{Im}\Phi(\mathbf{k}) c^x s^0, \end{eqnarray} where $c^a$ ($a=0,x,y,z$) are additional Pauli matrices acting on the sublattice degree of freedom, and we defined the functions \begin{eqnarray} \Phi(\mathbf{k})&=&J_0 e^{i\mathbf{k}\cdot \mathbf{a}_2}+J_1 e^{i\mathbf{k}\cdot \mathbf{a}_1}+J_2+J_3 e^{i\mathbf{k}\cdot \mathbf{a}_3},\\ \Theta^x(\mathbf{k})&=&K^x\sum_{(i,j)\in \Lambda}\sin \mathbf{k}\cdot (\mathbf{s}_i-\mathbf{s}_j),\\ \Theta^z(\mathbf{k})&=&K^z \sin \mathbf{k}\cdot (\mathbf{s}_1-\mathbf{s}_3). 
\end{eqnarray} The energy spectrum of the single-particle Hamiltonian is given by $\epsilon_{\pm}(\mathbf{k})=\pm\sqrt{\vert \Phi\vert^2+\Theta^{x 2}+\Theta^{z 2}}$, where $\vert \Phi\vert^2=(\text{Re}\Phi)^2+(\text{Im}\Phi)^2$. By evaluating this spectrum for various values of the parameters, one finds that there are several distinct gapped phases separated by gapless critical points. These gapless points correspond to phase transitions between topologically distinct phases. In \cite{Ryu2009}, Ryu identified two main phases, namely a strong and a weak topological phase. We will discuss these phases and their entanglement in the following sections. \subsection{Symmetries and topological phases} Similar to the Kitaev model, the RKD model has topologically distinct phases depending on the relative strengths of the hopping parameters. Let us consider the single-particle Majorana fermion Hamiltonian in momentum space. The Hamiltonian $h(\mathbf{k})$ satisfies particle-hole symmetry $C h(-\mathbf{k})C^{-1}=-h(\mathbf{k}) $ ($C=\mathcal{K}$), and time-reversal symmetry $T h(-\mathbf{k}) T^{-1}=h(\mathbf{k})$ ($T=i c^z s^y \mathcal{K}$), where $\mathcal{K}$ represents complex conjugation. Note that the time-reversal symmetry operator satisfies $T^2=-1$. Hence, the model we are considering belongs to the symmetry class DIII of the Altland-Zirnbauer classification of non-interacting fermions \cite{Altland1997}. Similar to 3D time-reversal invariant topological insulators \cite{Fu2007}, there are strong and weak topological states that can be obtained in this model as discussed in \cite{Ryu2009}. The strong topological phase has robust gapless states on any surface that separates the bulk from the vacuum. It can be characterized by a $\mathbb{Z}$ topological invariant defined for 3D systems that satisfy chiral symmetry $SH+HS=0$, where $S$ is the chiral operator. In the present case, this operator corresponds to $S=c^z s^y$.
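As a quick numerical sanity check, the dispersion $\epsilon_{\pm}(\mathbf{k})$ above can be evaluated directly from $\Phi$, $\Theta^x$ and $\Theta^z$. In the sketch below, the lattice vectors $\mathbf{a}_i$ and nearest-neighbor vectors $\mathbf{s}_\mu$ are placeholder fcc-type choices (the actual vectors are fixed earlier in the paper, not in this section); the value at $\mathbf{k}=0$, where every phase factor equals one and both $\Theta$ terms vanish, is independent of these choices:

```python
import numpy as np

# Placeholder fcc-type lattice vectors; the actual a_i and nearest-neighbor
# vectors s_mu of the diamond lattice are defined earlier in the paper.
a1 = np.array([0.0, 0.5, 0.5])
a2 = np.array([0.5, 0.0, 0.5])
a3 = np.array([0.5, 0.5, 0.0])
s = [np.zeros(3), a1, a2, a3]      # stand-ins for s_0, ..., s_3
Lam = [(0, 2), (2, 3), (3, 0)]     # the index set Lambda entering Theta^x

def eps_plus(k, J, Kx, Kz):
    """Upper branch eps_+(k) = sqrt(|Phi|^2 + (Theta^x)^2 + (Theta^z)^2)."""
    phi = (J[0] * np.exp(1j * (k @ a2)) + J[1] * np.exp(1j * (k @ a1))
           + J[2] + J[3] * np.exp(1j * (k @ a3)))
    th_x = Kx * sum(np.sin(k @ (s[i] - s[j])) for i, j in Lam)
    th_z = Kz * np.sin(k @ (s[1] - s[3]))
    return np.sqrt(abs(phi) ** 2 + th_x ** 2 + th_z ** 2)

# At k = 0 every phase factor is 1 and both Theta terms vanish, so
# eps_+(0) = J_0 + J_1 + J_2 + J_3 regardless of the placeholder vectors.
print(eps_plus(np.zeros(3), (1.0, 0.3, 0.5, 0.5), 0.5, 0.5))  # ~2.3
```

With the parameters $J_0=1$, $J_1=0.3$, $J_2=J_3=0.5$ used in the text, this gives $\epsilon_+(\mathbf{0})=J_0+J_1+J_2+J_3=2.3$.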
In the basis in which $S$ is diagonal, the operator $Q(\mathbf{k})=2P(\mathbf{k})-1$ can be written in block off-diagonal form (where $P(\mathbf{k})$ is the projection operator into the occupied states). Let us then define the matrix in the block off-diagonal part of $Q(\mathbf{k})$ as $q(\mathbf{k})$. Then the integer-valued topological invariant is given by \cite{Schnyder2008} \begin{equation} \nu_{3D}=\int_{BZ}\frac{d^3 k}{24 \pi^2}\epsilon^{\mu \nu \rho}\text{tr}\left[\left(q^{-1}\partial_{\mu}q\right)\left(q^{-1}\partial_{\nu}q\right)\left(q^{-1}\partial_{\rho}q\right)\right],\label{nu3d} \end{equation} where for the RKD model the $q(\mathbf{k})$ matrix is \begin{equation} q(\mathbf{k})=\frac{1}{\epsilon_+(\mathbf{k})}\left( \begin{array}{cc} i \Theta_x+\Theta_z & -\text{Im}\Phi -i\text{Re}\Phi\\ -\text{Im}\Phi +i\text{Re}\Phi & -i \Theta_x-\Theta_z \end{array} \right). \end{equation} As was verified in \cite{Ryu2009}, there is a parameter regime for which $\nu_{3D}\ne 0$, signaling a nontrivial 3D topological ground state. We will discuss in the next section particular realizations of the parameters for which $\nu_{3D}=\pm 1$, and analyze the corresponding entanglement properties. As for the weak topological states, these arise when some of the hopping parameters are reduced sufficiently so that the ground state is adiabatically connected to either decoupled topological layers or decoupled topological wires. In these phases, the system has boundary modes only on certain surfaces, depending on the direction of the layers or wires that realize the topological state. These boundary states are protected by translation symmetry and can be gapped out by introducing disorder that respects the symmetries of class DIII. As such, this phase is not robust, at least not in the same sense that the strong topological phase is robust.
It has been argued, however, that if the protecting symmetries are respected on average by the disorder, such boundary states can still survive \cite{Fu2012, Fulga2014}. To be concrete, suppose $J_3,K_z$ are sufficiently smaller than the other couplings so that the ground state is adiabatically connected to the $K_z=J_3=0$ limit. In this case, the system can be viewed as a set of weakly coupled layers that are perpendicular to $\mathbf{s}_1$. When the layers are completely decoupled, each layer realizes a two-dimensional system in class DIII, which means that the ground state is classified by a $\mathbb{Z}_2$ invariant. Because of this, the topology of the ground state of these layers is different from that of the Kitaev model, although similar to the square lattice model studied in \cite{Ryu2012}. The $\mathbb{Z}_2$ invariant in class DIII is given by the Fu-Kane formula \cite{Fu2006,Ryu2012} \begin{equation} \nu_{2D}= \prod_{q: \text{TRIM}} \frac{\sqrt{\text{det}(w(q))}}{\text{Pf}(w(q))},\label{nu2d} \end{equation} where TRIM stands for the set of four time-reversal invariant momenta in the first Brillouin zone (FBZ) of the hexagonal lattice, $w_{nm}(\mathbf{k})=\bra{u_{n}(-\mathbf{k})}T\ket{u_{m}(\mathbf{k})}$, and $\text{Pf}[w]$ is the Pfaffian of the matrix $w(\mathbf{k})$. In \ref{2dnu} we derive the following expression for this topological invariant: \begin{equation} \nu_{2D}=\text{sign}(J_0+J_2+J_3) \text{sign}(-J_0+J_2-J_3)\text{sign}(-J_0+J_2+J_3) \text{sign}(J_0+J_2-J_3).\nonumber \end{equation} Using this expression, we find that there are parameter regimes for which $\nu_{2D}=-1$, indicating the presence of a nontrivial phase for each layer. If we now consider the case when another coupling, say $J_0$, is sufficiently small, then the system will be adiabatically connected to decoupled wires in class DIII. This class also has a $\mathbb{Z}_2$ classification. Following \cite{Ryu2012}, we again characterize the topological state by the Fu-Kane formula Eq.
\ref{nu2d}. The main difference with the previous calculation is that now there are only two time-reversal invariant momenta. The calculation leads to \begin{equation} \nu_{1D}=\text{sign}(J_2+J_3)\text{sign}(J_2-J_3).\label{nu1d} \end{equation} When $J_3$ is greater than $J_2$, this expression gives $\nu_{1D}=-1$. This leads to localized boundary modes for each wire. Upon coupling the wires to form the 3D bulk system, these boundary modes become dispersive and are generically susceptible to being gapped by disorder. The main difference with the case of weakly-coupled layers is that here there will be no spectral flow between the positive and negative energy bands, whereas in the layer case there is. This difference will manifest itself in the entanglement spectrum as we will see when we discuss the entanglement of the RKD model. \section{Entanglement of the RKD model} \subsection{Factorization of the trace of the density matrix} In order to calculate the entanglement of the RKD model, we first show that the factorization found in two dimensions for the Kitaev model also holds for the RKD model. The derivation we present here is essentially an extension of the derivation by Yao and Qi, the main difference being that there are more Majorana operators per lattice site in the RKD model. In this section we provide a general description of how the derivation works, and we leave the details for \ref{proof}. We start by writing the explicit form of an eigenstate of the RKD model. This eigenstate will be a product of the state of the $Z_2$ gauge field $\ket{u}$ and the corresponding Majorana fermion state $\ket{\phi(u)}$. By projecting into the $D_j=1$ subspace we obtain the physical state: \begin{equation} \ket{\psi}= \sqrt{\frac{1}{2^{1-N}}}\prod_{j}\left(\frac{1+D_j}{2}\right)\ket{u}\otimes \ket{\phi(u)}, \label{proj} \end{equation} where the product runs over all of the $N$ lattice sites of the system. 
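As an aside, the sign-product expressions for the weak invariants $\nu_{2D}$ and $\nu_{1D}$ derived above are simple enough to tabulate directly. The sketch below merely transcribes the two formulas; the parameter values are illustrative and correspond to the weak phases discussed later in the text:

```python
from math import copysign

def sign(x):
    return copysign(1.0, x)

def nu_2d(J0, J2, J3):
    """Sign-product form of the layer invariant, Eq. (nu2d)."""
    return (sign(J0 + J2 + J3) * sign(-J0 + J2 - J3)
            * sign(-J0 + J2 + J3) * sign(J0 + J2 - J3))

def nu_1d(J2, J3):
    """Wire invariant, Eq. (nu1d): nu_1D = sign(J2 + J3) sign(J2 - J3)."""
    return sign(J2 + J3) * sign(J2 - J3)

# With J0 = 1, J2 = 0.5, J3 = 1.0 (the weak layered phase of the text),
# each decoupled layer is topological; and any J3 > J2 gives a
# topological wire, nu_1D = -1.
print(nu_2d(1.0, 0.5, 1.0), nu_1d(0.5, 2.5))  # -1.0 -1.0
```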
The objective will be to calculate $\text{Tr}_A\left[\rho_A^n\right]$ in terms of the reduced density matrix of a pure $\mathbb{Z}_2$ gauge field $\rho_{A,G}$ and the reduced density matrix of the free Majorana fermions $\rho_{A,F}=\text{Tr}_B\left[\ket{\phi(u)}\bra{\phi(u)}\right]$. The main complication for achieving this is that the state $\ket{u}\otimes \ket{\phi(u)} $ is multiplied by the $D_j$ operators. Thus, it would seem that the $D_j$ operators will inevitably appear in the final expression upon taking the trace of powers of the density matrix. This can be resolved by explicitly performing the traces of the $Z_2$ gauge field over region $B$. This is achieved by rewriting the link operators that cross the entanglement cut in terms of new link operators that exist exclusively on either the $A$ or $B$ region. Upon taking the trace over region $B,$ and for each power $\rho_A^n$ that is computed, there will appear matrix elements of the operators $\{\lambda^0_i,\lambda^1_i,\lambda^2_i,\lambda^3_i\}$ which can be simplified explicitly. After carrying out this procedure, the $\{\lambda^0_i,\lambda^1_i,\lambda^2_i,\lambda^3_i\}$ operators drop out of the expression. What remains at this stage are the $\{\lambda^4_i,\lambda^5_i\}$ operators which act on the fermion state $\ket{\phi(u)}$. Because of the manner in which matrix elements of the $Z_2$ gauge field are traced out, it turns out that all of the $\{\lambda^4_i,\lambda^5_i\}$ can be arranged into operators that project into definite sectors of fixed fermion parity. By using the fact that the fermion parity of the fermion ground state is fixed, these fermion parity projectors can be simplified appropriately, to the point where there will no longer be any of the $\{\lambda^4_i,\lambda^5_i\}$ operators in the expression. 
The resulting expression turns out to be (see \ref{proof}) \begin{equation} \text{Tr}_A\left[\rho^n_A\right]=\frac{1}{2^{(n-1)(L-1)}} \text{Tr}_A\left[\rho_{A,F}^{n}\right], \end{equation} where $L$ is the number of links crossing the entanglement cut. By further noting that \begin{equation} \text{Tr}_{A,G}\left[\rho^n_{A,G}\right]=\frac{1}{2^{(n-1)(L-1)}}, \end{equation} for a pure $Z_2$ gauge field, one then obtains the desired result \begin{equation} \text{Tr}_A\left[\rho_A^n\right]= \text{Tr}_{A,G}\left[\rho^n_{A,G}\right]\text{Tr}_{A,F}\left[\rho^n_{A,F}\right]. \end{equation} Using this property of the density matrix of the RKD model, one can then proceed to calculate the entanglement of the system in the same way that it was done for the Kitaev model on the honeycomb lattice. \begin{figure*} \begin{center} \includegraphics[trim =0cm 3cm 0cm 0cm,scale=0.28]{Fig_set1_A.jpg} \caption{Energy and entanglement spectrum in momentum space for $J_0=1$, $J_1=0.3$, $J_2=K_x=K_z=0.5$ and varying $J_3.$ We have $J_3=0.5$ (a,b), $J_3=1.0$ (c,d), $J_3=1.5$ (e,f), and $J_3=2.5$ (g,h). The center figure shows the Brillouin zone and the corresponding path in momentum space over which we evaluate both the energy and entanglement spectrum. The vectors $\mathbf{b}_1$ and $\mathbf{b}_2$ are reciprocal lattice vectors satisfying $\mathbf{b}_i\cdot \mathbf{a}_j=2\pi \delta_{ij}$, with $i,j=1,2$. The points $\Gamma$, $M_1$, $M_2$ and $M_3$ label the time-reversal invariant momenta of the hexagonal lattice generated by $\mathbf{a}_{1}$ and $\mathbf{a}_{2}$.}\label{spectra} \end{center} \end{figure*} \subsection{Entanglement properties} An immediate consequence of the factorization property of $\text{Tr}_A\left[\rho^n_A\right]$ is that there is a contribution to the entanglement entropy which does not scale with system size.
This contribution arises exclusively from the $Z_2$ gauge field part, and is given by \begin{equation} S_{A,G}=\left(\log 2\right)L-\gamma_{\text{top}}, \end{equation} where $\gamma_{\text{top}}=\log 2$ is the topological contribution to the entanglement entropy. In fact, this value of the topological entanglement entropy is the same as that of the two-dimensional $Z_2$ gauge field of the Kitaev model. This is consistent with the discussion in \cite{Grover2011}, where it was shown that the entanglement entropy of a discrete gauge theory of symmetry group $G$ would have a topological entanglement entropy $\gamma_{\text{top}}=\log \vert G \vert$ in both two and three dimensions, where $\vert G \vert$ is the number of elements in the group. Let us now evaluate the entanglement of the Majorana fermion part of the ground state. Throughout, we will consider an entanglement cut that partitions the system along the plane generated by $\mathbf{a}_1$ and $\mathbf{a}_2$. We will maintain periodic boundary conditions along these two directions. Since the ground state is translationally invariant, we can Fourier transform both directions and consider Hamiltonians that depend on the momenta $(k_1, k_2)$. The corresponding FBZ is depicted in the inset at the center of Fig. \ref{spectra}. This figure of the Brillouin zone also shows the path along which we will evaluate the energy and entanglement spectrum. The path includes the time-reversal invariant momenta, which are where the gap closings occur in this model. Even though the parameter space is quite large, it suffices to restrict ourselves to a specific set of parameters that allows us to explore the relevant phases of the model. We thus fix the parameters $J_0=1$, $J_1=0.3$, $J_2=K_x=K_z=0.5$ and vary $J_3$. In Figs. \ref{spectra} a,c,e,g, we show the energy spectrum with open boundary conditions in the $\mathbf{a}_3$ direction.
These subfigures correspond to the four values $J_3=0.5,\,1.0,\,1.5,\,2.5$, respectively. The corresponding entanglement spectrum is shown in Figs. \ref{spectra} b,d,f,h with periodic boundary conditions in the $\mathbf{a}_3$ direction. The gap of the model closes as we continuously change between these values of $J_3$. Each time the gap closes, the system undergoes a topological phase transition. The four cases we show here thus correspond to four topologically distinct phases. For all four phases there are energy modes in the gap that cross zero energy. These are the surface states, which signal the nontrivial nature of the ground state. Correspondingly, the entanglement spectrum shows entanglement modes between $0$ and $1$ that behave in a similar way to the surface states. This is due to the fact that the correlation matrix is directly related to the spectrally flattened version of the single-particle Hamiltonian. Thus, the topological surface states will be manifest in the entanglement spectrum as entanglement modes that cross $1/2$ \cite{Turner2010}. For $J_3=0.5,1.0,1.5$ there is spectral flow in both the energy and entanglement spectrum. Both the $J_3=0.5$ and $J_3=1.5$ phases correspond to strong topological states, characterized by $\nu_{3D}=-1$ and $\nu_{3D}=1$ respectively, which we verified numerically using Eq. \ref{nu3d}. There is a single crossing in the $J_3=0.5$ case, whereas there are three crossings for $J_3=1.5$. In the intermediate case, namely $J_3=1.0$, we find that $\nu_{3D}=0$. By continuously tuning the couplings $J_1$ and $K_z$ to zero, we have found no additional gap closings at the time-reversal invariant momenta, which means that this phase is adiabatically connected with the system of decoupled layers perpendicular to $\mathbf{s}_1$ we discussed earlier. By using Eq.
\ref{nu2d}, we obtain $\nu_{2D}=-1$ in this phase, which confirms that the phase at $J_3=1.0$ corresponds to a weak topological phase of coupled 2D topological states in class DIII. In contrast with these three cases, the $J_3=2.5$ phase presents energy modes in the gap that do not connect the negative and positive energy bands. By increasing the value of $J_3$ further, one does not find any additional closings in the energy spectrum, and furthermore $\nu_{3D}=0$ and $\nu_{2D}=1$ (for $J_1=K_z=0$). However, by using Eq. \ref{nu1d}, we find that $\nu_{1D}=-1$ when we set $J_1=K_z=J_0=0$. Hence, the system is essentially in a weak topological phase of coupled topological wires. There is no spectral flow in the energy spectrum because, in the weak-coupling limit, the boundary states of the 1D wires have no spectral flow to begin with. Thus, when coupled, the boundary states will not generically disperse strongly enough to reach the bulk energy bands, and even if they did, they would not spectrally connect the lower band to the upper band. This behavior of course has its counterpart in the entanglement spectrum, where the entanglement modes cross $1/2$ but do not flow between $0$ and $1$. As we mentioned earlier, the abelian and non-abelian phases of the Kitaev model can be distinguished in their entanglement properties by additional contributions that appear in the non-abelian phases when the system has edge states. In the present case of the RKD model, if we were to compute the entanglement using open boundary conditions in the $\mathbf{a}_3$ direction, the degenerate zero modes from the two surfaces would contribute additional entanglement to the system, similar to what happens in the 2D Kitaev model. However, whereas in the Kitaev model such an additional contribution to the entanglement was linked to the types of excitations in the system by Yao and Qi, in the RKD model the connection to excitations is not clear.
We leave this question for future work. \begin{figure} \begin{center} \includegraphics[trim =2cm 1cm 0cm 0cm,scale=0.30]{vortex.jpg} \caption{ Density profile of the zero modes at $k_1=0$ when $J_3=1.5$ in one of the layers perpendicular to the $\mathbf{s}_1$ direction. The magnitude of the density is represented by the color and size of the circles at each lattice site, with the warmer colors and larger sizes denoting higher density. The green lines are the links for which the sign is flipped with respect to the ground state configuration of the $Z_2$ gauge field. The shaded hexagons show where the vortices are realized. The vortex lines extend into the plane along the $\mathbf{a}_1$ direction. The $\mathbf{a}_2$ and $\mathbf{a}_3$ directions are shown by the black vectors. }\label{conf} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[trim =2cm 1cm 0cm 0cm,scale=0.302]{Fig_set3_A.jpg} \includegraphics[trim =2cm 1cm 0cm 0cm,scale=0.302]{Fig_set3_B.jpg} \includegraphics[trim =2cm 2cm 0cm 0cm,scale=0.302]{Fig_set3_C.jpg} \includegraphics[trim =2cm 2cm 0cm 0cm,scale=0.302]{Fig_set3_D.jpg} \vspace{0.5cm} \includegraphics[trim =2cm 2cm 0cm 0cm,scale=0.35]{Fig_set3_E.jpg} \caption{ Energy (a,b) and entanglement (c,d) spectra for the $J_3=1.0$ weak topological phase before and after adding the vortex lines, respectively. Figure (e) shows the difference in entanglement entropy between both cases, illustrating the additional entanglement obtained by the crossings of the Majorana modes trapped in the vortex lines.}\label{vortexweak} \end{center} \end{figure} \subsection{Entanglement arising from vortices in the $Z_2$ gauge field} We now discuss the entanglement that arises from introducing vortex configurations in the $Z_2$ gauge field. We have found that the derivation of the factorization of the density matrix continues to hold regardless of whether the $Z_2$ gauge field has vortex configurations.
Furthermore, the $Z_2$ gauge field will continue to contribute the same amount of entanglement entropy as it did for the ground state. Consequently, any change to the entanglement of the system will arise from the Majorana modes that are trapped by the $Z_2$ vortices. This allows us to easily study the entanglement in the presence of the $Z_2$ flux excitations of the gauge field. To generate a vortex, one changes the signs of the links in such a way that the product of the links around the hexagon leads to $\prod_{\bar{ij}\in \text{hex}}u_{ij}=-1$. To simplify the discussion, we will consider periodic boundary conditions in all directions and introduce two vortex lines parallel to the $\mathbf{a}_1$ direction. \begin{figure} \begin{center} \includegraphics[trim =2cm 1cm 0cm 0cm,scale=0.302]{Fig_set2_A.jpg} \includegraphics[trim =2cm 1cm 0cm 0cm,scale=0.302]{Fig_set2_B.jpg} \includegraphics[trim =2cm 2cm 0cm 0cm,scale=0.302]{Fig_set2_C.jpg} \includegraphics[trim =2cm 2cm 0cm 0cm,scale=0.302]{Fig_set2_D.jpg} \vspace{0.5cm} \includegraphics[trim =2cm 2cm 0cm 0cm,scale=0.35]{Fig_set2_E.jpg} \caption{ Energy (a,b) and entanglement (c,d) spectra for the $J_3=1.5$ strong topological phase before and after adding the vortex lines, respectively. Figure (e) shows the difference in entanglement entropy between both cases, illustrating the additional entanglement obtained by the single crossing of the Majorana modes trapped in the vortex lines.}\label{vortexstrong} \end{center} \end{figure} Consider a system of dimensions $N_{1,2,3}$ along the $\mathbf{a}_{1,2,3}$ directions respectively. The $Z_2$ gauge field configuration we choose here corresponds to assigning a minus sign to the link operators of the form $\hat{u}_{r_A, r_A-s_0}$, such that $r_A=n_1 \mathbf{a}_1+(N_2/2)\mathbf{a}_2+n_3 \mathbf{a}_3$, with $n_1=1, \ldots, N_1$ and $n_3=N_3/4,\ldots, 3N_3/4$. To illustrate this $Z_2$ configuration, we show the pattern of signs in Fig. \ref{conf}
for one of the layers with normal vector $[1\,1\,1].$ Because of this choice of $Z_2$ gauge field, the vortex lines are threaded through the shaded hexagons and extend along the $\mathbf{a}_1$ direction. Once this choice of link values is set, both the nearest and next-nearest neighbor tunneling terms have to be changed accordingly because both types of tunneling are written in terms of link operators. The Majorana fermions will feel the presence of the vortex lines through the phases of the hopping parameters. Such vortex lines can induce states in the gap of the system when the bulk is topologically nontrivial. Even though these states will be localized to either region $A$ or $B$, they can contribute to the entanglement of the system. For simplicity, we consider two of the cases of the previous section, namely the strong topological state at $J_3=1.5$ and the weak topological state at $J_3=1.0$. As we now discuss, there is a clear distinction between the entanglement modes of both cases when vortices are introduced. In Figs. \ref{vortexweak}a,b we show the energy spectrum with and without the vortex lines when $J_3=1.0$. The presence of the vortex lines induces doubly degenerate Majorana branches that cross at $k_1=0$ and $k_1=\pi$. The corresponding entanglement spectrum is shown in Figs. \ref{vortexweak}c,d. Similarly, in Figs. \ref{vortexstrong}a,b we show the energy spectrum with and without the vortex lines when $J_3=1.5$. In this case, the doubly degenerate Majorana branches cross at the single point $k_1=0$. The corresponding entanglement spectrum is shown in Figs. \ref{vortexstrong}c,d. The double degeneracy is due to the presence of two vortex lines. The behavior we observe here can be understood from the arguments presented in \cite{Teo2010}. Vortex lines can be seen as one-dimensional defects in three-dimensional systems that belong, in this case, to class DIII. 
It was shown in \cite{Teo2010} that under these circumstances, there is a $Z_2$ classification of the state. The invariant associated with this classification determines the stability of gapless Majorana modes that propagate along the vortex line. We can infer from the number of crossings in the energy and entanglement spectrum that the gapless Majorana modes we have obtained have different $Z_2$ invariants for the $J_3=1.0$ and $J_3=1.5$ cases. The weak topological state presents gapless Majorana modes that cross zero energy an even number of times, whereas in the strong topological state the Majorana mode crosses zero an odd number of times. This feature is also present in the entanglement spectrum. Such crossings lead to additional entanglement in the system with respect to the vortex-free case with periodic boundary conditions. We further emphasize this point in Fig. \ref{vortexweak}e and Fig. \ref{vortexstrong}e by showing the difference $S_{v}(k_1)-S_{nv}(k_1)$, where $S_{v}(S_{nv})$ is the entanglement entropy of region $A$ when the vortex lines are present (absent). There is an additional contribution that is approximately $2\log 2$ for each of the crossings obtained in the entanglement spectrum. In the weak topological state this contribution comes from the two crossings $k_1=0,\pi$, whereas in the strong topological state this occurs only at $k_1=0$. \section{Conclusions} In this work, we have explored the entanglement properties of a three-dimensional generalization of the Kitaev model proposed by Ryu. We have shown that the entanglement entropy separates into contributions from the $Z_2$ gauge field and from the Majorana degrees of freedom, just as it does for the Kitaev model. We took advantage of this property to explore the behavior of the entanglement spectrum of both weak and strong topological phases of the model proposed by Ryu.
Finally, we considered the effect of introducing vortex lines in the $Z_2$ gauge field, which lead to additional contributions to the entanglement entropy arising from gapless Majorana modes trapped in the vortices. \section{Acknowledgements} This work was supported by ONR award N0014-12-1-0935. We acknowledge a useful conversation with S. Ryu and the support of the UIUC ICMT. \section*{References}
\section{Introduction}\label{intro} Motion blur is one of the most recurrent problems of paramount importance in the fields of computer vision and digital photography. It is mainly caused by the streaking of fast-moving objects in an image or video frame; other common causes include camera shake and long exposure times \cite{lagendijk2009basic, wang2014recent, yitzhaky1997identification, nayar2004motion, shan2008high}. An intuitive way to understand motion blur is through the concept of relative motion, \eg, the motion of an object relative to the observer over an instant of time. During a single exposure, the image captured by the camera, especially when an object is moving in the scene, represents that scene over a continuous interval of time. Such motion causes blur artifacts, or more specifically a displacement of pixels. Removing motion blur means recovering a clean sharp image from the blurry input by minimizing the mismatch between the latent sharp image and the restored sharp image. Note that real-world motion blur is commonly shift-variant, or non-uniform, and the extent of blur may fluctuate over different regions of an image \cite{bardsley2006blind, cho2007removing, hirsch2011fast, couzinie2013learning}. Earlier blind deconvolution methods estimate the unknown blur kernel by studying image priors \cite{krishnan2009fast, levin2011efficient, zoran2011learning, xu2013unnatural, werlberger2010motion, zhang2015image, pan2016blind}, Wiener deconvolution \cite{wiener1950extrapolation, krishnan2009fast}, or the Richardson-Lucy Bayesian approach \cite{richardson1972bayesian}. However, such methods inevitably need handcrafted image priors, not to mention their cost in computational resources and the exponential increase in complexity.
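For reference, the classical Wiener approach mentioned above can be sketched in a few lines: the blurry image is deconvolved in the frequency domain with a known blur kernel, regularized by a noise-to-signal constant. The $5\times 5$ box kernel and the regularization constant below are illustrative assumptions, not choices taken from the cited works:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=0.01):
    """Classic Wiener deconvolution with an assumed-known blur kernel.

    X_hat = conj(H) / (|H|^2 + nsr) * Y, everything in the frequency domain.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)   # kernel zero-padded to image size
    Y = np.fft.fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + nsr) * Y
    return np.real(np.fft.ifft2(X))

# Toy example: circularly blur a synthetic image with a 5x5 box kernel,
# then restore it; the restoration should reduce the error to the sharp image.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp)
                               * np.fft.fft2(kernel, s=sharp.shape)))
restored = wiener_deconvolve(blurred, kernel, nsr=1e-3)
print(np.mean((restored - sharp) ** 2) < np.mean((blurred - sharp) ** 2))
```

Its weakness is exactly the point made above: the kernel must be known, and near-zeros of $H$ amplify noise, which is what motivates learned, non-uniform deblurring.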
\par Thanks to recent advances in deep learning and its tremendous feature learning ability in the blind image deblurring task \cite{schuler2015learning, sun2015learning, nah2017deep, tao2018scale, zhang2018dynamic}, we no longer need to rely on conventional approaches. This is especially true considering the revolution that GANs \cite{goodfellow2014generative} have brought, not only to general-purpose computer vision tasks \cite{karras2019style, karras2020analyzing}, but also to the blind motion deblurring task \cite{kupyn2018deblurgan, kupyn2019deblurgan, shao2020deblurgan+, asim2020blind, zhang2021deep}. Despite the colossal success of GANs in blind motion deblurring, the quality of images restored from blurry inputs still lags behind. No doubt, both scale-wise stacking of convolution layers \cite{sun2015learning, gong2017motion, noroozi2017motion, nah2017deep, schuler2015learning, tao2018scale, zhang2018dynamic, gao2019dynamic, stacked} and the GAN-based DeblurGAN models \cite{kupyn2018deblurgan, kupyn2019deblurgan} have significantly improved restoration performance, both in terms of qualitative and quantitative analysis. \par To address the problem of non-uniform blind image deblurring, we propose a GAN-based blind motion deblurring network that, unlike other GAN-based models, does not treat the restoration of the sharp image as a linear end-to-end process from input to output. Instead, our proposed approach (SL-CycleGAN) treats blind motion deblurring as a domain-to-domain translation problem. We take the inspiration for this research from the sparse representation learning of \cite{ahmad2019can}, and we further combine the research of Hawkins \etal \cite{hawkins2011cortical} on Hierarchical temporal memory (HTM) with ``A thousand brains: A new theory of intelligence'' by Jeff Hawkins \cite{hawkins2021thousand}.
Compared with other GAN-based state-of-the-art methods for blind motion deblurring \cite{kupyn2018deblurgan, kupyn2019deblurgan, cai2020dark}, the results achieved by our proposed framework outperform state-of-the-art motion deblurring methods, both qualitatively and quantitatively. \cref{fig1} shows the superior reconstruction ability of our proposed method against DeblurGAN-v2 \cite{kupyn2019deblurgan} on low-light space images. \par Our contributions in this research paper are summarized as follows: \begin{itemize} \item \textbf{The Framework:} There is no denying that GAN-based models are notorious for problems such as mode collapse and vanishing gradients \cite{arjovsky2017towards, radford2015unsupervised}. Therefore, a thoughtful choice of network that is able to tackle such issues is of vital importance. We adopt CycleGAN \cite{zhu2017unpaired} for its remarkable ability at domain-to-domain translation. Unlike other GAN-based models for blind motion deblurring, our proposed network is cycle-consistent; that means the generators of our network are not only able to deblur the blurry input, but are also able to reconstruct synthetic non-uniform motion blur similar to the original blurry input. \item \textbf{Sparse Convolutions:} Our second contribution is the adoption of the intrinsic advantages of high-dimensional sparse representations through sparse convolutions, similar to \cite{ahmad2019can}. The primary reason we choose sparse convolutions over standard convolution layers in our proposed generator architecture is that sparse representations are more robust to noise and interference. \item \textbf{HTM:} Hierarchical Temporal Memory (HTM) is an algorithm that models how the neocortex of the human brain performs complex computations such as understanding visual patterns, grasping the context of spoken language, and perceiving information through touch and other sensory organs \cite{hawkins2011cortical}.
Our final contribution is the use of a trainable HTM spatial pooler, the k-winner activation \cite{ahmad2019can}, to replace the ReLU$\left( \cdot \right)$ non-linearity in the residual blocks of the generator network. We choose k-winner as a replacement for the classic ReLU because it is naturally more robust to variance in noise and interference from random signals. In addition, k-winner constrains the output of each layer to the most active non-zero units. \end{itemize} \section{Related work}\label{related} \subsection{Motion Deblurring} Earlier methods treat blind motion deblurring as an image deconvolution problem \cite{shan2008high,fergus2006removing, cho2009fast, xu2010two}. Similarly, sparse-based methods before the introduction of CNNs explore the sparse image gradients of the input blurry images \cite{krishnan2011blind, levin2011efficient, pan2016l_0, perrone2016logarithmic, sun2013edge, xu2010two, xu2013unnatural}. Meanwhile, other motion deblurring approaches focus more on the advantages of patch-wise estimation \cite{michaeli2014blind} and the estimation of dark channel image priors \cite{pan2016blind}. However, these conventional image deconvolution methods assume the blur to be uniform, while real-world blur is mostly non-uniform, or shift-variant. Since the introduction of deep learning and CNNs, the blind motion deblurring community has seen a dramatic improvement in the quality of restored sharp images \cite{sun2015learning, gong2017motion, noroozi2017motion}. Nah \etal \cite{nah2017deep} proposed a deep scale-wise convolution network for dynamic scene motion deblurring. Unlike conventional deconvolution methods, \cite{nah2017deep} eliminates the need to know the explicit blur kernel in advance. Similarly, Schuler \etal \cite{schuler2015learning} proposed a deep CNN for blind motion deblurring in a coarse-to-fine scheme. Tao \etal
\cite{tao2018scale} proposed an encoder-decoder scale-wise recurrent network architecture for blind motion deblurring. Zhang \etal. \cite{zhang2018dynamic} proposed a sequential CNN architecture for dynamic scene deblurring, achieving impressive results in comparison with \cite{nah2017deep} and \cite{tao2018scale}. Gao \etal. \cite{gao2019dynamic} proposed a parameter selective sharing scheme and a multi-scale encoder-decoder model with nested skip connections for dynamic scene deblurring. Cai \etal. \cite{cai2020dark} proposed a dynamic scene motion deblurring network that investigates the dark and bright channel image priors of the input blurry images. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{Architecture_pdf.pdf} \caption{Architecture of the SL-CycleGAN generators. The encoder block of the generator contains strided convolution layers with stride of $\frac{1}{2}$. Each convolution layer in the encoder block and each TransposeConv layer in the decoder block is followed by InstanceNorm and the ReLU non-linearity. The generator network also contains 9 residual blocks. \cref{fig2}a shows the conventional Res-block architecture with Conv layers, InstanceNorm, and ReLU, while \cref{fig2}b shows the modified Res-block with a Sparse-Conv layer and InstanceNorm, in which ReLU is replaced by the HTM k-winner. Each Sparse Res-block contains a Sparse-Conv layer, InstanceNorm, and k-winner.} \label{fig2} \end{figure*} \subsection{GANs for Motion Deblurring} Since their introduction, generative adversarial networks (GANs) \cite{goodfellow2014generative} have been widely used for computer vision tasks. A GAN is a deep neural network that consists mainly of a generator $G$ and a discriminator $D$.
The generator network produces a fake sample $G(z)$ from a random noise vector $z$; in a deblurring scenario, specifically, an input blurry image is passed through the generator network, which outputs a fake sharp version of the input image. The discriminator network acts as a classifier, discriminating between a real data sample $x$ and the generated sample $G(z)$. The two adversaries play a minimax game, each improving against the other. Theoretically, the main goal of such an adversarial network is to bring the generated distribution ${{p}_{z}}$ close to the real data distribution ${{p}_{d}}$. The minimax objective function for GANs can be formulated as, \begin{equation}\label{eq1} \underset{G}{\mathop{\min }}\,\underset{D}{\mathop{\max }}\,\left[ {{\mathbb{E}}_{x\sim{{p}_{d}}}}\log (D(x)) \right]+\left[ {{\mathbb{E}}_{z\sim{{p}_{z}}}}\log (1-D(G(z))) \right] \end{equation}\par Based on the success of GANs, Kupyn \etal. \cite{kupyn2018deblurgan} proposed a conditional GAN framework (DeblurGAN) for single-image blind motion deblurring. DeblurGAN consists of a generator and a discriminator network for the deblurring task, and utilizes the Wasserstein loss function \cite{arjovsky2017wasserstein} with an additional gradient penalty in the optimization criterion to tackle GAN-related issues \cite{gulrajani2017improved}. Kupyn \etal. \cite{kupyn2019deblurgan} proposed DeblurGAN-v2, an improved version of the previous DeblurGAN \cite{kupyn2018deblurgan}. DeblurGAN-v2 modified the original generator architecture of DeblurGAN by incorporating the Feature Pyramid Network (FPN) \cite{lin2017feature} for improved quality. Similarly, a relativistic local and global discriminator network is introduced \cite{jolicoeur2018relativistic} with Inception-ResNet-v2 \cite{szegedy2017inception} as the backbone of the network. Shao \etal.
\cite{shao2020deblurgan+} proposed a GAN-based deblurring framework that explores the dark and bright channel image priors. Asim \etal. \cite{asim2020blind} proposed a blind image deblurring network based on deep generative priors. However, their proposed method lacks experimental analysis on real-world deblurring benchmarks. Zhang \etal. \cite{zhang2021deep} proposed an image deblurring and denoising network by combining noisy and blurry image pairs acquired in a burst. Similarly, Lin \etal. \cite{lin2019tell} deployed a GAN-based framework for blind dynamic scene deblurring. Several other approaches exploit either scale-wise convolutional network architectures \cite{aljadaany2019douglas, zhang2019gated, jiao2017formresnet} or deep generative and discriminative priors \cite{ren2020neural, li2018learning} for dynamic scene blind motion deblurring. \section{SL-CycleGAN Network Architecture} The detailed architecture of the SL-CycleGAN generators is demonstrated in \cref{fig2}. Given blurry and sharp image sets $\left\{ {{x}_{i}} \right\}_{i=1}^{N}\in {{X}_{blur}}$ and $\left\{ {{y}_{j}} \right\}_{j=1}^{M}\in {{Y}_{sharp}}$, the generators of SL-CycleGAN learn the translations from ${{X}_{blur}}$ to ${{Y}_{sharp}}$. Taking inspiration from the original CycleGAN model \cite{zhu2017unpaired}, SL-CycleGAN also introduces two generator networks ${{G}_{X}}$ and ${{G}_{Y}}$, where ${{G}_{X}}$ learns the translation function from ${X}$ to ${Y}$ such that ${{G}_{X}}:{{X}_{blur}}\to {{Y}_{sharp}}$. Similarly, the second generator ${{G}_{Y}}$ learns the mapping from ${Y}$ to ${X}$ such that ${{G}_{Y}}:{{Y}_{sharp}}\to {{X}_{blur}}$. A pair of adversarial discriminators ${{D}_{X}}$ and ${{D}_{Y}}$ is also introduced, where ${{D}_{X}}$ learns to differentiate between the blurry input ${{x}_{i}}$ and the translated image ${{G}_{Y}}(\hat{y})$.
Similarly, ${{D}_{Y}}$ differentiates between the latent sharp image ${{y}_{i}}$ and the translated image ${{G}_{X}}(\hat{x})$. The architecture of our discriminator networks is similar to the $70\times 70$ PatchGAN discriminator \cite{isola2017image}. \subsection{Cycle-consistent Deblurring} As mentioned earlier in \cref{intro}, GANs are known to suffer from mode collapse. The main reasons behind mode collapse in GANs are their adversarial nature and the choice of the objective function used for optimization. Theoretically speaking, a generator that maps ${{G}_{X}}:{{X}_{blur}}\to {{Y}_{sharp}}$ outputs a distribution of translated images ${{p}_{data}}(\hat{y})$ such that the output image $\hat{y}$ is similar to the original sharp image $y$. The assumption that the generated distribution ${{p}_{data}}(\hat{y})$ strictly correlates with the original data distribution ${{p}_{data}}({y})$ requires the generator ${G}_{X}$ to be stochastic in nature \cite{goodfellow2014generative}. In practice, however, such a theoretical assumption does not ensure that the generator will learn meaningful translations without falling victim to mode collapse. \par In order to avoid mode collapse in the generators and to improve the optimization ability of the network, Zhu \etal. \cite{zhu2017unpaired} argued that the adversarial objective function of the generators should be coupled with a ``cycle-consistency'' term. The cycle-consistency term ensures that the generators ${{G}_{X}}$ and ${{G}_{Y}}$ are inverse mapping functions of each other.
It can be defined as, \begin{equation}\label{eq2} \begin{split} {{L}_{cycle}}({{G}_{X}},{{G}_{Y}})={{\mathbb{E}}_{x\sim {{p}_{data}}(x)}}\left[ {{\left\| {{G}_{Y}}({{G}_{X}}(x))-x \right\|}_{1}} \right]\\+{{\mathbb{E}}_{y\sim {{p}_{data}}(y)}}\left[ {{\left\| {{G}_{X}}({{G}_{Y}}(y))-y \right\|}_{1}} \right] \end{split} \end{equation} where ${{\left\| \cdot \right\|}_{1}}$ in \cref{eq2} denotes the L1 norm, and ${{p}_{data}(x)}$ and ${{p}_{data}(y)}$ represent the distributions of blurry and sharp images. \par Unlike our close GAN-based competitors for blind motion deblurring \cite{kupyn2018deblurgan, kupyn2019deblurgan, shao2020deblurgan+, zhang2021deep}, we consider cycle-consistency an essential factor for blind motion deblurring, and show in \cref{exp} that our network outperforms all the state-of-the-art methods in blind dynamic scene motion deblurring. \subsection{Sparse Convolutions and HTM} Sparse representations are far from new; Olshausen \etal. \cite{olshausen1997sparse} showed that deploying sparse embeddings and sparse objective functions in encoders can lead to representations similar to those learnt in the primate visual cortex. Similarly, Chen \etal. \cite{chen2018sparse} developed hierarchical sparse representations that resemble hierarchical feature detectors. The weights for each unit in the sparse convolution layers of our ResNet architecture are randomly sampled from a sparse subset of the previous layer. Additionally, the output of each layer is restricted to only the $k$ most active non-zero units. The number of non-zero products in each layer is (sparsity of layer $l$)$\times$(sparse weights of layer $l + 1$). \cref{fig2}a represents the conventional arrangement of a residual block, where each Conv layer is followed by InstanceNorm and ReLU as the activation function. \cref{fig2}b is our modified residual-block structure, which we call the Sparse ResNet-block.
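To make the structure of \cref{fig2}b concrete, the following PyTorch sketch shows one possible realization of a Sparse ResNet-block. The sparsity level, channel count, and the simplified KWinners formulation (a plain top-$k$ selection without the boosting term introduced below) are illustrative assumptions, not our exact training configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KWinners(nn.Module):
    """Keep only the k most active units per sample and zero out the rest."""
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        flat = x.flatten(1)                            # (batch, units)
        kth = flat.topk(self.k, dim=1).values[:, -1:]  # k-th largest activation
        return (flat * (flat >= kth).float()).view_as(x)

class SparseConv2d(nn.Conv2d):
    """3x3 convolution whose kernel is pruned by a fixed random sparse mask."""
    def __init__(self, channels, sparsity=0.5):
        super().__init__(channels, channels, kernel_size=3, stride=1, padding=1)
        self.register_buffer("mask", (torch.rand_like(self.weight) > sparsity).float())

    def forward(self, x):
        # Apply the static sparsity mask to the weights at every forward pass.
        return F.conv2d(x, self.weight * self.mask, self.bias, self.stride, self.padding)

class SparseResBlock(nn.Module):
    """Fig. 2b: Sparse-Conv -> InstanceNorm -> k-winner, with a skip connection."""
    def __init__(self, channels=64, k_active=512):
        super().__init__()
        self.body = nn.Sequential(
            SparseConv2d(channels),
            nn.InstanceNorm2d(channels),
            KWinners(k_active),
        )

    def forward(self, x):
        return x + self.body(x)
```

The skip connection follows the standard residual design, so the k-winner sparsification only affects the learned correction added to the input.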
To select the $k$ most active non-zero units, and to encourage each unit to be equally active so as to be robust to noise and interference, boosting is applied to the sparse convolution layers, which can be defined as, \begin{equation}\label{eq3} c_{i}^{l}(t)=(1-\alpha )c_{i}^{l}(t-1)+\alpha \cdot \left[ i\in topIndice{{s}^{l}} \right] \end{equation} \cref{eq3} represents the HTM boosting duty cycle used by k-winner, which computes the running average of each unit's activity, where $c_{i}^{l}(t)$ is the duty cycle of unit $i$ in layer $l$ at time $t$. A boosting coefficient $b_{i}^{l}={{e}^{\beta ({{{\hat{a}}}^{l}}-c_{i}^{l}(t))}}$ is then computed for each unit from the target and current duty cycles, where ${{{\hat{a}}}^{l}}$ denotes the number of units expected to be active, and the boosting factor $\beta$ is a positive parameter controlling the strength of boosting. \par To construct sparse convolutions with the HTM k-winner in the ResNet architecture, k-winner is applied to the output of InstanceNorm in each residual block, with a stride of 1 and a kernel size of $3\times3$. \subsection{Loss functions} In this section, we discuss the loss functions of our proposed SL-CycleGAN; the overall loss function is a combination of three loss functions. \par \textbf{Adversarial loss:} The adversarial objective function is an essential component of GAN-based blind motion deblurring. The classic Jensen-Shannon divergence (JSD) based minimax loss function for GANs, proposed by \cite{goodfellow2014generative}, is defined in \cref{eq1}. However, the objective function in \cref{eq1} suffers from serious issues such as mode collapse and vanishing gradients. Thus, the conventional minimax objective function is not a good choice for our blind motion deblurring task. Instead, we choose the objective function of \cite{gulrajani2017improved} with a gradient penalty term.
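As an illustration of this choice, the gradient penalty of \cite{gulrajani2017improved} can be sketched as follows; the toy critic used to exercise the function would stand in for the discriminator, and the sketch is not excerpted from our actual implementation.

```python
import torch

def gradient_penalty(critic, real, fake):
    """WGAN-GP term: penalize (||grad_xhat D(xhat)||_2 - 1)^2 on interpolates."""
    eps = torch.rand(real.size(0), 1, 1, 1)               # per-sample mixing weights
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    # create_graph=True keeps the penalty differentiable w.r.t. critic weights.
    grads, = torch.autograd.grad(scores.sum(), x_hat, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()
```

During training, this penalty is simply added to the critic loss with a fixed weight.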
The adversarial functions of our proposed network can be defined as follows, \begin{equation}\label{eq4} \begin{split} {{L}_{adv}}({{G}_{X}},{{D}_{Y}},{{X}_{blur}},{{Y}_{sharp}})={{\mathbb{E}}_{y\sim {{p}_{data}}(y)}}\left[ {{D}_{Y}}(y) \right]\\-{{\mathbb{E}}_{x\sim {{p}_{data}}(x)}}\left[ {{D}_{Y}}({{G}_{X}}(x)) \right], \end{split} \end{equation} \begin{equation}\label{eq5} \begin{split} {{L}_{adv}}({{G}_{Y}},{{D}_{X}},{{Y}_{sharp}},{{X}_{blur}})={{\mathbb{E}}_{x\sim {{p}_{data}}(x)}}\left[ {{D}_{X}}(x) \right]\\-{{\mathbb{E}}_{y\sim {{p}_{data}}(y)}}\left[ {{D}_{X}}({{G}_{Y}}(y)) \right] \end{split} \end{equation} where ${{G}_{X}}$ and ${{G}_{Y}}$ are inverse mapping functions of each other. The adversarial functions of the generators and discriminators in \cref{eq4} and \cref{eq5} are combined with the cycle-consistency loss ${{L}_{cycle}}$ from \cref{eq2} during training. \par \textbf{Perceptual loss:} We observe that with only the adversarial and cycle-consistency losses, the quality of the restored images is slightly degraded. To further improve the quality of the restored images, we adopt the perceptual loss computed on a pre-trained VGG-19, following Johnson \etal. \cite{johnson2016perceptual}. The perceptual loss can be defined as, \begin{equation}\label{eq6} {{L}_{perc}}=\frac{1}{{{W}_{i,j}}{{H}_{i,j}}}\sum\limits_{w=1}^{{{W}_{i,j}}}{\sum\limits_{h=1}^{{{H}_{i,j}}}{{{\left( {{\phi }_{i,j}}{{({{I}_{S}})}_{w,h}}-{{\phi }_{i,j}}{{({{G}_{\theta }}({{I}_{B}}))}_{w,h}} \right)}^{2}}}} \end{equation} where ${{H}_{i,j}}$ and ${{W}_{i,j}}$ in \cref{eq6} indicate the height and width of the conv$3\times 3$ layers of the pre-trained VGG-19 network, and ${{\phi }_{i,j}}$ denotes the feature maps obtained from the $j$-th convolution layer, after the activation function and before the $i$-th max-pooling layer.
${{I}_{S}}$ and ${{G}_{\theta }}({{I}_{B}})$ represent the real sharp image and the restored deblurred image.\par \textbf{Overall Loss Function:} The overall loss function of the proposed SL-CycleGAN can be defined as, \begin{equation}\label{eq7} {{L}_{SL-CycleGAN}}={{L}_{adv}}+{{\lambda}_{cyc}}{{L}_{cycle}}+{{\lambda }_{perc}}{{L}_{perc}} \end{equation} where ${{\lambda}_{cyc}}$ is the relative weight of the cycle-consistency loss ${{L}_{cycle}}$, and ${\lambda }_{perc}$ is the hyper-parameter for the perceptual loss ${{{L}}_{perc}}$. \section{Experimental evaluation}\label{exp} \begin{table}[ht] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{@{}clllclc@{}} \toprule Method & & Year & & PSNR & & SSIM \\ \midrule DeepDeblur \cite{nah2017deep} & & \multicolumn{1}{c}{2016} & & 30.12 & & 0.9021 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ DeblurGAN \cite{kupyn2018deblurgan} & & \multicolumn{1}{c}{2018} & & 28.70 & & 0.958 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ DeblurGAN-v2-Inception \cite{kupyn2019deblurgan} & & \multicolumn{1}{c}{2019} & & 29.55 & & 0.934 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ DeblurGAN+ \cite{shao2020deblurgan+} & & \multicolumn{1}{c}{2020} & & 28.62 & & 0.959 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ DBGAN \cite{zhang2020deblurring} & & \multicolumn{1}{c}{2020} & & 31.10 & & 0.9424 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ RNNDeblur \cite{zhang2018dynamic} & & \multicolumn{1}{c}{2018} & & 29.1872 & & 0.9306 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ SRN-Deblur \cite{tao2018scale} & & \multicolumn{1}{c}{2018} & & 30.26 & & 0.9342 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ DBCPeNet \cite{cai2020dark} & & \multicolumn{1}{c}{2020} & & 31.10 & & 0.945 \\
\multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ MTRNN \cite{park2020multi} & & \multicolumn{1}{c}{2019} & & 31.15 & & 0.945 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ DMPHN \cite{stacked} & & \multicolumn{1}{c}{2019} & & 31.50 & & 0.9483 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ SRN+PSS+NSC \cite{gao2019dynamic} & & \multicolumn{1}{c}{2019} & & 31.58 & & 0.9478 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ Learning Event-Based Motion Deblurring \cite{jiang2020learning} & & \multicolumn{1}{c}{2020} & & 31.79 & & 0.949 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ SAPHNet \cite{suin2020spatially} & & \multicolumn{1}{c}{2020} & & 32.02 & & 0.953 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ RADNet \cite{purohit2020region} & & \multicolumn{1}{c}{2020} & & 32.15 & & 0.953 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ BANet \cite{tsai2021banet} & & \multicolumn{1}{c}{2021} & & 32.44 & & 0.957 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ MPRNet \cite{zamir2021multi} & & \multicolumn{1}{c}{2021} & & 32.66 & & 0.959 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ MIMO-UNet++ \cite{cho2021rethinking} & & \multicolumn{1}{c}{2021} & & 32.68 & & 0.959 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ HINet \cite{chen2021hinet} & & \multicolumn{1}{c}{2021} & & 32.71 & & 0.959 \\ \multicolumn{1}{l}{} & & & & \multicolumn{1}{l}{} & & \multicolumn{1}{l}{} \\ SL-CycleGAN \textbf{(Ours)} & & \multicolumn{1}{c}{2021} & & \textbf{38.087} & & 0.954 \\ \bottomrule \end{tabular}% } \caption{Quantitative comparison of blind image deblurring on the GoPro dataset \cite{nah2017deep}.
Our proposed method SL-CycleGAN achieves the highest PSNR of 38.087 dB on the blind image motion deblurring task.} \label{Tab1} \end{table} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{GoPro_pdf.pdf} \caption{Deblurring results on test images from the GoPro dataset. (a) Blurry inputs. (b) Magnified blurry image patches. (c) Corresponding sharp image patches. (d) Deblurring results of \cite{tao2018scale}. (e) Deblurring results of \cite{kupyn2018deblurgan}. (f) Deblurring results of \cite{kupyn2019deblurgan}. (g) Deblurring results of \cite{shao2020deblurgan+}. (h) Deblurring results of \cite{zhang2021deep}. (i) Deblurring results of our proposed SL-CycleGAN.} \label{fig3} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{Kohler_pdf.pdf} \caption{Visual comparison on test images from the Kohler dataset \cite{kohler2012recording}. (a) Input blurry images. (b) Magnified blurry image patches. (c) Magnified real sharp image patches. (d) Deblurring results of \cite{tao2018scale}. (e) Deblurring results of \cite{kupyn2018deblurgan}. (f) Deblurring results of \cite{kupyn2019deblurgan}. (g) Deblurring results of \cite{cai2020dark}. (h) Deblurring results of our proposed method.} \label{fig4} \end{figure*} \subsection{Experiment settings} We use PyTorch \cite{NEURIPS2019_9015} for all our experiments, running on an Nvidia GTX 1080ti with 11 GB of GPU memory. We perform experiments on three image benchmarks: the GoPro dataset \cite{nah2017deep}, the Kohler dataset \cite{kohler2012recording}, and the Lai dataset \cite{lai2016comparative}. We resize all the images in the three benchmarks to 256$\times$256 for training and testing and apply data augmentation. For the optimization of the generators and the discriminators, we use the Adam optimizer \cite{kingma2014adam} with ${{\beta}=0.999}$ and a batch size of 1.
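For concreteness, this optimization setup can be sketched as below. The learning rate is held at 0.0002 for the first half of training and then decayed linearly to zero; the first Adam momentum term $\beta_1$ and the placeholder parameter list are assumptions on our part, since the text only states ${\beta}=0.999$.

```python
import torch

# Placeholder parameters; in practice these are the generator/discriminator weights.
params = [torch.nn.Parameter(torch.zeros(1))]

# beta_1 = 0.5 is an assumed value; the text only specifies beta = 0.999.
optimizer = torch.optim.Adam(params, lr=2e-4, betas=(0.5, 0.999))

def linear_decay(epoch, total=200, constant=100):
    """Constant lr for the first 100 epochs, then linear decay to zero."""
    if epoch < constant:
        return 1.0
    return max(0.0, 1.0 - (epoch - constant) / float(total - constant))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_decay)
```

One `scheduler.step()` per epoch reproduces the stated schedule.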
We train our model on each of these three image benchmarks for 200 epochs, with an initial learning rate of 0.0002 for the first 100 epochs that is linearly decayed to zero over the next 100 epochs. For all the experiments, we set ${{\lambda}_{cyc}}=10$ and ${{\lambda }_{perc}}=100$ in \cref{eq7}. We use the gradient penalty term of \cite{gulrajani2017improved} for the discriminator networks, with its weight set to 10. We do not use dropout layers in our modified Sparse ResNet architecture, since \cite{ahmad2019can} show that using k-winner with sparse convolutions removes the need for dropout layers in the network architecture. Training our proposed network (SL-CycleGAN) on one dataset for 200 epochs takes 2 days, i.e., 6 days in total for the three datasets. \begin{table}[t] \centering \resizebox{7cm}{!}{% \begin{tabular}{@{}clclc@{}} \toprule Method & & PSNR & & SSIM \\ \midrule Whyte \etal. \cite{whyte2012non} & & 27.02 & & 0.809 \\ Xu \etal. \cite{xu2013unnatural} & & 27.40 & & 0.810 \\ Sun \etal. \cite{sun2015learning} & & 25.21 & & 0.772 \\ DeepDeblur \cite{nah2017deep} & & 26.48 & & 0.807 \\ DeblurGAN \cite{kupyn2018deblurgan} & & 25.86 & & 0.802 \\ DeblurGAN-v2 \cite{kupyn2019deblurgan} & & 26.10 & & 0.816 \\ SRN-Deblur \cite{tao2018scale} & & 26.75 & & 0.837 \\ DMPHN \cite{stacked} & & 24.21 & & 0.7562 \\ Zhang \etal. \cite{zhang2018dynamic} & & 25.71 & & 0.800 \\ Kim \etal. \cite{hyun2013dynamic} & & 24.68 & & 0.794 \\ DBCPeNet \cite{cai2020dark} & & 26.79 & & 0.839 \\ SL-CycleGAN \textbf{(ours)} & & \textbf{30.818} & & \textbf{0.843} \\ \bottomrule \end{tabular}% } \caption{Quantitative comparison on the Kohler dataset \cite{kohler2012recording}. Our proposed SL-CycleGAN achieves a significant improvement both in terms of PSNR and SSIM.
} \label{Tab2} \end{table} \begin{table}[t] \centering \resizebox{7cm}{!}{% \begin{tabular}{@{}clclc@{}} \toprule Method & & PSNR & & SSIM \\ \midrule Fergus \etal. \cite{fergus2006removing} & & 22.870 & & 0.682 \\ Cho \cite{cho2009fast} & & 23.272 & & 0.699 \\ Xu \etal. \cite{xu2013unnatural} & & 25.586 & & 0.773 \\ Krishnan \etal. \cite{krishnan2011blind} & & 23.070 & & 0.716 \\ Levin \etal. \cite{levin2009understanding} & & 21.855 & & 0.651 \\ Whyte \etal. \cite{whyte2012non} & & 23.232 & & 0.667 \\ Sun \etal. \cite{sun2015learning} & & 24.649 & & 0.756 \\ Xu \cite{xu2010two} & & 25.319 & & 0.765 \\ Zhang \etal. \cite{zhang2013multi} & & 22.918 & & 0.679 \\ Chakrabarti \etal. \cite{chakrabarti2010analyzing} & & 25.389 & & 0.769 \\ Nah \etal. \cite{nah2017deep} & & 24.224 & & 0.713 \\ Gong \etal. \cite{gong2017motion} & & 23.805 & & 0.694 \\ DeblurGAN \cite{kupyn2018deblurgan} & & 24.561 & & 0.741 \\ DeblurGAN-v2 \cite{kupyn2019deblurgan} & & 25.634 & & 0.754 \\ SRN-Deblur \cite{tao2018scale} & & 25.231 & & 0.752 \\ SL-CycleGAN \textbf{(ours)} & & \textbf{27.935} & & 0.766 \\ \bottomrule \end{tabular}% } \caption{Quantitative comparison on the Lai dataset \cite{lai2016comparative}. Our proposed approach outperforms all the other methods in terms of PSNR.} \label{Tab3} \end{table} \subsection{Image Benchmarks} \textbf{Evaluation on GoPro Dataset:} The GoPro dataset was proposed by Nah \etal. \cite{nah2017deep} and consists of 3214 images in total for the deblurring task: 2103 training pairs of blurred and sharp images, while the remaining 1111 images are reserved for testing. It is the most commonly used benchmark for the blind image deblurring task. The quantitative evaluation on the GoPro dataset is presented in \cref{Tab1}, which covers the timeline of state-of-the-art deep learning-based deblurring methods from 2016 to 2021 in terms of both PSNR and SSIM.
Our proposed method SL-CycleGAN outperforms all the state-of-the-art methods on the GoPro deblurring task, achieving a record PSNR of \textbf{38.087} dB, which is 5.377 dB better than the most recent deblurring method HINet \cite{chen2021hinet}. Similarly, the average SSIM value of our proposed network places it among the top five most recent deblurring methods. The qualitative results on the GoPro dataset are given in \cref{fig3}. In comparison with the state-of-the-art blind deblurring methods \cite{tao2018scale, kupyn2018deblurgan, kupyn2019deblurgan, shao2020deblurgan+, zhang2021deep}, our proposed approach restores sharp images from the blurry inputs that are similar to the real sharp images, as can be clearly seen in \cref{fig3}. The resemblance between our restored image patches and the real sharp image patches is quite high in comparison with the other approaches. \par \textbf{Evaluation on Kohler Dataset:} Kohler \etal \cite{kohler2012recording} proposed a real-world deblurring dataset that consists of 4 latent sharp images and 48 corresponding blurry images with blur kernels of varying intensity. It is a widely used benchmark for blind image deblurring comparisons. The quantitative comparison of blind image deblurring on the Kohler dataset is given in \cref{Tab2}. Our SL-CycleGAN outperforms the other state-of-the-art methods, achieving an average PSNR of 30.818 dB and SSIM of 0.843, while our closest competitor DBCPeNet \cite{cai2020dark} achieves a PSNR of 26.79 dB and SSIM of 0.839. Similarly, DeblurGAN \cite{kupyn2018deblurgan}, DeblurGAN-v2 \cite{kupyn2019deblurgan}, SRN-Deblur \cite{tao2018scale} and DMPHN \cite{stacked} show quantitatively inferior performance compared with our proposed approach. The visual comparison of test images from the Kohler dataset is presented in \cref{fig4}. We can see from \cref{fig4} that our deblurred sharp image patches retain texture details and sharpness similar to the original latent sharp image patches.
In comparison with \cite{tao2018scale, kupyn2018deblurgan, kupyn2019deblurgan, cai2020dark}, our proposed model shows the ability to understand the distribution of non-uniform blur over different image regions, even when the subject in focus lacks significant light reflection. \par \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{New_ablation_pdf.pdf} \caption{Visual ablation study and analysis on the GoPro \cite{nah2017deep}, Kohler \cite{kohler2012recording}, and Lai \cite{lai2016comparative} datasets. First row: images from the GoPro dataset. Second row: images from the Kohler dataset. Third row: images from the Lai dataset. The first column shows the blurry inputs; the second column, the deblurring results of CycleGAN; the third column, CycleGAN with perceptual loss and sparse convolutions; the fourth column, CycleGAN with sparse convolutions; and the last column, CycleGAN + perceptual loss + sparse + k-winner (SL-CycleGAN). } \label{fig5} \end{figure*} \textbf{Quantitative evaluation on Lai Dataset:} Lai \etal. \cite{lai2016comparative} proposed a benchmark for the blind image deblurring task that contains 100 real-world blurred images; they also generated a synthetic dataset of 200 blurred images containing both uniform and non-uniform blur. We present the quantitative comparison on the Lai dataset in \cref{Tab3}, from which we can observe that our proposed method achieves a 2.301 dB improvement in PSNR over the second highest, DeblurGAN-v2 \cite{kupyn2019deblurgan}. Meanwhile, the visual results from the ablation study on the Lai dataset are shown in \cref{fig5}, which we discuss further in \cref{ablation} along with the ablation analysis on the GoPro and Kohler datasets.
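For reference, the PSNR figure reported throughout these comparisons can be computed as in the following sketch; this is the standard definition over 8-bit images, shown for completeness rather than excerpted from our evaluation code.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher values indicate a restored image closer to the reference; SSIM complements it by measuring structural similarity rather than pixel-wise error.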
\begin{table}[t] \centering \resizebox{\columnwidth}{3cm}{% \begin{tabular}{@{}cccc@{}} \toprule GoPro & \multicolumn{1}{l}{PSNR(dB)} & \multicolumn{1}{l}{SSIM} & \multicolumn{1}{l}{MS-SSIM} \\ \midrule CycleGAN & 31.835 & 0.844 & 0.986 \\ CycleGAN+VGG-19(perceptual)+Sparse & 37.852 & 0.954 & 0.997 \\ CycleGAN+Sparse & 33.135 & 0.876 & 0.990 \\ CycleGAN+VGG-19(perceptual)+Sparse+k-winn (SL-CycleGAN) & \textbf{38.087} & 0.954 & 0.997 \\ \hline Kohler & \multicolumn{3}{l}{} \\ \hline CycleGAN & 29.870 & 0.814 & 0.983 \\ CycleGAN+VGG-19(perceptual)+Sparse & 29.985 & 0.814 & 0.984 \\ CycleGAN+Sparse & 30.461 & 0.823 & 0.985 \\ \multicolumn{1}{l}{CycleGAN+VGG-19(perceptual)+Sparse+k-winn} (SL-CycleGAN) & \textbf{30.818} & \textbf{0.843} & \textbf{0.987} \\ \hline Lai & \multicolumn{3}{l}{} \\ \hline CycleGAN & 25.034 & 0.662 & 0.970 \\ CycleGAN+VGG-19(perceptual)+Sparse & 27.581 & 0.764 & 0.983 \\ CycleGAN+Sparse & 27.564 & 0.757 & 0.983 \\ \multicolumn{1}{l}{CycleGAN+VGG-19(perceptual)+Sparse+k-winn} (SL-CycleGAN) & \textbf{27.935} & \textbf{0.766} & \textbf{0.984} \\ \bottomrule \end{tabular}% } \caption{Quantitative ablation study on the GoPro \cite{nah2017deep}, Kohler \cite{kohler2012recording} and Lai \cite{lai2016comparative} datasets. } \label{Tab4} \end{table} \subsection{Ablation Study}\label{ablation} We conduct an ablation study on the components of SL-CycleGAN and observe the impact and effectiveness of these components both qualitatively and quantitatively. We present the visual ablation study on the three image benchmarks in \cref{fig5}, taking the original CycleGAN \cite{zhu2017unpaired} as the starting point. We then gradually add modifications to the generator networks, such as replacing the standard Conv layers in the ResNet by sparse convolutions and adding the VGG-19 perceptual loss, and then removing the perceptual loss while keeping only the sparse convolution layers.
Finally, we modify the network by integrating the perceptual loss and sparse convolutions and replacing ReLU with k-winner in the ResNet generator architecture. We call this final version of our network SL-CycleGAN (CycleGAN + perceptual + sparse-convs + k-winner). We can see from \cref{fig5} that all the sparse versions of the network perform better visually than plain CycleGAN, and the final version, SL-CycleGAN, in particular produces visually appealing results. Similarly, in the ablation study of \cref{Tab4}, SL-CycleGAN quantitatively outperforms all the preceding versions.\par \textbf{Limitations:} In the interest of transparency, we note that during inference on the GoPro dataset, some of the images restored by SL-CycleGAN show slightly dim lighting in comparison with the original bright sharp images. We consider this an issue inherited from the original CycleGAN and its cycle-consistency loss \cite{zhu2017unpaired}. \section{Conclusion} This paper introduces a novel blind image deblurring network, SL-CycleGAN, which, for the first time, utilizes sparse representation learning with the HTM k-winner for improved image deblurring that is more robust to noise and interference, while achieving the best qualitative and quantitative results on popular image benchmarks. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \IEEEPARstart{F}{eature} detection and description are fundamental steps in many computer vision tasks, such as visual localization~\cite{zhang2021reference,zhang2021reference}, Structure-from-Motion (SfM)~\cite{schonberger2016structure}, and Simultaneous-Localization-and-Mapping (SLAM)~\cite{fu2021fast}. Nowadays, increasing attention has been attracted onto the multimodal feature extraction and matching in special scenarios, such as autonomous drive and remote sensing, because different modalities provide complementary information~\cite{zhang2021image}. Although several modal-invariant features,~\emph{e.g}\onedot, OS-SIFT~\cite{xiang2018sift} and RIFT~\cite{li2018rift} emerge endlessly, the role SIFT~\cite{lowe2004distinctive} playing in visual image matching cannot be found for multimodal images. Therefore, it is imperative to study a more general and robust solution. Modeling invariance is the key to feature extraction~\cite{jiang2021review}. Benefiting from the great potential of Deep Neural Network (DNN), the features learned on big data dispense with heuristic designs to acquire invariance and significantly outperform their traditional counterparts on both visual-only~\cite{mishchuk2017working,zhang2019learning,tian2019sosnet,tian2020hynet,ma2021sdgmnet,dusmanu2019d2,revaud2019r2d2,tyszkiewicz2020disk} and cross-modal~\cite{aguilera2017cross,liu2018h,cui2021map,cui2021cross} images. Deep learning methods can be mainly divided into two categories: the two-stage and the one-stage frameworks. The efforts~\cite{mishchuk2017working,tian2019sosnet,ma2021sdgmnet,aguilera2017cross,liu2018h} belonging to the former category are based on the manual detector and then encode the patches centered at detected interest points with DNN. Undoubtedly, those descriptors are limited by the detected scale, orientation and so on. 
To fill the gap between the detection and the description, the one-stage pipeline~\cite{detone2018superpoint,dusmanu2019d2,revaud2019r2d2,bhowmik2020reinforced,tyszkiewicz2020disk,liu2021dgd,cui2021cross,cui2021map,wang2021local,shen2022learning} that learns to output dense detection scores and descriptors is proposed and further improvements are achieved. The joint framework seems alluring, however, its training would be unstable without a proper definition of detection. To address the problem, SuperPoint~\cite{detone2018superpoint} generates synthetic corner data to give the detection clear supervision. A SIFT-like non-maximum suppression is performed on the final feature map in D2-Net~\cite{dusmanu2019d2}. R2D2~\cite{revaud2019r2d2} proposes a slack constraint, in which detection scores are encouraged to peak locally and repeat among various images so that more dense points can be detected. Furthermore, to increase the reliability of the detected features, the detection is always coupled with the description in the optimization. For example, D2-Net tries to suppress the detection scores of the descriptors that are not distinct enough to match. Similarly, R2D2 learns an extra reliability mask to filter out those points. Additionally, the probabilistic model introduced in ReinforcedPoint~\cite{bhowmik2020reinforced} and DISK~\cite{tyszkiewicz2020disk} can be also seen as a coupling strategy that shares the same motivation with D2-Net and R2D2. Compared with the synthetic supervision and the non-maximum suppression, the constraints of local peaking and repeatability are more feasible for detection, because of their flexibility in the training and practical significance in the test. Based on these properties, the detection scores should be also linked to the probability of correctly matching of corresponding descriptors,~\emph{i.e}\onedot, the detection should be coupled with the description as mentioned above. 
However, modal-invariant descriptors are always hard to learn and match. Naively suppressing the detection probability of descriptors that are likely to be wrongly matched would fall into the local minimum where the detection probabilities are all zeros. Additionally, those hard descriptors are the key to gaining improvements, so simply ignoring them would not be a wise choice. Therefore, the coupling of detection and description should be designed more cautiously. In this paper, we first absorb the experience from related works and reformulate independent basic loss functions that are more effective and stable for multimodal feature learning, including a contrastive loss for description, and a local peaking loss and a repeatability loss for detection. Different from the direct multiplication in previous efforts, we recouple the detection and the description with a mutual weighting strategy, as briefly illustrated in Fig.~\ref{fig:fig1}. As for the detection, while an edge-based prior guides the detector to pay attention around edges, the detection scores of reliable descriptors are further forced to peak by weighting the peaking loss with the matching risk of the descriptors. Moreover, the repeatability loss is weighted by the similarity of corresponding descriptors. As for the description, the contrastive loss is weighted with the detection scores so that descriptors with high detection probability are prioritized in the optimization. Note that the weights in our recoupling strategy are `stopped gradients',~\emph{i.e}\onedot, detached from back propagation, so that the detection and the description are not disturbed by the gradients of the weights. Finally, the features constrained by the recoupled detection and description loss functions, named ReDFeat, can be readily trained from scratch.
\begin{figure} \centering \includegraphics[width=0.95\linewidth]{figs/framework.pdf} \caption{Overview of ReDFeat, where $l_2$ denotes $l_2$ normalization and $\sigma$ denotes the activation function. In our method, an image successively passes through the modal-specific adapter, the weight-sharing encoder and the detector to generate a dense feature map and a detection score map. The basic loss function of the descriptors,~\emph{i.e}\onedot, features, is weighted with the corresponding detection scores. Meanwhile, detection scores are encouraged to peak according to the reliability of the corresponding descriptors and to fade in smooth areas. Note that no back-propagated flow traces back along the crossed forward flows, which carry only the detached weights; this is the main idea of our recoupling strategy. } \label{fig:fig1} \end{figure} Moreover, to fulfill these demanding detection constraints, Super Detector is proposed. The prior probability that a point is a keypoint and the conditional probability that a keypoint is extracted from the image are modeled by a fully connected network and a deep convolutional network with a large receptive field, respectively. In particular, the deep convolutional network is equipped with learnable local non-maximum suppression layers to highlight the keypoints. Finally, the posterior probability that a detected point is a keypoint is computed as the product of the outputs of the two networks. To evaluate the features systematically, we collect several kinds of cross-modal image pairs, covering visible (VIS), near-infrared (NIR), infrared (IR) and synthetic aperture radar (SAR) images, and build a benchmark in which the performance of features is strictly evaluated. Extensive experiments on this benchmark confirm the applicability of our ReDFeat.
Our contributions can be summarized as follows: \begin{itemize} \item[1)] We recouple detection and description with a mutual weighting strategy, which increases the training stability and the performance of the learned cross-modal features. \item[2)] We propose Super Detector, which possesses a large receptive field and learnable local non-maximum suppression blocks to improve the ability and the discreteness of the detection. \item[3)] We build a benchmark that contains three kinds of cross-modal image pairs for multimodal feature learning. Extensive experiments on the benchmark demonstrate the superiority of our method. \end{itemize} \section{Related Works} \textbf{Handcrafted Methods.} Although deep learning features for visible images have sprung up in recent years, their handcrafted counterparts, such as SIFT~\cite{lowe2004distinctive}, ORB~\cite{mur2015orb} and SURF~\cite{bay2006surf}, remain popular in common scenes due to their robustness and cheapness~\cite{jin2021image,jiang2021review}. Their cross-modal counterparts, such as MFP~\cite{aguilera2012multispectral}, SR-SIFT~\cite{sedaghat2015distinctive}, PCSD~\cite{fan2018sar}, OS-SIFT~\cite{xiang2018sift}, RIFT~\cite{li2018rift} and KAZE-SAR~\cite{pourfard2021kaze}, still receive a great deal of attention from the multimodal image processing community due to the scarcity of well-registered data that can support deep learning methods. Both visible and multimodal handcrafted features focus on corner or edge detection and description, which are believed to capture information that is invariant to geometric and radiometric distortion. Their success motivates many deep learning methods~\cite{detone2018superpoint,liu2021dgd,barroso2019key}, as well as us, to inject an edge-based prior into the training. \textbf{Two-stage Deep Methods.} In recent years, the deep learning `revolution' has swept across the whole field of computer vision, including local feature extraction.
However, due to the lack of strong supervision for detection, deep local descriptors~\cite{mishchuk2017working,tian2019sosnet,ma2021sdgmnet,aguilera2017cross,liu2018h} had been stuck in a two-stage pipeline, in which keypoints are extracted by classical detectors,~\emph{e.g}\onedot, Difference of Gaussian (DoG)~\cite{lowe2004distinctive}, and then patches centered at those points are encoded into descriptors by a DNN. This pipeline restricts the room for modifications, so most methods are devoted to improving the loss functions for descriptors~\cite{tian2019sosnet,zhang2019learning,tian2020hynet,ma2021sdgmnet}. Additionally, Key.Net~\cite{barroso2019key} makes an early effort to learn keypoint extraction with repeatability as a constraint, which is a minority among two-stage methods. Despite the isolation between detection and description in this kind of method, DNNs still reveal strong potential for local feature extraction~\cite{jin2021image}. Moreover, the independent constraints of detection and description in the two-stage pipeline pave the way to joint frameworks, and they are also the basis of our formulation. \textbf{One-stage Deep Methods.} An obvious limitation of the two-stage pipeline is that detection and description cannot reinforce each other. To tackle this problem, SuperPoint~\cite{detone2018superpoint} first proposes a joint detection and description framework, in which the detection and the description are trained with synthetic supervision and contrastive learning, respectively. To further enhance the interaction between these two steps, D2-Net~\cite{dusmanu2019d2}, a framework for joint training with a semi-handcrafted local maximum mining as the detector, is proposed. The detection and description of D2-Net are not only optimized at the same time, but also entangled for mutual guidance. However, its non-maximum suppression detector is not interpretable.
Based on D2-Net, SAFeat~\cite{shen2022learning} designs a multi-scale fusion network to extract scale-invariant features, and CMMNet~\cite{cui2021cross} applies D2-Net to the multimodal scenario; hence, both SAFeat and CMMNet inherit the weakness of D2-Net. R2D2~\cite{revaud2019r2d2} proposes a fully learnable detector with feasible constraints and further introduces an extra learnable mask to filter out unreliable detections. However, it is unclear why the reliability should be learned separately rather than fused into one detector. TransFeat~\cite{wang2021local} introduces the transformer~\cite{vaswani2017attention} to capture global information for local feature learning. It is aware of the flaw of D2-Net's detection and draws on the local peaking loss from R2D2 to remedy the fault. Furthermore, ReinforcedPoint~\cite{bhowmik2020reinforced} and DISK~\cite{tyszkiewicz2020disk} model the matching probability in a differentiable form, in which detection and description are concisely coupled, and employ Reinforcement Learning (RL) to construct and optimize the matching loss. Undoubtedly, the matching performance of DISK benefits significantly from the direct optimization of the matching loss, but RL is hungry for computation and data, which might not be feasible in the multimodal scenario. To the best of our knowledge, besides CMMNet, MAP-Net~\cite{cui2021map} is the only joint detection and description method customized for multimodal images. However, it draws on the pipeline of DELG~\cite{cao2020unifying}, whose features are specific to the image retrieval task instead of the accurate image matching task that we focus on. Therefore, we conduct a further study on more feasible joint detection and description methods for cross-modal image matching.
\section{Method} \subsection{Background of Coupled Constraints} \label{sec:sec3.1} The joint detection and description framework of local feature learning aims to employ a DNN to extract a dense descriptor map $\boldsymbol{D}(\boldsymbol{\Omega_a},\boldsymbol{\Omega_e}) \in \mathbb{R}^{H \times W\times C}$ and a detection score map $\boldsymbol{S}(\boldsymbol{\Omega_a},\boldsymbol{\Omega_e}, \boldsymbol{\Omega_d}) \in \mathbb{R}^{H \times W}$ for an input image $\boldsymbol{I} \in \mathbb{R}^{H \times W}$, where $\boldsymbol{\Omega_a}$, $\boldsymbol{\Omega_e}$ and $\boldsymbol{\Omega_d}$ denote the parameters of the adapter, the encoder and the detector, respectively. Let $\{i,i'\}$ represent a correspondence in a pair of overlapping images $\boldsymbol{I}$ and $\boldsymbol{I}'$, and let $\boldsymbol{d}_{i} \in \boldsymbol{D}$ represent the descriptor of the $i$th point with detection probability $s_i\in \boldsymbol{S}$. To constrain the learning of an individual descriptor $\boldsymbol{d}_{i}$, a matching risk function $\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')$ is constructed within the descriptor maps $\boldsymbol{D}$ and $\boldsymbol{D}'$. Since the reliability of a descriptor can, to some extent, be estimated by $\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')$, many related works couple the corresponding detection scores $s_i$ and $s_{i'}$ with the optimization of $\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')$ in a general loss: \begin{equation} \mathcal{L}_g(\boldsymbol{D},\boldsymbol{D}',\boldsymbol{S},\boldsymbol{S}')=\mathbb{E}_i( s_is_{i'}\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')), \label{eqn:eqn1} \end{equation} where $\mathbb{E}(\cdot)$ denotes the expectation (averaging over the batch).
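The behavior of this general loss is easy to reproduce with a minimal PyTorch sketch. The tensor names below are illustrative, not taken from any released implementation; evaluating Eqn.~\eqref{eqn:eqn1} shows that zeroing the detection scores also zeroes the loss:

```python
import torch

def coupled_loss(risk, s, s_prime):
    """General coupled loss of Eqn. (1): E_i[ s_i * s_{i'} * R(d_i; D, D') ]."""
    return (s * s_prime * risk).mean()

# Driving this loss down can simply null the detection scores, which is
# the degenerate local minimum discussed in the text.
risk = torch.tensor([0.1, 2.0, 0.5])      # per-correspondence matching risk
scores = torch.tensor([0.5, 0.5, 0.5])    # detection scores, s_i = s_{i'} here
loss = coupled_loss(risk, scores, scores)
```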
While descriptors with larger detection scores $s_i$ play more important roles during the optimization, the detection scores of points with large $\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')$ tend towards zero. Two problems then arise. First, the all-zero detection score map is a local minimum of this loss, which is not what we desire. Second, the hard descriptors with large $\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')$ are the key to improvement, so they deserve more attention instead of being treated as distractors. Both problems are magnified in multimodal feature learning, in which the correspondences suffer from extreme variance of imaging. Although D2-Net~\cite{dusmanu2019d2} and CMMNet~\cite{cui2021cross} normalize the detection score $s$ so that $\boldsymbol{S}=0$ is no longer a minimum of the loss, the normalization breaks the convexity of the detection term, and the balance between the learning of detection and description becomes hard to hold, which finally leads to failed optimization. The reliability loss of R2D2~\cite{revaud2019r2d2} also contains a term similar to Eqn.~\eqref{eqn:eqn1}, which suffers from the first problem mentioned above. The failures of CMMNet and R2D2 support our hypothesis, as shown in Section~\ref{sec:sec5}. Moreover, the probabilistic matching losses introduced in ReinforcedPoint~\cite{bhowmik2020reinforced} and DISK~\cite{tyszkiewicz2020disk} are also similar to Eqn.~\eqref{eqn:eqn1}, so they are likely to get stuck in the same two problems. Therefore, we devote ourselves to recoupling detection and description in a more elaborate way for better training. \subsection{Basic Constraints} The basic constraints of detection and description should be determined before coupling them.
To satisfy the nearest neighbor matching principle, the distance between an anchor and its nearest non-matching neighbor should be maximized, while the distance to its correspondence should be minimized. Therefore, we sample $N$ pairs of corresponding descriptors $\{\boldsymbol{d}_i,\boldsymbol{d}_{i'}\}$ and their detection scores $\{s_i,s_{i'}\}$. The set of samples is denoted by $\{\boldsymbol{D}_N,\boldsymbol{D}'_N\}$. For a pair of corresponding cross-modal descriptors $\{\boldsymbol{d}_i,\boldsymbol{d}_{i'}\}$, we mine two intra-modal nearest non-matching neighbors $\{\boldsymbol{d}_j,\boldsymbol{d}_{k}\}$ and two inter-modal nearest non-matching neighbors $\{\boldsymbol{d}_m,\boldsymbol{d}_n\}$ as: \begin{align} \label{eqn:eqn2} \boldsymbol{d}_j&=\text{argmin}_{\boldsymbol{d}_j\in\boldsymbol{D}_N,j\ne i} \theta(\boldsymbol{d}_i,\boldsymbol{d}_j),\\ \label{eqn:eqn3} \boldsymbol{d}_k&=\text{argmin}_{\boldsymbol{d}_k\in\boldsymbol{D}'_N,k\ne i'} \theta(\boldsymbol{d}_{i'},\boldsymbol{d}_k),\\ \label{eqn:eqn4} \boldsymbol{d}_n&=\text{argmin}_{\boldsymbol{d}_n\in\boldsymbol{D}'_N,n\ne i'} \theta(\boldsymbol{d}_{i},\boldsymbol{d}_n),\\ \label{eqn:eqn5} \boldsymbol{d}_m&=\text{argmin}_{\boldsymbol{d}_m\in\boldsymbol{D}_N,m\ne i} \theta(\boldsymbol{d}_{i'},\boldsymbol{d}_m), \end{align} where $\theta(\boldsymbol{d}_i,\boldsymbol{d}_j)=\arccos(\boldsymbol{d}_i^T\boldsymbol{d}_j)$ is the angular distance. Although nearest neighbor matching requires distinction between an anchor and its inter-modal nearest neighbor, we believe that maximizing $\theta(\boldsymbol{d}_i,\boldsymbol{d}_n)$ and $\theta(\boldsymbol{d}_i,\boldsymbol{d}_m)$ at the same time is hazardous for acquiring modal invariance.
Thus we tend to maximize $\text{max}\{\theta(\boldsymbol{d}_i,\boldsymbol{d}_n),\theta(\boldsymbol{d}_i,\boldsymbol{d}_m)\}$, $\theta(\boldsymbol{d}_i,\boldsymbol{d}_k)$ and $\theta(\boldsymbol{d}_i,\boldsymbol{d}_j)$, while minimizing $\theta(\boldsymbol{d}_i,\boldsymbol{d}_{i'})$, in a contrastive learning manner. As a result, our basic description loss $\mathcal{L}_\text{desc-B}$ is: \begin{align} \begin{split} \mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D}_N,\boldsymbol{D}'_N)= &[(\pi-\theta(\boldsymbol{d}_i,\boldsymbol{d}_k))^2+(\pi-\theta(\boldsymbol{d}_i,\boldsymbol{d}_j))^2\\ &+(\pi-\text{max}\{\theta(\boldsymbol{d}_i,\boldsymbol{d}_n),\theta(\boldsymbol{d}_i,\boldsymbol{d}_m) \})^2\\ &+3\theta(\boldsymbol{d}_i,\boldsymbol{d}_{i'})^2 ]^2, \end{split} \\ \mathcal{L}_{\text{desc-B}}(\boldsymbol{D}_N,\boldsymbol{D}'_N)=& \operatorname{\mathbb{E}}_i(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D}_N,\boldsymbol{D}'_N)). \end{align} The angular distance is employed as the distance measure because it balances the optimization of matching and non-matching pairs~\cite{ma2021sdgmnet}. Moreover, the quadratic matching risk gives hard samples larger gradients during optimization. In this way, $\mathcal{L}_{\text{desc-B}}$ is expected to increase the cross-modal robustness of the descriptors. As mentioned above, repeatability and local peaking should be the primary properties of the detection. To guarantee repeatability, the detection score map $\boldsymbol{S}$ of the first image should be similar to the warped score map $\boldsymbol{S}'_w$ of the other image. Moreover, the detection scores should be salient so that a unique point can be extracted in each local area.
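For concreteness, a compact and purely hypothetical PyTorch re-implementation of the mining in Eqns.~\eqref{eqn:eqn2}--\eqref{eqn:eqn5} and the quadratic risk could look as follows; it assumes $l_2$-normalized descriptors (row $i$ of the two matrices forming a matched pair) and is only a sketch of the training code:

```python
import math
import torch

def angular(a, b):
    """Pairwise angular distances between rows of two l2-normalized sets."""
    return torch.arccos((a @ b.t()).clamp(-1 + 1e-7, 1 - 1e-7))

def desc_risk(D, Dp):
    """Mine the neighbors of Eqns. (2)-(5) and evaluate the quadratic risk
    for N matched descriptor pairs (row i of D matches row i of Dp)."""
    N = D.shape[0]
    eye = torch.eye(N, dtype=torch.bool)           # masks self / true match
    j = angular(D, D).masked_fill(eye, math.pi).argmin(dim=1)    # Eqn. (2)
    k = angular(Dp, Dp).masked_fill(eye, math.pi).argmin(dim=1)  # Eqn. (3)
    n = angular(D, Dp).masked_fill(eye, math.pi).argmin(dim=1)   # Eqn. (4)
    m = angular(Dp, D).masked_fill(eye, math.pi).argmin(dim=1)   # Eqn. (5)
    i = torch.arange(N)
    hard = torch.maximum(angular(D, Dp)[i, n], angular(D, D)[i, m])
    return ((math.pi - angular(D, Dp)[i, k]) ** 2
            + (math.pi - angular(D, D)[i, j]) ** 2
            + (math.pi - hard) ** 2
            + 3 * angular(D, Dp)[i, i] ** 2) ** 2

# L_desc-B is then the batch mean of the per-pair risk.
risk = desc_risk(torch.eye(4), torch.eye(4))
```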
Thus, we follow R2D2~\cite{revaud2019r2d2} and primarily constrain the detection with a basic repeatability loss $\mathcal{L}_{\text{rep-B}}$ and a basic peaking loss $\mathcal{L}_{\text{peak-B}}$: \begin{align} \mathcal{L}_{\text{rep-B}}(\boldsymbol{S},\boldsymbol{S}'_w)&=\mathbb{E}_p(1-\boldsymbol{S}[p]^T\boldsymbol{S}'_w[p]), \\ \label{eqn:eqn9} \mathcal{L}_{\text{peak-B}}(\boldsymbol{S})&=\mathbb{E}_i\big(\mathrm{AP}(\boldsymbol{S})[i]^2+(1-\mathrm{MP}(\boldsymbol{S})[i])^2\big), \end{align} where $p$ is a flattened patch of coordinates, extracted from the full coordinate grid $\{1,\dots,H\}\times\{1,\dots,W\}$ by shifted windows with a kernel size of $17\times 17$ and a stride of $8$; $\boldsymbol{S}[p]\in \mathbb{R}^{256}$ denotes the flattened and normalized vector of detection scores indexed by $p$; AP and MP denote average pooling and max pooling with a kernel size of $17\times17$ and a stride of $1$, respectively. Note that the kernel sizes and strides are adopted from R2D2 empirically. \subsection{Recoupled Constraints} The successes of related works \cite{dusmanu2019d2,revaud2019r2d2,tyszkiewicz2020disk} suggest that coupling detection and description can improve feature learning; however, inappropriate coupling strategies bring up the problems mentioned in Section~\ref{sec:sec3.1}. To tackle these problems, we recouple them with a mutual weighting strategy, in which the gradients of the weights are `stopped', as illustrated in Fig.~\ref{fig:fig1}. Specifically, we again sample $N$ pairs of corresponding descriptors $\{\boldsymbol{d}_i,\boldsymbol{d}_{i'}\}$ and their detection scores $\{s_i,s_{i'}\}$. For the detection, a weight ${a}(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}'))$ that is negatively correlated with the matching risk $\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D}_N,\boldsymbol{D}'_N)$ encourages the more reliable descriptors to be detected with higher probability.
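The basic detection terms map directly onto standard pooling operators. A minimal sketch (single score map, batch handling omitted; kernel sizes as above, function names illustrative):

```python
import torch
import torch.nn.functional as F

def peak_loss(S, k=17):
    """Basic peaking loss (a reading of Eqn. (9)): local averages of the
    score map should stay low while local maxima approach 1."""
    S = S.unsqueeze(0).unsqueeze(0)                      # (1, 1, H, W)
    ap = F.avg_pool2d(S, k, stride=1, padding=k // 2)    # AP
    mp = F.max_pool2d(S, k, stride=1, padding=k // 2)    # MP
    return (ap ** 2 + (1 - mp) ** 2).mean()

def rep_loss(S, Sw, k=17, stride=8):
    """Basic repeatability loss: cosine similarity between flattened,
    normalized 17x17 windows of the two score maps."""
    def patches(x):
        return F.normalize(
            F.unfold(x.unsqueeze(0).unsqueeze(0), k, stride=stride), dim=1)
    return (1 - (patches(S) * patches(Sw)).sum(dim=1)).mean()
```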
Furthermore, learning from the handcrafted cross-modal features, which focus on modal-invariant texture extraction, we introduce an edge-based prior $\boldsymbol{M}(\boldsymbol{I})$ to prevent the interest points from lying on smooth areas. The recoupled peaking loss $\mathcal{L}_{\text{peak-R}}$ can thus be formulated as: \begin{align} {a}(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}'))\triangleq\ &\bigg[1-\frac{\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')}{\mathbb{E}_i(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}'))}\bigg]_+,\\ \boldsymbol{M}(\boldsymbol{I})=\ &\bigg[1-\frac{\|\nabla^2\boldsymbol{I}\|}{\mathbb{E}_i(\|\nabla^2\boldsymbol{I}\|[i])}\bigg]_+,\\ \begin{split} \label{eqn:eqn12} \mathcal{L}_{\text{peak-R}}(\boldsymbol{S},\boldsymbol{D},\boldsymbol{D}',\boldsymbol{I})=\ &\mathcal{L}_{\text{peak-B}}(\boldsymbol{S})+\mathbb{E}_i((\boldsymbol{M}(\boldsymbol{I})\boldsymbol{S})[i]^2)+\\ &\mathbb{E}_i(a(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D}_N,\boldsymbol{D}_N'))(1-s_i)^2), \end{split} \end{align} where $[\cdot]_+$ denotes the rectified linear unit (ReLU); $\triangleq$ denotes the `stop gradient equality'; the edge map of image $\boldsymbol{I}$ is computed as $\|\nabla^2\boldsymbol{I}\|$ with the Laplacian operator. The weights ${a}(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}'))$ and $\boldsymbol{M}(\boldsymbol{I})$ are visualized in Fig.~\ref{fig:fig2} (b) and (c), respectively. \begin{figure}[] \centering \includegraphics[width=0.9\linewidth]{figs/feature_vis.pdf} \caption{Visualization of the weights of the input image pair (a) (d) in our recoupled constraints, where darker red indicates a larger relative value. (a) The input visible image. (b) Visualization of $a(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D}_N,\boldsymbol{D}_N'))$, in which batches of descriptors are randomly sampled. (c) The edge map of (a) detected by the Laplacian operator. (d) The warped infrared image.
(e) Similarity of corresponding descriptors, used to weight the repeatability learning. (f) Visualization of $c(s_i,s_{i'})$, employed to guide the descriptor learning. } \label{fig:fig2} \end{figure} There are several key differences between our peaking constraints and previous works. Firstly, the recoupling weight $a(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}'))$ is detached from back propagation, so it does not directly affect the learning of the description~\cite{dusmanu2019d2,cui2021cross,revaud2019r2d2,tyszkiewicz2020disk}. Secondly, $a(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}'))$ only constrains the peaking of the detection, so it does not suppress the detection probability of hard descriptors with large $\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}')$, which solves the problems mentioned above~\cite{dusmanu2019d2,cui2021cross,revaud2019r2d2,tyszkiewicz2020disk}. Thirdly, the edge-based prior $\boldsymbol{M}(\boldsymbol{I})$ is introduced to balance the peaking constraint, instead of forcing the model to detect corners or edges~\cite{detone2018superpoint,liu2021dgd,barroso2019key}. Moreover, the weights are dynamically normalized by their expectation, so they do not collapse to zero and keep functioning. In this way, the detection becomes explainable and reliable, and it can be trained more stably without the risk of falling into a sub-optimum,~\emph{i.e}\onedot, a trivial solution. Multimodal sensors may display the same object in totally different forms, which means that requesting repeatability in such areas is irrational. Thus, the repeatability also needs guidance from the description. For two corresponding patches of the detection score maps, we compute the average cosine similarity of their descriptors to estimate the local similarity.
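The two detached weights discussed above, $a(\cdot)$ and $\boldsymbol{M}(\boldsymbol{I})$, can be sketched in a few lines. The Laplacian kernel and the expectation normalization follow the equations, while the function names are illustrative:

```python
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def edge_prior(I):
    """M(I) = [1 - |lap(I)| / E(|lap(I)|)]_+ : large on smooth areas,
    where the detection scores are then pushed towards zero."""
    lap = F.conv2d(I.unsqueeze(0).unsqueeze(0), LAPLACIAN, padding=1).abs()
    return torch.relu(1 - lap / lap.mean().clamp_min(1e-8)).squeeze()

def risk_weight(risk):
    """a(R_i) = [1 - R_i / E(R)]_+ , detached so that no gradient flows
    back into the description branch ('stop gradient')."""
    risk = risk.detach()
    return torch.relu(1 - risk / risk.mean().clamp_min(1e-8))
```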
Then, the local similarity is used as a weight to modulate the recoupled repeatability loss: \begin{align} b(p;\boldsymbol{D},\boldsymbol{D}_w')\triangleq\ &\mathbb{E}_{i}(\boldsymbol{D}[p[i]]^T\boldsymbol{D}'_w[p[i]]), \\ \label{eqn:eqn14} \begin{split} \mathcal{L}_{\text{rep-R}}(\boldsymbol{S},\boldsymbol{S}'_w)=\ &\mathbb{E}_p( b(p;\boldsymbol{D},\boldsymbol{D}_w')(1-\\ &\boldsymbol{S}[p]^T\boldsymbol{S}'_w[p])), \end{split} \end{align} where $\boldsymbol{D}_w'$ denotes the warped dense descriptor map. Note that the detached weight $b(p;\boldsymbol{D},\boldsymbol{D}_w')$ likewise does not affect the optimization of the description. The term $\boldsymbol{D}[p[i]]^T\boldsymbol{D}'_w[p[i]]$ in the weight $b(p;\boldsymbol{D},\boldsymbol{D}_w')$ is visualized in Fig.~\ref{fig:fig2} (e). As discussed before, since the flattening and peaking of the detection are safely defined and balanced in the recoupled peaking loss, the detection will not slip into a trivial solution,~\emph{e.g}\onedot, all zeros. Therefore, it is worth recoupling the detection to the description. In other words, the matching risk can be weighted with the detection scores so that the descriptors with high detection probability attract more attention in the optimization. The recoupled description loss with the detached weight can be formulated as: \begin{align} c(s_i,s_{i'})\triangleq\ &s_is_{i'}, \\ \mathcal{L}_{\text{desc-R}}(\boldsymbol{D}_N,\boldsymbol{D}'_N)=\ &\operatorname{\mathbb{E}}_i(c(s_i,s_{i'})\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D}_N,\boldsymbol{D}'_N)), \end{align} where $c(s_i,s_{i'})$ is visualized in Fig.~\ref{fig:fig2} (f).
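The effect of the stop gradient in $c(s_i,s_{i'})$ can be verified directly: after back propagation, the risk receives a gradient while the detection scores receive none. A minimal sketch (illustrative names):

```python
import torch

def recoupled_desc_loss(risk, s, s_prime):
    """L_desc-R: the weight c(s_i, s_{i'}) = s_i * s_{i'} is detached,
    so the description loss cannot push the detection scores around."""
    c = (s * s_prime).detach()
    return (c * risk).mean()

s = torch.tensor([0.5], requires_grad=True)     # detection score
risk = torch.tensor([2.0], requires_grad=True)  # matching risk R(d_i; D, D')
loss = recoupled_desc_loss(risk, s, s)
loss.backward()
```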
Finally, the total loss function of our ReDFeat is the sum of the recoupled description loss, the recoupled peaking losses (Eqn.~\eqref{eqn:eqn12}) for both images, and the recoupled repeatability loss (Eqn.~\eqref{eqn:eqn14}), with only one hyperparameter $\lambda$: \begin{equation} \begin{aligned} \mathcal{L}=\ &\mathcal{L}_{\text{desc-R}}(\boldsymbol{D}_N,\boldsymbol{D}'_N)+\mathcal{L}_{\text{peak-R}}(\boldsymbol{S},\boldsymbol{D},\boldsymbol{D}',\boldsymbol{I})+\\ &\mathcal{L}_{\text{peak-R}}(\boldsymbol{S}',\boldsymbol{D}',\boldsymbol{D},\boldsymbol{I}')+\lambda \mathcal{L}_{\text{rep-R}}(\boldsymbol{S},\boldsymbol{S}'_w). \end{aligned} \end{equation} While the weights ${a}(\mathcal{R}(\boldsymbol{d}_i;\boldsymbol{D},\boldsymbol{D}'))$ and $b(p;\boldsymbol{D},\boldsymbol{D}_w')$ are generated by the description and recoupled to the detection, the weight $c(s_i,s_{i'})$ takes the converse direction in the loss. This loss, based on the mutual weighting strategy, stabilizes and boosts the feature learning. \subsection{Network Architecture} \textbf{Architecture}. Most joint detection and description methods share similar architectures, which include an encoder and a detector. R2D2 proposes a lightweight encoder that contains only $9$ convolutional layers and a naive linear detector to output 128-dimensional dense descriptors and a score map, which is cheap in time and memory. Therefore, we adopt this architecture as our raw architecture. The $6$ shallow layers are assigned to the adapters, which are unshared in order to eliminate the variance between modalities. The raw encoder in our architecture consists of the last $3$ convolutional layers, and the raw detector is kept the same as in R2D2. Note that the encoder and the detector are weight-sharing. \textbf{Super Detector}. Our recoupling constraints mainly embrace the detection. Limited by its small receptive field, the raw linear detector cannot capture the neighborhood and global information required to fulfill the peaking loss. Therefore, we propose Super Detector, which has two branches like R2D2.
One branch is the raw detector $\theta_{d0}$, which models the prior probability $p({kp}_i|\boldsymbol{d}_i)$ that point $i$ is a keypoint as $p({kp}_i|\boldsymbol{d}_i)=\text{Sigmoid}(\theta_{d0}(\boldsymbol{d}_i))$. The other branch models the conditional probability $p(dp_i|\boldsymbol{D},kp_i)$ that a keypoint can be detected globally. \begin{figure}[] \centering \includegraphics[width=0.6\linewidth]{figs/network.pdf} \caption{The branch for conditional probability estimation. Conv $3\times3$, $64$, $1$ denotes a convolutional layer with a kernel size of $3\times3$, an output channel of $64$ and a dilation of $1$. The local softmax in the learnable non-maximum suppression block is formulated as Eqn.~\eqref{eqn:eqn18}.} \label{fig:fig3} \end{figure} Since $p({dp}_i|\boldsymbol{D},{kp}_i)$ is related to global information, this branch should possess a larger receptive field obtained by stacking more convolutional layers. Moreover, the score of a detected point should be a local maximum, so we propose the learnable non-maximum suppression (LNMS) layer shown in Fig.~\ref{fig:fig3}. In the LNMS, features are first transformed by a learnable convolutional layer. Then, local maxima in the transformed feature map are detected by the local softmax operation,~\emph{i.e}\onedot, Eqn.~\eqref{eqn:eqn18}. Finally, statistical maxima across the batch and the channels are further mined by BN and IN with ReLU. Briefly, for an input feature map $\boldsymbol{x}$, the forward propagation in LNMS can be described as: \begin{align} \nonumber \boldsymbol{x} &= \text{Conv}(\boldsymbol{x}),\\ \label{eqn:eqn18} \boldsymbol{x} &= \text{Exp}(\boldsymbol{x})/\text{AP3}(\text{Exp}(\boldsymbol{x})),\\ \nonumber \boldsymbol{x} &= \text{ReLU}(\text{BN}(\boldsymbol{x})),\\ \nonumber \boldsymbol{x} &= \text{ReLU}(\text{IN}(\boldsymbol{x})), \end{align} where AP3 denotes average pooling with a kernel size of $3$, and BN and IN represent batch normalization and instance normalization, respectively.
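A hypothetical PyTorch module implementing one LNMS block of Eqn.~\eqref{eqn:eqn18} could look as follows; the channel width is illustrative, and in practice the exponential may additionally be stabilized by subtracting a local maximum:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LNMS(nn.Module):
    """One learnable non-maximum suppression block (Eqn. (18)):
    conv -> local softmax over 3x3 windows -> BN + ReLU -> IN + ReLU."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn = nn.BatchNorm2d(channels)
        self.inorm = nn.InstanceNorm2d(channels, affine=True)

    def forward(self, x):
        x = self.conv(x)
        e = torch.exp(x)  # local softmax: Exp(x) / AP3(Exp(x))
        x = e / F.avg_pool2d(e, 3, stride=1, padding=1, count_include_pad=False)
        x = F.relu(self.bn(x))     # statistical maxima across the batch
        return F.relu(self.inorm(x))  # ... and within each channel

block = LNMS(8)
out = block(torch.randn(2, 8, 16, 16))
```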
Finally, the branch is constructed by cascading convolutional layers and several LNMS blocks, as shown in Fig.~\ref{fig:fig3}, and it outputs a two-channel feature map. After channel softmax activation, the first channel of the final feature map is taken as $p({dp}_i|\boldsymbol{D},{kp}_i)$. The posterior probability $p({kp}_i|\boldsymbol{D},{dp}_i)$ that a detected point is an interest point,~\emph{i.e}\onedot, $s_i$, can be approximately computed as $p({kp}_i|\boldsymbol{d}_i)p({dp}_i|\boldsymbol{D},{kp}_i)$. \section{Benchmark} The lack of a benchmark is one of the major reasons for the slow development of multimodal feature learning. Therefore, building a benchmark might be even more imperative than a robust algorithm. In this paper, we collect three kinds of cross-modal images, namely VIS-NIR, VIS-IR and VIS-SAR, to build a benchmark for cross-modal feature learning. The features can be evaluated in feature matching and image registration pipelines. Basic information about the collected data is shown in Table~\ref{tab:data}. \subsection{Dataset} \textbf{VIS-NIR}. Visible and near-infrared (NIR) image pairs with an average size of $983\times686$ are collected from the RGB-NIR scene dataset~\cite{brown2011multi}. The dataset covers various scenes, including country, field, forest, indoor, mountain, old building, street, urban, and water. Most image pairs are photographed in special conditions and can be well registered. We randomly split the images from the $9$ scenes into a training set and a test set with a ratio of $3:1$, which results in a training set of $345$ image pairs. The ground truths of the test set are manually validated and filtered for more reliable evaluation, leaving $128$ image pairs in the test set. \textbf{VIS-IR}. We collect $265$ roughly registered pairs of visible and long-wave infrared (IR) images with an average size of $533\times321$.
$44$ static image pairs from RGB-LWIR~\cite{aguilera2015lghd} are mainly shot on buildings during the day. The other $221$ pairs of video frames come from RoadScene~\cite{xu2020aaai}, in which more complex objects,~\emph{e.g}\onedot, cars and people, are captured both day and night. We randomly select $47$ image pairs as the test set and leave the rest as the training set. Since the overlapping image pairs cannot be registered with a homography matrix due to the greatly varying depth of objects, we manually mark about $16$ landmarks per image pair for reprojection error estimation. \textbf{VIS-SAR}. Optical-SAR~\cite{xiang2020automatic} provides aligned gray-level and synthetic aperture radar image pairs with a uniform size of $512\times512$, which are remotely sensed by satellite and cover field and town scenes. There are $2011$ and $424$ image pairs in the training set and the test set, respectively. The dataset and its split are incorporated into our benchmark without changes. Note that it is hard to validate the ground truth or label landmarks for this subset due to the fuzziness of SAR images. \begin{table}[] \setlength{\tabcolsep}{1.4mm} \renewcommand\arraystretch{1.5} \centering \caption{Basic information of the subsets in our benchmark.
The number, the number of channels, the size and the characteristics of the collected images are reported.} \begin{tabular}{l|cc|cc|c|l} \hline \multirow{2}{*}{} & \multicolumn{2}{c|}{Number} & \multicolumn{2}{c|}{Channel} & \multirow{2}{*}{Size} & \multicolumn{1}{c}{\multirow{2}{*}{Character}} \\ \cline{2-5} & \multicolumn{1}{c|}{Train} & Test & \multicolumn{1}{c|}{VIS} & * & & \multicolumn{1}{c}{} \\ \hline VIS-NIR & \multicolumn{1}{c|}{345} & 128 & \multicolumn{1}{c|}{3} & 1 & $983\times686$ & Multiple scenes \\ \hline VIS-IR & \multicolumn{1}{c|}{221} & 47 & \multicolumn{1}{c|}{3} & 1 & $533\times321$ & Road video at night \\ \hline VIS-SAR & \multicolumn{1}{c|}{2011} & 424 & \multicolumn{1}{c|}{1} & 1 & $512\times512$ & Satellite remote sensing \\ \hline \end{tabular} \label{tab:data} \end{table} \subsection{Evaluation Protocol} \textbf{Random Transform}. Cross-modal features should carry both geometric and modal invariance. Thus, we generate homography transforms $\boldsymbol{H}$ by cascading a random perspective with a distortion scale in $[0,0.2]$, a random rotation in $[-10^\circ,10^\circ]$ and a random scaling in $[0.8,1.0]$. The transforms $\boldsymbol{H}$ are then applied to the aligned raw test set to generate the warped test set. \textbf{Feature Matching}. To generate sufficient matches, the detected keypoints should be repeatable and the extracted descriptors should be robust. To evaluate the repeatability of the keypoints, we compute the number of correspondences (Corres) of the image pairs and the repeatable rate (RR),~\emph{i.e}\onedot, the ratio of the number of correspondences to the number of detected keypoints. Furthermore, we match the descriptors with bidirectional nearest neighbor matching and calculate the number of correct matches.
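A simplified, one-directional sketch of these matching metrics is given below (the benchmark additionally computes them symmetrically across the image pair, averages, and evaluates several pixel thresholds):

```python
import numpy as np

def match_metrics(kp1, kp2_proj, matches, thr=3.0):
    """Corres, RR and correct-match counting for one direction.
    kp1: (N, 2) keypoints of image 1; kp2_proj: (M, 2) keypoints of
    image 2 reprojected into image 1; matches: index pairs obtained
    from bidirectional nearest-neighbor matching of the descriptors."""
    d = np.linalg.norm(kp1[:, None] - kp2_proj[None], axis=-1)  # (N, M) pixel distances
    corres = int((d.min(axis=1) < thr).sum())   # keypoints with a counterpart
    rr = corres / max(len(kp1), 1)              # repeatable rate
    correct = sum(1 for i, j in matches if d[i, j] < thr)
    ms = correct / max(len(kp1), 1)             # matching-score numerator / detections
    return corres, rr, ms

corres, rr, ms = match_metrics(np.array([[0.0, 0.0], [10.0, 10.0]]),
                               np.array([[0.5, 0.0], [50.0, 50.0]]),
                               [(0, 0), (1, 1)])
```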
Following the definition in~\cite{detone2018superpoint,revaud2019r2d2}, we report the matching score (MS),~\emph{i.e}\onedot, the ratio of the number of correct matches over the number of detected keypoints in the overlap, to evaluate the robustness of the descriptors. Note that the metrics are validated at different pixel thresholds, and RR and MS are computed symmetrically across the pair of images and averaged. \textbf{Image Registration}. Image registration is the ultimate goal of local feature learning. The matched features are used to estimate the homography transform $\boldsymbol{H}'$ with RANSAC from the OpenCV library, where the reprojection threshold is set to $10$ pixels and the number of iterations to $100$K. Since the ground-truth transform $\boldsymbol{H}$ is provided, we compute the reprojection error $\text{RE}_H$ as: \begin{equation} \text{RE}_{H}(\boldsymbol{H},\boldsymbol{H}') = \| \boldsymbol{H}[:]- \boldsymbol{H}'[:] \|, \end{equation} where $\boldsymbol{H}[:]$ denotes the flattened vector of $\boldsymbol{H}$. However, this metric would not be indicative for the VIS-IR subset, because the raw test image pairs are not well aligned. Therefore, we introduce another method to estimate the reprojection error with the landmarks as: \begin{equation} \text{RE}_{M}(\boldsymbol{H}',\boldsymbol{M},\boldsymbol{M}') = \mathbb{E}_i\|\tau(\boldsymbol{m}_i,\boldsymbol{H}')-\boldsymbol{m}_{i'}\|, \end{equation} where $\boldsymbol{M}$ and $\boldsymbol{M}'$ denote the sets of landmarks $\boldsymbol{m}$ on the two images, and $\tau(\boldsymbol{m}_i,\boldsymbol{H}')$ represents the reprojected point of $\boldsymbol{m}_i$. A registration is considered successful if the RE is smaller than a threshold. The successfully registered images (SR) are counted and the successful registration rate (SRR) is calculated on each subset. \section{Experiments} \subsection{Implementation} We implement our ReDFeat in PyTorch~\cite{paszke2019pytorch} and Kornia~\cite{riba2020kornia}.
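The registration protocol above (RANSAC estimation with a $10$-pixel threshold and $100$K iterations, plus the two reprojection errors) can be sketched as below. This is an illustrative sketch rather than the benchmark code; the helper names are ours, and OpenCV is imported inside the estimator so the metric helpers stay NumPy-only.

```python
import numpy as np

def estimate_homography(pts_src, pts_dst):
    """Estimate H' from matched keypoints with RANSAC, following the protocol values."""
    import cv2  # OpenCV, as used in the protocol
    H, _mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC,
                                  ransacReprojThreshold=10.0, maxIters=100000)
    return H

def re_h(H_gt, H_est):
    """RE_H: L2 norm between the flattened ground-truth and estimated homographies."""
    return float(np.linalg.norm(H_gt.ravel() - H_est.ravel()))

def re_m(H_est, marks_a, marks_b):
    """RE_M: mean distance between landmarks of image A reprojected by H' and
    the corresponding landmarks of image B."""
    ones = np.ones((marks_a.shape[0], 1))
    proj = np.hstack([marks_a, ones]) @ H_est.T   # homogeneous reprojection
    proj = proj[:, :2] / proj[:, 2:3]             # back to pixel coordinates
    return float(np.linalg.norm(proj - marks_b, axis=1).mean())
```

A registration would then be counted as successful when the relevant RE falls below the chosen threshold.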
Training samples of size $192\times192$ are obtained by the cropping, normalization and random perspective transform mentioned above. The network is trained for about 10000 iterations with a batch size of 2 on an NVIDIA RTX3090 GPU. The Adam optimizer~\cite{kingma2014adam} with a weight decay of $0.0005$ is employed to optimize the loss. Its learning rate is initialized at $0.001$ and decays to $0$ at the last epoch. The last checkpoint of training is used for evaluation. Our ReDFeat is compared with several counterparts in our benchmark, including SIFT~\cite{lowe2004distinctive}, RIFT~\cite{li2018rift}, DOG+HN~\cite{mishchuk2017working}, R2D2~\cite{revaud2019r2d2} and CMMNet~\cite{cui2021cross}. SIFT and RIFT are extracted with the open-source codes and default settings. HardNet and R2D2, which are deep learning features for visible images, are adapted to the multimodal scenario by specializing the parameters of the first $6$ convolutional layers for each modality. CMMNet, which is not open-source, is implemented on the codebase of D2Net~\cite{dusmanu2019d2}.
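The random perspective, rotation and scaling used for the warped test set and training augmentation can be composed into a single homography, e.g., as in the following NumPy sketch. The parameter ranges follow the text, while the composition order, centring and helper name are our own assumptions.

```python
import numpy as np

def random_homography(h, w, rng, max_distortion=0.2, max_rot_deg=10.0,
                      scale_range=(0.8, 1.0)):
    """Compose a random perspective, rotation and isotropic scaling into one 3x3
    homography. Ranges follow the evaluation protocol; composition order is a choice."""
    # Perspective part: perturb the two projective terms, normalised by image size.
    persp = np.eye(3)
    persp[2, :2] = rng.uniform(-max_distortion, max_distortion, 2) / max(h, w)
    # Rotation about the image centre.
    ang = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
    c, s = np.cos(ang), np.sin(ang)
    centre = np.array([[1.0, 0.0, w / 2], [0.0, 1.0, h / 2], [0.0, 0.0, 1.0]])
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    # Isotropic scaling.
    k = rng.uniform(*scale_range)
    scale = np.diag([k, k, 1.0])
    return centre @ rot @ scale @ np.linalg.inv(centre) @ persp
```

In practice the paper relies on Kornia for such warps; this standalone version only illustrates the geometry of the sampled transforms.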
\begin{table}[t] \centering \setlength{\tabcolsep}{2.0mm} \renewcommand\arraystretch{1.5} \caption{Training stability of joint detection and description methods with 1024 keypoints on VIS-IR.} \begin{tabular}{l|c|c|c} \hline \# Matches (MS) & R2D2 & CMMNet & ReDFeat \\ \hline Pre-trained & 36 (3\%) & 42 (4\%) & 171 (16\%) \\ \hline Scratch & 0 (0\%) & 1 (0\%) & 160 (15\%) \\ \hline \end{tabular} \label{tab:tab1} \end{table} \begin{figure*}[t] \centering \includegraphics[width = 3.3cm]{figs/ablation/vis.png} \includegraphics[width = 3.3cm]{figs/ablation/vis_r2d2.png} \includegraphics[width = 3.3cm]{figs/ablation/vis_rr.png} \includegraphics[width = 3.3cm]{figs/ablation/vis_rf.png} \includegraphics[width = 3.3cm]{figs/ablation/vis_ff.png}\\ \vspace{0.05in} \includegraphics[width = 3.3cm]{figs/ablation/ir.png} \includegraphics[width = 3.3cm]{figs/ablation/ir_r2d2.png} \includegraphics[width = 3.3cm]{figs/ablation/ir_rr.png} \includegraphics[width = 3.3cm]{figs/ablation/ir_rf.png} \includegraphics[width = 3.3cm]{figs/ablation/ir_ff.png}\\ \raggedright {\footnotesize \hspace{5.3cm} R2D2 \hspace{2.6cm} +{\textcircled{\scriptsize 1}} \hspace{2.6cm} +{\textcircled{\scriptsize 1}}+{\textcircled{\scriptsize 2}} \hspace{2cm} +{\textcircled{\scriptsize 1}}+{\textcircled{\scriptsize 2}}+{\textcircled{\scriptsize 3}}} \caption{Visualization of detection scores in the ablation study. R2D2 is chosen as the baseline and the others are our methods consisting of different proposed components. Darker red denotes a higher detected probability. The detection score of R2D2 is computed by multiplying the repeatability and the reliability. } \label{fig:abla} \end{figure*} \subsection{Ablation Study} \textbf{Training Stability}. Since training stability is the key problem that our recoupling strategy aims to tackle, we train CMMNet, R2D2 and our method both from scratch and from pre-trained models to confirm our motivation.
CMMNet adopts VGG-16 pre-trained on ImageNet as its initialization. For comparison, we use the official pre-trained model for visible images to initialize R2D2 for cross-modal images. For ReDFeat, we simply employ self-supervised learning~\cite{detone2018superpoint}, fed with augmented visible images, to obtain a pre-trained model. The mean number of correct matches and the MS of $1024$ keypoints on the VIS-IR subset are shown in Table~\ref{tab:tab1}. As we can see, CMMNet and R2D2 fail to learn discriminative features without pre-trained models, because the joint optimization of their naively coupled constraints is ill-posed. By contrast, our ReDFeat can be readily trained from scratch, while still gaining a slight improvement from the self-supervised pre-trained model, which demonstrates the soundness of our formulation. Therefore, while we keep initializing the training of CMMNet and R2D2 with pre-trained models in subsequent experiments, our ReDFeat is always trained from scratch. \textbf{Impact of $\lambda$}. The weight of $\mathcal{L}_{\text{rep-R}}$, $\lambda$, is the only hyperparameter of ReDFeat, and it plays a crucial role in balancing detection and description in our recoupling strategy. To investigate its impact, we train ReDFeat with different $\lambda$ values and report the relevant metrics of $1024$ keypoints on VIS-IR in Table~\ref{tab:tab2}. Overall, $\lambda$ no smaller than $8$ brings out the desirable registration performance that the community focuses on. This demonstrates that the repeatability, constrained by $\mathcal{L}_{\text{rep-R}}$ and weighted by $\lambda$, imposes a strong impact on the registration performance. However, the repeatability not only forces the detections to be similar but also narrows the gap between the two descriptor maps and decreases the distinctiveness of the descriptors, as evidenced by the decrease in correct matches.
Therefore, the image registration performance peaks at $\lambda=8$, and this setting is kept in subsequent experiments. \begin{table}[t] \centering \setlength{\tabcolsep}{2.0mm} \renewcommand\arraystretch{1.5} \caption{Impact of $\lambda$ with 1024 keypoints on VIS-IR.} \begin{tabular}{l|c|c|c|c|c|c|c} \hline $\lambda$ & 0.01 & 4 & 8 & 12 & 16 & 20 & 24 \\ \hline \# Matches & 154 & 163 & 160 & 135 & 126 & 132 & 129 \\ \hline MS (\%) & 15 & 16 & 14 & 13 & 13 & 13 & 13 \\ \hline RE$_M$ & 4.22 & 3.63 & 2.75 & 2.77 & 3.31 & 2.75 & 3.13 \\ \hline \# SR & 36 & 44 & 46 & 44 & 45 & 43 & 44 \\ \hline \end{tabular} \label{tab:tab2} \end{table} \textbf{Proposed Components}. We propose three novel modifications: {\small\textcircled{\scriptsize 1}} basic constraints, {\small\textcircled{\scriptsize 2}} recoupled constraints and {\small\textcircled{\scriptsize 3}} new networks for multimodal feature learning. To evaluate the efficiency of our proposals, we choose R2D2, which provides the raw network architecture for the losses, as the baseline, and the modifications are successively applied in this framework. As we can see in Table~\ref{tab:tab3}, our basic loss is more suitable for multimodal feature learning and remarkably improves the baseline on all metrics. The recoupled constraints obtain further improvements on the feature matching tasks, while the registration performance is comparable to the former. After the new network,~\emph{i.e}\onedot, the Super Detector, is equipped, state-of-the-art results are achieved. Thus, all the proposed components are shown to have positive effects.
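As a rough illustration of how the recoupled constraints differ from naively coupled ones, the mutual weighting with detached weights can be sketched in PyTorch as below. The loss maps, normalisation and function name are simplifications of ours, not the paper's exact formulation; the key point is only that the weights carry no gradient.

```python
import torch

def recoupled_losses(det_a, det_b, desc_loss_map, rep_loss_map, peak_loss_map, lam=8.0):
    """Mutual weighting sketch: detection scores weight the description loss, and
    descriptor quality weights the detection losses. Both weights are detached from
    back-propagation, so an indistinct feature's detected probability is not directly
    suppressed through the weighting path."""
    # Joint detected probability as a weight for the description loss (no gradient).
    w = (det_a * det_b).detach()
    desc_loss = (w * desc_loss_map).sum() / w.sum().clamp(min=1e-6)
    # Descriptor quality as a weight for the detection losses (no gradient).
    q = (1.0 - desc_loss_map).detach().clamp(min=0)
    det_loss = (q * (lam * rep_loss_map + peak_loss_map)).mean()
    return desc_loss + det_loss
```

Because `w` is detached, the detection maps receive gradients only through the detection losses themselves, which is one way to realise the training stability discussed above.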
\begin{table}[t] \centering \setlength{\tabcolsep}{1.5mm} \renewcommand\arraystretch{1.5} \caption{Ablation study of 1024 keypoints on VIS-IR.} \begin{tabular}{l|c|c|c|c|c|c} \hline & \# Corres & RR(\%) & \# Matches & MS(\%) & RE$_M$ & \# SR\\ \hline R2D2 & 213 & 16 & 36 & 4 & 3.30 & 41 \\ \hline +{\textcircled{\scriptsize 1}} & 307 & 30 & 83 & 8 & 2.81 & 45 \\ \hline +{\textcircled{\scriptsize 1}}+{\textcircled{\scriptsize 2}} & 346 & 33 & 112 & 11 & 2.93 & 46 \\ \hline +{\textcircled{\scriptsize 1}}+{\textcircled{\scriptsize 2}}+{\textcircled{\scriptsize 3}} & 415 & 40 & 160 & 15 & 2.75 & 46 \\ \hline \end{tabular} \label{tab:tab3} \end{table} To gain insight into the impact of our proposals, we visualize the detection score maps, which we discuss throughout our formulation, under different configurations in Fig.~\ref{fig:abla}. As shown in the second and third columns of images, while the local peaking loss guides R2D2 to generate discrete detection scores, our basic constraints lead to bulks of detection. This can be explained by $\lambda=8$ moderating the impact of local peaking in the basic constraints. These lumped detected features tend to repeat and be matched within an acceptable error, so the matching performance is improved. After recoupling the constraints, the edge-based prior makes the detection gather in areas with rich textures, which is expected to yield further improvements. Finally, the Super Detector, equipped with learnable local non-maximum suppression blocks, introduces a strong inductive bias to discretize the detection score. The discrete detection score tightens the weighted description loss and the repeatability loss, which is believed to help the joint learning and improve the accuracy of keypoint localization.
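For intuition about the local non-maximum suppression just discussed, a fixed (non-learnable) max-pooling variant can be sketched in PyTorch. The learnable blocks in the Super Detector replace this hard comparison; the helper name and window size are ours.

```python
import torch
import torch.nn.functional as F

def local_nms(score, window=5):
    """Keep only local maxima of a detection score map within a sliding window.

    score: (B, 1, H, W) tensor of detected probabilities. Every pixel that is not
    the maximum of its window is zeroed, discretizing the score map in the spirit
    of the Super Detector's learnable local non-maximum suppression blocks."""
    local_max = F.max_pool2d(score, kernel_size=window, stride=1, padding=window // 2)
    return score * (score == local_max).float()
```

A hard equality comparison like this is not differentiable, which is one motivation for making the suppression learnable instead.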
\renewcommand\thefigure{7} \begin{figure*}[b] \centering \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/SIFT_70.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/RIFT_70.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/HN_70.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/R2D2_70.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/CMMNet_70.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/ReDFeat_70.png}\\ \vspace{0.12em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/SIFT_106.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/RIFT_106.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/HN_106.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/R2D2_106.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/CMMNet_106.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_nir_vis/ReDFeat_106.png}\\ \vspace{0.12em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/SIFT_80.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/RIFT_80.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/HN_80.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/R2D2_80.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/CMMNet_80.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/ReDFeat_80.png}\\ \vspace{0.12em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/SIFT_223.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/RIFT_223.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/HN_223.png} 
\hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/R2D2_223.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/CMMNet_223.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_ir_vis/ReDFeat_223.png}\\ \vspace{0.12em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/SIFT_69.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/RIFT_69.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/HN_69.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/R2D2_69.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/CMMNet_69.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/ReDFeat_69.png}\\ \vspace{0.12em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/SIFT_1.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/RIFT_1.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/HN_1.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/R2D2_1.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/CMMNet_1.png} \hspace{-0.6em} \includegraphics[width = 2.8cm]{figs/matching/matching_sar_vis/ReDFeat_1.png}\\ \raggedright {\footnotesize \hspace{1.4cm} SIFT\hspace{2.2cm} RIFT\hspace{1.90cm} DOG+HN\hspace{1.95cm} R2D2\hspace{1.95cm} CMMNet\hspace{1.80cm} ReDFeat} \caption{Visualization of matching performance. 1024 keypoints are extracted by different algorithms and marked in red `+'. Descriptors are matched by the bidirectional nearest neighbor matching. 
Validated at a threshold of 3px, the correct matches are linked by green lines.} \label{fig:mp_viz} \end{figure*} \renewcommand\thefigure{5} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figs/matching/match_thres} \caption{Feature matching performance of 1024 keypoints of the state-of-the-art methods in our benchmark. Repeatable rate (RR) and matching score (MS) computed at thresholds up to 10 pixels are drawn as curves.} \label{fig:matchthres} \end{figure} \renewcommand\thefigure{6} \begin{figure}[!ht] \centering \includegraphics[width=0.95\linewidth]{figs/matching/NIR_match.pdf}\\ \footnotesize{(a) Matching Performance on VIS-NIR}\\ \includegraphics[width=0.95\linewidth]{figs/matching/IR_match.pdf}\\ \footnotesize{(b) Matching Performance on VIS-IR}\\ \includegraphics[width=0.95\linewidth]{figs/matching/SAR_match.pdf}\\ \footnotesize{(c) Matching Performance on VIS-SAR}\\ \caption{Feature matching performance of 1024, 2048 and 4096 keypoints at 3px. The numbers of extracted keypoints, correspondences and correct matches are drawn in bars with different colors. Matching scores for the three numbers of keypoints are shown on the bars.} \label{fig:mp} \end{figure} \subsection{Feature Matching Performance} The feature matching performance of 1024 keypoints of SIFT, RIFT, DOG+HN, R2D2, CMMNet, and our ReDFeat on the three subsets is quantified in Fig.~\ref{fig:matchthres}, in which RR and MS are selected as the primary metrics and calculated at varying thresholds. As we can see, we achieve state-of-the-art RR and MS at all thresholds on the three subsets, which demonstrates that we obtain more robust descriptors while detecting more precise and repeatable keypoints. As for MS, we gain large margins on all subsets at varying thresholds. Especially on the most challenging subset, VIS-SAR, our MS is several times higher than that of the second-best method, CMMNet.
It is worth mentioning that R2D2, even when initialized with pre-trained models, still fails to optimize the description on VIS-SAR, which confirms the significance of our recoupling strategy. More quantitative performance of 1024, 2048 and 4096 keypoints at a threshold of 3px is shown in Fig.~\ref{fig:mp}. The matching score, which is the key index of feature matching performance, is deliberately highlighted on the bars. Except for the number of correspondences on VIS-IR and VIS-SAR, our ReDFeat achieves the best scores on all metrics at 3px. Note that the handcrafted detectors based on local-maximum searching may fail to extract large numbers of keypoints for some image pairs, which demonstrates the superiority and flexibility of learnable detection. Qualitative performance is shown in Fig.~\ref{fig:mp_viz}. Compared with R2D2 and CMMNet, our detected points appear more rationally distributed in the textured areas,~\emph{i.e}\onedot, densely but not excessively so. However, the traditional detectors employed by SIFT, RIFT, and DOG+HN seemingly generate more interpretable results that strictly attach to edges or corners. In particular, RIFT detects scattered corner points even in less salient regions, as confirmed by the RR shown in Fig.~\ref{fig:matchthres} and the number of correspondences shown in Fig.~\ref{fig:mp}. The weakness of the deep learning detectors can be attributed to flaws in the training set, which cannot provide strict correspondences, so the keypoints are not precisely located. Despite the advantages of traditional detectors, the one-stage and two-stage deep learning methods show the superiority of deep learning in feature description. In our method, where detection and description mutually guide each other through the recoupling strategy, the hard descriptors are better optimized, which brings significant progress in matching performance.
\subsection{Image Registration Performance} The successful registration rates of 1024 keypoints of those algorithms are drawn in Fig.~\ref{fig:reprojthres}. Note that we use two measures of reprojection error (RE) on the three subsets according to the quality of the ground truths. Nevertheless, our ReDFeat obtains more successfully registered image pairs in each case, and the margin is most prominent on VIS-SAR, the most challenging subset. Moreover, the weak performance of CMMNet on VIS-NIR highlights the importance of keypoint localization,~\emph{i.e}\onedot, the registration performance depends on MS at low thresholds. This problem is well tackled by the recoupled constraints and the Super Detector in our method, as shown in the ablation study. \renewcommand\thefigure{8} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figs/registration/reproj_thres} \caption{Successful registration rate (SRR) of $1024$ keypoints at varying thresholds up to 10. Note that a different measure of reprojection error,~\emph{i.e}\onedot, a different meaning of the threshold, is used on VIS-IR. } \label{fig:reprojthres} \end{figure} The distributions of reprojection errors of 1024, 2048 and 4096 keypoints are illustrated in Fig.~\ref{fig:ir}. Except on VIS-NIR with $2048$ and $4096$ keypoints, our method achieves the most SR and the lowest mean RE. In particular, while greatly boosting SR on VIS-SAR, our method attains the most precise image registration. As for the tiny disparity of RE among SIFT, DOG+HN, and ReDFeat on VIS-NIR, it can be explained by the small discrepancy between visible and near-infrared images and the accuracy of handcrafted keypoint localization mentioned above. Some examples, in which only our ReDFeat succeeds, are shown in Fig.~\ref{fig:ir_viz}. Although RIFT, R2D2 and CMMNet estimate approximate transforms in some cases from VIS-NIR and VIS-IR, the accuracy of the registration does not meet expectations.
On samples of VIS-SAR, the other alternatives fail to produce even a rough result, which is consistent with the feature matching performance. Generally, with the help of the recoupled constraints and the Super Detector, our method learns robust cross-modal features that indeed boost the performance of cross-modal image registration. \renewcommand\thefigure{9} \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figs/registration/NIR_proj.pdf}\\ \footnotesize{(a) Reprojection Errors on VIS-NIR}\\ \vspace{0.05in} \includegraphics[width=0.9\linewidth]{figs/registration/IR_proj.pdf}\\ \footnotesize{(b) Reprojection Errors on VIS-IR}\\ \vspace{0.05in} \includegraphics[width=0.9\linewidth]{figs/registration/SAR_proj.pdf}\\ \footnotesize{(c) Reprojection Errors on VIS-SAR}\\ \caption{Reprojection errors of 1024, 2048 and 4096 keypoints of the state-of-the-art methods in our benchmark. Different numbers of keypoints are extracted and drawn in different colors. The distributions of the reprojection errors of the successfully registered images (SR) at a threshold of 10 are drawn in box plots, in which the green dashed lines indicate the mean of the data; the boxes cover samples from the $25$th to the $75$th percentile of the errors; the maximums and minimums are marked by caps. The numbers of SR at $10$ are shown under the corresponding box plots. } \label{fig:ir} \end{figure} \begin{table}[t] \setlength{\tabcolsep}{1.5mm} \renewcommand\arraystretch{1.5} \centering \caption{Average runtime of different methods in our benchmark.
The average sizes of images are given, and the runtime is counted in millisecond (ms).} \begin{tabular}{l|c|c|c|c|c|c} \hline Time (ms) & SIFT & RIFT & DOG+HN & R2D2 & CMMNet & ReDFeat \\ \hline \begin{tabular}[c]{@{}l@{}}VIS-NIR\\ (971$\times$682)\end{tabular} & 236 & 3186 & 351 & 59 & 1790 & 94 \\ \hline \begin{tabular}[c]{@{}l@{}}VIS-IR\\ (528$\times$320)\end{tabular} & 68 & 1984 & 229 & 54 & 458 & 74 \\ \hline \begin{tabular}[c]{@{}l@{}}VIS-SAR\\ (512$\times$512)\end{tabular} & 97 & 2530 & 263 & 56 & 675 & 85 \\ \hline \end{tabular} \label{tab:rt} \end{table} \renewcommand\thefigure{10} \begin{figure*}[t] \centering \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/SIFT_91.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/RIFT_91.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/HN_91.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/R2D2_91.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/CMMNet_91.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/ReDFeat_91.png}\\ \vspace{0.12em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/SIFT_106.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/RIFT_106.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/HN_106.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/R2D2_106.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/CMMNet_106.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_nir_vis/ReDFeat_106.png}\\ \vspace{0.12em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/SIFT_128.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/RIFT_128.png} \hspace{-0.6em} \includegraphics[width = 
2.9cm]{figs/registration/regis_ir_vis/HN_128.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/R2D2_128.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/CMMNet_128.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/ReDFeat_128.png}\\ \vspace{0.12em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/SIFT_223.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/RIFT_223.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/HN_223.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/R2D2_223.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/CMMNet_223.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_ir_vis/ReDFeat_223.png}\\ \vspace{0.12em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/SIFT_374.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/RIFT_374.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/HN_374.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/R2D2_374.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/CMMNet_374.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/ReDFeat_374.png}\\ \vspace{0.12em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/SIFT_1.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/RIFT_1.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/HN_1.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/R2D2_1.png} \hspace{-0.6em} \includegraphics[width = 2.9cm]{figs/registration/regis_sar_vis/CMMNet_1.png} \hspace{-0.6em} \includegraphics[width = 
2.9cm]{figs/registration/regis_sar_vis/ReDFeat_1.png}\\ \raggedright {\footnotesize \hspace{1.4cm} SIFT\hspace{2.2cm} RIFT\hspace{1.90cm} DOG+HN\hspace{1.95cm} R2D2\hspace{1.95cm} CMMNet\hspace{1.80cm} ReDFeat} \caption{Visualization of image registration performance. 1024 features are extracted by different algorithms and matched by bidirectional nearest neighbor matching. Note that the images in the 2nd, 4th and 6th rows correspond to the images in Fig.~\ref{fig:mp_viz}.} \label{fig:ir_viz} \end{figure*} \subsection{Runtime} Time consumption is important for feature extraction. Because SIFT, RIFT, DOG+HN and CMMNet employ handcrafted detectors, their computational complexities are hard to calculate, so we simply report the average runtime on the three test sets in Table~\ref{tab:rt}. All methods are implemented in Python, except RIFT, which is implemented in Matlab. All methods are run on an Intel Xeon Silver 4210R CPU and an NVIDIA RTX3090 GPU. As we can see, R2D2 consumes the least time to extract features for each image, and, benefiting from parallel computation on the GPU, its runtime is not sensitive to the image size. Because of the complex operations in the Super Detector, ReDFeat takes more time to finish the extraction; however, the improvements of our method are believed to be worth the increased runtime. All the other methods are rather time-consuming, except SIFT, which takes the same order of magnitude of time as R2D2 and ReDFeat. Generally, we significantly improve the performance of the features at little extra cost. \section{Conclusion} In this paper, we take the ill-posed detection in the joint detection and description framework as the starting point, and propose recoupled constraints for multimodal feature learning. First, building on the efforts of related works, we reformulate the repeatability loss and the local peaking loss for detection, as well as the contrastive loss for description, in the multimodal scenario.
Then, to recouple the constraints of detection and description, we propose the mutual weighting strategy, in which robust features are forced to achieve the desired detected probabilities, which are locally peaking and consistent across modalities, and features with high detected probabilities are emphasized during the optimization. Different from previous works, the weights are detached from back-propagation, so that the detected probability of an indistinct feature is not directly suppressed and the training is more stable. In this way, our ReDFeat can be readily trained from scratch and adopted in cross-modal image registration. To fulfill the strict detection requirements in the recoupled constraints and achieve further improvements, we propose the Super Detector, which possesses a large receptive field and learnable local non-maximum suppression blocks. Finally, we collect visible and near-infrared, infrared, and synthetic aperture radar image pairs to build a benchmark. Extensive experiments on this benchmark prove the superiority of our ReDFeat and the effectiveness of all the proposed components. \label{sec:sec5} \IEEEpeerreviewmaketitle {\small \bibliographystyle{plain}
\footnote{We shall discuss the generalization to the case of $R^{4k}$ at the end of section 3.} It is described by the real analytic superfield $X^{+Y}(x,\theta^+,u) \;\;(Y=1,2 $ is an $Sp(1)\sim SU(2)$ index), which satisfies the following harmonic irreducibility condition \begin{equation}\label{irr} D^{++} X^{+Y} = 0\; . \end{equation} The solution to it \begin{equation}\label{X} X^{+Y}(x,\theta^+,u) = X^{AY}(x)u^+_A + i\theta^{+A'}_+\psi^Y_{-A'}(x) -i(\theta^+_+)^2 \partial_{--}X^{AY}(x)u^-_A \end{equation} contains $4$ bosonic and $4$ fermionic real off-shell fields. The free action for this multiplet is again given as an integral over the analytic superspace: \begin{equation}\label{acX} S_X = i\int d^2x du d^2\theta^+_+ \; X^{+Y}\partial_{++} X^+_Y\; . \end{equation} Finally, the last multiplet is the so-called ``twisted" scalar multiplet, in which the $SU(2)$ indices carried by the bosons and fermions are interchanged (as compared to the non-twisted multiplet). Its superspace description requires a set of {\it anticommuting abelian gauge} superfields $\Phi^{+Y'}_+(x,\theta^+,u)$ $(Y'=1,2,\ldots, 2k'$ is an $Sp(k')$ index). The gauge transformations have the form \begin{equation}\label{gau} \delta \Phi^{+Y'}_+ = D^{++} \omega^{-Y'}_+ \end{equation} with analytic parameters $\omega^{-Y'}_+(x,\theta^+,u)$. Using these parameters one can choose the ``harmonically short'' and non-manifestly-supersymmetric {\it Wess-Zumino-type gauge} \begin{equation}\label{WZ} \Phi^{+Y'}_+(x,\theta^+,u) = \theta^{+A'}_+\phi^{Y'}_{A'}(x) +i(\theta^+_+)^2 u^{-}_A \chi^{Y'A}_{-}(x)\; , \end{equation} in which only the $4k'$ real physical bosons $\phi^{Y'}_{A'}$ and fermions $\chi^{Y'A}_{-}$ are left. This multiplet is off shell. The gauge invariant free action for the superfield $\Phi^+_+$ has been given in \cite{GalSok} and we shall not need it here. Now we turn to the discussion of the potential-type coupling of the above three $(0,4)$ multiplets. 
Such a coupling is severely restricted by dimension, $SO(1,1)$ and $U(1)$ invariance, as well as Grassmann analyticity. An important additional assumption made by Witten in \cite{Witten} is that the $SU(2)'$ part of the $(0,4)$ supersymmetry automorphism group is preserved (this requirement is motivated by the desire to obtain a CFT in the infrared limit). As shown in \cite{GalSok}, the only possible coupling term is then \begin{equation}\label{int} S_{int} = m \int d^2x du d^2\theta^+_+ \; \Phi^{+Y'}_+ v^{+a}_{Y'}(X^+,u) \Lambda_+^a\; . \end{equation} It is invariant under the gauge transformation (\ref{gau}) (together with the kinetic term (\ref{acL}) for $\Lambda^a_+$) provided the chiral fermions transform as \begin{equation}\label{gauLv} \delta \Lambda^a_{+} = mv^{+a}_{Y'}(X^+,u) \omega^{-Y'}_+\; , \end{equation} and the matrix $v^{+a}_{Y'}(X^+,u)$ satisfies the following two conditions \begin{equation}\label{c1} v^{+a}_{Y'}v^{+a}_{Z'} = 0\; , \end{equation} \begin{equation}\label{c2} D^{++} v^{+a}_{Y'}(X^+,u) = 0\; . \end{equation} The general solution to (\ref{c2}) is (recall (\ref{irr})) \begin{equation}\label{lin} v^{+a}_{Y'}(X^+,u) = u^{+A} \alpha^a_{AY'} + \beta^a_{Y'Y} X^{+Y}\; , \end{equation} where the matrices $\alpha,\beta$ are constant. At $\theta^+=0$, the matrix $v^{+a}_{Y'}(X^+,u)$ reduces to \begin{equation}\label{lin'} v^{+a}_{Y'}(X^+,u)|_{\theta=0} = u^{+A} (\alpha^a_{AY'} + \beta^a_{Y'Y} X^{Y}_A) \equiv u^{+A} \Delta^a_{AY'}\; , \end{equation} and then the other condition (\ref{c1}) implies for the matrix $\Delta^a_{AY'}$ \begin{equation}\label{ADHM} \Delta^a_{AY'}\Delta^a_{BZ'} + (A\leftrightarrow B)= 0\; . \end{equation} The matrix $\Delta^a_{AY'}$, linear in $X$ and satisfying (\ref{ADHM}), is the starting point in the ADHM construction for instantons \cite{Adhm}. Now we discuss the infrared limit of the theory. To this end one has to separate the massless and massive modes. 
Among the $n+4k'$ left-handed fermions $\lambda^a_+$ contained in $\Lambda^a_+$ there is a subset of $4k'$ which are paired with the right-handed fermions in $\Phi$ and become massive (together with the bosons from $\Phi$). The remaining chiral fermions stay massless. To diagonalize the action, we complete the $2k'\times (n+4k')$ matrix $v^{+a}_{Y'}(X^+,u)$ to a full {\it orthogonal} matrix $v^{\hat aa}(X^+,u)$, where the $n+4k'$ dimensional index $\hat a = (+Y', -Y',i)$ and $i=1,\ldots, n$ is a vector index of the group $SO(n)$. Orthogonality means \begin{equation}\label{or} v^{\hat aa} v^{\hat b a} = \delta^{\hat a\hat b}\; , \end{equation} where $\delta^{+Y', -Z'} = - \delta^{-Y', +Z'} = \epsilon^{Y'Z'}, \;\; \delta^{+Y', +Z'}=\delta^{-Y', -Z'}=\delta^{\pm Y', i}=0 $. Since $v^{+a}_{Y'}$ is a function of $X^{+Y}$ and $u^\pm$, we take the other blocks of $v^{\hat aa}$, namely $v^{-Y'a}$ and $v^{ia}$ to be such functions too. With the help of the matrix $v^{\hat aa}$ we can make a change of variables from the superfield $\Lambda^a_+$ to $\Lambda^{\hat a}_+ =v^{\hat aa}\Lambda^a_+$. Then the gauge transformation (\ref{gauLv}) gets the form \begin{equation}\label{gauL-} \delta\Lambda^{-Y'}_+ = m\omega^{-Y'}_+, \ \ \ \delta\Lambda^{+Y'}_+ = \delta\Lambda^{i}_+ = 0\; , \end{equation} hence the superfields $\Lambda^{-Y'}_+$ can be completely gauged away. Further, the superfields $\Lambda^{+Y'}_+$ enter the action without derivatives; their elimination results in the following Lagrangian for the chiral fermions \begin{eqnarray} {\cal L}^{++}_{++}(\Lambda) &=& {1\over 2} \Lambda^i_+[\delta^{ij} D^{++} + (V^{++})^{ij}] \Lambda^j_+ \nonumber \\ &-& \label{LL} {1\over 2}[(V^{++})^{-i}_{Y'} \Lambda^{i}_+ + m\Phi^+_{+Y'}] (V^{-1})^{Y'Z'} [(V^{++})^{-j}_{Z'} \Lambda^{j}_+ + m\Phi^+_{+Z'}]\; . \end{eqnarray} Here we used the notation \begin{equation}\label{VV} (V^{++})^{\hat a\hat b} = v^{\hat aa}D^{++} v^{\hat ba}\; , \ \ \ V_{Y'Z'} =(V^{++})^{--}_{Y'Z'}\; . 
\end{equation} In the infrared limit $m\rightarrow\infty$ the kinetic term for $\Phi^+_+$ is suppressed, so the second line of (\ref{LL}) becomes auxiliary and can be dropped. The final result for the massless sigma model is: \begin{equation}\label{ADga} S_{m\rightarrow\infty} = \int d^2x du d^2\theta^+_+ \; [iX^{+Y}\partial_{++} X^+_Y + iP^{-Y}_{++} D^{++} X^+_Y + {1\over 2} \Lambda^i_+(\delta^{ij} D^{++} + ({V}^{++})^{ij}) \Lambda^j_+]\; . \end{equation} Here we have added the kinetic term (\ref{acX}) for $X^{+Y}$ and have introduced the harmonic irreducibility condition (\ref{irr}) into the action with the Lagrange multiplier $P^{-Y}_{++}$. The object \begin{equation}\label{calV} ({ V}^{++})^{ij}(X^+,u) = v^{ia}(X^+,u)D^{++}v^{ja}(X^+,u)\; \end{equation} is the twistor transform of the ADHM $SO(n)$ gauge field (or, we should rather say, the harmonic version \cite{harW} of Ward's \cite{Ward} instanton construction). The alternative to the manifestly supersymmetric gauge $\Lambda^{-Y'}_+=0$ above is the Wess-Zumino gauge (\ref{WZ}). In it, after a suitable diagonalization and again in the infrared limit one finds an ADHM gauge field coupled to the massless subset of the chiral fermions $\lambda^a_+$ \cite{Witten}. This completes our review of the superfield construction for the ADHM sigma model. \section{Searching for (4,4) supersymmetry}\label{sec3} The procedure of the previous section led to the massless action (\ref{ADga}) for the superfields $X^{+Y}$ and $\Lambda^i_+$. It involves four real bosons $X^{AY}$ and four real left-handed fermions $\psi^{A'Y}_-$ coming from the superfield $X^{+Y}$, as well as $n$ real right-handed fermions $\lambda^i_+$ from the matter superfields $\Lambda_+^i$. If we want to form a (4,4) multiplet out of them, the first necessary condition is to match the numbers of left- and right-handed fermions. Consequently, we have to choose $n=4$ and restrict the gauge group to (at most) $SO(4)\sim SU(2)\times SU(2)$. 
The search for further (4,4) supersymmetry is based on an examination of the flat action obtained by putting $V^{++}=0$ in (\ref{ADga}): \begin{equation}\label{freeact} S_{\mbox{free}} = \int d^2xdud^2\theta^+_+\; \left(i X^{+Y}\partial_{++} X^+_Y +i P^{-Y}_{++} D^{++} X^+_Y + {1\over 2} \Lambda^{i}_+ D^{++}\Lambda^i_{+} \right) \; . \end{equation} It is not hard to check that this free action has two different off-shell (4,4) supersymmetries, depending on how the $SU(2)$ indices are involved in the transformation laws. The first possibility is obtained by replacing the $SO(4)$ vector index $i$ by the $SU(2)\times SU(2)$ pair $\dot AY$ ($\dot A$ is an $SU(2)$ index of a new type): \begin{eqnarray} \delta X^{+Y} &=& i\varepsilon^+_{-\dot A} \Lambda^{\dot AY}_+ \; , \nonumber \\ \delta \Lambda^{\dot AY}_+ &=& -2\varepsilon^{-\dot A}_- \partial_{++} X^{+Y} - \varepsilon^{+\dot A}_- P^{-Y}_{++} \; , \nonumber \\ \delta P^{-Y}_{++} &=& -2i\varepsilon^-_{-\dot A} \partial_{++} \Lambda^{\dot AY}_+ \; , \label{freetr} \end{eqnarray} where $\varepsilon^{\pm \dot A}_- = u^\pm_A \varepsilon^{A\dot A}_-$. Actually, these transformation laws originate from the $\theta^+_-$ expansion of the (4,4) superfield $q^{+Y} (\theta^+_+, \theta^+_-)$ obtained by dimensional reduction from the $N=2\; D=4$ hypermultiplet \cite{harms}: \begin{equation} q^{+Y} = X^{+Y} + i\theta^+_{-\dot A} \Lambda_+^{\dot AY} -{i\over 2} (\theta^+_-)^2 P^{-Y}_{++} \; . \end{equation} Moreover, in this case the free action (\ref{freeact}) itself can be derived from the hypermultiplet action \begin{equation} S = \int d^4xdud^4\theta^+\; q^{+Y}D^{++}q^+_Y \; . 
\end{equation} The second possibility is obtained by writing $i$ as $A\dot Y$ (now $\dot Y$ is another type of $SU(2)$ index) and then decomposing $\Lambda^{A\dot Y}_+$ into harmonic $U(1)$ projections $\Lambda^{\pm\dot Y}_+ = u^\pm_A \Lambda^{A\dot Y}_+$ ($\Lambda_+^{\pm\dot Y}$ should not be confused with $\Lambda_+^{\pm Y'}$ from section 2): \begin{eqnarray} \delta X^{+Y} &=& i\varepsilon^{Y\dot Y}_{-} \Lambda^{+}_{+\dot Y} \; , \nonumber \\ \delta \Lambda^{+\dot Y}_+ &=& -2\varepsilon^{Y\dot Y}_- \partial_{++} X^{+}_Y \; , \nonumber \\ \delta \Lambda^{-\dot Y}_+ &=& \varepsilon^{Y\dot Y}_- P^{-}_{++Y} \; , \nonumber \\ \delta P^{-Y}_{++} &=& -2i\varepsilon^{Y\dot Y}_{-} \partial_{++} \Lambda^{-}_{+\dot Y} \; . \label{twisttr} \end{eqnarray} It is not hard to verify that in both cases the algebra of the supersymmetry transformations closes {\it off shell}. As one can see from (\ref{freetr}) and (\ref{twisttr}), the main difference between the two types of supersymmetry amounts to interchanging different types of $SU(2)$ indices (``twist"). Therefore we shall refer to the transformations (\ref{freetr}) as {\it non-twisted} and to (\ref{twisttr}) as {\it twisted} (4,4) supersymmetry. It is a well-known fact that dimensional reduction from $N=2 D=4$ gives rise to the former, whereas the latter is specific to two dimensions \cite{GHR}, \cite{IK}. The main question now is whether we can turn on a background in the free action (\ref{freeact}) compatible with any of the above supersymmetries. The advantage of dealing with off-shell supersymmetry is that we do not have to adjust the transformation laws to the interaction. As will be clear from the end results, a simple self-dual Yang-Mills background like in (\ref{ADga}) cannot be compatible with (4,4) supersymmetry. One is forced to introduce an additional ``curved" deformation of the free action (this point has already been made clear in \cite{CHS}). 
In order not to miss any possibility, we shall examine the most general background for the action (\ref{freeact}) allowed by the Lorentz and $U(1)$ properties and dimensions of the superfields $X,P,\Lambda$ (note that $X$ is dimensionless and $[P]=1, [\Lambda] = 1/2$). So, we write down \begin{equation}\label{genact} S = \int d^2xdud^2\theta^+_+\; \left[i {\cal L}^{+Y}\partial_{++} X^+_Y + iP^{-Y}_{++} (D^{++} X^+_Y + {\cal L}^{+3}_Y) + {1\over 2} \Lambda^{i}_+ (\delta^{ij}D^{++} + (V^{++})^{ij})\Lambda_{+}^{j} \right] \; . \end{equation} Here ${\cal L}^{+Y}(X^+,u)$, ${\cal L}^{+3Y}(X^+,u)$ and $(V^{++})^{ij}(X^+,u)$ are for the time being arbitrary functions of $X^{+Y}$ and $u^\pm_A$. The next step is to vary the action (\ref{genact}) under either (\ref{freetr}) or (\ref{twisttr}), derive the corresponding restrictions on the potentials ${\cal L}^{+Y}$, ${\cal L}^{+3Y}$, $(V^{++})^{ij}$ and solve them in terms of unconstrained prepotentials. The computations are straightforward, therefore here we shall only give the final answers. Up to insignificant field redefinitions those are: \subsection {\it Non-twisted case} Here the potential ${\cal L}^{+Y}$ takes the form (after field redefinitions) ${\cal L}^{+Y} = X^{+Y}$. The other two potentials are expressed in terms of a single scalar prepotential with $U(1)$ charge $+4$ ${\cal L}^{+4}(X^+,u)$ as follows \begin{equation}\label{potentials} {\cal L}^{+3}_Y = \partial^-_Y {\cal L}^{+4}, \ \ \ (V^{++})_{\dot AY|\dot BZ} = -\epsilon_{\dot A\dot B} \partial^-_Y\partial^-_Z {\cal L}^{+4} \; . \end{equation} Once again, one realizes that this form of the action originates from the dimensional reduction of the general $N=2\; D=4$ hypermultiplet action \cite{HK} \begin{equation}\label{hk} S = \int d^4xdud^4\theta^+\; [q^{+Y}D^{++}q^+_Y + {\cal L}^{+4}(q^+,u)] \; . \end{equation} The prepotential ${\cal L}^{+4}$ in (\ref{hk}) has been shown in \cite{HK} to generate the most general hyper-K\"ahler manifolds. 
Such manifolds are torsion-free and the connection term $(V^{++})_{\dot AY|\dot BZ}$ in (\ref{potentials}) is in fact the Christoffel connection. \subsection {\it Twisted case} The requirement of (4,4) supersymmetry of the type (\ref{twisttr}) leads to the following restrictions on the potentials in (\ref{genact}): \begin{equation}\label{twpot} {\cal L}^{+3}_Y = 0\; , \ \ \ (V^{++})_{A\dot Y|B\dot Z} = \epsilon_{\dot Y\dot Z} u^+_Au^+_B[1- V(X^+,u)]\; , \ \ \ \partial^-_Y {\cal L}^{+}_Z - \partial^-_Z {\cal L}^{+}_Y = -2 \epsilon_{YZ} V(X^+,u)\; . \end{equation} Once more we have a single scalar prepotential $V(X^+,u)$, but this time it carries no $U(1)$ charge. Note that eq. (\ref{twpot}) determines the potential ${\cal L}^{+Y}$ up to a gradient $\partial^-_Y{\cal L}^{++}$, but one can easily see that this is a gauge invariance of the action (\ref{genact}). Written down in terms of the restricted potentials (\ref{twpot}), the action compatible with twisted supersymmetry takes the form \begin{eqnarray} S &=& \int d^2xdud^2\theta^+_+\; \left(i {\cal L}^{+Y}(X^+,u)\partial_{++} X^+_Y + iP^{-Y}_{++} D^{++} X^+_Y\right. \nonumber \\ & &\ \ \ \ \ \left. -\Lambda_+^{-\dot Y} D^{++}\Lambda^+_{+\dot Y} -{1\over 2} V(X^+,u)\Lambda_+^{+\dot Y} \Lambda^+_{+\dot Y} \right) \; .\label{twact} \end{eqnarray} We see that in fact this action is very similar to the one obtained in section 2. The difference is that in (\ref{twact}) the gauge group is reduced to $SU(2)$ (the gauge superfield $V^{++}$ being given in (\ref{twpot})) and the kinetic term for $X^+$ is deformed by the potential ${\cal L}^{+Y}$. Note that the presence of this potential does not affect in any way the arguments leading to the ADHM-type interaction in section 2; there we have never used the kinetic term for $X^+$. Finally, we briefly mention the generalization of the above results to the case of $4k$ target space dimensions. 
In fact, the ADHM sigma model of Witten \cite{Witten} reviewed in section 2 can equally well accommodate $R^{4k}$ as its target space, just replacing the $Sp(1)$ spinor index $Y$ of $X^{+Y}$ by an $Sp(k)$ index. For our study of $(4,4)$ supersymmetry it is convenient to adapt the notation as follows. Instead of $X^{+Y}$ we write $X^{+Y\alpha}$, where $Y$ is still an $Sp(1)$ spinor index and we have added a new vector index $\alpha$ of $SO(k)$ (thus $Y\alpha$ forms a decomposition of an $Sp(k)$ spinor index under $Sp(1)\times SO(k)$). The same index $\alpha$ will be attached to the chiral fermions, e.g., $\Lambda^{A\dot Y\alpha}_+$. This amounts to an obvious modification of the $(4,4)$ supersymmetry transformation rules (\ref{freetr}) and (\ref{twisttr}) and of the general interacting action (\ref{genact}). In the non-twisted case we find once again a hyper-K\"ahler background, hence no gauge field. In the twisted case we obtain the analog of (\ref{twact}) \begin{eqnarray} S &=& \int d^2xdud^2\theta^+_+\; \left(i {\cal L}^{+Y\alpha}(X^+,u)\partial_{++} X^{+\alpha}_{Y} + i P^{-Y\alpha}_{++} D^{++} X^{+\alpha}_{Y} \right. \nonumber \\ & &\ \ \ \ \ \left.- \Lambda_+^{-\dot Y\alpha} D^{++}\Lambda^{+\alpha}_{+\dot Y} -{1\over 2} V_{\alpha\beta}(X^+,u)\Lambda_+^{+\dot Y\alpha} \Lambda^{+\beta}_{+\dot Y} \right) \; .\label{twaction} \end{eqnarray} Here the potentials ${\cal L}^{+Y\alpha}(X^+,u)$ and $V_{\alpha\beta}(X^+,u)=V_{\beta\alpha}(X^+,u)$ are related by the constraint \begin{equation} \partial^-_{Y\alpha} {\cal L}^+_{Z\beta} - \partial^-_{Z\beta} {\cal L}^+_{Y\alpha} = -2\epsilon_{YZ} V_{\alpha\beta}\; . \end{equation} We see that the gauge connection $V^{++}$ from (\ref{genact}) takes the form $(V^{++})_{A\dot Y\alpha|B\dot Z\beta} = \epsilon_{\dot Y\dot Z} u^+_Au^+_B [\delta_{\alpha\beta} -V_{\alpha\beta}]$, so it corresponds to the gauge group $Sp(k)$ instead of $SU(2)\sim Sp(1)$ in the four-dimensional case. 
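As a quick consistency check on the twisted constraints (our own remark, with the index conventions implied by (\ref{twpot})): in the four-dimensional case the choice \begin{equation} {\cal L}^{+Y} = X^{+Y}\; , \qquad V(X^+,u) = 1 \end{equation} satisfies the last relation in (\ref{twpot}), since $\partial^-_Y X^+_Z - \partial^-_Z X^+_Y = -2\epsilon_{YZ}$, while the gauge connection $(V^{++})_{A\dot Y|B\dot Z} = \epsilon_{\dot Y\dot Z} u^+_Au^+_B [1-V]$ vanishes. The background thus trivializes, and (\ref{twact}) goes back to the free model, as it should.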
\section{Components of the twisted sigma model} In order to better understand the content of the twisted sigma model (\ref{twact}) we shall present its component expansion (in the case of 4 space-time dimensions only). This is made very easy by the fact that the Lagrange multipliers $P^{-Y}_{++}$ and $\Lambda_+^{-\dot Y}$ in (\ref{twact}) force both superfields $X^+_Y $ and $\Lambda_+^{+\dot Y}$ to depend trivially on the harmonics, therefore they contain only finite numbers of fields: \begin{eqnarray} X^{+Y} &=& X^{AY}(x)u^+_A + i\theta_+^{+A'}\psi_{-A'}^Y(x) -i (\theta^+_+)^2 \partial_{--} X^{AY}(x)u^-_A \; , \nonumber \\ \Lambda_+^{+\dot Y} &=& \lambda_+^{A\dot Y}(x)u^+_A + \theta_+^{+A'}s_{A'}^{\dot Y}(x) -i (\theta^+_+)^2 \partial_{--} \lambda_+^{A\dot Y}(x)u^-_A \; . \end{eqnarray} We then insert these expansions in the action and do the Grassmann integral. The field $s$ is easily seen to be auxiliary, so we eliminate it and in the end obtain the following sigma model for the fields $X^{AY},\psi_{-A'}^Y,\lambda_+^{A\dot Y}$: \begin{eqnarray} S &=& \int d^2x\; [-(\epsilon_{AB} g(X) + b_{AB}(X)) \partial_{++}X^{AY} \partial_{--}X^{B}_Y +{i\over 2}g(X)\, \psi_-^{A'Y}\partial_{++} \psi_{-A'Y} \nonumber \\ &+& {i\over 2}g(X)\, \lambda_+^{A\dot Y}\partial_{--} \lambda_{+A\dot Y} + {i\over 2}\psi_-^{A'Y} \psi_{-A'}^T \partial_{++}X^{CZ} \Omega_{CZ|YT}(X) \nonumber\\ &+& {i\over 2}\lambda_+^{A\dot Y} \lambda_{+\dot Y}^{B}\partial_{--}X^{CZ} \Omega_{CZ|AB}(X) + \lambda_+^{A\dot Y} \lambda_{+\dot Y}^{B} \psi_-^{A'Z} \psi_{-A'}^T R_{ABZT}(X) ]\; . 
\label{compact} \end{eqnarray} Here the sigma model metric is given by \begin{equation}\label{metric} g_{AY|BZ}(X) = \epsilon_{AB}\epsilon_{YZ}\; g(X)\ \ \mbox{\rm with} \ \ g(X) = \int du\; V(X^+,u)\; ; \end{equation} the two-form (the torsion potential) is \begin{equation} b_{AY|BZ}(X) = \epsilon_{YZ} b_{AB}= 2\int du\; u^+_{(A}u^-_{B)} V(X^+,u)\; ; \end{equation} the spin connections are \begin{equation}\label{spinc} \Omega_{CZ|YT}(X) = \epsilon_{Z(T}\partial_{CY)}g, \;\; \Omega_{CZ|AB}(X) = - \epsilon_{C(B}\partial_{A)Z}g\;. \end{equation} Finally, the curvature $R$ in the four-fermion term is constructed in the usual way. The action (\ref{compact}) has two remarkable properties. Firstly, we see that the geometric objects - metric and torsion (but not the two-form itself) are expressed in terms of a {\it single real scalar function} $g(X^{AY})$. In particular, this means that the sigma model metric is {\it conformally flat}. Secondly, the function $g(X^{AY})$ satisfies Laplace's equation. This is obvious from the definition (\ref{metric}): \begin{equation} \Box g(X) = \int du\; \partial^{-Y} \partial^+_Y V(X^+,u) = 0 \end{equation} because of the holomorphic dependence of the potential $V(X^+,u)$ on $X^{+Y}$. In fact, what we see here is an example of Penrose's transform \cite{Penrose}, where the solutions to Laplace's equation are parametrized by an unconstrained holomorphic function in twistor space (the r\^ole of twistor variables here is played by the $SU(2)$ harmonics). The spin connections in (\ref{spinc}) consist of two parts - of the Riemannian connection and of the torsion. As explained in \cite{CHS}, one of the two spin connections can also be viewed as an $SU(2)$ gauge field. As a consequence of (4,4) off-shell supersymmetry this gauge field has the typical form of a 't Hooft instanton solution \cite{Hooft}. 
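To make the last statement concrete, we quote the standard form of the 't Hooft solution (a known fact, added here for orientation; $a_i$ and $\rho_i$ denote the instanton positions and scales): \begin{equation} g(X) = c + \sum_{i=1}^{k'} {\rho_i^2 \over |X-a_i|^2}\; , \qquad \Box g(X) = 0 \quad {\rm for} \quad X \neq a_i\; , \end{equation} where $|X-a_i|^2$ is the squared Euclidean distance in $R^4$. For $c\neq 0$ this carries $5k'$ independent parameters (four positions and one scale per centre, after absorbing $c$ by a rescaling), i.e., precisely the $5k'$-parameter family of $SU(2)$ instantons of \cite{Hooft}.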
Remembering that we started from the action (\ref{ADga}), in which the gauge field was by construction of the instanton (ADHM) type, we see that indeed we deal with a sigma model in which 't Hooft's instantons appear in a natural way. \section{Conclusions} In this paper we have studied the conditions under which the massless $(0,4)$ sigma model involving chiral fermions coupled to an ADHM gauge field and obtained by Witten's procedure can have a larger $(4,4)$ supersymmetry. The main assumption we have made is that the interacting theory should preserve the off-shell $(4,4)$ supersymmetry of the free theory. We have seen that starting from the non-twisted free supersymmetry we could only obtain a hyper-K\"ahler sigma model without torsion and, consequently, without a self-dual gauge field. If we choose to preserve the other, twisted supersymmetry of the free theory, we obtain strong restrictions on the possible background: it must have a conformally flat metric and torsion expressed in terms of a single real scalar function $g(X)$. The latter satisfies Laplace's equation and thus gives rise to an $SU(2)$ instanton gauge field of the 't Hooft type. We should point out that one could reach the same conclusions by applying the general results of \cite{GHR} to the special case of a four-dimensional target space. In \cite{GHR} the analysis of the conditions for $(4,4)$ supersymmetry in a sigma model is carried out in terms of $(2,2)$ superfields (chiral and twisted chiral). This implies choosing holomorphic coordinates in the target space. The sigma model Lagrangian is given by a K\"ahler potential. Our scalar function $g(X)$ appears in \cite{GHR} as a second-order derivative of the K\"ahler potential. The main difference between this approach and ours (which makes use of $(0,4)$ superfields) is in the treatment of the $SU(2)$ symmetry inherent to the problem we address here. Working in a holomorphic basis inevitably leads to losing manifest $SU(2)$ invariance. 
Another study of $(4,4)$ twisted sigma models has recently been presented in \cite{IvSut}, this time using a double-harmonic $(4,4)$ superspace formalism. The results obtained there agree with ours. We believe that the $(0,4)$ approach may prove more efficient in investigating the possible {\it linear} $(4,4)$ sigma models with potential terms. We would also like to note that the relevance of 't Hooft instantons in string-inspired sigma models has been shown very clearly in \cite{CHS}. There one deals with two distinct cases: first with $(0,4)$ and then with $(4,4)$ supersymmetry. In the context of $(0,4)$ supersymmetry the coupling to a 't Hooft instanton gauge field (as opposed to a general, ADHM type one) appears as an Ansatz, which fits nicely with string theory. Then, for reasons having to do with the quantum behaviour of the model, the authors of \cite{CHS} want to extend the $(0,4)$ supersymmetry to full $(4,4)$ and realize that the self-dual gauge field must necessarily be identified with the spin connection with torsion. Thus, a $(4,4)$ model incorporating a self-dual gauge field cannot have a flat geometry. A point which is missing in \cite{CHS} is the observation that in the context of $(4,4)$ supersymmetry 't Hooft instantons are not an Ansatz any more, but are the only possibility. We also remark that some conclusions reached in \cite{CHS} about the general conditions on the background for $(0,4)$ supersymmetry have been later on corrected in \cite{BonVal}. In conclusion we may say that the study of instantons in the context of $(4,4)$ sigma models in this paper should be considered as a first step only. The more interesting question is whether there exists a {\it linear} $(4,4)$ sigma model which automatically gives, in the infrared limit, the model with 't Hooft instantons discussed here. 
An encouraging sign is the fact that the free action of section 2, in which we only keep the mass term in (\ref{int}) (but drop the Yukawa couplings), does indeed have a $(4,4)$ supersymmetry. However, the massive $(4,4)$ multiplet has central charge and is on shell, which makes the analysis of the general potential self-interaction more difficult. We hope to come back to this problem in the near future. \vskip7mm {\bf Acknowledgements.} We are grateful to E. Witten for encouraging us to look into this problem. We also profited from discussions with C. Callan, J. Harvey, C. Hull and M. Rocek. E. S. would like to acknowledge the hospitality extended to him at the Johns Hopkins University, Baltimore and ITP, SUNY at Stony Brook where this work has been done. \newpage
\section{Introduction} \label{sec:Introduction} The interaction between galaxies and their surrounding gas, whether circumgalactic medium, intergalactic medium or intracluster medium (ICM), is a major driver of galaxy evolution. Nowhere is this interaction more dramatically demonstrated than in radio galaxies moving through the ICM of their surrounding cluster. The synchrotron-emitting plasma that comprises the radio lobes is ejected from the body of the galaxy, meaning that it is subject to the hydrodynamical processes that result from its interaction with the ICM: ram pressure will cause the jets to bend \citep{Cowie1975, Begelman1979, O'Dea1985, Roberts2021}, while buoyancy effects can cause them to ``float'' towards the edge of the cluster \citep{Gull1973, Gendron-Marsolais2017}. Narrow-angle tail radio sources (NATs) are a particular class of extended, double-tail radio galaxy that have had their radio jets bent back such that the observed angle between them is acute. Of course, this projected angle on the sky may not reflect the true three-dimensional bending, which could be significantly less extreme, but the most plausible underlying physical cause of such distortions -- ram pressure due to the galaxy's motion relative to the cluster -- means that this projected geometry can always be used as a diagnostic of the source's direction of motion on the plane of the sky. \citet{O'Dea1987} took this approach to analyse the orbits of 70 NATs in Abell clusters, and concluded that the orbits were close to isotropic, but with some indication of a radial bias at small radii. However, they argued that a larger sample was required to make any definitive statement about cluster orbits. The largest study of bent radio jets in clusters to date is by \citet[hereafter \citetalias{Garon2019}]{Garon2019}, who made use of a sample of extended radio galaxies identified through Radio Galaxy Zoo \citep{Banfield2015}. 
\citetalias{Garon2019} found that the 340 radio sources they identified as ``highly bent'' have a slight tendency to indicate radial orbits with respect to their cluster centre. They also discovered that such bent systems were found out to fairly large radii, with as many outside $1.5R_{500}$ as inside it. Since ram pressure is proportional to the density of the ICM, and is a necessity for bending double-tail radio sources to such a high degree, it is puzzling that such bent sources would be commonly found out at large distances from the cluster centre where the ICM density is low. However, \citetalias{Garon2019} adopted a generous limit on what constituted a ``highly bent'' double-tail source, and in fact explicitly excluded all sources in which the observed angle between the two radio jets was less than $45^\circ$ out of concern that such objects might be mis-associated background sources. Since these steeply-bent sources comprise a large proportion of what is classically labelled as a NAT, it is not clear that \citetalias{Garon2019} and \citet{O'Dea1987} identified comparable populations, and hence whether the physics bending the jets is the same in both cases. We have therefore sought to revisit these issues, focusing specifically on radio galaxies identified as NATs in the largest sample available to date. We explore the angles in which their jets are bent relative to the closest cluster, and investigate in more detail how this distribution varies with projected distance from the cluster centre. In \cref{sec:Data_Method}, we describe the data set and analysis technique, while \cref{sec:Results} presents the resulting distribution of orbital angles and its variation with radius, and compares it to the analysis of \citetalias{Garon2019}. In \cref{sec:Discussion}, we discuss the implications of the rather unexpected but very strong signal that we detect. 
\section{Data and Method} \label{sec:Data_Method} For this study, we use images from the Low-Frequency Array (LOFAR) Two-metre Sky Survey (LoTSS) first data release (DR1; \citealt{Shimwell2019}). LoTSS DR1 is the ideal data set to identify NATs, as the 424 square degree high-resolution radio survey is not only an order of magnitude deeper than previous wide-area radio surveys, with a median noise level of only $71\,\mu{\rm Jy\,beam^{-1}}$, but is sensitive to structures with sizes ranging from 6\,arcsec to more than a degree. In addition, its observations at 144\,MHz have been shown to be significantly better for the detection of NATs than higher frequency data, due to the steep spectra of such mature radio sources \citep{O'Neill2019a, O'Neill2019b}. From this data set, we extract the 264 NATs visually identified by \citet{Mingo2019}, who classified the morphologies of 5805 extended radio-loud AGN in LoTSS DR1. Optical or infrared counterparts for all of the NATs were identified by \citet{Williams2019} using either a likelihood ratio identification algorithm or, for larger and more complex sources, visual identification through the LOFAR Galaxy Zoo project\footnote{\url{https://www.zooniverse.org/projects/chrismrp/radio-galaxy-zoo-lofar}}. Spectroscopic redshifts are available for 179 of the 264 NATs, and for the remaining 85 NATs we use the photometric redshifts derived by \citet{Duncan2019}, which have an overall scatter $\sigma_{\rm NMAD}=0.039$ and an outlier fraction of 7.9\%. To identify the environments of the NATs, we draw on the cluster catalogue by \citet{WenHan2015}, selected from the optical SDSS DR12 catalogue \citep{Alam2015}. This catalogue contains 158,103 clusters in the redshift range $0.02<z<0.8$, identified using a friends-of-friends (FoF) algorithm.
The catalogue is 95\% complete for clusters of mass $M_{200} > 10^{14} M_\odot$ (where $M_n$ is the mass inside radius $R_n$, within which the density is $n$ times the critical density of the Universe), and it has a false detection rate of less than 6\%. The centre of each cluster is defined by the position of the brightest cluster galaxy (BCG), which is identified as the brightest galaxy within 0.5\,Mpc and a redshift within $\pm 0.04(1 + z)$ of the densest region of each cluster identified by the FoF algorithm. 92.6\% of the clusters within the 424 square degree region covered by LoTSS DR1 have spectroscopic redshifts, which are defined as either the redshift of the BCG or the mean redshift of the cluster members. The remainder of the clusters have photometric redshifts derived by \citet{WenHanLiu2012}, which have a standard deviation of less than $0.018$. \citet{WenHan2015} determine $R_{500}$ and $M_{500}$ for each cluster using empirically-derived scaling relations, which we then use to obtain a characteristic velocity dispersion of each cluster, $\sigma_{500} \equiv (GM_{500}/R_{500})^{1/2}$. Following \citetalias{Garon2019}, we determine the most likely host cluster (if one exists) for a NAT at redshift $z_{\rm NAT}$ by first identifying all clusters whose redshifts $z_{\rm cluster}$ satisfy $|z_{\rm NAT}-z_{\rm cluster}|/(1+z_{\rm NAT})<0.04$, which, reflecting the photometric redshift uncertainties, corresponds to a velocity window of $\pm 12,000\,{\rm km/s}$. We then assign the host to be the cluster with the minimum projected distance to the NAT. These criteria associated a cluster with 255 of the NATs in our sample. In 47 cases, the optical source associated with the NAT was found to be the BCG of the cluster; since these objects are used as a proxy to define the centre of the cluster, there is no meaningful information to be obtained from them regarding offsets from the cluster centre.
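The host-assignment step and the quantities defined above can be sketched in a few lines of Python. This is a minimal illustration only: the function names, dictionary keys and toy values are ours, and the flat-sky separation is adequate only for the small angular scales involved.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s
G = 4.301e-9        # gravitational constant in km^2 s^-2 Mpc Msun^-1

def sigma_500(m500_msun, r500_mpc):
    """Characteristic velocity dispersion sigma_500 = (G M_500 / R_500)^{1/2}, in km/s."""
    return np.sqrt(G * m500_msun / r500_mpc)

def match_host_cluster(z_nat, ra_nat, dec_nat, clusters):
    """Most likely host cluster for a NAT: keep clusters satisfying
    |z_NAT - z_cluster| / (1 + z_NAT) < 0.04, then take the one at the
    smallest projected separation.  `clusters` is a list of dicts with
    (hypothetical) keys 'z', 'ra', 'dec', in degrees."""
    candidates = [c for c in clusters
                  if abs(z_nat - c['z']) / (1 + z_nat) < 0.04]
    if not candidates:
        return None
    def sep(c):  # flat-sky angular separation in degrees
        dra = (c['ra'] - ra_nat) * np.cos(np.radians(dec_nat))
        return np.hypot(dra, c['dec'] - dec_nat)
    return min(candidates, key=sep)

# The 0.04 redshift window corresponds to the quoted velocity window:
print(0.04 * C_KMS)  # ~ 12,000 km/s
```

For a typical cluster with $M_{500}=3\times10^{14}M_\odot$ and $R_{500}=1$\,Mpc (illustrative values), \texttt{sigma\_500} gives $\sim 1100$\,km/s.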
To exclude such objects, while allowing for possible centring errors, we eliminate all sources within a projected radius of $0.01R_{500}$, which leaves a final sample of 208 NATs. For each NAT--cluster pair, we calculate $\theta$, the counter-clockwise angle between the vector from the cluster centre to the galaxy centroid, ${\mathbf{R}}_{cg}$, and the vector from the galaxy centroid to the centroid of radio emission, $\mathbf{R}_{gr}$; a typical example of this calculation is shown in \cref{fig:vector_angle}, which also illustrates the high quality of the LoTSS data. We map these angles into the range $-180^\circ < \theta < 180^\circ$, so that $|\theta| \sim 0^\circ$ describes a radio tail pointed away from the cluster centre, while $|\theta| \sim 180^\circ$ represents a radio tail aligned toward the cluster centre. \begin{figure} \includegraphics[width=\columnwidth]{NAT_image_diagram.png} \caption{The definition of the angle $\theta$ between ${\mathbf{R}}_{cg}$ and $\mathbf{R}_{gr}$, overlaid on the image of a typical NAT as observed by LOFAR.} \label{fig:vector_angle} \end{figure} \section{Results} \label{sec:Results} \begin{figure} \includegraphics[width=\columnwidth]{NAT_histogram.png} \caption{The angle distribution of narrow-angle tail radio sources with respect to their cluster centres, out to $7R_{500}$. The lines indicate the expectations of a uniform distribution, with Poisson noise appropriate to the size of the sample.} \label{fig:NAT_hist} \end{figure} Using a conservative limit of $R < 7 R_{500}$ to avoid significant line-of-sight contamination, we are left with a sample of 109 NATs, the angle distribution for which is presented in \cref{fig:NAT_hist}. It is immediately apparent from this figure that the data does not appear consistent with the expectations of a uniform distribution, but rather shows an excess at small angles.
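The angle convention defined above reduces to a single \texttt{arctan2} of the cross and dot products of $\mathbf{R}_{cg}$ and $\mathbf{R}_{gr}$; a minimal sketch, in which the flat 2-D coordinates and the function name are our own illustrative choices:

```python
import numpy as np

def bending_angle(cluster, galaxy, radio):
    """Counter-clockwise angle theta between R_cg (cluster centre to
    galaxy centroid) and R_gr (galaxy centroid to radio centroid),
    in degrees.  |theta| ~ 0: tail points away from the cluster;
    |theta| ~ 180: tail points toward the cluster centre."""
    (cx, cy), (gx, gy), (rx, ry) = cluster, galaxy, radio
    r_cg = (gx - cx, gy - cy)
    r_gr = (rx - gx, ry - gy)
    cross = r_cg[0] * r_gr[1] - r_cg[1] * r_gr[0]
    dot = r_cg[0] * r_gr[0] + r_cg[1] * r_gr[1]
    # atan2(cross, dot) gives the signed angle between the two vectors
    return np.degrees(np.arctan2(cross, dot))

print(bending_angle((0, 0), (1, 0), (2, 0)))    # tail away from cluster: 0.0
print(bending_angle((0, 0), (1, 0), (0.5, 0)))  # tail toward centre: 180.0
```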
To test the significance of this apparent non-uniformity, we conducted an Anderson--Darling (AD) test, which offers a more powerful tool than the commonly used Kolmogorov--Smirnov test when assessing the significance of features near the ends of a distribution \citep{Stephens1974}. Testing the observed distribution against a uniform model results in an AD statistic of 7.94, which is significant at the 99.99\% confidence level for a sample of this size \citep{Jantschi2018}. To check that this result is not an artefact produced by the flux-weighted manner in which we have defined the NATs' angles on the sky, we repeated the analysis using the angle defined by the bisector of the two jets in each NAT, as located by their peak fluxes \citep{Mingo2019}; this definition produced a very similar non-uniform distribution of angles. We also note that any residual uncertainty in this measurement would serve only to dilute the signal apparent in \cref{fig:NAT_hist}. We next assess the level of line-of-sight contamination caused by using the photometric redshifts for the subsample of the NATs that lack spectroscopic data. \cref{fig:phase_space} shows the projected phase-space diagram for the subset of objects for which we have full spectroscopic redshifts. The data points have been scaled by their individual values of $R_{500}$ and the characteristic velocity $\sigma_{500}$, so that objects in clusters of differing mass can be compared consistently in this phase space. Although the amount of line-of-sight contamination clearly increases with radius, this plot confirms that its level remains modest out to the $7 R_{500}$ limit adopted in \cref{fig:NAT_hist}: only $\sim 25\%$ of the cluster-NAT pairs are false associations which act to dilute the signal. 
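Because \texttt{scipy.stats.anderson} does not offer a uniform null hypothesis, the AD statistic against uniformity is most easily computed from its textbook definition; a sketch, in which the mapping of $|\theta|/180^\circ$ onto $[0,1]$ is our illustrative choice rather than necessarily the exact transformation used in the analysis:

```python
import numpy as np

def ad_uniform(u):
    """Anderson-Darling statistic A^2 against the null hypothesis that
    the sample u is uniform on [0, 1]; for the uniform CDF F(x) = x
    the standard formula applies directly."""
    u = np.sort(np.asarray(u, float))
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1.0 - u[::-1])))

# Angles theta in (-180, 180] mapped onto [0, 1] for the test:
rng = np.random.default_rng(0)
theta = rng.uniform(-180.0, 180.0, size=109)  # uniform mock, same size as the sample
print(ad_uniform(np.abs(theta) / 180.0))
```

A strongly clustered sample yields a much larger statistic than a uniform one, which is the behaviour the text relies on when quoting $A^2 = 7.94$ for the observed distribution.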
\begin{figure} \includegraphics[width=\columnwidth]{Phase_space_diagram.png} \caption{The projected phase space distribution (showing line-of-sight velocity versus projected separation) of the NATs in the sample with spectroscopic redshifts out to $20R_{500}$. The dashed lines at $v = \pm2\sigma_{500}$ indicate the limits of velocity assumed to be associated with the cluster.} \label{fig:phase_space} \end{figure} Beyond this radius, the level of contamination increases rapidly, but for these sources with spectroscopic redshifts we can extract a largely uncontaminated sample out to significantly larger radii by taking the NATs that lie within the dashed lines on \cref{fig:phase_space}, for which $|v| < 2\sigma_{500}$. Reassuringly, as is also apparent from \cref{fig:phase_space}, the contaminating sources excluded by this process have an angle distribution that is consistent with random, confirming that the alignment effect in \cref{fig:NAT_hist} is associated with the cluster rather than some spurious systematic bias. In the remaining sources that are associated with clusters, the alignment effect appears to persist out to at least $\sim 10 R_{500}$. We confirm that this phenomenon is not just associated with the cluster core by repeating the AD test on the spectroscopically-confirmed associations that lie in the radial range $3R_{500} < R < 10 R_{500}$, well outside the virial radius, which lies at $\sim 1.4R_{500}$ \citep{Walker2019}. We find that their angular distribution is also inconsistent with a uniform distribution at the 99.9\% confidence level, with an AD statistic of 6.02. Interestingly, if we do look at just the cluster core in \cref{fig:phase_space}, there also appears to be an excess of NATs for which $|\theta| > 135^\circ$, which we will discuss further in \cref{sec:Discussion}. It is notable that these results differ from those of \citetalias{Garon2019} in several ways. 
Not only is the non-uniformity in $\theta$ presented in \cref{fig:NAT_hist} significantly stronger than that detected by \citetalias{Garon2019}, but it also shows up as an asymmetric feature; we find that many more tails are directed away from the cluster than toward it, whereas \citetalias{Garon2019} determined the presence of this asymmetry in folded data but did not disaggregate these populations. In addition, we find strong evidence that this phenomenon persists to much larger radii than previously probed. The greater strength of these effects suggests that the LOFAR-detected NATs studied here are more dramatic probes of the ICM--galaxy interaction than previously recognised. \section{Discussion} \label{sec:Discussion} In this analysis, we have shown that the angle distribution of NATs implies that at least some are aware of the direction to the nearest cluster of galaxies, and that this awareness extends to surprisingly large distances. By way of summary, \cref{fig:polar_plot} shows in polar form how, for the spectroscopically-confirmed NATs, the angles between bent radio jets and cluster centres are distributed as a function of radius. This plot emphasises the preference of the radio jets to point away from the cluster out to $\sim 10 R_{500}$, but also indicates the secondary feature of an excess of NATs whose tails point toward the cluster centre within $\sim 0.5 R_{500}$. The cardinal labels on \cref{fig:polar_plot} indicate the direction of travel of the NATs on the plane of the sky, assuming that their tail-bending is due to ram pressure. \begin{figure} \includegraphics[width=\columnwidth]{Polar_plot.png} \caption{A polar diagram of the distribution of NAT angles on-the-sky, $\theta$, as a function of radius, $R/R_{500}$, for those sources spectroscopically confirmed to be associated with a cluster, such that $|v| < 2\sigma_{500}$.
The cardinal points are labelled to show the orbital direction we would expect the galaxies to be travelling in with respect to the cluster centre, if the values of $\theta$ are the result of ram-pressure bending of their jets.} \label{fig:polar_plot} \end{figure} Such features are notable because even if an infalling radio galaxy and its immediate surroundings are close enough to feel the gravitational effects of the nearby cluster, the equivalence principle implies that they cannot be aware of such influence -- and hence the jets cannot be bent in specific directions -- if they are simply freely falling in that gravitational field. As previous studies of jet bending have noted, such bending requires the additional presence of hydrodynamical phenomena, where forces other than gravity are in play \citep{Cowie1975,Begelman1979, O'Dea1987, Sakelliou2000}. Within the virial radius of a cluster, one would expect the ICM to be largely in hydrostatic equilibrium, so radio jets emerging from galaxies in this region would be bent by their motions relative to this stationary gas due to ram pressure. Any additional infalling gas is rapidly decelerated at a "virial shock" close to this radius \citep{Hurier2019}, although the morphology of shocks in infalling gas can be quite complex, with external shocks occurring all the way out to $\sim 5 R_{500}$ \citep{Molnar2009, Walker2019}. Such shocks produce the non-gravitational changes in the bulk flow of the gas that decouple it from the motions of galaxies, potentially providing the speed differential required to form NATs. However, none of these shock processes is predicted to occur out to the $\sim 10 R_{500}$ where NATs are observed here. We therefore suggest that the true morphology of infalling gas is yet more complex, with significant hydrodynamical processes occurring out to even larger radii.
In this context, it is interesting to note that Fig.~1 of \citet{Reiprich2013} shows tendrils of heated gas extending to well beyond $5 R_{500}$. While such radially-extended features may be quite rare, it seems likely that hydrodynamical phenomena also play a role in triggering the AGN activity in the first place \citep{Poggianti2017, Marshall2018, Ricarte2020}, which means that radio jets will preferentially be generated in just these regions, highlighting where they do occur. An infalling NAT will, after passing its pericentre near the cluster centre, then continue radially outward on its orbit, on its way to becoming a cluster member. Indeed, we can seemingly identify such a component in \cref{fig:polar_plot}, which shows an excess of NATs within $R_{500}$ that have their jets bent at $|\theta| \sim 180^\circ$, indicating an outbound galaxy on the plane of the sky if the bends are caused by ram pressure. Notably, the timescale on which an outbound galaxy will reach the $\sim0.5R_{500}$ radius at which the NATs seem to fade out, $\tau \sim 0.5R_{500}/\sigma_{500}$, is, for the characteristic masses of clusters in this sample, a few hundred million years, which is directly comparable to the lifetimes predicted for such sources \citep{Antognini2012}. This coincidence suggests that pericentre passage may represent the point at which new NATs are no longer being triggered. We thus have a scenario that at least plausibly explains the rather unexpected structures apparent in \cref{fig:polar_plot}. At large radii, galaxies and gas lie in infalling filaments, some of which are dense enough that hydrodynamic effects start to decelerate the gas relative to the galaxies. The resulting differential will disturb the gaseous environment around the galaxies, potentially triggering AGN activity to produce large-scale radio jets, and these jets are then bent through ram pressure effects that arise from the speed differential.
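The few-hundred-Myr estimate above is easy to verify with back-of-the-envelope numbers; the fiducial $M_{500} = 3\times 10^{14}\,M_\odot$ and $R_{500} = 1$\,Mpc below are illustrative assumptions for a typical cluster in this sample, not values quoted in the text:

```python
import numpy as np

G = 4.301e-9          # gravitational constant in km^2 s^-2 Mpc Msun^-1
KM_PER_MPC = 3.086e19
S_PER_MYR = 3.156e13

m500 = 3e14           # Msun (illustrative)
r500 = 1.0            # Mpc (illustrative)
sigma_500 = np.sqrt(G * m500 / r500)  # ~ 1.1e3 km/s

# tau ~ 0.5 R_500 / sigma_500, converted from Mpc / (km/s) to Myr
tau_myr = 0.5 * r500 * KM_PER_MPC / sigma_500 / S_PER_MYR
print(tau_myr)  # a few hundred Myr, comparable to predicted NAT lifetimes
```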
At least some of these radio jets have long enough lifetimes to survive their pericentre passage, creating the excess of radially outbound NATs at small radii. It would be very interesting to see this picture fleshed out in further detail both by adding more data from the ongoing LOFAR surveys, and from a full comparison to simulations of cluster evolution that incorporates both detailed gas physics and the triggering of AGN activity. \section*{Acknowledgements} We thank G.\,K.~Miley, Huub R\"ottgering, D.\,J.\,B.~Smith and T.\,W.~Shimwell for their helpful comments on this work. KdV, NAH and MRM acknowledge support from the UK Science and Technology Facilities Council (STFC) under grant ST/T000171/1, and BM acknowledges support from the UK STFC under grants ST/R00109X/1, ST/R000794/1, and ST/T000295/1. All authors are grateful for the use of data from LOFAR, the LOw Frequency ARray, and SDSS, the Sloan Digital Sky Survey. This research made use of Astropy, a community-developed core Python package for astronomy \citep{astropy:2013, astropy:2018} hosted at \url{http://www.astropy.org/}, of MATPLOTLIB \citep{Hunter:2007}, of Plotly \citep{plotly}, and of TOPCAT \citep{Taylor:2005}. \section*{Data Availability} The LoTSS DR1 data used in this paper is publicly available, and can be found at \url{https://www.lofar-surveys.org/releases.html}. The cluster catalogue by \citet{WenHan2015} is also publicly available and is associated with the referenced paper. For access to the cluster-matched NAT data compiled from the catalogue, please contact KdV. \bibliographystyle{mnras}
\section{Introduction} In this paper, we consider the situation in which an irreducible subfactor $N \subset M$ has an intermediate subfactor $Q$, and work out a reformulation of the proof of the fact that the planar algebra $P^{(N\subseteq Q)}$ may be derived from $P^{(N\subseteq M)}$ (see \cite{BhaLa} and \cite{La2}) by requiring that the action of a planar tangle $T$ is given by Equation (\ref{alpheq}) below. It is well-known from earlier work of Bisch \cite{Bi1} -- and reformulated in \cite{BiJo2} and \cite{La1} in the planar algebraic terms that we will actually use here -- that such intermediate subfactors are in bijective correspondence with so-called {\em biprojections}, say $q \in P^{(N\subset M)}_2$. We wish here to describe (in Theorem~\ref{main1}) the planar algebra of $N \subset Q$ in terms of the planar algebra of $N \subset M$, while the planar algebra of $Q \subset M$ can be obtained by applying these results to $M \subset M_1$. The biprojection $q$ corresponding to the intermediate subfactor $Q$ gives rise naturally to a mapping $F = \{F_m\}$ from tangles of any colour (say $m$) to partially labelled tangles of the same colour (see Definition \ref{defF}), and a scalar-valued function $\alpha$ defined on the collection of all tangles (see Definition \ref{defalpha}), such that $P^{(N \subset Q)}$ may be identified with a planar algebra, call it $P^\prime$, with $P'_n = \mathrm{range}(Z_{F(I^n_n)}^{(N \subset M)})$, where $I^n_n$ is the identity tangle of colour $n$ and the multilinear map $Z^{(N \subset Q)}$ associated to a tangle $T^{k_0}_{k_1,\cdots, k_b}$ is given by \begin{equation} \label{alpheq} Z^{(N \subset Q)}_T = \alpha(T)Z^{(N \subset M)}_{F(T)}, \end{equation} where both sides are thought of as acting on $\otimes_{i=1}^b P'_{k_i}$. The slightly involved proof of the above assertion takes some work -- see Theorem \ref{main} and Theorem \ref{main1}.
\smallskip The difference in proofs here and in \cite{BhaLa} stems from the two ways that a planar algebra $P$ can be described, respectively, (i) as in \cite{KodSun} (where one says what the underlying spaces $P_n$ are, explicitly describes the multilinear operator $Z^P_T$ associated to a planar tangle $T$, and then verifies that these tangle maps satisfy the necessary compatibility conditions, as in Theorem \ref{main1}), and (ii) by specifying a non-degenerate scalar-valued partition function $Z$ on $0$-tangles (labelled by $S = \coprod_{k \in Col} S_k$) which is invariant under planar isotopy and multiplicative on connected components (as in \cite{Jo2}). Thus, one may say that a `bonus' in our approach is that we know how any planar tangle acts on a vector in its domain. \smallskip With our formulation of the intermediate planar algebra, as an application, in Section \ref{CPE} we recover the result of \cite{LaSu}, which establishes a one-to-one correspondence between a planar subalgebra $P^{\Theta}$ of the `group planar algebra' which is naturally associated with a group $\Theta$ of automorphisms of the given group $G$ and the planar algebra corresponding to the `subgroup-subfactor' associated with the inclusion $\Theta \subset (G\rtimes \Theta)$. It seems that there is a slight inaccuracy with the constant in the defining isomorphism ${\beta}_k$ of \cite{LaSu}, while the corrected constant may be found in Definition \ref{defi}. \smallskip In an appendix, we describe the tower of iterated basic constructions of $N\subseteq Q$ in terms of the corresponding tower of $N \subseteq M$. This is the crucial step in obtaining the standard invariant of $N\subset Q$ in terms of $N\subset M$. The Jones tower of $N\subset Q$ in terms of $N\subset M$ was described in \cite{BhaLa}. Here we give another proof using yet another characterization of the basic construction, in terms of Pimsner--Popa bases. It should be mentioned that D.
Bisch gave a partial description of the standard invariant of $N\subset Q$ in \cite{Bi2}, giving the standard invariant of the inclusion $N\subset Q_1$, where $Q_1$ is the first step of the basic construction for $N\subset Q$. \smallskip After posting this paper on the arXiv, the author has been requested by D. Bisch to mention that there would be a section on intermediate planar algebras involving the same tangles as in Definition \ref{defF} in unpublished joint work with V. Jones, for which a preprint is forthcoming. \section{Notation and some basic facts} In this paper, all factors will be of type ${\rm II}_1$, and all subfactors $N\subset M$ will be of finite index $[M:N]$. By $tr_M$ we will mean the unique normal faithful trace defined on $M$. $E^M_N$ will denote the trace preserving conditional expectation from $M$ onto $N$; we shall often omit $M$ and write $E_N$ when doing so is unambiguous.\par Following Bisch, we denote the `Jones towers' built from the basic construction for $N \subset Q \subset M$ as: \[ N \subset Q \subset M \subset P_1 \subset M_1 \subset P_2 \subset M_2 \subset P_3 \subset \dots \subset M_{2n+1}~. \] We write $e_{\epsilon,i}, \epsilon \in \{0,1\}, i \geq 1$ for the projections: \[ e_{0,i}: L^2(M_{i-1}) \rightarrow L^2(P_{i-1}) \mbox{ \ \ \ \ \ and \ \ \ \ \ } e_{1,i}: L^2(M_{i-1}) \rightarrow L^2(M_{i-2})\] so that $P_i = \langle M_{i-1}, e_{0,i} \rangle$ and $M_i=\langle M_{i-1}, e_{1, i}\rangle$ (here we set $M_0=M$, $M_{-1}=N$, $P_0=Q$). The description of the algebras generated by $e_{0,i}$ and $e_{1,i}$ is given in \cite{BiJo1}. We will use the following relations appearing in \cite{BiJo1}. In what follows, as usual, $[a,b]=0$ means that $a$ and $b$ commute, and $[a,B]=0$ means that $a$ commutes with all elements of the set $B$.
\begin{fact} \label{f:erel} The following relations hold: \begin{enumerate} \item $e_{0,i}e_{1,i}=e_{1,i}$, \item $[e_{a,i},e_{b,j}]=0$ for $|i-j|\geq 2$, \item $[e_{0,i},e_{0, i\pm 1}]=0$, \item $[e_{0,i},\ P_{i-1}]=[e_{1,i},\ M_{i-2}]=0$, \item for $i$ even, $e_{0,i}e_{1,i \pm1}e_{0,i} =[Q:N]^{-1} e_{0,i}e_{0,i\pm1}$, and $e_{1,i}e_{0,i \pm1}e_{1,i} = [M:Q]^{-1} e_{1,i} $, \item for $i$ odd, $e_{0,i}e_{1,i \pm1}e_{0,i} =[M:Q]^{-1} e_{0,i}e_{0,i\pm1}$, and $e_{1,i}e_{0,i \pm1}e_{1,i} = [Q:N]^{-1} e_{1,i} $, \item $e_{1,i}e_{1,i \pm1}e_{1,i} =[M:N]^{-1} e_{1,i} $. \end{enumerate} \end{fact} \begin{proof} For a proof, see \cite{BiJo1}, Proposition 5.1. \end{proof} V. Jones introduced his theory of planar algebras in \cite{Jo2}. A summary of planar algebra terminology is given in \cite{La1}, and a crash course on planar algebras in \cite{KodSun}. We will mainly follow the notation for planar algebras from \cite{KodSun} (Section 2). Thus, we write $P_k$ for the $k$-box space $N^\prime \cap M_{k-1}$, $\delta = [M:N]^{1/2}$ and write $Z_T$ for the multilinear operator corresponding to a planar tangle $T$. For each disc, one of its boundary arcs is distinguished and marked with a $*$ placed near it (whereas in \cite{KodSun}, $*$ was attached to a distinguished point). As is usual, we will normally draw the discs as boxes with their $*$ arcs unmarked and assumed to contain their north-west corner (and in exceptional cases when it has been necessary to use a `2-click' rotation, as in the following figure, for instance, the $*$-interval will be explicitly marked); typically, when a 2-box has a $q$ in it, the $*$-arc has to be a white arc, and for biprojections, it is immaterial which white arc has the $*$, and we may omit indicating the $*$. Similarly, we shall sometimes omit drawing the external disc. (If from the context the shading is clear we will omit that also.)
If $r\in P_2$ sometimes we also write $\begin{minipage}{.1\textwidth} \centering \includegraphics[scale= .4]{su1.eps} \end{minipage} $ \hspace{2mm} for \hspace{2mm}$ \begin{minipage}{.1\textwidth} \centering \includegraphics[scale= .4]{su2.eps} \end{minipage}$. \par It is well-known from \cite{Bi1}, \cite{La1} and \cite{BiJo2} that in case $N^\prime \cap M = \mathbb C$, there is a bijective correspondence between biprojections $q$ (corresponding to the Jones projection of $L^2(M)$ onto $L^2(Q)$) and the intermediate subfactor $Q$, where $N \subset Q \subset M$. More precisely, we have the following (reformulation of) Theorem 3.2 of \cite{Bi1}. \begin{theorem}\label{Bisch}\cite{Bi1} \cite{La1} \cite{BiJo2} Let $N \subset M$ be an extremal ${\rm II}_1$ subfactor. Let $P^{(N\subset M)}$ be the planar algebra of $N \subset M$, and let $\Phi_{N\subset M}$ be the presenting map of $P^{(N \subset M)}$ on itself, i.e. $\Phi_{N\subset M}: \mathcal{P} (L) \rightarrow P^{(N\subset M)}$ with $L = \coprod P^{(N \subset M)}_{k}$. Suppose there exists an intermediate subfactor $Q$, $N \subset Q \subset M$. If we let $\begin{minipage}{.05\textwidth} \centering \includegraphics[scale=.35]{q.eps} \end{minipage} $ denote the biprojection corresponding to $Q$, we have \vspace{2mm}\\ \begin{tabular}{ll} a) \ \ $\begin{array}[c]{l} \includegraphics[scale=.45]{bisa.eps} \end{array}$ & b) \ \ $\begin{array}[c]{l} \includegraphics[scale=.45]{bisb.eps} \end{array}$ \vspace{2mm}\\ c) \ \ $\begin{array}[c]{l} \includegraphics[scale=.45]{bisc.eps} \end{array}$ & d) \ \ $\begin{array}[c]{l} \includegraphics[scale=.4]{bisd.eps} \end{array}$ \end{tabular} with $c = [M:N]^{1/2}[M:Q ]^{-1}$. Furthermore, in the case $N'\cap M=\mathbb C$, the converse is also true. 
Namely, a 2-box $\begin{minipage}{.05\textwidth} \centering \includegraphics[scale=.5]{q.eps} \end{minipage} $ satisfying a)--d) above implies the existence of an intermediate subfactor $Q$, $N \subset Q \subset M$ corresponding to $\begin{minipage}{.05\textwidth} \centering \includegraphics[scale=.5]{q.eps} \end{minipage} $. \end{theorem} \begin{corollary}\label{cor:exchange} The following exchange relation holds: \begin{figure}[h!] \begin{minipage}{1\textwidth} \begin{equation*} {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.4]{flower.eps} \end{minipage}} = {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.4]{flower2.eps} \end{minipage}} \end{equation*} \end{minipage} \end{figure} \end{corollary} Denote by $N \subset Q \subset Q_1 \subset Q_2 \subset \dots$ the Jones tower of $N \subset Q $. \begin{definition}\label{defF} Denote the following tangle by $E_n$: $$ \includegraphics[scale=.5]{Def.eps}$$ according as $n$ is even or odd respectively. We shall use these to define a map $T \mapsto F(T)$ from the class of $k$-tangles to the class of {\em partially labelled} $k$-tangles with $(k+1)$ internal discs, all but the last of which are 2-boxes labelled with a $q$, with the tangle $T$ inserted in the last disc of colour $k$. Thus, $F(T) = E_k\circ_{(D_1,D_2,\cdots, D_k, D_{k+1})}(q,q,\cdots,q,T)$. If it is clear from the context then we write $E$ instead of $E_n$. \par Define functions $F_n : P_n \rightarrow P_n$ by $F_n(x)=Z_{E_n}(q \otimes q \otimes \cdots \otimes q \otimes x)$ for $x \in P_n$. We often write $F(x)$ instead of $F_n(x)$ if there is no confusion. \end{definition} Following \cite{BhaLa}, define the natural inclusion map \[ i: F_n(P_n) \rightarrow F_{n+1}(P_{n+1}) \] given by \[ t \mapsto F_{n+1}(t). \] We denote this inclusion by $\subset_i$.
Our starting point is the following result from \cite{BhaLa}: \begin{theorem}\label{planarstinv} The lattice of algebras: \[ \begin{array}{ccccccccccc} F_{0}(P_0) & \subset_i & F_{1}(P_1) & \subset_i &F_{2}(P_2) &\subset_i &\dots& \subset_i& F_{n}(P_{n}) & \subset_i & \dots \\ &&\cup && \cup && &&\cup && \\ && F_{1}(P_{1,1}) & \subset_i & F_{2}(P_{1,2}) & \subset_i & \dots & \subset_i & F_{n}(P_{1,n})& \subset_i & \dots \end{array}\] is isomorphic to the standard invariant of $N\subset Q $: \[ \begin{array}{ccccccccccc} N'\cap N & \subset & N' \cap Q & \subset & N' \cap Q_1 & \subset& \dots& \subset& N' \cap Q_{n-1} &\subset & \dots \\ &&\cup && \cup &&&& \cup && \\ &&Q ' \cap Q &\subset & Q ' \cap Q_1 &\subset& \dots & \subset & Q ' \cap Q_{n-1} & \subset & \dots \end{array}\] The Jones projections are $$ F_{2n+1}(P_{2n+1}) \ni e^{Q }_{2n} = [M:Q ]^{1/2}[Q :N]^{-1/2} {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.5]{jopo1.eps} \end{minipage}} $$ and $$ F_{2n+2}(P_{2n+2}) \ni e^{Q }_{2n+1}= [M:N]^{-1/2} {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.5]{jopo2.eps} \end{minipage}} $$ for $n\geq 1$. The trace on $F_{2n}(P_{2n})$ and $F_{2n+1}(P_{2n+1})$ is given by $tr_{N \subset Q }(x) = [M:Q ]^{n} tr_{N \subset M}(x) $. \end{theorem} \section{The Intermediate Planar Algebra} This section is devoted to our reformulation of the proof of the fact that the planar algebra $P^{(N\subseteq Q)}$ may be derived from $P^{(N\subseteq M)}$ (see \cite{BhaLa}) by requiring that the action of a planar tangle $T$ is given by Equation (\ref{alpheq}) of the Introduction. \label{sec:planar-algebra-an} \begin{definition}\label{defalpha} Let $ T $ be a $k$-tangle with $ b\geq 1 $ internal discs ${D_1,\dots D_b}$ of colours ${k_1,\dots k_b}$.
Then define $\alpha(T) = [M:Q]^{\frac{1}{2}c(T)}$, where \[c(T)=(\lceil k_0/2 \rceil+\lfloor k_1/2 \rfloor+\dots +\lfloor k_b/2 \rfloor)-l(T) \] with $l(T)$ being the number of closed loops after capping the black intervals of the external disc of $T$ and cupping the black intervals of all internal discs of $T$. \end{definition} \begin{proposition}\label{alphas} If $T = T^{k_0}_{k_1,\cdots,k_b}$ and $\tilde{T} = \tilde{T}^{\tilde{k}_0}_{\tilde{k}_1,\cdots,\tilde{k}_{\tilde{b}}}$ are tangles with discs of indicated colours such that $\tilde{k}_0=k_i$ for some $1 \leq i \leq b$, then \[\frac{\alpha(T)\alpha(\tilde{T})}{\alpha(T\circ_i\tilde{T})} = [M:Q]^{\frac{1}{2}(\tilde{k}_0 - l(T) - l(\tilde{T}) + l(T\circ_i\tilde{T}))}.\] \end{proposition} \begin{proof} This is simple arithmetic: \begin{eqnarray*} c(T) &=& (\lceil k_0/2 \rceil+\lfloor k_1/2 \rfloor+\dots +\lfloor k_b/2 \rfloor)-l(T)\\ c(\tilde{T}) &=& (\lceil \tilde{k}_0/2 \rceil+\lfloor \tilde{k}_1/2 \rfloor+\dots +\lfloor \tilde{k}_{\tilde{b}}/2 \rfloor)-l(\tilde{T})\\ c(T\circ_i \tilde{T}) &=& (\lceil k_0/2 \rceil+\lfloor k_1/2 \rfloor+\dots +\lfloor k_b/2 \rfloor)- \lfloor \tilde{k}_0/2 \rfloor + \lfloor \tilde{k}_1/2 \rfloor+\dots +\lfloor \tilde{k}_{\tilde{b}}/2 \rfloor- l(T\circ_i \tilde{T}) \end{eqnarray*} Hence, after all the cancellation, we find that \begin{eqnarray*} \frac{\alpha(T)\alpha(\tilde{T})}{\alpha(T\circ_i\tilde{T})} &=& [M:Q]^{1/2\left(c(T) +c(\tilde{T})-c(T\circ_i \tilde{T})\right)}\\ &=& [M:Q]^{1/2\left(\tilde{k}_0 - l(T) - l(\tilde{T}) + l(T\circ_i\tilde{T}) \right)}~, \end{eqnarray*} since $\lceil n/2 \rceil + \lfloor n/2 \rfloor = n$ for all integral $n$. \end{proof} We shall show that: \begin{theorem}\label{main1} If $P^\prime_k = ran(F(I^k_k))$ and $Z^\prime_T = \alpha(T) Z_{F(T)}|_{\otimes P^\prime_{k_i(T)}}$, then $(P^\prime, T \mapsto Z^\prime_T)$ is a subfactor planar algebra which is isomorphic to $P^{(N\subset Q)}$.
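Since the proof really is simple arithmetic, the bookkeeping can be checked mechanically. The sketch below treats the colours and loop counts as free integers (no planar topology is modelled; the values are arbitrary) and verifies the identity $c(T) + c(\tilde{T}) - c(T\circ_i\tilde{T}) = \tilde{k}_0 - l(T) - l(\tilde{T}) + l(T\circ_i\tilde{T})$ on random data:

```python
import math
import random

def c_exp(k0, internal, loops):
    """c(T) = ceil(k0/2) + sum_j floor(k_j/2) - l(T), as in the definition of alpha."""
    return math.ceil(k0 / 2) + sum(k // 2 for k in internal) - loops

random.seed(1)
for _ in range(1000):
    k0 = random.randint(0, 10)                    # colour of the external disc of T
    ks = [random.randint(0, 10) for _ in range(random.randint(1, 5))]
    kt0 = random.choice(ks)                       # colour of the substituted disc
    kts = [random.randint(0, 10) for _ in range(random.randint(1, 5))]
    lT, lTt, lTTt = (random.randint(0, 8) for _ in range(3))
    cT = c_exp(k0, ks, lT)
    cTt = c_exp(kt0, kts, lTt)
    # In T o_i T~, the disc of colour kt0 is replaced by the internal
    # discs of T~, and the loop count becomes l(T o_i T~):
    cTTt = c_exp(k0, ks, lTTt) - kt0 // 2 + sum(k // 2 for k in kts)
    assert cT + cTt - cTTt == kt0 - lT - lTt + lTTt
print("exponent identity verified")
```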
\end{theorem} The proof of this theorem has two main ingredients: (a) the verification that $P^\prime$ is a planar algebra; and (b) the verification that this is isomorphic to the planar algebra of $N \subset Q$. The proof of (b) is an application of a theorem of Jones (see \cite{KodSun} Theorem 2.1, which is the formulation that we shall use), while the only really non-trivial part of proving (a) is in the verification of compatibility of the partition function with gluing of tangles. In order to verify that the operation of tangles (in $P^\prime$) is compatible with composition of tangles, we will need to verify that \[\alpha(T\circ \tilde{T}) Z_{F(T \circ \tilde{T})} = \alpha(T)\alpha(\tilde{T}) Z_{F(T)\circ F(\tilde{T})}~,\] which, in view of Proposition \ref{alphas}, is seen to translate to: \begin{equation}\label{TPT} Z_{F(T) \circ F(\tilde{T})} = [M:Q]^{-\frac{1}{2}(\tilde{k}_0 - l(T) - l(\tilde{T}) + l(T\circ \tilde{T}))} Z_{F(T \circ \tilde{T})}~, \end{equation} which is what the next few pages are devoted to. We start on part (b) in the few lines after the proof of ``compatibility under substitution''. \bigskip \noindent Thus, we assume that $P$ is an irreducible subfactor planar algebra and that $q \in P_2$ is a biprojection. Let $T = T_{k_1,\cdots,k_b}^{k_0}$ and $\tilde{T} = \tilde{T}_{\tilde{k}_1,\cdots,\tilde{k}_{\tilde{b}}}^{\tilde{k}_0}$ be tangles with $k_i = \tilde{k}_0$. By $F(T)$ we will denote the partially labelled tangle obtained from $T$ by `surrounding it with $q$'s'. It has the same number and colours of discs as $T$ does. Let $P^\prime_k$ be the range of the tangle $F(I_k^k)$. \begin{theorem}\label{main} The equation \begin{equation*} Z_{F(T) \circ_i F(\tilde{T})} = \tau(q)^{\frac{1}{2}({k}_i + l(T \circ_i \tilde{T}) - l(T) - l(\tilde{T}))} Z_{F(T \circ_i \tilde{T})}, \end{equation*} holds for inputs coming from $P^\prime$.
\end{theorem} Here, for any tangle $T$, $l(T)$ is the number of loops obtained after black-cupping the internal discs of $T$ and black-capping the external disc of $T$. The proof of Theorem \ref{main} proceeds by a series of reductions to easier and easier cases until the result is obvious. There are four main steps. \bigskip\noindent {\bf Step 1:} Reduction to the case where $T$ is a $0_+$-tangle: Let $S$ be the $0_+$-tangle in Figure~\ref{step} below and $\tilde{S} = \tilde{T}$. We claim that the truth of the equation for $S$ and $\tilde{S}$ implies it for $T$ and $\tilde{T}$. The new disc of $S$ is the last numbered one. \begin{figure}[!h] \begin{center} \psfrag{hatt}{} \psfrag{k}{\huge $k_0$} \psfrag{1}{\huge $T$} \resizebox{3cm}{!}{\includegraphics{reduction2.eps}} \end{center} \caption{}\label{step} \end{figure} Observe that, by definition, $l(S) = l(T), l(\tilde{S}) = l(\tilde{T})$ and $l(S \circ_i \tilde{S}) = l(T \circ_i \tilde{T})$. To prove Theorem \ref{main}, it suffices to trace both sides against an arbitrary element $x \in P_{k_0}$ and verify that the results are the same. Now $\delta^{k_0} \tau(Z_{F(T) \circ_i F(\tilde{T})}(\cdots) x) = Z_{F(S) \circ_i F(\tilde{S})}(\cdots, F(x))$ and \par $\delta^{k_0} \tau(Z_{F(T \circ_i \tilde{T})}(\cdots) x) = Z_{F(S \circ_i \tilde{S})}(\cdots, F(x))$. Also, we are given that \begin{equation*} Z_{F(S) \circ_i F(\tilde{S})} = \tau(q)^{\frac{1}{2}({k}_i + l(S \circ_i \tilde{S}) - l(S) - l(\tilde{S}))} Z_{F(S \circ_i \tilde{S})}, \end{equation*} holds when all inputs come from $P^\prime$. Thus the $\delta^{k_0} \tau(Z_{F(T) \circ_i F(\tilde{T})}(\cdots) x)$ above equals \begin{eqnarray*} \lefteqn{\tau(q)^{\frac{1}{2}({k}_i + l(S \circ_i \tilde{S}) - l(S) - l(\tilde{S}))} Z_{F(S \circ_i \tilde{S})}(\cdots,F(x))}\\ & = & \delta^{k_0} \tau(q)^{\frac{1}{2}({k}_i + l(T \circ_i \tilde{T}) - l(T) - l(\tilde{T}))} \tau(Z_{F(T \circ_i \tilde{T})}(\cdots)x). \end{eqnarray*} The desired reduction follows.
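The final step of this reduction quietly uses the fact that the trace separates elements of $P_{k_0}$. Since the same device recurs below, we record the (standard) observation, in the notation above; this is not an extra hypothesis but the usual positive definiteness of the trace on a subfactor planar algebra:

```latex
% In a subfactor planar algebra the trace \tau is positive definite, so
% two elements of P_{k_0} with the same trace against every x coincide:
\[
\tau(a\,x) = \tau(b\,x) \quad \text{for all } x \in P_{k_0}
\quad\Longrightarrow\quad a = b .
\]
% Above, this is applied with
%   a = Z_{F(T)\circ_i F(\tilde{T})}(\cdots)   and
%   b = \tau(q)^{\frac{1}{2}(k_i + l(T\circ_i\tilde{T}) - l(T) - l(\tilde{T}))}
%       \, Z_{F(T\circ_i\tilde{T})}(\cdots).
```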
This reduction having been made, we will henceforth assume that $T$ is a $0_+$-tangle, and therefore the equation that must be seen to hold on $P^\prime$ is: \begin{equation*} Z_{T \circ_i F(\tilde{T})} = \tau(q)^{\frac{1}{2}({k}_i + l(T \circ_i \tilde{T}) - l(T) - l(\tilde{T}))} Z_{T \circ_i \tilde{T}}. {\hspace {1in}} (*) \end{equation*} (since $F(T) =T$ for a $0_+$-tangle $T$). \bigskip\noindent {\bf Step 2:} Reduction to the case where $T$ is of the form in Figure \ref{one} \begin{figure}[!h] \begin{center} \psfrag{hatt}{\huge $\hat{T}$} \psfrag{k}{\huge $k_i$} \psfrag{1}{\huge $1$} \resizebox{3cm}{!}{\includegraphics{reduction2.eps}} \end{center} \caption{}\label{one} \end{figure} \noindent where $\hat{T}$ is some ${k}_i$-tangle and $i=1$: This follows from sphericality. \bigskip\noindent {\bf Step 3:} Reduction to the case where $\tilde{T}$ is Temperley-Lieb: This is handled in two different ways according as ${k}_1$ is even or odd.\\ Subcase 3.1: Suppose that ${k}_1$ is even. Let $U = U^{k_1}_{2k_1,k_1}$ and $\tilde{S} = \tilde{S}^{2k_1}$ be the following tangles in Figure \ref{subcase3.1}, \begin{figure}[!h] \begin{center} \psfrag{k}{\huge $k_1$} \psfrag{1}{\huge $1$} \psfrag{2}{\huge $2$} \psfrag{u=}{\huge $U=$} \psfrag{s=}{\huge $\tilde{S}=$} \resizebox{12cm}{!}{\includegraphics{tangleu.eps}} \end{center} \caption{}\label{subcase3.1} \end{figure} and $S = T \circ_1 (U \circ_2 \tilde{T})$. It is then clear that $S \circ_1 \tilde{S} = T \circ_1 \tilde{T}$. We claim that the validity of Equation (*) for the pair $(S,\tilde{S})$ implies its validity for the pair $(T,\tilde{T})$. To see this, assume that \begin{equation*} Z_{S \circ_1 F(\tilde{S})} = \tau(q)^{\frac{1}{2}(2{k}_1 + l(S \circ_1 \tilde{S}) - l(S) - l(\tilde{S}))} Z_{S \circ_1 \tilde{S}}. \end{equation*} Now observe that $Z_{S \circ_1 \tilde{S}} = Z_{T \circ_1 \tilde{T}}$ and $Z_{S \circ_1 F(\tilde{S})} = Z_{T \circ_1 F(\tilde{T})}$, since $k_1$ is even, using that $q^2 = q$ several times.
Also, note that $l(\tilde{S}) = k_1$ and $l(S) = l(\tilde{T}) + l(\hat{T}) = l(\tilde{T}) + l(T)$. Substituting all this in the previous equation and simplifying, we get the desired Equation (*).\\ Subcase 3.2: Suppose that ${k}_1$ is odd. Now let $U = U^{k_1}_{2(k_1+1),k_1}$ and $\tilde{S} = \tilde{S}^{2(k_1+1)}$ be the following tangles in Figure \ref{subcase3.2} \begin{figure}[!h] \begin{center} \psfrag{k}{\huge $k_1$} \psfrag{1}{\huge $1$} \psfrag{2}{\huge $2$} \psfrag{l}{\huge {$k_1+1$}} \psfrag{u=}{\huge $U=$} \psfrag{s=}{\huge $\tilde{S}=$} \resizebox{12cm}{!}{\includegraphics{tangleu2.eps}} \end{center} \caption{}\label{subcase3.2} \end{figure} and let $S = T \circ_1 (U \circ_2 \tilde{T})$. It is then clear that $S \circ_1 \tilde{S}$ differs from $T \circ_1 \tilde{T}$ in having one extra floating loop. We again claim that the validity of Equation (*) holding for the pair $(S,\tilde{S})$ implies its validity for the pair $(T,\tilde{T})$. To see this, assume that \begin{equation*} Z_{S \circ_1 F(\tilde{S})} = \tau(q)^{\frac{1}{2}(2({k}_1+1) + l(S \circ_1 \tilde{S}) - l(S) - l(\tilde{S}))} Z_{S \circ_1 \tilde{S}}. \end{equation*} Now observe that $Z_{S \circ_1 \tilde{S}} = \delta Z_{T \circ_1 \tilde{T}}$. Also note that $l(\tilde{S}) = k_1+1$, $l(S) = l(\tilde{T}) + l(\hat{T}) = l(\tilde{T}) + l(T)$, and $l(S \circ_1 \tilde{S}) = l(T \circ_1 \tilde{T})+1$. To finish the proof, it suffices to see that $Z_{S \circ_1 F(\tilde{S})} = \delta \tau(q) Z_{T \circ_1 F(\tilde{T})}$. We will first do this in the case $k_1=5$. \begin{figure}[!h] \begin{center} \psfrag{that}{\Huge $\hat{T}$} \psfrag{ttilde}{\Huge $\tilde{T}$} \psfrag{q}{\Huge $q$} \psfrag{1}{\huge $1$} \resizebox{9cm}{!}{\includegraphics{scircfs.eps}} \caption{}\label{complicated} \end{center} \end{figure} The tangle $S \circ_1 F(\tilde{S})$ is depicted in Figure \ref{complicated}. With a little bit of manipulation, this reduces to Figure \ref{simplified}. 
\begin{figure}[!h] \begin{center} \psfrag{that}{\Huge $\hat{T}$} \psfrag{ttilde}{\Huge $\tilde{T}$} \psfrag{q}{\Huge $q$} \psfrag{=}{\Huge $= \delta \tau(q)$} \resizebox{12cm}{!}{\includegraphics{scrifs2.eps}} \end{center} \caption{}\label{simplified} \end{figure} \noindent Since the last picture is clearly $T \circ_1 F(\tilde{T})$, we are done. It should be clear that a similar proof works whenever $k_1$ is odd. \bigskip\noindent {\bf Step 4:} Resolution of the case where $\tilde{T}$ is Temperley-Lieb, in three different subcases, by induction on ${k}_1$. In each of the subcases, we will show that the statement for a suitably chosen $S$ and $\tilde{S}$ with $k_0(\tilde{S}) < k_0(\tilde{T})$ implies it for $T$ and $\tilde{T}$.\\ Subcase 4.1: Suppose that in $\tilde{T}$ some points $2i-1$ and $2i$ are joined by a string, so that $\tilde{T}$ has the form in Figure \ref{two} \begin{figure}[!h] \begin{center} \psfrag{cdots}{\huge ${\cdots}$} \psfrag{stilde}{\huge $\tilde{S}$} \psfrag{q}{\Huge $q$} \psfrag{1}{\large $1$} \psfrag{2i-1}{\large $2i-1$} \psfrag{2i}{\large $2i$} \psfrag{2i+1}{\large $2i+1$} \psfrag{2i+2}{\large $2i+2$} \psfrag{2i-2}{\large $2i-2$} \psfrag{2k_1}{\large $2k_1$} \psfrag{=}{\Huge $= \delta \tau(q)$} \resizebox{5cm}{!}{\includegraphics{tildet.eps}} \end{center} \caption{}\label{two} \end{figure} for some Temperley-Lieb tangle $\tilde{S}$ of colour $k_1-1$. In this case, let $S$ be the tangle in Figure \ref{three}. \begin{figure}[!h] \begin{center} \psfrag{cdots}{\huge ${\cdots}$} \psfrag{ttilde}{\huge $\hat{T}$} \psfrag{q}{\Huge $q$} \psfrag{1}{\large $1$} \psfrag{2i-1}{\large $2i-1$} \psfrag{2i}{\large $2i$} \psfrag{2i+1}{\large $2i+1$} \psfrag{2i+2}{\large $2i+2$} \psfrag{2i-2}{\large $2i-2$} \psfrag{2k_1}{\large $2k_1$} \psfrag{=}{\Huge $= \delta \tau(q)$} \resizebox{5cm}{!}{\includegraphics{tangles.eps}} \end{center} \caption{}\label{three} \end{figure} Note that $T\circ_1\tilde{T} = S\circ_1\tilde{S}$.
It follows that $l(T\circ_1\tilde{T}) = l(S\circ_1\tilde{S})$ and it is easy to see that $l(S) = l(T)$ ($= l(\hat{T})$) and that $l(\tilde{S}) = l(\tilde{T}) - 1$. To show that the statement for the pair $S,\tilde{S}$ implies that for the pair $T,\tilde{T}$, it therefore suffices now to see that $T\circ_1F(\tilde{T}) = S\circ_1 F(\tilde{S})$. This follows easily from the fact that `$q$ capped on top can be replaced by the identity'. \noindent Subcase 4.2: Suppose that in $\tilde{T}$ some points $2i$ and $2i+1$ are joined by a string, so that $\tilde{T}$ has the form in Figure \ref{subcase4.2} \begin{figure}[!h] \begin{center} \psfrag{cdots}{\huge ${\cdots}$} \psfrag{stilde}{\huge $\tilde{S}$} \psfrag{q}{\Huge $q$} \psfrag{1}{\large $1$} \psfrag{2i-1}{\large $2i$} \psfrag{2i}{\large $2i+1$} \psfrag{2i+1}{\large $2i+2$} \psfrag{2i+2}{\large $2i+3$} \psfrag{2i-2}{\large $2i-1$} \psfrag{2k_1}{\large $2k_1$} \psfrag{=}{\Huge $= \delta \tau(q)$} \resizebox{5cm}{!}{\includegraphics{tildet.eps}} \end{center} \caption{}\label{subcase4.2} \end{figure} \noindent for some Temperley-Lieb tangle $\tilde{S}$ of colour $k_1-1$. Note that $l(\tilde{S}) = l(\tilde{T})$. Here there are two further subcases.\\ Subcase 4.2(a): The black intervals $[2i-1,2i]$ and $[2i+1,2i+2]$ are part of distinct black regions in $\hat{T}$. In this case, let $S$ be the tangle in Figure \ref{subcase4.2a} \begin{figure}[!h] \begin{center} \psfrag{cdots}{\huge ${\cdots}$} \psfrag{ttilde}{\huge $\hat{T}$} \psfrag{q}{\Huge $q$} \psfrag{1}{\large $1$} \psfrag{2i-1}{\large $2i$} \psfrag{2i}{\large $2i+1$} \psfrag{2i+1}{\large $2i+2$} \psfrag{2i+2}{\large $2i+3$} \psfrag{2i-2}{\large $2i-1$} \psfrag{2k_1}{\large $2k_1$} \psfrag{=}{\Huge $= \delta \tau(q)$} \resizebox{5cm}{!}{\includegraphics{tangles.eps}} \end{center} \caption{}\label{subcase4.2a} \end{figure} Here too $T\circ_1\tilde{T} = S\circ_1\tilde{S}$ and it follows that $l(T\circ_1\tilde{T}) = l(S\circ_1\tilde{S})$. Recall that $l(\tilde{S}) = l(\tilde{T})$.
The pictures for computing $l(T)$ and $l(S)$ are shown in Figure \ref{four}. (The picture for $l(T)$ is above the one for $l(S)$). \begin{figure}[!h] \begin{center} \psfrag{cdots}{\huge ${\cdots}$} \psfrag{ttilde}{\huge $BC(\hat{T})$} \psfrag{q}{\Huge $q$} \psfrag{1}{\large $1$} \psfrag{2i-1}{\large $2i-1$} \psfrag{2i}{\large $2i$} \psfrag{2i+1}{\large $2i+1$} \psfrag{2i+2}{\large $2i+2$} \psfrag{2i-2}{\large $2i-2$} \psfrag{2k_1}{\large $2k_1$} \psfrag{=}{\Huge $= \delta \tau(q)$} \resizebox{8cm}{!}{\includegraphics{ltandls.eps}} \end{center} \caption{}\label{four} \end{figure} Here $BC(\hat{T})$ is the tangle obtained by `black cupping' the insides of all boxes of $\hat{T}$. We need to compare the number of loops in the top and bottom pictures. Observe that the black regions of $\hat{T}$ and that of $BC(\hat{T})$ are in natural bijective correspondence and therefore the black intervals $[2i-1,2i]$ and $[2i+1,2i+2]$ are part of distinct black regions in $BC(\hat{T})$. Thus the loops containing $2i-1$ (and $2i$) and $2i+1$ (and $2i+2$) are different in the first picture while these two loops are cut and spliced into a single loop in the second picture. It follows that $l(S) = l(T) -1$. Now suppose that we know that \begin{equation*} Z_{S \circ_1 F(\tilde{S})} = \tau(q)^{\frac{1}{2}({k}_1-1 + l(S \circ_1 \tilde{S}) - l(S) - l(\tilde{S}))} Z_{S \circ_1 \tilde{S}}. \end{equation*} It follows that \begin{equation*} Z_{S \circ_1 F(\tilde{S})} = \tau(q)^{\frac{1}{2}({k}_1 + l(T \circ_1 \tilde{T}) - l(T) - l(\tilde{T}))} Z_{T \circ_1 \tilde{T}}, \end{equation*} and so to complete the proof it suffices to see that $Z_{S \circ_1 F(\tilde{S})} = Z_{T \circ_1 F(\tilde{T})}$. To see this, first note that the `antipode symmetry' of $q$ implies that $T \circ_1 F(\tilde{T}) = V \circ_1 F(\hat{T})$ where $V$ is the tangle in Figure \ref{antipode}.
\begin{figure}[!h] \begin{center} \psfrag{hatt}{\huge $1$} \psfrag{k}{\huge $k_i$} \psfrag{1}{\huge $\tilde{T}$} \resizebox{3cm}{!}{\includegraphics{reduction2.eps}} \end{center} \caption{}\label{antipode} \end{figure} Thus $T \circ_1 F(\tilde{T})$ is given by the picture on the left in Figure \ref{last} which equals the one on the right using properties of $q$. \begin{figure}[!h] \begin{center} \psfrag{that}{\huge $\hat{T}$} \psfrag{k}{\huge $k_i$} \psfrag{q}{\huge $q$} \psfrag{stilde}{\huge $\tilde{S}$} \resizebox{12cm}{!}{\includegraphics{tftilde.eps}} \end{center} \caption{}\label{last} \end{figure} The two middle $q$'s in the picture on the right may be deleted using one application of Lemma \ref{follows} to $\hat{T}$ and then what is left is clearly $S \circ_1 F(\tilde{S})$. \begin{lemma}\label{follows} Let $T$ be a $k$-tangle and let $[k] = \{1,2,\cdots,k\}$ be regarded as the set of black external boundary arcs of $T$, enumerated, say, in clockwise direction starting from the one immediately next (counterclockwise) to the $*$ arc. Let $A \subseteq [k]$ be such that any black region of $T$ intersects at most one element of $A$. Surround $T$ with $q$'s in all positions except those given by $A$ and call this partially labelled tangle $F_A(T)$. Then $Z_{F_A(T)} = Z_{F(T)}$ on $P^\prime$. \end{lemma} \begin{proof} Consider the external boundary of any black region of $T$ that intersects an external boundary arc. Say it looks like something in Figure \ref{lemma2fig2}. \begin{figure}[!h] \begin{center} \psfrag{&}{\Huge $\&$} \resizebox{5cm}{!}{\includegraphics{lemma2fig2.eps}} \end{center} \caption{}\label{lemma2fig2} \end{figure} Here the dark portions represent boundary arcs of discs of $T$ while the light portions represent strings. Say the portions marked $\&$ are boundary arcs of the external disc of $T$ while the rest are boundary arcs of various internal discs of $T$. By assumption, at most one of the portions marked $\&$ is in $A$.
Now in calculating $F(T)$, this black region looks as in Figure \ref{lemma2fig3}, where every 2-box has a $q$ in it. \begin{figure}[!h] \begin{center} \psfrag{&}{\Huge $\&$} \resizebox{5cm}{!}{\includegraphics{lemma2fig3.eps}} \end{center} \caption{}\label{lemma2fig3} \end{figure} The external portions have a $q$ by definition of $F(T)$, while the internal portions have a $q$ because we're only interested in the values of the tangle when inputs come from $P^\prime$. Now observe that in calculating $F_A(T)$, at most one of the $q$'s is missing, which does not matter because of the exchange relation that $q$ satisfies (see Corollary \ref{cor:exchange}). \end{proof} \noindent Subcase 4.2(b): The black intervals $[2i-1,2i]$ and $[2i+1,2i+2]$ are part of the same black region in $\hat{T}$. Draw a dotted line from the midpoint of the interval $[2i-1,2i]$ to the midpoint of the interval $[2i+1,2i+2]$ in $\hat{T}$ that lies entirely in the black region that these are both part of. This line does not intersect any string of $\hat{T}$ (by definition of a region), and so the part of $\hat{T}$ that lies inside this dotted line is a 1-box that joins the points $2i$ and $2i+1$. By irreducibility we may replace this 1-box by a scalar times a string and thus assume that in $\hat{T}$ too, the points $2i$ and $2i+1$ are joined together. Thus $\hat{T}$ is of the form in Figure \ref{subcase4.2b} \begin{figure}[!h] \begin{center} \psfrag{cdots}{\huge ${\cdots}$} \psfrag{stilde}{\huge $W$} \psfrag{q}{\Huge $q$} \psfrag{1}{\large $1$} \psfrag{2i-1}{\large $2i$} \psfrag{2i}{\large $2i+1$} \psfrag{2i+1}{\large $2i+2$} \psfrag{2i+2}{\large $2i+3$} \psfrag{2i-2}{\large $2i-1$} \psfrag{2k_1}{\large $2k_1$} \psfrag{=}{\Huge $= \delta \tau(q)$} \resizebox{5cm}{!}{\includegraphics{whatt.eps}} \end{center} \caption{}\label{subcase4.2b} \end{figure} \noindent for some tangle $W$ of colour $k_1-1$. Set $S$ to be the tangle in Figure \ref{stangle}.
\begin{figure}[!h] \begin{center} \psfrag{hatt}{\huge $W$} \psfrag{k}{\huge $k_i-1$} \psfrag{1}{\huge $1$} \resizebox{3cm}{!}{\includegraphics{reduction2.eps}} \end{center} \caption{}\label{stangle} \end{figure} Again we claim that the truth of the statement for $S$ and $\tilde{S}$ implies that of the statement for $T$ and $\tilde{T}$. So suppose that \begin{equation*} Z_{S \circ_1 F(\tilde{S})} = \tau(q)^{\frac{1}{2}({k}_1-1 + l(S \circ_1 \tilde{S}) - l(S) - l(\tilde{S}))} Z_{S \circ_1 \tilde{S}}. \end{equation*} Note that $T\circ_1 \tilde{T}$ equals $S \circ_1 \tilde{S}$ together with one extra floating loop, and therefore $Z_{T \circ_1 \tilde{T}} = \delta Z_{S \circ_1 \tilde{S}}$ and $l({T \circ_1 \tilde{T}}) = l({S \circ_1 \tilde{S}}) +1$. Also $l(T) = l(\hat{T}) = l(W) = l(S)$ and we recall that $l(\tilde{S}) = l(\tilde{T})$. It remains to compare $S \circ_1 F(\tilde{S})$ and $T \circ_1 F(\tilde{T})$. Observe that $T \circ_1 F(\tilde{T})$ equals the picture on the left in Figure \ref{finish}, which equals $\delta \tau(q)$ times the picture on the right, using properties of $q$; the latter is clearly $S \circ_1 F(\tilde{S})$. \begin{figure}[!h] \begin{center} \psfrag{that}{\huge $W$} \psfrag{cdots}{\huge $\cdots$} \psfrag{k}{\huge $k_i$} \psfrag{q}{\huge $q$} \psfrag{stilde}{\huge $\tilde{S}$} \resizebox{12cm}{!}{\includegraphics{tftilde2.eps}} \end{center} \caption{}\label{finish} \end{figure} Therefore $Z_{T \circ_1 F(\tilde{T})} = \delta \tau(q) Z_{S \circ_1 F(\tilde{S})}$. It now follows that \begin{equation*} Z_{T \circ_1 F(\tilde{T})} = \tau(q)^{\frac{1}{2}({k}_1 + l(T \circ_1 \tilde{T}) - l(T) - l(\tilde{T}))} Z_{T \circ_1 \tilde{T}}. \end{equation*} This completes the proof of Theorem \ref{main}. \bigskip We proceed to verify that our prescription for the tangle action does indeed satisfy the various compatibility requirements that must hold in order to define a planar algebra. \bigskip \noindent \textbf{(1) Compatibility with renumbering} Let $\sigma\in {\varSigma}_b$.
Consider the tangle $\sigma(T)$ which as a subset of ${\mathbb{R}}^2$ is the same as $T$ except that its $\sigma(i)$-th disc is the $i$-th disc of $T$. We have to show that the following diagram commutes: \begin{displaymath} \xymatrix{ P^{\prime}_{k_1} \otimes P^{\prime}_{k_2} \otimes \cdots \otimes P^{\prime}_{k_b} \ar[r]^-{U_\sigma} \ar[d]^{Z^{\prime}_T} & P^{\prime}_{k_{\sigma^{-1}(1)}} \otimes P^{\prime}_{k_{\sigma^{-1}(2)}} \otimes \cdots \otimes P^{\prime}_{k_{\sigma^{-1}(b)}} \ar[ld]^{Z^{\prime}_{\sigma(T)}} & \\ P^{\prime}_{k_0} } \end{displaymath} where $$ U_{\sigma}(x_1\otimes \cdots \otimes x_b)= x_{{\sigma}^{-1}(1)} \otimes \cdots \otimes x_{{\sigma}^{-1}(b)} $$ for $x_i \in P^\prime_{k_i} \subseteq P_{k_i}$. Now, \begin{align*} & Z^\prime_{\sigma(T)}\circ U_{\sigma}(x_1\otimes \cdots \otimes x_b)\\ & \qquad = Z^\prime_{\sigma(T)}(x_{{\sigma}^{-1}(1)} \otimes \cdots \otimes x_{{\sigma}^{-1}(b)})\\ & \qquad = \alpha(\sigma(T)) F_{k_0}(Z_{\sigma(T)}(x_{{\sigma}^{-1}(1)} \otimes \cdots \otimes x_{{\sigma}^{-1}(b)}))\\ & \qquad = \alpha(T) F_{k_0}(Z_{\sigma(T)} \circ U_{\sigma}(x_1 \otimes \cdots \otimes x_b)) \qquad \textrm{[since $\alpha(\sigma(T))=\alpha(T)$]}\\ & \qquad = \alpha(T) F_{k_0}(Z_T(x_1 \otimes \cdots \otimes x_b)) \qquad \textrm{[renumbering axiom for $Z$]}\\ & \qquad = Z^{\prime}_T(x_1 \otimes \cdots \otimes x_b) \qquad \textrm{[by definition]} \end{align*} \noindent \textbf{(2) Non-degeneracy} We have to show that $Z^{\prime}_{I^k_k} = id_{P^{\prime}_k}$. Now for $x \in P^{\prime}_k$, \begin{align*} & Z^{\prime}_{I^k_k} (x)\\ & \qquad = \alpha(I^k_k)F_k(Z_{I^k_k}(x)) \qquad \textrm{[by definition]}\\ & \qquad = \alpha(I^k_k) F_k(x) \qquad \textrm{[non-degeneracy of $Z$]}\\ & \qquad = F_k(x) \qquad \textrm{[since $\alpha(I^k_k)= 1$]}\\ & \qquad = x \qquad \textrm{[since $x \in P^{\prime}_k$]} \end{align*} \noindent \textbf{(3) Compatibility with respect to substitution} Let $T= T^{k_0}_{k_1,\cdots,k_b}$ and $ \widetilde{T}= T^{{\widetilde{k}}_0}_{\widetilde{k_1},\cdots,
{\widetilde{k}}_{\tilde{b}}}$ with $\tilde{k}_0= k_i$ for some $i\in \{1,\cdots,b\}$. We need to check that the following diagram commutes:\newline When $\tilde{b} > 0:$ \begin{displaymath} \xymatrix{ (\otimes_{j=1}^{i-1}P^{\prime}_{k_j}) \otimes (\otimes_{j=1}^{\widetilde{b}} P^{\prime}_{\widetilde{k}_j}) \otimes (\otimes_{j=i+1}^{b}P^{\prime}_{k_j}) \ar[d]_-{(\otimes_{j=1}^{i-1}id_{P^{\prime}_{k_j}}) \otimes Z^{\prime}_{\widetilde{T}} \otimes (\otimes_{j=i+1}^{b}id_{P^{\prime}_{k_j}})} \ar[dr]^-{Z^{\prime}_{T \circ_i {\widetilde{T}}}} &\\ (\otimes_{j=1}^{b}P^{\prime}_{k_j}) \ar[r]^{Z^{\prime}_T} & P^{\prime}_{k_0} } \end{displaymath} Let $({\otimes}^{i-1}_{j=1} x_j) \otimes ({\otimes}^{\widetilde{b}}_{j=1} \widetilde{x_j}) \otimes ({\otimes}^b_{j=i+1} x_j) \in ({\otimes}^{i-1}_{j=1} P^{\prime}_{k_j}) \otimes ({\otimes}^{\widetilde{b}}_{j=1} P^{\prime}_{\widetilde{k_j}}) \otimes ({\otimes}^b_{j=i+1} P^{\prime}_{k_j})$. Then \begin{align*} & Z^{\prime}_T \circ (id \otimes Z^{\prime}_{\widetilde{T}} \otimes id ) [({\otimes}^{i-1}_{j=1} x_j) \otimes ({\otimes}^{\widetilde{b}}_{j=1} \widetilde{x_j}) \otimes ({\otimes}^b_{j=i+1} x_j)]\\ & \qquad = Z^{\prime}_T[({\otimes}^{i-1}_{j=1} x_j) \otimes Z^{\prime}_{\widetilde{T}}({\otimes}^{\widetilde{b}}_{j=1} \widetilde {x_j}) \otimes ({\otimes}^b_{j=i+1} x_j)]\\ & \qquad = Z^{\prime}_T[({\otimes}^{i-1}_{j=1} x_j) \otimes \alpha(\widetilde{T}) Z_{E \circ \widetilde{T}}(({\otimes}^{\widetilde{k}_0} q) \otimes ({\otimes}^{\widetilde{b}}_{j=1} \widetilde{x_j})) \otimes ({\otimes}^b_{j=i+1} x_j)] \qquad \textrm{(by definition of $Z^{\prime}_{\widetilde{T}}$)}\\ & \qquad = \alpha(T)\alpha(\widetilde{T}) Z_{E \circ T}[({\otimes}^{k_0} q) \otimes ({\otimes}^{i-1}_{j=1} x_j) \otimes Z_{E \circ \widetilde{T}} (({\otimes}^{\widetilde{k}_0} q) \otimes ({\otimes}^{\widetilde{b}}_{j=1} {\widetilde{x}}_j)) \otimes ({\otimes}^b_{j=i+1} x_j)] \qquad \textrm{(by definition of $Z^{\prime}_T$)}\\ & \qquad = \alpha(T)\alpha(\widetilde{T}) Z_{E\circ(T\circ_i(E\circ \widetilde{T}))}[({\otimes}^{k_0} q) \otimes ({\otimes}^{i-1}_{j=1} x_j) \otimes ({\otimes}^{\widetilde{k}_0} q) \otimes ({\otimes}^{\widetilde{b}}_{j=1} {\widetilde{x}}_j) \otimes ({\otimes}^b_{j=i+1} x_j)] \qquad \text{(since $Z$ is associative)}\\ & \qquad = \alpha(T) \alpha(\widetilde{T}) \frac{\alpha(T\circ_i \widetilde{T})}{\alpha(T) \alpha(\widetilde{T})} Z_{E\circ(T\circ_i{\widetilde{T}})}[({\otimes}^{k_0} q) \otimes ({\otimes}^{i-1}_{j=1} x_j) \otimes ({\otimes}^{\widetilde{b}}_{j=1} {\widetilde{x}_j}) \otimes ({\otimes}^b_{j=i+1} x_j)] \qquad \text{(by Theorem \ref{main})}\\ & \qquad = Z^{\prime}_{T\circ_i{\widetilde{T}}}[({\otimes}^{i-1}_{j=1} x_j) \otimes ({\otimes}^{\widetilde{b}}_{j=1} {\widetilde{x}_j}) \otimes ({\otimes}^b_{j=i+1} x_j)] \qquad \textrm{(by definition)}. \end{align*} When $\tilde{b} = 0:$ We need to check that the following diagram commutes:\newline \begin{displaymath} \xymatrix{ (\otimes_{j=1}^{i-1}P^{\prime}_{k_j}) \otimes \mathbb{C} \otimes (\otimes_{j=i+1}^{b}P^{\prime}_{k_j})\ar[r]^-\cong \ar[d]_-{(\otimes_{j=1}^{i-1}id_{P^{\prime}_{k_j}}) \otimes Z^{\prime}_{\widetilde{T}} \otimes (\otimes_{j=i+1}^{b}id_{P^{\prime}_{k_j}})} & \otimes_{\substack{j=1, \\ j\neq i}}^{b}P^{\prime}_{k_j} \ar[d]_-{Z^{\prime}_{T \circ_i {\widetilde{T}}}} & \\ \otimes_{j=1}^{b}P^{\prime}_{k_j} \ar[r]^{Z^{\prime}_T} & P^{\prime}_{k_0} } \end{displaymath} The proof is as above.\newline Thus $T \mapsto Z^{\prime}_T$ is compatible with substitution.
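It may help to record how Proposition \ref{alphas} and Theorem \ref{main} mesh in the step labelled `(by Theorem \ref{main})' above. Write $X = \tilde{k}_0 - l(T) - l(\tilde{T}) + l(T\circ_i\tilde{T})$, and note that for the biprojection arising from an intermediate subfactor one has $\tau(q) = [M:Q]^{-1}$ (the Markov trace of the Jones projection; we make this normalisation explicit here, since it is what makes the two powers cancel):

```latex
% Proposition \ref{alphas}: \alpha(T)\alpha(\tilde{T})
%                           = [M:Q]^{X/2}\,\alpha(T\circ_i\tilde{T}),
% Theorem \ref{main}:       Z_{F(T)\circ_i F(\tilde{T})}
%                           = \tau(q)^{X/2}\, Z_{F(T\circ_i\tilde{T})}.
% With \tau(q) = [M:Q]^{-1}, the scalar factors cancel:
\begin{align*}
\alpha(T)\,\alpha(\tilde{T})\, Z_{F(T)\circ_i F(\tilde{T})}
 &= [M:Q]^{X/2}\,\alpha(T\circ_i\tilde{T})\,[M:Q]^{-X/2}\,
    Z_{F(T\circ_i\tilde{T})}\\
 &= \alpha(T\circ_i\tilde{T})\, Z_{F(T\circ_i\tilde{T})},
\end{align*}
% which is precisely the compatibility of Z' with substitution.
```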
In conclusion, the collection $P^{\prime}=\{P^{\prime}_k: k\in Col\}$ of vector spaces, equipped with the assignment $T\mapsto Z^{\prime}_T$ of multilinear maps, is a planar algebra.\\ \begin{proof}{\em (of part (b), in the notation of the paragraph following the statement of Theorem \ref{main}.)}\\ That $P^{\prime}_k=(P^{(N \subseteq Q)})_k$, and the consistency under inclusions on the two sides, follows from the definitions. That $(P^{\prime},Z^{\prime})$ has modulus $\sqrt{[Q:N]}$ follows from the definition of $\alpha$. We need, further, to show the following: \begin{fact} (i) $Z^{\prime}_{{\mathcal{E}}^{2k+1}}(1)= \sqrt{[Q:N]} e^Q_{2k}$ and (ii) $Z^{\prime}_{{\mathcal{E}}^{2k}} (1)= \sqrt{[Q:N]} e^Q_{2k-1}$. \end{fact} See Figure \ref{fig:jp1}; the left one is for case (ii) and the right one for case (i).\\ \begin{figure}[h] \includegraphics[scale=.7]{jp.eps} \caption{Jones Projection} \label{fig:jp1} \end{figure} \underline{Justification of (i)}: $\alpha({\mathcal{E}}^{2k+1})= \sqrt{[M:Q]}$. \\By definition, \begin{align*} & Z^{\prime}_{{\mathcal{E}}^{2k+1}}(1)\\ & \qquad = \sqrt{[M:Q]} F_{2k+1}(Z^{N\subseteq M}_{{\mathcal{E}}^{2k+1}}(1))\\ & \qquad = \sqrt{[M:Q]} \hspace{5mm} {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.6]{jopo1.eps} \end{minipage}}\\ & \qquad = \sqrt{[Q:N]} e^Q_{2k} \qquad \textrm{[by Theorem \ref{planarstinv}]} \end{align*} This justifies fact (i).\\ \underline{Justification of (ii)}: $\alpha({\mathcal{E}}^{2k})= [M:Q]^{-\frac{1}{2}}$. \\By definition, \begin{align*} & Z^{\prime}_{{\mathcal{E}}^{2k}}(1)\\ & \qquad = [M:Q]^{-\frac{1}{2}} F_{2k}(Z^{N\subseteq M}_{{\mathcal{E}}^{2k}}(1))\\ & \qquad =[M:Q]^{-\frac{1}{2}} {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.6]{jopo2.eps} \end{minipage}}\\ & \qquad = \sqrt{[Q:N]} e^Q_{2k-1} \qquad \textrm{[by Theorem \ref{planarstinv}]} \end{align*} This justifies fact (ii).
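As a consistency check on the two values of $\alpha$ invoked above: the Jones projection tangles $\mathcal{E}^n$ have no internal discs of positive colour (a disc of colour $0$ contributes $\lfloor 0/2 \rfloor = 0$ in any case), and the loop counts below are read off from Figure \ref{fig:jp1} (we record them here as assumptions, since they depend on the pictures):

```latex
% c(\mathcal{E}^n) = \lceil n/2 \rceil - l(\mathcal{E}^n), and from
% Figure \ref{fig:jp1} (assumed): l(\mathcal{E}^{2k+1}) = k,
% l(\mathcal{E}^{2k}) = k+1.  Hence
\begin{align*}
\alpha(\mathcal{E}^{2k+1}) &= [M:Q]^{\frac{1}{2}\left((k+1)-k\right)}
                            = \sqrt{[M:Q]},\\
\alpha(\mathcal{E}^{2k})   &= [M:Q]^{\frac{1}{2}\left(k-(k+1)\right)}
                            = [M:Q]^{-\frac{1}{2}},
\end{align*}
% matching the values used in the justifications of facts (i) and (ii).
```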
\begin{fact} $Z^{\prime}_{{(E^{\prime})}^n_n} (x) = \sqrt{[Q:N]} E_{Q^{\prime}\cap {Q_{n-1}}} (x)$ for all $x \in N^{\prime}\cap Q_{n-1}$ and $n\geq 1$, where the trace corresponding to $E^{N^{\prime}\cap Q_{n-1}}_{Q^{\prime}\cap Q_{n-1}}$ is $tr_{N\subseteq Q}$. \end{fact} \underline{Justification}: The tangles ${(E^{\prime})}^n_n$ are as in Figure \ref{LCE}, according as $n$ is odd or even respectively. \begin{figure}[h] \includegraphics[scale=.5]{lce} \caption{Left Conditional Expectation} \label{LCE} \end{figure}\\ Consider the case when $n$ is odd. Now, for all $y \in M^{\prime}\cap M_{n-1}$,\\ $tr([M:Q]F(E^{N^{\prime}\cap M_{n-1}}_{M^{\prime}\cap M_{n-1}} (x)) F(y))$ is equal to \begin{align*} & {\delta}^{-n}\frac{[M:Q]}{\sqrt{[M:N]}} \hspace{10mm} {\begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.35]{jt1.eps} \end{minipage}}\\\\ & \qquad = {\delta}^{-n}\frac{[M:Q]}{\sqrt{[M:N]}} {\begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.35]{jt2.eps} \end{minipage}}\\\\ &\qquad\qquad\qquad\qquad\qquad\qquad \textrm{[by Theorem \ref{Bisch} (a)]}\\\\ & \qquad = {\delta}^{-n}\frac{[M:Q]}{\sqrt{[M:N]}} {\begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.35]{jt3.eps} \end{minipage}}\\\\ & \qquad\qquad\qquad\qquad\qquad\qquad \textrm{[since $x \in F_n(P_n)$, and by the exchange relation]}\\\\ & \qquad = {\delta}^{-n}\frac{[M:Q]}{\sqrt{[M:N]}} {\begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.35]{jt4.eps} \end{minipage}}\\\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \textrm{[by extremality]}\\\\ &\qquad \qquad = {\delta}^{-n} {\begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.35]{jt5.eps} \end{minipage}}\\\\ &\qquad\qquad\qquad\qquad\qquad\qquad \textrm{[by Theorem \ref{Bisch} (c) and extremality]}\\\\\\\\ &\qquad \qquad = {\delta}^{-n}{\begin{minipage}{.5\textwidth} \centering \includegraphics[scale=.35]{jt6.eps}
\end{minipage}}\\\\ & \qquad\qquad\qquad\qquad\qquad\qquad \textrm{[as $x \in F_n(P_n)$]}\\\\ & \qquad \qquad= tr(xF(y)). \end{align*} Thus it follows that \begin{equation}\label{eqce} E^{F(N^{\prime}\cap M_{n-1})}_{F(M^{\prime}\cap M_{n-1})}(x)= [M:Q] F(E^{N^{\prime} \cap M_{n-1}}_{M^{\prime}\cap M_{n-1}}(x)). \end{equation} Now, \begin{align*} & Z^{\prime}_{{(E^{\prime})}^n_n} (x) \\ & \qquad = \alpha ({(E^{\prime})}^n_n) F(Z_{{(E^{\prime})}^n_n}(x)) \qquad \textrm{[by definition]}\\ & \qquad = [M:Q]^{\frac{1}{2}} [M:N]^{\frac{1}{2}} F(E^{N^{\prime}\cap M_{n-1}}_{M^{\prime}\cap M_{n-1}}(x)) \qquad \textrm{[by the definition of $\alpha$]}\\ & \qquad = [Q:N]^{\frac{1}{2}} E^{F({N^{\prime}\cap M_{n-1}})}_{F(M^{\prime}\cap M_{n-1})}(x) \qquad \textrm{[by Equation (\ref{eqce})]} \end{align*} This completes the proof for the odd case. The even case is exactly similar, so we omit it. \begin{fact} $Z^{\prime}_{E^n_{n+1}} (x)= \sqrt{[Q:N]} E_{N^{\prime} \cap Q_{n-1}}(x)$ for all $x \in N^{\prime}\cap Q_n$, and this is required to hold for all $n$ in $Col$, where for $n=0_{+}$, the equation is interpreted as $Z^{\prime}_{E^{0_{+}}_1}(x)= \sqrt{[Q:N]}tr_{N\subseteq Q}(x)$ for all $x \in N^{\prime}\cap Q$. Here again the trace corresponding to the conditional expectation is given by $tr_{N \subseteq Q}$. \end{fact} \begin{figure}[h] \includegraphics[scale=.5]{sa.eps} \caption{Conditional Expectation} \label{fig:co} \end{figure} \underline{Justification}: Consider the conditional expectation tangle as in Figure \ref{fig:co}. For Case I we give a diagrammatic proof and for Case II we give an analytic proof.
\bigskip\noindent {\bf Case I} $(E^{2n}_{2n+1})$: Now, for all $y \in F(N^{\prime}\cap M_{2n-1})$, $tr(F(E^{N^{\prime}\cap M_{2n}}_{N^{\prime}\cap M_{2n-1}} (x)) F(y))$ is equal to \begin{align*} & \qquad {\delta}^{-2n} {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.45]{jtr1.eps} \end{minipage}}\\\\\\\\ & \qquad = {\delta}^{-2n}{\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.45]{jtr2.eps} \end{minipage}}\\\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad \textrm{[by Theorem \ref{Bisch} (a)]}\\\\\\ &\qquad = {\delta}^{-2n} {\begin{minipage}{.4\textwidth} \centering \includegraphics[scale=.45]{jtr3.eps} \end{minipage}}\\\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad \textrm{[since $y \in P_{2n} \subseteq P_{2n-1}$]}\\\\ & \qquad = tr(xF(y)) \end{align*} Thus, \begin{equation}\label{eqce2} E^{F(N^{\prime}\cap M_{2n})}_{F(N^{\prime}\cap M_{2n-1})}(x)= F(E^{N^{\prime}\cap M_{2n}}_{N^{\prime}\cap M_{2n-1}}(x)). \end{equation} \vspace{1cm} Then the following equations hold: \begin{align*} & Z^{\prime}_{E^{2n}_{2n+1}}(x)\\ & \qquad = \alpha(E^{2n}_{2n+1}) F(Z_{E^{2n}_{2n+1}}(x)) \qquad \textrm{[by definition]}\\ & \qquad = [M:Q]^{-\frac{1}{2}} F(Z_{E^{2n}_{2n+1}}(x)) \qquad \textrm{[by the definition of $\alpha$]}\\ & \qquad= [M:Q]^{-\frac{1}{2}} [M:N]^{\frac{1}{2}} F(E^{N^{\prime}\cap M_{2n}}_{N^{\prime}\cap M_{2n-1}}(x)) \\ & \qquad = [Q:N]^{\frac{1}{2}} E^{F(N^{\prime}\cap M_{2n})}_{F(N^{\prime}\cap M_{2n-1})}(x) \qquad \textrm{[by Equation (\ref{eqce2})]} \end{align*} This completes the proof in Case I.\vspace{2mm} \\ \noindent {\bf Case II} $(E^{2n-1}_{2n})$: By definition, $\alpha(E^{2n-1}_{2n})=\sqrt{[M:Q]}$.
Then, for $x \in P^{\prime}_{2n}= F_{2n}(P_{2n})$, \begin{equation}\label{star} Z^{\prime}_{E^{2n-1}_{2n}}(x)= \sqrt{[M:Q]} F(Z^{N\subseteq M}_{E^{2n-1}_{2n}}(x))= \sqrt{[M:Q]}\sqrt{[M:N]} F(E^{N^{\prime}\cap M_{2n-1}}_{N^{\prime}\cap M_{2n-2}}(x)) \end{equation} We then claim that \begin{equation*} E^{F(N^{\prime}\cap M_{2n-1})}_{F(N^{\prime}\cap M_{2n-2})}(x) = [M:Q] F(E^{N^{\prime}\cap M_{2n-1}}_{N^{\prime}\cap M_{2n-2}}(x)) \end{equation*} \underline{Justification}: First observe (see \cite{BhaLa}) that \begin{equation*} F_{2n}(P_{2n})= p_{[0,{2n-1}]} (N^{\prime} \cap M_{2n-1}) p_{[0,{2n-1}]}. \end{equation*} Put $x= p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]}$ for $m_{2n-1} \in N^{\prime}\cap M_{2n-1}$. Then the following self-explanatory array of equations holds for any $m_{2n-2} \in N^{\prime}\cap M_{2n-2}$ (using Fact \ref{f:erel} repeatedly): \begin{align*} & tr_{N\subseteq Q}([M:Q] p_{[0,{2n-1}]} E^{N^{\prime}\cap M_{2n-1}}_{N^{\prime}\cap M_{2n-2}}(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]}) p_{[0,{2n-1}]} p_{[0,{2n-1}]} m_{2n-2} p_{[0,{2n-1}]})\\ & \qquad = {[M:Q]}^{n+1} tr(p_{[0,{2n-1}]} E^{N^{\prime}\cap M_{2n-1}}_{N^{\prime}\cap M_{2n-2}}(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]}) p_{[0,{2n-1}]} m_{2n-2} p_{[0,{2n-1}]})\\ & \qquad = {[M:Q]}^{n+1} tr( E^{N^{\prime}\cap M_{2n-1}}_{N^{\prime}\cap M_{2n-2}}(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]})p_{[0,{2n-1}]} m_{2n-2} p_{[0,{2n-1}]})\\ & \qquad = {[M:Q]}^{n+1} tr( E^{N^{\prime}\cap M_{2n-1}}_{N^{\prime}\cap M_{2n-2}}(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]})p_{[0,{2n-3}]}E^{N^{\prime}\cap M_{2n-2}}_{N^{\prime}\cap P_{2n-2}}( m_{2n-2}) p_{[0,{2n-3}]}e_{0,{2n-1}})\\ & \qquad = {[M:Q]}^{n+1} tr( E^{N^{\prime}\cap M_{2n-1}}_{N^{\prime}\cap M_{2n-2}}(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]} E^{N^{\prime}\cap M_{2n-2}}_{N^{\prime}\cap P_{2n-2}}( m_{2n-2}) p_{[0,{2n-3}]})e_{0,{2n-1}})\\ & \qquad = {[M:Q]}^n tr(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]} E^{N^{\prime}\cap M_{2n-2}}_{N^{\prime}\cap P_{2n-2}}( m_{2n-2})
p_{[0,{2n-3}]})~~\textrm{~~~[Markov Property]}\\ & \qquad = {[M:Q]}^n tr(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]} p_{[0,{2n-1}]} m_{2n-2} p_{[0,{2n-1}]})\\ & \qquad = tr_{N\subseteq Q}(p_{[0,{2n-1}]} m_{2n-1} p_{[0,{2n-1}]} p_{[0,{2n-1}]} m_{2n-2} p_{[0,{2n-1}]})\\ \end{align*} Thus from definition of trace preserving conditional expectation we conclude that the claim is justified. Hence from Equation (\ref{star}) it follows that, \begin{equation*} Z^{\prime}_{E^{2n-1}_{2n}}(x)= \frac{\sqrt{[M:Q]} \sqrt{[M:N]}}{[M:Q]} E^{F(N^{\prime}\cap M_{2n-1})}_{F(N^{\prime}\cap M_{2n-2})}(x)= \sqrt{[Q:N]} E^{P^{\prime}_{2n}}_{P^{\prime}_{2n-1}}(x) \end{equation*} This is what we wanted to show. \end{proof} Thus the proof of Theorem \ref{main1} is complete. \section{Examples} \subsection{Dual Intermediate Planar algebra} We now consider the other intermediate subfactor $Q \subset M$. We can describe its planar algebra using Theorem~\ref{main1} and the fact that it is the dual subfactor to $M \subset Q _1$. Namely, apply Theorem~\ref{main1} to the planar algebra $(P_{1,n}(L))_n$ of $M \subset M_1$, with respect to the projection $[M:Q ]^{1/2}[Q :N]^{-1/2} \begin{minipage}{.1\textwidth} \centering \includegraphics[scale= .3]{q1.eps} \end{minipage}$. We obtain the planar algebra $(P^{(M\subset Q _1)}_n)_n$. The planar algebra of $Q \subset M$ is its dual, $(P^{(M\subset Q _1)}_{1,n})_n$. If we carry out this process, we obtain the following planar algebra:\\ \begin{definition} Denote the following tangle by $E^{\prime}_n$: \begin{center} \includegraphics[scale=.5]{dual2.eps} \hspace{1in} for $n$ odd, \end{center} \begin{center} \includegraphics[scale=.5]{dual1.eps} \hspace{1in} for $n$ even. \end{center} where ${\begin{minipage}{.1\textwidth} \centering \includegraphics[scale=.55]{r.eps} \end{minipage}}= \sqrt{\frac{[M:Q]}{[Q:N]}} ~~{\begin{minipage}{.1\textwidth} \centering \includegraphics[scale=.55]{q.eps} \end{minipage}}$ according as $n$ is odd or even respectively. 
We shall use these to define a map $T \mapsto G(T)$ from the class of $k$-tangles to the class of {\em partially labelled} $k$-tangles with $(k+1)$ internal discs, all but the last of which are 2-boxes labelled with an $r$, with the tangle $T$ inserted in the last disc of colour $k$. Thus, $G(T) = E^{\prime}_k\circ_{ (D_1,D_2,\cdots, D_k, D_{k+1})}(r,r,\cdots,r,T)$. If it is clear from the context, we write $E^{\prime}$ instead of $E^{\prime}_n$. \par Define functions $G_n : P_n \rightarrow P_n$ by $G_n(x)=Z_{E^{\prime}_n}(r\otimes r \otimes \cdots \otimes r \otimes x)$ for $x \in P_n$. We often write $G(x)$ instead of $G_n(x)$ if there is no confusion. \end{definition} \begin{definition} Let $ T $ be a $k$-tangle with $ b\geq 1 $ internal discs $D_1,\dots, D_b$ of colours $k_1,\dots, k_b$. Then define $\widetilde{\alpha}(T) = [Q:N]^{\frac{1}{2}\widetilde{c}(T)}$, where \[\widetilde{c}(T)=(\lceil k_0/2 \rceil+\lfloor k_1/2 \rfloor+\dots +\lfloor k_b/2 \rfloor)-\widetilde{l}(T) \] with $\widetilde{l}(T)$ being the number of closed loops after capping the white intervals of the external disc of $T$ and cupping the white intervals of all internal discs of $T$. \end{definition} It is straightforward to verify the following corollary: \begin{corollary} \label{main2} If $P^{\prime \prime}_k= ran(G(I^k_k))$ and $Z^{\prime \prime}_T = \widetilde{\alpha}(T) Z_{G(T)|_{\otimes P^{\prime \prime}_{k_i(T)}}}$, then $(P^{\prime \prime}, T \mapsto Z^{\prime \prime}_T|_{\otimes P^{\prime \prime}_{k_i(T)}})$ is a subfactor planar algebra which is isomorphic to $P^{(Q\subset M)}$.
\end{corollary} \bigskip \subsection{Crossed Product Example}\label{CPE} Landau described the planar algebra $P(G)$ of the group subfactor (corresponding to the fixed points of an outer action of a finite group $G$ on a $II_1$ factor), which has a presentation with generators given by $L_2= G$ and $L_k=\phi$ for $k\neq2$, the relation that a simple closed loop of either colour equals the scalar $\sqrt{\lvert G \rvert}$, and the additional six relations labelled $00,0,1,2,3,4$ as in \cite{La1}. Denote by $e$ the identity of $G$. \smallskip We have another group $\Theta$ and an action $\alpha:\Theta \rightarrow Aut(G)$ as in \cite{LaSu}. Without loss of generality we can assume $\alpha$ is $1$-$1$. Denote by $f$ the identity of $\Theta$. The map that replaces the label of each $2$-box with the label's image under $\theta\in \Theta$ defines an automorphism of $P(G)$ (we will denote this also by $\theta$). Then the set $P^{\Theta}$ of invariants for this action is a sub-planar algebra of $P(G)$, and the set of $\Theta$-invariant $k$-boxes of $P(G)$ constitutes precisely the set of $k$-boxes of $P^{\Theta}$. \smallskip \begin{notation}\label{basis} We follow the same notation as in \cite{LaSu} [Remark $3.3.1.(b)$] to denote an orthonormal basis of $P(G)_k$ (with respect to the inner product given by the natural trace): define $S(\bar{g})$ (where $\bar{g} \in G^{k-1}$) to be the labelled $k$-tangle $(k >2)$ given by the following two Figures \ref{basis1} and \ref{basis2} for $k$ odd and even respectively. \begin{figure} \includegraphics[scale=.5]{cross1} \caption{$k$ Odd} \label{basis1} \end{figure} \begin{figure} \includegraphics[scale=.5]{cross2} \caption{$k$ Even} \label{basis2} \end{figure} Also, \begin{equation*} S(g)= {\begin{minipage}{.2\textwidth} \centering \includegraphics[scale=.7]{gg} \end{minipage}} \end{equation*} \smallskip We use Latin alphabets to denote the elements of $G$, whereas we use Greek symbols for the elements of $\Theta$.
As usual we write the elements of $G\rtimes \Theta$ as ordered pairs $(g,\theta)$ with the usual multiplication $(g_1,\theta_1)(g_2,\theta_2)= (g_1\theta_1(g_2),\theta_1\theta_2).$ Also, for each integer $k\geq 1$ and $\theta \in \Theta$, we simply write $\theta(g_1,\cdots,g_k)$ to denote the map $\alpha^{(k)}_{\theta} \in Aut(G^k)$ defined by $\alpha^{(k)}_{\theta}(g_1,g_2,\cdots,g_k)= (\alpha_{\theta}(g_1),\alpha_{\theta}(g_2),\cdots,\alpha_{\theta}(g_k))$. Lastly, by ${\bar{\delta}}_n$ we denote the $n$-tuple $(\delta_1,\cdots,\delta_n)$. If $n$ is obvious from the context, we simply write $\bar{\delta}$. For convenience we denote by ${\bar{\delta}}_{[k,n]}$ (respectively, ${\bar{\delta}}_{(k,n]}$) the tuple $(\delta_k,\cdots,\delta_n)$ (respectively, $(\delta_{k+1},\cdots,\delta_n)$). \end{notation} We prove the following theorem (\cite{LaSu}): \begin{theorem}\label{subgroup} Let $G,\Theta$ be as above, let $G\rtimes \Theta$ denote the semi-direct product, and let $N= R^{G\rtimes \Theta}\subset R^{\Theta}=M$ denote the corresponding subgroup-subfactor. Then, $$P^{\Theta} \simeq P^{(N\subset M)}.$$ \end{theorem} We prove this theorem in various steps. \begin{fact}\label{s} For $k\geq 3$, using the exchange relation repeatedly and the other relations labelled $0,1,2$ as stated in \cite{La1}, we get \begin{align*} & S(g_1,g_2,\cdots,g_{k-1}) S(h_1,h_2,\cdots,h_{k-1})\\ & \qquad = {(\sqrt{ \lvert G\rvert})}^{(\lceil k/2 \rceil-1)}(\prod_{i=2}^{\lceil k/2 \rceil} \delta(h_1 g_{k+1-i}, h_i)) S(h_1g_1,h_1g_2,\cdots,h_1 g_{\lceil k/2 \rceil}, h_{\lceil k/2 \rceil +1},\cdots, h_{k-1}) \end{align*} For $k=2$, simply observe that $ S(g_1)S(g_2) = S(g_1 g_2). $ \end{fact} \bigskip \begin{fact}\label{theta} Define $\Theta S(\bar{g})= \sum_{\theta \in \Theta} S(\theta(\bar{g}))$ for $\bar{g}\in G^{k-1}.$ Then, as stated in \cite{LaSu}, $\{\Theta S(\bar{g}):[\bar{g}]\in G^{k-1}/\Theta\}$ is an orthogonal basis for $P^{\Theta}_k$.
A simple calculation shows the following: \begin{align*} & \Theta S(g_1,g_2,\cdots, g_{k-1}) \Theta S(h_1,h_2,\cdots, h_{k-1}) \\ & \qquad = {(\sqrt {\lvert G \rvert})}^{(\lceil k/2 \rceil -1)}\sum_{{\theta}^{\dprime}\in \Theta}(\prod_{i=2}^{\lceil k/2\rceil} \delta(h_1 {\theta}^{\dprime}(g_{k+1-i}),h_i))~~~\times \\ & \qquad \qquad \qquad \qquad \qquad \Theta S(h_1{\theta}^{\dprime}(g_1), h_1 {\theta}^{\dprime}(g_2), \cdots, h_1{\theta}^{\dprime}(g_{\lceil k/2\rceil}), h_{\lceil k/2 \rceil +1}, h_{\lceil k/2 \rceil +2},\cdots, h_{k-1}) \end{align*} For $k=2$ the above is interpreted as \begin{equation*} \Theta S(g) \Theta S(h) = \sum_{{\theta}^{\dprime} \in \Theta} \Theta S(g {\theta}^{\dprime}(h)). \end{equation*} \end{fact} \bigskip \begin{remark} Note that there is a slight correction in the constants in Facts \ref{s} and \ref{theta} as compared to \cite{LaSu}, Remark 3.3.1 (f) and (g) respectively. \end{remark} \begin{fact}\label{f} Let $q$ be the biprojection corresponding to the intermediate subfactor $R^{\Theta}$, so that\\ $R^{G\rtimes \Theta}\subset R^{\Theta}\subset R$. In other words, \begin{equation*} {\begin{minipage}{.1\textwidth} \centering \includegraphics[scale=.65]{q} \end{minipage}}= \frac{1}{\lvert \Theta \rvert}\sum_{\theta \in \Theta} {\begin{minipage}{.2\textwidth} \centering \includegraphics[scale=.6]{tt} \end{minipage}} \end{equation*} Then using the exchange relation we easily get the following result, as mentioned in \cite{LaSu}: \begin{align*} & F_n(S((g_1,{\theta}_1), (g_2,{\theta}_2),\cdots,(g_{n-1},{\theta}_{n-1})))\\ & \qquad = \frac{1}{{\lvert \Theta\rvert}^n}\sum_{\substack{{\theta}\in \Theta\\ \bar{\gamma}\in {\Theta}^{n-1}}} S((\theta(g_1),\gamma_1),(\theta(g_2),\gamma_2),\cdots,(\theta(g_{n-1}),\gamma_{n-1})) \end{align*} \end{fact} \bigskip \begin{remark} Observe that the formula in Fact \ref{f} depends only on the orbit of $(g_1,g_2,\cdots,g_{n-1})$ under $\Theta$.
Following \cite{LaSu} we put $$U(g_1,g_2,\cdots, g_{k-1})= \sum_{\substack{{\theta}\in \Theta\\ \bar{\gamma}\in {\Theta}^{k-1}}} S((\theta(g_1),\gamma_1),(\theta(g_2),\gamma_2),\cdots,(\theta(g_{k-1}),\gamma_{k-1})).$$ Then it is simple to verify that $\{U(\bar{g}): [\bar{g}]\in G^{k-1}/\Theta\}$ is an orthogonal basis for $F_k(P^{(R^{G\rtimes \Theta}\subset R)}).$ \end{remark} \bigskip \begin{lemma}\label{u} \begin{align*} & U(g_1,g_2,\cdots,g_{k-1}) U(h_1,h_2,\cdots,h_{k-1})\\ & \qquad = {(\sqrt {\lvert G\rvert})}^{(\lceil k/2 \rceil -1)}{(\sqrt {\lvert \Theta \rvert})}^{(\lceil k/2 \rceil -1)}(\lvert \Theta \rvert)^{\lfloor k/2 \rfloor}\sum_{{\theta}^{\dprime}\in \Theta}(\prod_{i=2}^{\lceil k/2\rceil} \delta(h_1 {\theta}^{\dprime}(g_{k+1-i}),h_i)) ~~~ \times\\ & \qquad \qquad \qquad \qquad \qquad U(h_1{\theta}^{\dprime}(g_1), h_1 {\theta}^{\dprime}(g_2), \cdots, h_1{\theta}^{\dprime}(g_{\lceil k/2\rceil}), h_{\lceil k/2 \rceil +1}, h_{\lceil k/2 \rceil +2},\cdots, h_{k-1}) \end{align*} For $k=2$ the above is interpreted as \begin{equation*} U(g) U(h)= \lvert \Theta \rvert \sum_{{\theta}^{\dprime}\in \Theta} U(g {\theta}^{\dprime}(h)).
\end{equation*} \end{lemma} \begin{proof} \begin{align*} & U(g_1,g_2,\cdots, g_{k-1}) U(h_1,h_2,\cdots, h_{k-1})\\ & \qquad = (\sum_{\substack{{\theta}\in \Theta\\ \bar{\gamma}\in {\Theta }^{k-1}}} S((\theta(g_1),\gamma_1),(\theta(g_2),\gamma_2), \cdots, (\theta(g_{k-1}),\gamma_{k-1})))~~~ \times\\ & \qquad \qquad \qquad (\sum_{\substack{{\phi}\in \Theta\\ \bar{\sigma}\in {\Theta}^{k-1}}} S((\phi(h_1),\sigma_1), (\phi(h_2),\sigma_2), \cdots, (\phi(h_{k-1}),\sigma_{k-1})))\\ & \qquad = (\sqrt{G})^{(\lceil k/2 \rceil -1)} (\sqrt{\lvert \Theta \rvert})^{(\lceil k/2 \rceil -1)} \sum_{\theta,\phi, \bar{\gamma}, \bar{\sigma}}(\prod_{i=2}^{\lceil k/2 \rceil} \delta((\phi(h_1)\sigma_1 \theta(g_{k+1-i}),\sigma_1 \gamma_{k+1-i}), (\phi(h_i),\sigma_i)))~~\times\\ & \qquad \qquad \qquad S((\phi(h_1)\sigma_1 \theta(g_1), \sigma_1 \gamma_1), (\phi(h_1)\sigma_1 \theta(g_2), \sigma_1 \gamma_2), \cdots, (\phi(h_1) \sigma_1 \theta(g_{\lceil k/2 \rceil}), \sigma_1 \gamma_{\lceil k/2 \rceil}), \\ & \qquad \qquad \qquad(\phi(h_{\lceil k/2 \rceil +1}),\sigma_{\lceil k/2 \rceil +1}),\cdots, (\phi(h_{k-1}),\sigma_{k-1})))~~~~~~~~~~~\textrm{[Using~~Fact}~~~\ref{s}]\\ & \qquad = (\sqrt{G})^{(\lceil k/2 \rceil -1)} (\sqrt{\lvert \Theta \rvert})^{(\lceil k/2 \rceil -1)} \sum_{\phi,\theta} \sum_{\substack{\sigma_1\in \Theta\\ \bar{\delta}\in {\Theta}^{k-1}\\ {\bar{\gamma}}_{(\lceil k/2 \rceil,k-1]}}}(\prod_{i=2}^{\lceil k/2 \rceil} \delta(\phi(h_1 {\phi}^{-1} \sigma_1 \theta(g_{k+1-i})),\phi(h_i))~~\times\\ & \qquad \qquad \qquad S((\phi(h_1{\phi}^{-1}\sigma_1 \theta(g_1)), \delta_1), (\phi(h_1 {\phi}^{-1}\sigma_1 \theta(g_2)), \delta_2), \cdots, (\phi(h_1 {\phi}^{-1} \sigma_1 \theta(g_{\lceil k/2 \rceil})), \delta_{\lceil k/2 \rceil}), \\ & \qquad \qquad \qquad (\phi(h_{\lceil k/2 \rceil +1}),\delta_{\lceil k/2 \rceil +1}),\cdots, (\phi(h_{k-1}),\delta_{k-1})))\\ & \qquad = (\sqrt{G})^{(\lceil k/2 \rceil -1)} (\sqrt{\lvert \Theta \rvert})^{(\lceil k/2 \rceil -1)} (\lvert \Theta 
\rvert)^{(\lfloor k/2 \rfloor-1)} \lvert \Theta \rvert \sum_{\phi,\bar{\delta}}\sum_{{\theta}^{\dprime}\in \Theta}(\prod_{i=2}^{\lceil k/2 \rceil} \delta(h_1 \theta^{\dprime}(g_{k+1-i}), h_i))~~\times\\ & \qquad \qquad \qquad S((\phi(h_1\theta^{\dprime}(g_1)), \delta_1), (\phi(h_1\theta^{\dprime}(g_2)), \delta_2), \cdots, (\phi(h_1 \theta^{\dprime}(g_{\lceil k/2 \rceil})), \delta_{\lceil k/2 \rceil}), \\ & \qquad \qquad \qquad (\phi(h_{\lceil k/2 \rceil +1}),\delta_{\lceil k/2 \rceil +1}),\cdots, (\phi(h_{k-1}),\delta_{k-1})))~~~~~~~~~\textrm{[Putting}~~{\phi}^{-1}\sigma_1 \theta = {\theta}^{\dprime}] \end{align*} This completes the proof. \end{proof} \begin{definition}\label{defi} Define linear maps ${\Phi}_k : P^{\Theta}_k\longrightarrow F_k(P_k(G\rtimes \Theta))$ by \par $${\Phi}_k(\Theta (S(\bar{g})))= {\lvert \Theta\rvert}^{-\lfloor k/2 \rfloor}(\sqrt{\lvert \Theta\rvert})^{(1-\lceil k/2\rceil)} U(\bar{g}),$$ which also equals $${(\alpha(S))}^{1-k-\lfloor k/2 \rfloor} U(\bar{g}).$$ Here $S$ is the tangle as in Notation \ref{basis}, but unlabelled, and $[\bar{g}]\in G^{k-1}/\Theta.$ \end{definition} To prove Theorem \ref{subgroup} we need to check that the following equation holds: \begin{equation} \label{sanat} {\Phi}_{k_0}(Z_T(x_1\otimes\cdots \otimes x_b))=Z^{\prime}_T({\Phi}_{k_1}(x_1)\otimes \cdots \otimes {\Phi}_{k_b}(x_b)) \end{equation} for any tangle $T(= T^{k_0}_{k_1,\cdots,k_b}).$ In view of \cite[Theorem 3.3]{KodSun}, it suffices to prove \begin{theorem}\label{generating tangles} The collection $\mathscr{T}$ of those tangles $T$ which satisfy Equation (\ref{sanat}) contains a class of `generating tangles', namely $\mathscr{T} \supset \{1^{0_{+}},1^{0_{-}}\}\cup\{{\mathcal{E}}^k:k\geq 2\}\cup \{{(E^{\prime})}^k_k: k\geq 1\}\cup \{E^k_{k+1},M_k, I^{k+1}_k : k\in Col\}. $ \end{theorem} We prove in detail that $\mathscr{T}$ contains the multiplication tangles and the right conditional expectation tangles. In the other cases we just sketch the proofs.
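As an aside, the $k=2$ specialization of the product formula in Lemma \ref{u}, namely $U(g)U(h)=\lvert\Theta\rvert\sum_{\theta^{\dprime}\in\Theta}U(g\,\theta^{\dprime}(h))$, is finite and can be tested numerically. The sketch below (our illustration, not part of the proof) models $S$-basis elements as formal sums over $G\rtimes\Theta$ and checks the identity for the hypothetical choice $G=\mathbb{Z}_3$, $\Theta=\mathbb{Z}_2$ acting by inversion, so that $G\rtimes\Theta\cong S_3$; this concrete group is our assumption, chosen only for the test.

```python
from collections import Counter
from itertools import product

# Illustrative example (our assumption): G = Z_3, Theta = Z_2 acting by inversion.
G = range(3)
Theta = range(2)

def alpha(t, g):
    # action of Theta on G: identity for t = 0, inversion for t = 1
    return g % 3 if t == 0 else (-g) % 3

def mult(x, y):
    # semidirect product law (g1, t1)(g2, t2) = (g1 * alpha_{t1}(g2), t1 t2)
    (g1, t1), (g2, t2) = x, y
    return ((g1 + alpha(t1, g2)) % 3, (t1 + t2) % 2)

def U(g):
    # U(g) = sum over theta, gamma of S((alpha_theta(g), gamma)), as a formal sum
    return Counter((alpha(t, g), c) for t, c in product(Theta, Theta))

def convolve(u, v):
    # formal product of sums of S-basis elements, using S(a)S(b) = S(ab) (k = 2)
    out = Counter()
    for x, cx in u.items():
        for y, cy in v.items():
            out[mult(x, y)] += cx * cy
    return out

# check U(g) U(h) = |Theta| * sum over theta'' of U(g * theta''(h))
for g, h in product(G, G):
    lhs = convolve(U(g), U(h))
    rhs = Counter()
    for t in Theta:
        for elt, c in U((g + alpha(t, h)) % 3).items():
            rhs[elt] += len(Theta) * c
    assert lhs == rhs
print("k = 2 product formula verified for Z_3 semidirect Z_2")
```

The same loop can of course be run over any small group and action; the check is purely combinatorial and independent of the planar calculus.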
\begin{lemma} $M_k \in \mathscr{T}$. \end{lemma} \begin{proof} Firstly note, \begin{align*} & {\Phi}_k(Z_{M_k}(\Theta S(g_1,g_2,\cdots,g_{k-1})\otimes \Theta S(h_1,h_2,\cdots,h_{k-1})))\\ & \qquad \qquad \qquad= {\Phi}_k(\Theta S(g_1,g_2,\cdots,g_{k-1}) \Theta S(h_1,h_2,\cdots,h_{k-1}))\\ & \qquad \qquad \qquad= {(\sqrt {\lvert G\rvert})}^{(\lceil k/2 \rceil -1)}\sum_{{\theta}^{\dprime}\in \Theta}(\prod_{i=2}^{\lceil k/2\rceil} \delta(h_1 {\theta}^{\dprime}(g_{k+1-i}),h_i))\times \\ & \qquad \qquad \qquad \qquad \qquad {\Phi}_k(\Theta S(h_1{\theta}^{\dprime}(g_1), h_1 {\theta}^{\dprime}(g_2), \cdots, h_1{\theta}^{\dprime}(g_{\lceil k/2\rceil}), h_{\lceil k/2 \rceil +1}, h_{\lceil k/2 \rceil +2},\cdots, h_{k-1}))\\ &\qquad \qquad \qquad \qquad \qquad \qquad\qquad\qquad \qquad \qquad \textrm{[Using~~Fact}~~\ref{theta}]\\ & \qquad \qquad \qquad= (\sqrt{\lvert \Theta\rvert})^{1-\lceil k/2 \rceil}(\lvert \Theta\rvert)^{-\lfloor k/2\rfloor}{(\sqrt {\lvert G\rvert})}^{(\lceil k/2 \rceil -1)}\sum_{{\theta}^{\dprime}}(\prod_{i=2}^{\lceil k/2\rceil} \delta(h_1 {\theta}^{\dprime}(g_{k+1-i}),h_i))\times \\ & \qquad \qquad \qquad \qquad \qquad U(h_1{\theta}^{\dprime}(g_1), h_1 {\theta}^{\dprime}(g_2), \cdots, h_1{\theta}^{\dprime}(g_{\lceil k/2\rceil}), h_{\lceil k/2 \rceil +1}, h_{\lceil k/2 \rceil +2},\cdots, h_{k-1})\\ &\qquad \qquad \qquad \qquad \qquad \qquad\qquad\qquad \qquad \qquad \textrm{[Definition}~~\ref{defi}] \end{align*} On the other hand, since $\alpha(M_k) =1$ the following equations hold: \begin{align*} & Z^{\prime}_{M_k}({\Phi}_k(\Theta S(g_1,g_2,\cdots,g_{k-1}))\otimes {\Phi}_k(\Theta S(h_1,h_2,\cdots, h_{k-1})))\\ & = \alpha(M_k) F_k(Z_{M_k}((\lvert \Theta\rvert)^{-2\lfloor k/2 \rfloor} (\sqrt{\lvert\Theta \rvert})^{2(1-\lceil k/2 \rceil)}U(g_1,g_2,\cdots,g_{k-1})\otimes U(h_1,h_2,\cdots,h_{k-1})))\\ &\qquad \qquad \qquad \qquad \qquad \qquad\qquad\qquad \qquad \qquad \qquad \qquad \qquad \textrm{[Definition}~~\ref{defi}]\\ & = {(\sqrt {\lvert 
G\rvert})}^{(\lceil k/2 \rceil -1)}{(\sqrt {\lvert\Theta\rvert})}^{(\lceil k/2 \rceil -1)}(\lvert\Theta\rvert)^{\lfloor k/2 \rfloor}(\lvert\Theta\rvert)^{-2\lfloor k/2 \rfloor} (\sqrt{\lvert\Theta\rvert})^{2(1-\lceil k/2 \rceil)} \sum_{{\theta}^{\dprime}\in \Theta}(\prod_{i=2}^{\lceil k/2\rceil} \delta(h_1 {\theta}^{\dprime}(g_{k+1-i}),h_i)) \\ & \qquad \qquad \qquad \qquad U(h_1{\theta}^{\dprime}(g_1), h_1 {\theta}^{\dprime}(g_2), \cdots, h_1{\theta}^{\dprime}(g_{\lceil k/2\rceil}), h_{\lceil k/2 \rceil +1}, h_{\lceil k/2 \rceil +2},\cdots, h_{k-1})\\ &\qquad \qquad \qquad \qquad \qquad \qquad\qquad\qquad \qquad \qquad \qquad \qquad \qquad \textrm{[by~~Lemma~~\ref{u}]}\\ & = {(\sqrt {\lvert G\rvert})}^{(\lceil k/2 \rceil -1)}(\lvert\Theta\rvert)^{-\lfloor k/2 \rfloor} (\sqrt{\lvert\Theta\rvert})^{(1-\lceil k/2 \rceil)}\sum_{{\theta}^{\dprime}}(\prod_{i=2}^{\lceil k/2\rceil} \delta(h_1 {\theta}^{\dprime}(g_{k+1-i}),h_i))\\ & \qquad \qquad \qquad \qquad U(h_1{\theta}^{\dprime}(g_1), h_1 {\theta}^{\dprime}(g_2), \cdots, h_1{\theta}^{\dprime}(g_{\lceil k/2\rceil}), h_{\lceil k/2 \rceil +1}, h_{\lceil k/2 \rceil +2},\cdots, h_{k-1}) \end{align*} This completes the proof. \end{proof} \begin{lemma} $E^k_{k+1} \in \mathscr{T}$. \end{lemma} \begin{proof} \textbf{Case I}: $k=2n$. Put $T= E^k_{k+1}$. \bigskip For $n=1$, use relation $1$ to get $$Z_T(S(g_1,g_2))= S({g_1}^{-1}).$$ If $n\geq 2$, using relation $1$ again we easily get the following result: \begin{equation}\label{a} Z_T(S(g_1,g_2,\cdots, g_k))= S(g_1,g_2,\cdots,g_{(k/2)}, g_{(\frac{k}{2}+2)},\cdots,g_k) \end{equation} Also observe, $\alpha(T)= {\lvert \Theta \rvert}^{-1/2}$. We show that $T\in \mathscr{T}$ for $n \geq 2$ (the case $n=1$ is exactly similar).
\begin{align*} & {\Phi}_{k}(Z_T(\Theta S(g_1,g_2,\cdots,g_k)))\\ &\qquad ={\Phi}_{k}(Z_T(\sum_{\theta \in \Theta}S(\theta(g_1),\theta(g_2),\cdots, \theta(g_k))))\\ & \qquad= {\Phi}_{k}(\sum_{\theta}Z_T(S(\theta(g_1),\theta(g_2),\cdots, \theta(g_k))))\\ & \qquad= {\Phi}_{k}(\sum_{\theta}S(\theta(g_1),\theta(g_2),\cdots,\theta(g_{k/2}),\theta(g_{(\frac{k}{2}+2)}),\cdots,\theta(g_k)))~~~\textrm{[by~~Equation~~\ref{a}]}\\ &\qquad= {\Phi}_{k}(\Theta S(g_1, g_2,\cdots,g_{(k/2)},g_{(\frac{k}{2}+2)},\cdots,g_k))\\ & \qquad= (\sqrt{\lvert \Theta \rvert})^{(1-\lceil k/2 \rceil)}(\lvert \Theta \rvert)^{-\lfloor k/2 \rfloor}U(g_1,g_2,\cdots,g_{(k/2)},g_{(\frac{k}{2}+2)},\cdots,g_k). \end{align*} On the other hand, \begin{align*} & Z^{\prime}_T({\Phi}_{k+1}(\Theta S(g_1,g_2,\cdots,g_k)))\\ & \qquad= \alpha(T)F_k(Z_T((\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2}\rceil)}(\lvert \Theta \rvert)^{-\lfloor\frac{k+1}{2}\rfloor}U(g_1,g_2,\cdots,g_k)))\\ &\qquad= (\lvert \Theta \rvert)^{-1/2}(\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2}\rceil)}(\lvert \Theta \rvert)^{-\lfloor \frac{k+1}{2}\rfloor}F_k(Z_T(\sum_{\substack{\theta \in \Theta\\ \bar{\gamma}\in {\Theta}^k}}S((\theta(g_1),\gamma_1),(\theta(g_2),\gamma_2),\cdots,(\theta(g_k),\gamma_k))))\\ &\qquad= (\lvert \Theta \rvert)^{-1/2}(\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2}\rceil)}(\lvert \Theta \rvert)^{-\lfloor \frac{k+1}{2}\rfloor} F_k(\sum_{\theta,\bar{\gamma}}S((\theta(g_1),\gamma_1),(\theta(g_2),\gamma_2),\cdots\\ &\qquad \qquad \qquad \qquad \qquad\qquad\qquad \qquad \cdots,(\theta(g_{(k/2)}),\gamma_{(k/2)}),(\theta(g_{(\frac{k}{2}+2)}),\gamma_{(\frac{k}{2}+2)}),\cdots,(\theta(g_k),\gamma_k)))\\ &\qquad= (\lvert \Theta \rvert)^{-1/2}(\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2} \rceil)}(\lvert \Theta \rvert)^{-\lfloor \frac{k+1}{2} \rfloor} \lvert \Theta \rvert F_k(U(g_1,g_2,\cdots,g_{(k/2)},g_{(\frac{k}{2}+2)},\cdots,g_k)).\\ & \qquad= (\sqrt{\lvert \Theta \rvert})^{(2-\lceil \frac{k+1}{2}
\rceil)} (\lvert \Theta \rvert)^{-\lfloor \frac{k+1}{2} \rfloor} U(g_1,g_2,\cdots,g_{(k/2)},g_{(\frac{k}{2}+2)},\cdots,g_k) \end{align*} Simple algebraic calculation tells us, $\lceil (k+1)/2\rceil= \lceil (k/2)\rceil +1,$ and $ \lfloor (k+1)/2 \rfloor= \lfloor k/2 \rfloor.$ Thus we have proved, $$ {\Phi}_{k}(Z_T(\Theta S(g_1,g_2,\cdots,g_k))) = Z^{\prime}_T({\Phi}_{k+1}(\Theta S(g_1,g_2,\cdots,g_k))) $$ In other words, $E^k_{k+1} \in \mathscr{T}$. \bigskip \textbf{Case II}: $k=2n-1.$ Put $T= E^k_{k+1}$. \bigskip The case $n=1$ is trivial. \bigskip For $n\geq 2$ using relation $2$ as in \cite{LaSu} and exchange relation we easily get: \begin{equation}\label{b} Z_T(S(g_1,g_2,\cdots,g_k))= \sqrt{\lvert G\rvert} \delta(g_{(\lceil k/2 \rceil)},g_{(\lceil k/2 \rceil+1)}) S(g_1,g_2,\cdots, g_{(\lceil k/2\rceil)}, g_{(\lceil k/2 \rceil +2)},\cdots, g_k). \end{equation} Then the following equations are easy to check: \begin{align*} &{\Phi}_k(Z_T(\Theta S(g_1,g_2,\cdots,g_k)))\\ &\qquad= {\Phi}_k(Z_T(\sum_{\theta\in \Theta} S(\theta(g_1),\theta(g_2),\cdots,\theta(g_k))))\\ &\qquad= {\Phi}_k(\sum_{\theta}\sqrt{\lvert G \rvert}~~~\delta(g_{(\lceil k/2 \rceil)},g_{(\lceil k/2 \rceil+1)})S(\theta(g_1),\theta(g_2),\cdots,\theta(g_{(\lceil k/2\rceil)}), \theta(g_{(\lceil k/2 \rceil +2)}),\cdots,\theta(g_k))\\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \textrm{[by~~Equation~~\ref{b}]}\\ & \qquad= {\Phi}_k(\sqrt{\lvert G\rvert}~~~\delta(g_{(\lceil k/2 \rceil)},g_{(\lceil k/2 \rceil+1)}) \Theta S(g_1,g_2,\cdots, g_{(\lceil k/2\rceil)}, g_{(\lceil k/2 \rceil +2)},\cdots, g_k))\\ & \qquad= \sqrt{\lvert G\rvert}~~~\delta(g_{(\lceil k/2 \rceil)},g_{(\lceil k/2 \rceil+1)}) (\sqrt{\lvert \Theta \rvert})^{(1-\lceil k/2 \rceil)}(\lvert \Theta \rvert)^{-\lfloor k/2 \rfloor} U(g_1,g_2,\cdots, g_{(\lceil k/2\rceil)}, g_{(\lceil k/2 \rceil +2)},\cdots, g_k). 
\end{align*} On the other hand, \begin{align*} & Z^{\prime}_T({\Phi}_{k+1}(\Theta S(g_1,g_2,\cdots,g_k)))\\ &\qquad = (\lvert \Theta \rvert)^{1/2} F_k(Z_T((\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2}\rceil)}(\lvert \Theta \rvert)^{- \lfloor\frac{k+1}{2}\rfloor} U(g_1,g_2,\cdots,g_k)))~~\textrm{[since}~~\alpha(T) = {\lvert \Theta \rvert}^{1/2}].\\ & \qquad = (\lvert \Theta \rvert)^{1/2}(\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2}\rceil)}(\lvert \Theta \rvert)^{- \lfloor\frac{k+1}{2}\rfloor}\\ &\qquad\qquad \qquad \qquad F_k(Z_T(\sum_{\substack{\theta\in \Theta\\ \bar{\gamma}\in {\Theta}^k}} S((\theta(g_1),\gamma_1),(\theta(g_2),\gamma_2),\cdots,(\theta(g_k),\gamma_k))))\\ &\qquad= (\lvert \Theta \rvert)^{1/2}(\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2}\rceil)}(\lvert \Theta \rvert)^{- \lfloor\frac{k+1}{2}\rfloor} (\sqrt{\lvert G\rvert} \sqrt{\lvert \Theta \rvert})\\ & \qquad \qquad \qquad \qquad \delta(g_{(\lceil k/2 \rceil)},g_{(\lceil k/2 \rceil+1)}) F_k(U(g_1,g_2,\cdots, g_{(\lceil k/2\rceil)}, g_{(\lceil k/2 \rceil +2)},\cdots, g_k))~~\textrm{[by~~Equation~~\ref{b}]}\\ &\qquad= \sqrt{\lvert G\rvert}(\sqrt{\lvert \Theta \rvert})^{(1-\lceil \frac{k+1}{2}\rceil)}(\lvert \Theta \rvert)^{(1- \lfloor\frac{k+1}{2}\rfloor)} \delta(g_{(\lceil k/2 \rceil)},g_{(\lceil k/2 \rceil+1)}) U(g_1,g_2,\cdots, g_{(\lceil k/2\rceil)}, g_{(\lceil k/2 \rceil +2)},\cdots, g_k) \end{align*} In this case observe that, $ \lceil k/2 \rceil= \lceil (k+1)/2 \rceil $ and $ \lfloor (k+1)/2 \rfloor = \lfloor k/2 \rfloor +1. $ Thus, $$ {\Phi}_{k}(Z_T(\Theta S(g_1,g_2,\cdots,g_k))) = Z^{\prime}_T({\Phi}_{k+1}(\Theta S(g_1,g_2,\cdots,g_k))) $$ In other words, $E^k_{k+1} \in \mathscr{T}$. \end{proof} \begin{lemma} $I^{k+1}_k \in \mathscr{T}$. 
\end{lemma} \begin{proof} Put $T= I^{k+1}_k.$ Clearly, $\alpha(T)=1.$ Put \begin{equation*} h= \frac{1}{\sqrt{\lvert G \rvert}}\sum_{g\in G} {\begin{minipage}{.2\textwidth} \centering \includegraphics[scale=.7]{gg} \end{minipage}} \end{equation*} Clearly $\theta(h)=h$ for all $\theta \in \Theta.$ \bigskip It suffices to check that \begin{equation*} Z_T(S(g_1,g_2,\cdots,g_{k-1}))= S(g_1,g_2,\cdots,g_{(k/2)},h,g_{(k/2+1)},\cdots, g_{k-1}) \end{equation*} for $k=2n$, and \begin{equation*} Z_T(S(g_1,g_2,\cdots,g_{k-1})) = S(g_1, g_2,\cdots, g_{(\lceil k/2 \rceil)}, g_{(\lceil k/2 \rceil)},g_{(\lceil k/2 \rceil+1)},\cdots, g_{k-1}) \end{equation*} for $k=2n-1.$ The proof of the above two equations is routine, and omitted. \end{proof} \begin{lemma} ${(E^{\prime})}^k_k\in \mathscr{T}.$ \end{lemma} \begin{proof} We omit the details. Put $T= {(E^{\prime})}^k_k.$ In this case $\alpha(T)= (\lvert \Theta \rvert)^{1/2}$. Then simply observe: \begin{equation*} Z_T (S(g_{1},g_2,\cdots,g_{k-1}))= \sqrt{\lvert G \rvert} \delta(g_{1},e) S(e, g_2,\cdots,g_{k-1}). \end{equation*} \end{proof} \bigskip This completes the proof of Theorem \ref{generating tangles}. \bigskip Lastly, we apply Theorem \ref{main1} to conclude that the proof of Theorem \ref{subgroup} is now complete. \section{Appendix} As promised in the introduction, we here describe the tower of the iterated basic construction of $N\subseteq Q$ in terms of the corresponding tower of $N \subseteq M$. Firstly we state the following lemma, which we will use in the proof. \begin{lemma}\cite{Bak} \label{fvrt} Let $N\subseteq M$ be an inclusion of Type $II_1$ factors. Assume $\{\lambda_i : i\in \{1,2,\dots,n\}\}$ is a basis for $M/N$ (in the sense used in \cite{JoSu}).
Let $P$ be a $II_1$ factor such that $P$ contains $M$ and also contains a projection $f$ with $\sum_{i=1}^n{{\lambda_i}^*f{\lambda_i}}=1$, satisfying further the following two properties:\par 1) $fxf=E_N(x)f$ for all $x\in M$, and\par 2) $\{{\tau}^{-1/2}f{\lambda_i}\}$ is a basis for $P/M $. \\ Then there exists an isomorphism from $M_1=\langle M,e_1\rangle$ onto $P$ which maps $e_1$ to $f$. \end{lemma} The following well known fact is often useful: \begin{fact}\label{f:localtraceformula} Given a $II_1$ factor $A$ and projections $r \in A$ and $s \in (rAr)'$, we have \begin{equation} \label{eq1} tr_{rArs}(rzrs)= (tr_A(r))^{-1} tr_A (rzr) \end{equation} for all {$z \in A$.} \end{fact} \begin{theorem} \label{T:intsi} Given $N\subset Q \subset M$ and the notation introduced above, set $p =e_{0,1}\ e_{0,3} \ e_{0,5} \cdots e_{0, 2n-1}$. Then the chain \[ Np \subset p Mp \subset p M_1p \subset p M_2p \subset \dots \subset p M_{2n-1}p \] is isomorphic to the first $2n-1$ steps of the basic construction of $N \subset Q $. The Jones projections are given by $e_{0,2i}p : L^2( p M_{2i-1}p ) \rightarrow L^2( p M_{2i-2}p )$ and $e_{1,2i+1}p : L^2( p M_{2i}p ) \rightarrow L^2( p M_{2i-1}p )$. The unique normalized trace on the chain, denoted $tr_{N \subset Q }$, is given by $tr_{N \subset Q }(x) = [M:Q ]^{n}tr_{N \subset M}(x) $. \end{theorem} \begin{proof} We put $p_{[0,2n-1]}= e_{0,1}\ e_{0,3} \ e_{0,5} \cdots e_{0, 2n-1}$. The final trace assertion is immediate from the fact that $tr_{N \subset M}(p_{[0,2n-1]}) = [M:Q ]^{-n}$. It suffices to show that the above chain is a basic construction and that the inclusion $Np_{[0,2n-1]}\subset p_{[0,2n-1]}M p_{[0,2n-1]}$ is isomorphic to $N \subset Q $. We do this in several steps.
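The trace value invoked here can be sketched as follows; this is our bookkeeping, not part of the original argument, and the hypothesis on the conditional expectations is an assumption modelled on $tr(e_{0,1})=[M:Q]^{-1}$.

```latex
% Sketch (our bookkeeping): assuming, as for e_{0,1}, that
% E^{M_{2i-1}}_{M_{2i-2}}(e_{0,2i-1}) = [M:Q]^{-1} for each i, conditioning the
% last factor down one level at a time gives
\begin{align*}
tr_{N\subset M}(p_{[0,2n-1]})
&= tr\big(e_{0,1}e_{0,3}\cdots e_{0,2n-3}\, E^{M_{2n-1}}_{M_{2n-2}}(e_{0,2n-1})\big)\\
&= [M:Q]^{-1}\, tr\big(e_{0,1}e_{0,3}\cdots e_{0,2n-3}\big)
= \cdots = [M:Q]^{-n}.
\end{align*}
```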
\bigskip $ \noindent \textbf{Step 1}:$ Firstly we show, $Ne_{0,1}\subseteq e_{0,1}Me_{0,1} \subseteq e_{0,1}M_1e_{0,1}$ is isomorphic to the (first step) basic construction of $N\subseteq Q$, where the corresponding Jones' projection is given by $e_{1,1}e_{0,1}(= e_{1,1})$. We prove this using Lemma \ref{fvrt}. \par Since $Q^{\prime}$ is a von Neumann algebra and $e_{0,1}\in Q^{\prime}, Qe_{0,1}$ is a von Neumann algebra. Again, as $e_{0,1} \in N^{\prime}, Ne_{0,1}$ is also a von Neumann algebra. Since $e_{0,1}Me_{0,1}= Qe_{0,1}$ it is clear that $(Ne_{0,1}\subseteq e_{0,1}Me_{0,1}) \cong (N \subseteq Q)$ via the map $ q \longmapsto q e_{0,1} $ for $q\in Q $. In particular, $[Qe_{0,1}:Ne_{0,1}]= [Q:N]$. Let $\{\lambda_i\}$ be a basis for $Q/N$, which always exists by \cite{PiPo}, then since $E^{Qe_{0,1}}_{Ne_{0,1}}(qe_{0,1})= E^Q_N(q)e_{0,1}, \{\lambda_i e_{0,1}\}$ is a basis for $Qe_{0,1}/Ne_{0,1}$. Now, $e_{1,1} \in e_{0,1}M_1e_{0,1}$ since $e_{1,1}\leq e_{0,1}$ and observe, \begin{align*} & \sum{{(\lambda_i e_{0,1})}^* e_{1,1} \lambda_i e_{0,1}}\\ & \qquad = \sum{{\lambda_i}^* e_{0,1} e_{1,1} e_{0,1} \lambda_i}~~\textrm{~[since}~~ e_{0,1}\in Q^{\prime}]\\ & \qquad = \sum{{\lambda_i}^* e_{1,1} \lambda_i}~~~\textrm{~[since}~~~e_{1,1}= e_{1,1}e_{0,1}= e_{0,1}e_{1,1}]\\ & \qquad = \sum{{\lambda_i}^* e^Q_N e_{0,1}\lambda_i}~~~\textrm{~[since}~~~e^Q_N e_{0,1}= e_{1,1}]\\ & \qquad = \sum{({\lambda_i}^* e^Q_N \lambda_i}) e_{0,1}\\ & \qquad = e_{0,1} ~~~\textrm{~[since}~~\sum{{\lambda_i}^* e^Q_N \lambda_i}=1]. 
\end{align*} Also, \begin{equation}\label{p1} e_{1,1} (e_{0,1}m e_{0,1}) e_{1,1}= e_{1,1} m e_{1,1}= E^M_N(m) e_{1,1} \end{equation} and \begin{equation}\label{p2} E^{Qe_{0,1}}_{Ne_{0,1}}(e_{0,1} m e_{0,1})e_{1,1}= E^{Qe_{0,1}}_{Ne_{0,1}}(E^M_Q(m)e_{0,1})e_{1,1}=E^Q_N(E^M_Q(m))e_{0,1} e_{1,1}= E^M_N(m) e_{1,1} \end{equation} for all $m\in M.$ Equations (\ref{p1}) and (\ref{p2}) imply $e_{1,1} (e_{0,1}m e_{0,1}) e_{1,1}= E^{Qe_{0,1}}_{Ne_{0,1}}(e_{0,1} m e_{0,1})e_{1,1}.$ We next show \begin{equation}\label{p3} E^{e_{0,1}M_1e_{0,1}}_{Q e_{0,1}}(x)= [M:Q] E^{M_1}_Q(x)e_{0,1} \end{equation} for all $x \in e_{0,1}M_1e_{0,1}.$ This follows from the following array of equations, valid for all $q\in Q$: \begin{align*} & tr_{N\subseteq Q}([M:Q] E^{M_1}_Q (x)e_{0,1}.qe_{0,1})\\ & \qquad = [M:Q]^2 tr(E^{M_1}_Q(x)qe_{0,1})\\ &\qquad = [M:Q]^2 tr(E^{M_1}_Q(xq)e_{0,1})\\ & \qquad= [M:Q] tr(E^{M_1}_Q(xq)) \textrm{~~~[since}~~~tr(e_{0,1})= {[M:Q]}^{-1}]\\ & \qquad= [M:Q] tr(xq)\\ & \qquad = tr_{N\subseteq Q}(e_{0,1}xq) \textrm{~~~[since}~~~e_{0,1}x=x]\\ & \qquad= tr_{N\subseteq Q}(x.qe_{0,1}).
\end{align*} Then we show, $\{\sqrt{[Q:N]}e_{1,1}\lambda_i e_{0,1}\} $ is a basis for $e_{0,1}M_1 e_{0,1}/e_{0,1} M e_{0,1}.$ To prove this firstly note that,by Jones's local index formula (see \cite{Jo1} or sections 2.2-2.3 of \cite{JoSu}) and extremality (see \cite{Po} page 176) the following equation holds \begin{equation}\label{p4} [e_{0,1}M_1 e_{0,1}:Q e_{0,1}]= {tr(e_{0,1})}^2 [M_1:Q]= [Q:N]= [Qe_{0,1}:Ne_{0,1}] \end{equation} Then,the following array of equations hold: \begin{align*} & E^{e_{0,1}M_1 e_{0,1}}_{Qe_{0,1}}[(\sqrt{[Q:N]} e_{1,1} \lambda_i e_{0,1})({\sqrt{[Q:N]} e_{1,1} \lambda_j e_{0,1})}^*]\\ & \qquad = [Q:N] E^{e_{0,1}M_1 e_{0,1}}_{Qe_{0,1}}(e_{1,1}e_{0,1}\lambda_i {\lambda}^*_je_{1,1}) ~~\textrm{~[since}~~ e_{0,1}\in Q^{\prime}]\\ & \qquad = [Q:N] E^{e_{0,1}M_1 e_{0,1}}_{Qe_{0,1}}(e_{0,1}e_{1,1}\lambda_i {\lambda}^*_je_{1,1}e_{0,1})~~~\textrm{~[since}~~~e_{1,1}= e_{1,1}e_{0,1}= e_{0,1}e_{1,1}]\\ & \qquad = [Q:N] [M:Q] E^{M_1}_Q(e_{0,1}e_{1,1}\lambda_i {\lambda}^*_je_{1,1}e_{0,1})e_{0,1}~~~~\textrm{~~~[by~~Equation}~~(\ref{p3})]\\ & \qquad= [M:N] E^{M_1}_Q(e_{1,1} \lambda_i{\lambda}^*_je_{1,1})e_{0,1}\\ & \qquad = [M:N] E^{M_1}_Q[E^M_N(\lambda_i{\lambda}^*_j)e_{1,1}]e_{0,1}\\ & \qquad = [M:N]E^Q_N(\lambda_i{\lambda}^*_j)E^{M_1}_Q(e_{1,1}) e_{0,1}\\ & \qquad = [M:N] E^Q_N(\lambda_i{\lambda}^*_j)E^M_Q(E^{M_1}_M(e_{1,1})) e_{0,1}\\ & \qquad = E^Q_N(\lambda_i{\lambda}^*_j)e_{0,1}\\ & \qquad = E^{Qe_{0,1}}_{Ne_{0,1}}(\lambda_i{\lambda}^*_j e_{0,1})\\ & \qquad = E^{Qe_{0,1}}_{Ne_{0,1}}(\lambda_i e_{0,1}.e_{0,1}{\lambda}^*_j) ~~\textrm{~[since}~~ e_{0,1}\in Q^{\prime}] \end{align*} Now as $ \{\lambda_i e_{0,1}\}$ is a basis for $Qe_{0,1}/Ne_{0,1}$ the last equation in the above array of equations together with Equation (\ref{p4}) tells that $\{\sqrt{[Q:N]}e_{1,1}\lambda_i e_{0,1}\} $ is a basis for $e_{0,1}M_1 e_{0,1}/e_{0,1} M e_{0,1}.$ Here we have used Theorem 2.2 in \cite{Bak}. 
Now applying Lemma \ref{fvrt} we get the desired result.\vspace{4mm}\\ $\noindent \textbf{Step 2}:$ Here again we have $N\subseteq Q \subseteq M$, with biprojection $e_{0,1}$. We claim $(p_{[0,3]}M p_{[0,3]}\subseteq p_{[0,3]}M_1 p_{[0,3]}\subseteq p_{[0,3]}M_2 p_{[0,3]})\cong(Q \subseteq Q_1 \subseteq Q_2)$. Here Jones' projection is given by $e_{0,2}p_{[0,3]}$. We will again apply Lemma \ref{fvrt}.\\ Firstly, as $e_{0,3}$ commutes with $e_{0,1}$ and every element of $M_1$, it follows from above discussion that \\(a)$\{\sqrt{[Q:N]} e_{1,1}\lambda_i p_{[0,3]}\}$ is a basis for $p_{[0,3]} M_1 p_{[0,3]}/p_{[0,3]} Mp_{[0,3]}$.\vspace{2mm}\\ Next we show that \\ (b) for $m_1\in M_1,$ \begin{equation}\label{p5} (e_{0,2} p_{[0,3]})(p_{[0,3]} m_1 p_{[0,3]})(e_{0,2}p_{[0,3]})= E^{p_{[0,3]} M_1 p_{[0,3]}}_{p_{[0,3]} M p_{[0,3]}}(p_{[0,3]} m_1 p_{[0,3]})e_{0,2}p_{[0,3]}. \end{equation} As $e_{0,3}$ commutes with $e_{0,1}$ and every element of $M_1$, it follows that, \begin{align*} & E^{p_{[0,3]} M_1 p_{[0,3]}}_{p_{[0,3]} M p_{[0,3]}}(p_{[0,3]} m_1 p_{[0,3]})\\ & \qquad = E^{e_{0,1} M_1 e_{0,1} e_{0,3}}_ {Q e_{0,1}e_{0,3}}(e_{0,1} m_1 e_{0,1}e_{0,3})\\ & \qquad = E^{e_{0,1} M_1 e_{0,1}}_{Q e_{0,1}}(e_{0,1} m_1 e_{0,1})e_{0,3}~~~\textrm{~~[by~~Fact~~\ref{f:localtraceformula}]}\\ & \qquad = [M:Q] E^{M_1}_Q(e_{0,1}m_1 e_{0,1})p_{[0,3]}~~\textrm{~~[by~~~Equation}~~~(\ref{p3})] \end{align*} Since $e_{0,1}\in P_1$ it is easy to see that, $E^{M_1}_{P_1}(e_{0,1}m_1e_{0,1})=E^{M_1}_{P_1}(e_{0,1}m_1e_{0,1})e_{0,1}=me_{0,1}$ for some unique $m \in M$, which exists by \cite{PiPo}.\\ That implies, \begin{align*} &E^{M_1}_M(E^{M_1}_{P_1}(e_{0,1}m_1e_{0,1}))\\ & \qquad \qquad= m E^{M_1}_M(e_{0,1})\\ & \qquad \qquad= m E^{P_1}_M(e_{0,1})\\ & \qquad \qquad= m{[M:Q]^{-1}} \end{align*} That is, $ E^{M_1}_M(e_{0,1}m_1e_{0,1})=m{[M:Q]}^{-1}.$ Therefore $ m=[M:Q] E^{M_1}_M(e_{0,1}m_1e_{0,1}).$ Thus, \begin{equation}\label{pop} E^{M_1}_{P_1}(e_{0,1}m_1e_{0,1})= [M:Q] E^{M_1}_M(e_{0,1}m_1e_{0,1}) 
e_{0,1} \end{equation} Thus, \begin{align*} & (e_{0,2} p_{[0,3]})(p_{[0,3]} m_1 p_{[0,3]})(e_{0,2}p_{[0,3]})\\ & \qquad = e_{0,2}p_{[0,3]}m_1p_{[0,3]}e_{0,2}p_{[0,3]}\\ & \qquad = e_{0,3}e_{0,2}e_{0,1}m_1 e_{0,1}e_{0,2}e_{0,3}~~~\textrm{~~~[by~~~Fact~~\ref{f:erel}(3)]}\\ & \qquad = e_{0,1}{E^{M_1}_{P_1}}(e_{0,1}m_1e_{0,1})e_{0,2}e_{0,3}~~~\textrm{~~~[by~~~definition~~of}~~e_{0,2}]\\ & \qquad= [M:Q]e_{0,1}{E^{M_1}_M}(e_{0,1}m_1e_{0,1})e_{0,1}e_{0,2}e_{0,3}~~~~\textrm{~~~[by~~~Equation~~(\ref{pop})]}\\ & \qquad= [M:Q]E^M_Q(E^{M_1}_M(e_{0,1}m_1e_{0,1}))e_{0,1}e_{0,2}e_{0,3}\\ & \qquad= [M:Q]E^{M_1}_Q(e_{0,1}m_1e_{0,1})p_{[0,3]}e_{0,2}p_{[0,3]}~~~\textrm{~~~[by~~~Fact~~\ref{f:erel}(3)]}\\ & \qquad= E^{p_{[0,3]}M_1p_{[0,3]}}_{p_{[0,3]}Mp_{[0,3]}}({p_{[0,3]}m_1p_{[0,3]}})e_{0,2}p_{[0,3]} \end{align*} This completes the proof of (b).\\ (c) We need to prove $ [p_{[0,3]}M_2p_{[0,3]}:p_{[0,3]}M_1p_{[0,3]}]=[Q:N]$\newline For this note, \begin{align*} & p_{[0,3]}M_2p_{[0,3]}\\ & \qquad= e_{0,1}e_{0,3}M_2e_{0,3}e_{0,1}\\ & \qquad= e_{0,1}P_2e_{0,3}e_{0,1}~~~\textrm{~~~[by~~~definition~~of}~~e_{0,3}]\\ &\qquad= p_{[0,3]}P_2p_{[0,3]}~~~\textrm{~~~[since}~~e_{0,3}\in P^{\prime}_2] \end{align*} It is trivial to see that, $[P_2:M_1] = [M_1:P_1] = \frac{[M_1:M]}{[P_1:M]}= \frac{[M:N]}{[M:Q]}=[Q:N]$. 
Thus, \begin{align*} &[Q:N]=[P_2:M_1]\\ & \qquad \qquad=[e_{0,1}P_2e_{0,1}:e_{0,1}M_1e_{0,1}]~~~\textrm{~~~[as}~~~e_{0,1}\in M_1\subseteq P_2]\\ & \qquad \qquad=[p_{[0,3]}P_2p_{[0,3]}:p_{[0,3]}M_1p_{[0,3]}]\\ & \qquad \qquad= [p_{[0,3]}M_2p_{[0,3]}:p_{[0,3]}M_1p_{[0,3]}] \end{align*} This proves (c).\\ (d) The following equations hold: \begin{align*} & \sum{(e_{1,1} \lambda_i p_{[0,3]})}^* e_{0,2}p_{[0,3]}(e_{1,1} \lambda_i p_{[0,3]})\\ & \qquad= \sum p_{[0,3]}{{\lambda}^*_i} e_{1,1}e_{0,2}e_{1,1} \lambda_i p_{[0,3]}\\ & \qquad= [Q:N]^{-1} \sum p_{[0,3]} {{\lambda}^*_i}e_{1,1} \lambda_i p_{[0,3]}~~~\textrm{~~~[Fact}~~\ref{f:erel} (6)]\\ & \qquad= [Q:N]^{-1} \sum p_{[0,3]} {{\lambda}^*_i} e^Q_N e_{0,1} \lambda_i p_{[0,3]}\\ & \qquad= [Q:N]^{-1} p_{[0,3]} \end{align*} The last equation follows from the fact that $\{\lambda_i\}$ is a basis for $Q/N$ and hence $\sum{{\lambda}^*_i e^Q_N {\lambda}_i} = 1$.\\ (e) First, it is easy to check that \begin{equation}\label{e1} E^{P_2}_{M_1}(e_{0,2})= [M_1:P_1]^{-1}= [Q:N]^{-1}. \end{equation} Next we show that \begin{equation}\label{e2} E^{M_1}_{P_1}(e_{1,1})= [Q:N]^{-1}e_{0,1} \end{equation} To see this, note that by \cite{PiPo} there exists a unique $m_0\in M$ such that $E^{M_1}_{P_1}(e_{1,1})= E^{M_1}_{P_1}(e_{1,1})e_{0,1} = m_0 e_{0,1}.$ Thus, $E^{M_1}_M(e_{1,1})= m_0 E^{M_1}_M(e_{0,1})$. 
Hence, $m_0 = [Q:N]^{-1}.$ We have, \begin{align*} & [p_{[0,3]}M_2p_{[0,3]}:p_{[0,3]}M_1p_{[0,3]}]\\ & \qquad= [Q:N]~~~~\textrm{~~~[by~~(c)]}\\ & \qquad= [Q p_{[0,1]}:N p_{[0,1]}]\\ & \qquad= [p_{[0,1]}M_1p_{[0,1]}:p_{[0,1]}Mp_{[0,1]}]~~~~\textrm{~~~[by~~Equation~~(\ref{p4})]}\\ & \qquad= [p_{[0,3]}M_1p_{[0,3]}:p_{[0,3]}Mp_{[0,3]}] \end{align*} By Fact \ref{f:localtraceformula} it is trivial to check that for all $m_2 \in M_2$: \begin{equation} \label{la} E^{p_{[0,3]}M_2 p_{[0,3]}}_{p_{[0,3]}M_1 p_{[0,3]}}(p_{[0,3]}m_2 p_{[0,3]})=p_{[0,3]}E^{M_2}_{M_1}(m_2)p_{[0,3]} \end{equation} Then the following equations hold: \begin{align*} & E^{p_{[0,3]}M_2 p_{[0,3]}}_{p_{[0,3]}M_1 p_{[0,3]}} [(e_{0,2}e_{1,1} \lambda_i p_{[0,3]}){(e_{0,2}e_{1,1} \lambda_j p_{[0,3]})}^*]\\ & \qquad \qquad= E^{p_{[0,3]}M_2p_{[0,3]}}_{p_{[0,3]}M_1p_{[0,3]}} (p_{[0,3]}e_{0,2}e_{1,1} \lambda_i {{\lambda}^*_j} e_{1,1}e_{0,2}p_{[0,3]})\\ & \qquad \qquad= p_{[0,3]}E^{M_2}_{M_1} (e_{0,2}e_{1,1} \lambda_i {{\lambda}^*_j} e_{1,1}e_{0,2})p_{[0,3]}~~~~\textrm{~~[by~~Equation~~(\ref{la})]}\\ & \qquad\qquad= p_{[0,3]}E^{M_2}_{M_1} (E^{M_1}_{P_1}(e_{1,1} \lambda_i {{\lambda}^*_j} e_{1,1})e_{0,2})p_{[0,3]}\\ & \qquad \qquad= p_{[0,3]}E^{M_1}_{P_1} (e_{1,1} \lambda_i {{\lambda}^*_j} e_{1,1})E^{M_2}_{M_1}(e_{0,2}) p_{[0,3]}\\ & \qquad\qquad= p_{[0,3]} E^Q_N(\lambda_i {{\lambda}^*_j})E^{M_1}_{P_1}(e_{1,1})E^{P_2}_{M_1}(e_{0,2})p_{[0,3]}\\ & \qquad \qquad= p_{[0,3]} E^Q_N (\lambda_i {{\lambda}^*_j}){[Q:N]}^{-1} E^{M_1}_{P_1}(e_{1,1})p_{[0,3]}~~~\textrm{~~~[by ~~Equation~~(\ref{e1})]}\\ & \qquad \qquad= p_{[0,3]}E^Q_N (\lambda_i {{\lambda}^*_j}){[Q:N]}^{-1} {[Q:N]}^{-1}e_{0,1}p_{[0,3]}~~~~\textrm{~~~~[by~~~Equation~~(\ref{e2})]}\\ & \qquad \qquad= {[Q:N]}^{-2} p_{[0,3]} E^Q_N (\lambda_i {{\lambda}^*_j})p_{[0,3]}\\ & \qquad \qquad= {[Q:N]}^{-2} E^{Q p_{[0,3]}} _{Np_{[0,3]}}(\lambda_i p_{[0,3]} p_{[0,3]} {{\lambda}^*_j}) \end{align*} 
Since $\{\lambda_i p_{[0,3]}\}$ is a basis for $Qp_{[0,3]}/Np_{[0,3]}$, it follows from Theorem $2.2$ in \cite{Bak} that $\{[Q:N] e_{0,2}p_{[0,3]}e_{1,1}\lambda_i p_{[0,3]}\}$, that is, $\{[Q:N] e_{0,2}e_{1,1} \lambda_i p_{[0,3]}\}$, is a basis for $p_{[0,3]} M_2 p_{[0,3]}/p_{[0,3]}M_1 p_{[0,3]}$.\\ Thus, combining (a), (b), (c), (d) and (e) and applying Lemma \ref{fvrt}, we complete the proof of Step 2. \vspace{4mm}\\ $ \noindent \textbf{Step 3}:$ \\ In general, apply Step 1 to the following subfactors for $2n-1\geq i\geq 3 $ and $i$ odd:\\ $p_{[0,2n-i]} M_{2n-i} p_{[0,2n-i]}\subseteq p_{[0,2n-i]} P_{2n-{i+1}} p_{[0,2n-i]} \subseteq p_{[0,2n-i]} M_{2n-{i+1}} p_{[0,2n-i]}.$\\ Here the biprojection is given by $ p_{[0,2n-i]} e_{0,2n-{i+2}}$. We get that $p_{[0,2n-{i+2}]} M_{2n-i} p_{[0,2n-{i+2}]}\subseteq p_{[0,2n-{i+2}]}\\ M_{2n-{i+1}} p_{[0,2n-{i+2}]}\subseteq p_{[0,2n-{i+2}]} M_{2n-{i+2}} p_{[0,2n-{i+2}]}$ is a Jones' tower with the corresponding Jones' projection given by $e_{1,2n-i+2} p_{[0,2n-{i+2}]}$. But $ e_{0,2n-1} e_{0,2n-3}\cdots e_{0,2n-{i+4}} \in M^{\prime}_{2n-i}, M^{\prime}_{2n-{i+1}}$ and $ M^{\prime}_{2n-{i+2}}.$ So, using Fact \ref{f:erel} (3) we have $$p_{[0,2n-1]} M_{2n-i} p_{[0,2n-1]}\subseteq p_{[0,2n-1]} M_{2n-{i+1}} p_{[0,2n-1]} \subseteq p_{[0,2n-1]} M_{2n-{i+2}} p_{[0,2n-1]}$$ is a Jones' tower with the Jones' projection $e_{1,2n-i+2} p_{[0,2n-1]}$. \vspace{3mm}\\ Apply Step 2 to the following subfactors for $2n-1\geq j\geq 3 $ and $j$ odd:\\ $p_{[0,2n-j]} M_{2n-{j-2}} p_{[0,2n-j]}\subseteq p_{[0,2n-j]} P_{2n-{j-1}} p_{[0,2n-j]} \subseteq p_{[0,2n-j]} M_{2n-{j-1}} p_{[0,2n-j]}.$\\ Here the biprojection is given by $ p_{[0,2n-j]} e_{0,2n-j}$. Then we get that $p_{[0,2n-{j+2}]} M_{2n-{j-1}} p_{[0,2n-{j+2}]}\subseteq p_{[0,2n-{j+2}]}\\ M_{2n-j} p_{[0,2n-{j+2}]} \subseteq p_{[0,2n-{j+2}]} M_{2n-{j+1}} p_{[0,2n-{j+2}]}$ is a Jones' tower with the corresponding Jones' projection given by $e_{0,2n-j+1} p_{[0,2n-j+2]}$. 
Again note that $ e_{0,2n-1} e_{0,2n-3}\cdots e_{0,2n-{j+4}} \in M^{\prime}_{2n-j-1}, M^{\prime}_{2n-j}$ and $ M^{\prime}_{2n-j+1}$. Thus it follows from Fact \ref{f:erel} (3) that $$p_{[0,2n-1]} M_{2n-{j-1}} p_{[0,2n-1]}\subseteq p_{[0,2n-1]} M_{2n-j} p_{[0,2n-1]} \subseteq p_{[0,2n-1]} M_{2n-{j+1}} p_{[0,2n-1]}$$ is a Jones' tower with the corresponding Jones' projection given by $e_{0,2n-j+1} p_{[0,2n-1]}$.\vspace{3mm}\\ Lastly note that, from Step 1 and Fact \ref{f:erel}, it follows that $$Np_{[0,2n-1]}\subseteq p_{[0,2n-1]}Mp_{[0,2n-1]} \subseteq p_{[0,2n-1]}M_1p_{[0,2n-1]}$$ is a Jones' tower with the corresponding Jones' projection $e_{1,1}p_{[0,2n-1]}$.\par Combining all the facts mentioned above, we prove the result as stated in the theorem. \end{proof} \section*{Acknowledgement} The author sincerely thanks V.S. Sunder and Vijay Kodiyalam for innumerable enlightening discussions and fruitful advice. He also wishes to thank Sohan Lal Saini and Sebastien Palcoux for various useful discussions.
\section{Introduction} Reverberation Mapping \cite[RM;][]{Blandford_1982,Peterson_2014} is the primary technique to measure supermassive black hole (SMBH) masses. Unlike other mass estimators (for example, stellar kinematics), RM does not require high spatial resolution in order to resolve the sphere of influence of the central black hole (BH). Instead, RM monitors different parts of the electromagnetic spectrum (corresponding to different emitting regions of the AGN) and measures the timing of ``light echoes'' between different regions. Variable ionizing radiation is emitted from an accretion disk surrounding the central black hole. As the UV/optical radiation from the accretion disk travels outward, it is reprocessed by various components of the AGN, for example, the broad-line region (BLR) and the dusty torus \cite[e.g.,][]{Peterson_1997}. RM measures the delay between signals at different wavelengths to probe the structure and kinematics of various regions of the AGN. Most spectroscopic RM efforts have measured the time delay between the continuum emission (arising in the accretion disk) and the broad emission lines (produced by high-velocity gas clouds in the BLR) using optical spectra. Assuming the BLR is virialized, one can measure the black hole mass ($M_{\rm BH}$) using the BLR size ($R_{\rm BLR}$) inferred from the time delay and the BLR virial velocity determined from the width of a broad emission line ($\Delta V$) using the following equation: \begin{equation} M_{\rm BH} = f \frac{R_{\rm BLR} \Delta V^2}{G}\ , \end{equation} where $f$ is a dimensionless scale factor of order unity, called the virial coefficient, that accounts for BLR geometry, kinematics, and inclination. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{mockid00203.pdf} \caption{An example of our simulated light curves and lag measurements using three different methods. 
Left: the simulated light curves (continuum in the top panel and emission line in the bottom panel) and predicted light curve models from {\tt JAVELIN} (shaded blue area). The right three panels display the ICCF, ZDCF, and the posterior distribution function from {\tt JAVELIN}. The black solid line marks the assigned lag of the mock quasar, and the red vertical lines indicate the measured lag (solid) and their uncertainties (dotted). In this case, the measured lags from ICCF and {\tt JAVELIN} are considered true detections (see criteria in \S\ref{sec:criteria}), and the measured lag from ZDCF is not considered a detection.} \label{fig:lc_example} \end{figure*} One of the most important results of past RM studies is the discovery of a correlation between the H$\beta$ BLR radius and the luminosity of the AGN \citep[the R-L relation, e.g.,][]{Laor_1998, Wandel_1999, Kaspi_2000, Kaspi_2005, Bentz_2006, Bentz_2009b, Bentz_2013}, which is the basis of the empirical single-epoch (SE) method \citep{Vestergaard_Peterson_2006, Shen_2013} for BH mass estimation that utilizes single-epoch spectroscopy. Using the measured R-L relation and assuming it applies to objects at different redshifts and luminosities, BH masses of broad-line quasars can be estimated with the luminosity and broad-line width measured from single-epoch spectra. Due to its simplicity, the SE method is widely used to estimate quasar BH masses \citep{Vestergaard_Peterson_2006, Kelly_2013}, although its reliability for emission lines other than H$\beta$ and in the high-redshift and high-luminosity regime remains to be tested. Traditional RM studies have focused only on the brightest sources with the highest variability and generally the strongest BLR lines in the local universe ($z<0.1$) to ensure successful measurements of time lags. So far our understanding of the BLR and the R-L relation is based on only $\sim$60 local AGN, which is a biased representation of the distant and luminous quasar population. 
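For concreteness, the virial mass estimate of Equation (1) can be evaluated numerically as in the sketch below (our own illustration with CGS constants and $f=1$; the helper name and fiducial lag and line width are ours, not from any particular object):

```python
import math

# Physical constants in CGS units
G = 6.674e-8                      # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33                  # solar mass [g]
LIGHT_DAY = 2.998e10 * 86400.0    # one light-day [cm]

def virial_mass(r_blr_lt_days, dv_km_s, f=1.0):
    """Virial BH mass (Eq. 1): M = f * R_BLR * dV^2 / G, in solar masses."""
    r = r_blr_lt_days * LIGHT_DAY          # BLR radius [cm]
    dv = dv_km_s * 1e5                     # line width [cm/s]
    return f * r * dv**2 / G / M_SUN

# A 10 light-day BLR with a 3000 km/s line width gives ~2e7 solar masses
print(f"{virial_mass(10.0, 3000.0):.2e}")
```

The order-unity virial coefficient $f$ simply rescales the result, so any systematic uncertainty in $f$ propagates linearly into the mass.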
The Sloan Digital Sky Survey Reverberation Mapping Project \citep[SDSS-RM, ][]{Shen_2015a} is a large-scale RM program that simultaneously monitors 849 uniformly-selected quasars over a broad range of $i$-band magnitude (15.0$<i<$21.7) and redshift (0.1$<z<$4.5), which greatly expands the AGN parameter space for which RM has been conducted. With its multiplex capability, SDSS-RM also dramatically improves the observing efficiency of RM, and thus can extend the redshift and luminosity range for which RM lag measurements are feasible. The first-season data from SDSS-RM has already produced lags for different emission lines in a luminosity-redshift regime largely unexplored by past RM studies \citep{Shen_2016b, Li_2017, Grier_2017}, {and the multi-year data have started probing lags at even higher redshifts and luminosities \citep{Grier_2019}.} With an industrial-scale MOS-RM program such as SDSS-RM, it is important to understand the interplay among the quasar sample, {variability characteristics}, survey design and observation sensitivity in order to evaluate/forecast the overall success and limitations of lag measurements. Lag detections strongly depend on the design of the monitoring program, including the cadence, total observation baseline, seasonal/weather gaps, and the signal-to-noise ratio (S/N) of the flux measurements. The complicated selection function induced by these various survey parameters may lead to preferential lag detections in a certain time range and may thus introduce potential selection biases when assessing any intrinsic correlations between lags and quasar properties (such as the R-L relation). In addition, the often poor S/N and lower-amplitude variability in quasars produce low-quality measurements or even false detections. 
Biases may also arise from different methods and assumptions used by a specific lag-measuring technique when applied to the typical survey-quality light curves produced by MOS-RM programs, as most of these techniques were originally developed using high-quality data from local AGN. Detailed simulations of mock data are required to quantify the detection efficiency and quality of lag measurements for MOS-RM programs and assess the strengths and weaknesses of different lag-measuring techniques \citep[e.g.,][]{Peterson_1998,Shen_2015a,King_2015}. In this paper, we use a set of simulated observations of a uniform quasar sample (similar to the SDSS-RM sample after down-sampling) to conduct an investigation on a set of lag-measuring methods: the Interpolated Cross-Correlation Function \citep[ICCF, ][]{Gaskell_Peterson_1987}, $z$-Transformed Discrete Correlation Function \citep[ZDCF, ][]{Alexander_2013} and {\tt JAVELIN} \citep{Zu_2011}. {Although all three methods are widely used in the literature, there has not been a comprehensive comparison of their performance over a broad range of light curve properties. In some recent RM work \citep[e.g.,][]{Grier_2017, Homayouni_2018, Edelson_2019}, {\tt JAVELIN} and ICCF are found to yield consistent lag measurements, but {\tt JAVELIN} lag uncertainties are often smaller than those for ICCF.} The main purposes of this study are to inform current and upcoming MOS-RM programs and to understand selection biases introduced by the MOS-RM program design. This work expands our previous investigation \citep{Shen_2015a} that only focused on the traditional ICCF method to advise the design of the SDSS-RM program. Section \ref{sec:data} describes the generation of our uniform mock quasar sample and its simulated continuum and broad-line light curves. Section \ref{sec:measurelags} presents the methods we use for measuring lags. 
We compare these different methods using results from the uniform sample in Section \ref{sec:results}, where we down-sample the uniform quasar sample to provide results that can be compared to realistic, flux-limited MOS-RM programs. Section \ref{sec:reallife} introduces a statistical approach to efficiently eliminate false detections from low-quality light curves and presents the measurement results from this statistical approach. The implications for the observed R-L relation are discussed in Section \ref{sec:discussion}, and the results are summarized in Section \ref{sec:con}. Throughout this work, we adopt a $\Lambda$CDM cosmology with $\Omega_{\Lambda}=0.7$, $\Omega_{M}=0.3$, and $h=0.7$. \section{Simulations}\label{sec:data} A sample of 100,000 mock quasars and their associated light curve pairs were generated following the procedures described by \cite{Shen_2015a}. We first generate a quasar sample uniformly distributed over a grid of $i$-band magnitude (15$<i<$22) and redshift (0$<z<$5), and calculate the absolute $i$-band magnitudes ($M_{i}$) using K-corrections from \cite{Richards_2006}. The chosen $i$-band and redshift grids are similar to those selected for the SDSS-RM program. Using a power-law spectral index of 0.5 in $F_{\nu}$, we convert the absolute $i$-band magnitudes to monochromatic rest-frame continuum luminosities $L_{5100}$, $L_{3000}$, and $L_{1350}$, which correspond to the continuum wavelengths commonly adopted for use with H{$\beta$}, Mg\,{\sc ii}, and C\,{\sc iv}\ reverberation mapping, respectively. To simplify the simulations, we consider RM for a single line in a given redshift interval: H{$\beta$}\ for $z\leq$0.9, Mg\,{\sc ii}\ for 0.9$<z\leq$2.2 and C\,{\sc iv}\ for $z>$2.2. 
We assign equivalent widths of H{$\beta$}, Mg\,{\sc ii}\ and C\,{\sc iv}\ as functions of the continuum luminosities of each mock quasar, using empirical relations and dispersions measured from the SDSS DR7 quasar sample \citep{Shen_2011}, and compute their corresponding broad-line luminosities. BH masses are assigned using the single-epoch mass estimator based on H{$\beta$}\ and the model broad-line widths and continuum luminosities \citep{Vestergaard_Peterson_2006}. For the majority of this work, we focus on the single-season program simulation (with a duration of 180 days) and measure H{$\beta$}\ lags with a few Mg\,{\sc ii}\ lags at intermediate redshifts. For the multi-season simulation (Section \ref{sec:multi-yr}), we use the same H{$\beta$}\ R$_{\rm BLR}$-L relations for Mg\,{\sc ii}\ and C\,{\sc iv}\ because the actual R-L relations for the other lines are not as well-established as that for H{$\beta$}. We adopt the average H{$\beta$}\ R$_{\rm BLR}$-L relation at 5100\AA\ with a dispersion of 0.15 dex \citep{Bentz_2009b} to assign the expected BLR lags: \begin{equation} \log_{10}\big(\frac{\tau}{{\rm days}}\big) = -21.3 + (0.519)\times \log_{10}\big(\frac{\lambda L_{\lambda, 5100}}{{\rm erg\, s^{-1}}}\big)\ . \end{equation} Although only a handful of Mg\,{\sc ii}\ lags have been reported in the literature \citep[e.g.,][]{Reichert_1994, Dietrich_1995, Metzroth_2006,Shen_2016a, Czerny_2019}, previous studies have demonstrated that the broad Mg\,{\sc ii}\ and H{$\beta$}\ line widths are correlated \citep[e.g.,][]{Wang_2009,Shen_2011,Wang_2019}, and that Mg\,{\sc ii}\ may be used as a substitute for H{$\beta$}\ at $z$ $>$1 \citep[e.g.,][]{McLure_2004, Shen_2012, Trakhtenbrot_2012}. The C\,{\sc iv}\ R-L relation at high redshift is currently constrained by only a handful of high redshift quasars with measured C\,{\sc iv}\ lags \citep[e.g.,][]{Kaspi_2007, Lira_2018, Grier_2019}. 
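As a quick numerical check of the adopted R-L relation, the mean expected lag for a typical quasar luminosity is tens of days (a minimal sketch of the mean relation only; in the simulations the 0.15 dex scatter is added on top of this):

```python
def rl_lag_days(log_lum_5100):
    """Mean H-beta BLR lag (days) from the adopted R-L relation:
    log10(tau/days) = -21.3 + 0.519 * log10(lambda*L_5100 / erg s^-1)."""
    return 10.0 ** (-21.3 + 0.519 * log_lum_5100)

# lambda*L_5100 = 1e44 erg/s corresponds to a mean rest-frame lag of ~34 days
print(f"{rl_lag_days(44.0):.1f} days")
```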
{In local low-luminosity AGN, C\,{\sc iv}\ lags are found to be smaller than H{$\beta$}\ lags by a factor of $\sim 2$ \citep[e.g.,][]{Peterson_Wandel_1999,Peterson_Wandel_2000}. However, the discrepancies in different R-L relations will not affect our results, as the purpose of this study is to show how well each lag-measuring method recovers the assigned lags under different observing circumstances.} For each mock quasar, we generate a continuum light curve with daily sampling, assuming that quasar continuum variability follows the Damped Random Walk (DRW) model \citep[a.k.a. the Ornstein-Uhlenbeck process or the first-order continuous autoregressive (CAR(1)) process, e.g.][]{Kelly_2009, Kelly_2011, Kozlowski_2010, Macleod_2010, Macleod_2012}. The DRW model describes a stochastic process with a damping timescale $\tau$ (the timescale for the time series to become uncorrelated) and a driving variability amplitude $\sigma$. {While the short ($<$day) and long ($>$years) timescales of quasar variability are not well-constrained by existing observations and may deviate from the DRW model \citep[e.g.,][]{Macleod_2010, Macleod_2012,Mushotzky_2011,Simm_2016, Guo_2017, Smith_2018}, our simulations will focus on the timescales where the observed quasar variability can be approximately described by DRW models (1--1000 days). In Section \ref{sec:psd}, we further test the capabilities of each lag measuring method with simulated non-DRW light curves.} The DRW parameters, $\tau$ and $\sigma$, can depend on the rest-frame color, luminosity and black hole mass of the quasar \citep{Macleod_2010, Macleod_2012}. {We assign the DRW parameters following the empirical Equation 6 from \cite{Macleod_2012} and using simulated quasar properties. However, \cite{Kozlowski_2017a, Kozlowski_2017b} reported that the scaling relations between DRW parameters and quasar properties in \cite{Macleod_2012} might be biased or might not exist. 
In this work, we only use the DRW parameters to produce realistic stochastic light curves to mimic quasar variability. Furthermore, we do not attempt to recover the DRW parameters during the fitting to mock light curves; instead, we fix the DRW parameters and only use the DRW model as a tool to interpolate light curves. } The daily-sampled DRW continuum light curve is constructed using the assigned DRW parameters, and the emission line light curve is generated by convolving the continuum light curve with a Gaussian transfer function with an offset equal to the assigned lag and a width of 1/10 of the assigned lag. The transfer function describes the emission line response to the continuum variability and is related to the physical structure and kinematics of the BLR \citep{Blandford_1982}. The choice of the transfer function width is motivated by velocity-resolved lag observations \citep[e.g.,][]{Grier_2013, Skielboe_2015, Pancoast_2018}, but we have tested different transfer function widths and found that the results are insensitive to this detail \citep[e.g.,][]{Shen_2015a}. For each simulation set, we down-sample the full light curves to 30 epochs with a cadence of 6 days to mimic the first-year light curves from the SDSS-RM program. 
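The generation chain above (daily DRW continuum, convolution with a Gaussian transfer function, then 6-day down-sampling with flux noise) can be sketched as follows; the seed, DRW amplitude, and fractional error levels here are illustrative choices, not our production values:

```python
import numpy as np

rng = np.random.default_rng(42)

def drw_lightcurve(n_days, tau, sigma, mean=1.0):
    """Daily-sampled damped random walk via the exact AR(1) update, with
    damping timescale `tau` (days) and stationary amplitude `sigma`."""
    a = np.exp(-1.0 / tau)
    x = np.empty(n_days)
    x[0] = mean + sigma * rng.standard_normal()
    for i in range(1, n_days):
        x[i] = mean + a * (x[i - 1] - mean) \
               + sigma * np.sqrt(1.0 - a * a) * rng.standard_normal()
    return x

def line_lightcurve(cont, lag, width=None):
    """Emission-line curve: the continuum convolved with a Gaussian transfer
    function centred at `lag` days, with width lag/10 (our fiducial choice)."""
    width = lag / 10.0 if width is None else width
    s = np.arange(len(cont))
    kernel = np.exp(-0.5 * ((s - lag) / width) ** 2)
    kernel /= kernel.sum()
    return np.convolve(cont, kernel, mode="full")[: len(cont)]

cont = drw_lightcurve(540, tau=300.0, sigma=0.1)
line = line_lightcurve(cont, lag=20.0)

# Down-sample to 30 epochs at a 6-day cadence and perturb by flux errors
epochs = np.arange(0, 180, 6)
cont_obs = cont[epochs] + 0.02 * rng.standard_normal(epochs.size)
line_obs = line[epochs] + 0.10 * rng.standard_normal(epochs.size)
print(cont_obs.size, line_obs.size)
```

The AR(1) update is the exact conditional sampler for the DRW, so no fine sub-day integration is needed for a daily grid.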
{Following the assumptions of \cite{Shen_2015a}, we adopt fiducial uncertainties of 10$^{-15}$ erg~s$^{-1}$~cm$^{-2}$ and 10$^{-16}$ erg~s$^{-1}$~cm$^{-2}$ for the continuum and line light curves, which are the typical flux uncertainties in the SDSS DR9 BOSS quasar catalog \citep{Paris_2012}, to represent the sensitivity of our simulated survey.} {The median relative uncertainties are $\sim$2\% for continuum fluxes and $\sim$10\% for line fluxes for the final down-sampled, flux-limited sample that mimics the SDSS-RM program (see Section \ref{sec:downsample} for details).} Finally, the fluxes are resampled in the down-sampled light curves by adding to the original flux a Gaussian random deviate with zero mean and a dispersion equal to the flux uncertainty. Figure \ref{fig:lc_example} (left panels) presents an example of our simulated light curves. Compared to the light curves from actual SDSS-RM data used in \cite{Grier_2017}, the median S/N {(flux over flux uncertainty)} of the simulated continuum and line light curves are $\sim$3.5 and $\sim$1.5 times larger at similar $i$-magnitude and redshift (Figure \ref{fig:lc_snr}). {The continuum light curves in \cite{Grier_2017} include additional photometric monitoring data from the Steward Observatory Bok 2.3 m telescope and the 3.6 m Canada-France-Hawaii Telescope. An inter-calibration of the light curves was performed with the Continuum REprocessing AGN MCMC ({\tt CREAM}) software \citep{Starkey_2016}, which corrected for detector properties, telescope throughputs, and other properties specific to the individual telescopes. In addition, {\tt CREAM} applied a corrective term to the continuum and line light curve uncertainties to account for the inter-calibration and additional systematic uncertainties, which inflated the uncertainties by a factor of a few. In most of our simulations, we will not use the inflated uncertainties and will not discuss the effects of systematic flux uncertainties in individual light curves. 
Instead, we will discuss the effect of light curve S/N on lag detection using inflated uncertainties that include these corrections and systematics in Section \ref{sec:diss_err}.} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{LC_cont_SNR.pdf} \includegraphics[width=0.5\textwidth]{LC_line_SNR.pdf} \caption{S/N of the simulated light curves (black shaded histogram) compared to that of the \cite{Grier_2017} light curves (red open histogram). The S/N of the simulated light curves is represented with 50 realizations of randomly selected down-sampled subsets (see \S \ref{sec:downsample} for details of the down-sampling procedure) from the uniform sample to match the redshift and $i$-band magnitude distribution of the \cite{Grier_2017} sample.} \label{fig:lc_snr} \end{figure} \section{Measuring Time Lags}\label{sec:measurelags} We measure time lags with three methods commonly used in the literature: ICCF, ZDCF and {\tt JAVELIN}. ICCF measures the cross correlation between linearly interpolated light curves by assuming light curves are smooth between epochs. ZDCF does not use any interpolation and calculates the discrete cross correlation based solely on the observed data points. Finally, {\tt JAVELIN} assumes the DRW model to describe the variations of the light curves and utilizes Markov chain Monte Carlo \citep[MCMC, e.g.,][]{EMCEE} to fit for the best time lag. While there are other methods available, e.g., non-parametric techniques \citep{Skielboe_2015,Chelouche_2017}, the Discrete Correlation Function \citep[DCF, ][]{Edelson_Krolik_1988} and \texttt{CREAM} \citep{Starkey_2016}, the three chosen methods are the most commonly used for analyzing light curves and measuring time lags; we thus limit our study to these three. Below we describe each of the three methods in further detail. \subsection{Interpolated Cross-Correlation Function}\label{sec:ccf} The most frequently used technique for measuring RM time lags is the ICCF method. 
ICCF calculates time lags by shifting and linearly interpolating the two light curves, calculating the cross-correlation coefficient $r$ at each given time lag ($\tau$) and finding the most likely time lag by locating the maximum $r$ over a grid of lag values. ICCF is designed for high-cadence observations (i.e., traditional RM aiming for a high success rate), and it is unclear to what extent ICCF can be applied to low-to-moderate quality light curve data from MOS-RM programs such as SDSS-RM. In this work, we implement ICCF using the publicly available PyCCF code \citep{pyCCF} adapted from the original ICCF code written by B. Peterson \citep{Peterson_1998}. For a 180-day observing baseline, we compute the ICCF with a search range of $\pm$100 days to require that at least roughly half of the observations are included in the calculation of ICCF. {We tested different values ({0.1, 0.2, 0.5, 1.0, and 2.0 times the light curve cadence}) for the $\tau$ grid spacing. The overall ICCF shape does not change drastically with different $\tau$ grid spacing; however, the ICCF may have spurious spikes or become over-smoothed when the grid density is too high or too low. We selected half of the light curve cadence to be the $\tau$ grid spacing, which yields reasonably smooth CCFs for our mock light curves.} We adopt the traditional flux randomization/random subset sampling (FR/RSS) procedure \citep{Peterson_1998} to obtain the measured time lag and its uncertainties. For 1,000 Monte Carlo (MC) realizations, we randomize the flux measurements by their uncertainties and use a subset of light curve points (chosen at random with repetition) to calculate the CCF. The flux randomization accounts for the flux measurement uncertainties. By choosing random subsets of observations, we can avoid artificial lags introduced by the sampling characteristics of our observations or certain combinations of a few epochs. 
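A minimal ICCF sketch is given below (our own toy implementation, not PyCCF): it performs the two-way linear interpolation, the peak centroiding described below, and a small FR/RSS loop. The Gaussian-flare mock light curves, noise levels, and number of realizations are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def _r(a, b):
    """Pearson r, defined as 0 when either series is constant."""
    if a.std() == 0.0 or b.std() == 0.0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def iccf(t1, f1, t2, f2, lags):
    """Two-way interpolated CCF: for each trial lag, linearly interpolate
    one curve onto the (shifted) epochs of the other and average the r's."""
    return np.array([0.5 * (_r(f1, np.interp(t1 + lag, t2, f2))
                            + _r(f2, np.interp(t2 - lag, t1, f1)))
                     for lag in lags])

def centroid_5pt(lags, r):
    """Centroid over the five points centred on the CCF peak."""
    i = int(np.argmax(r))
    sl = slice(max(i - 2, 0), min(i + 3, r.size))
    return float(np.sum(lags[sl] * r[sl]) / np.sum(r[sl]))

# Mock pair: a Gaussian flare and a copy delayed by 12 days, lightly noisy
t = np.arange(0.0, 180.0, 6.0)
cont = np.exp(-0.5 * ((t - 60.0) / 20.0) ** 2) + 0.02 * rng.standard_normal(t.size)
line = np.exp(-0.5 * ((t - 72.0) / 20.0) ** 2) + 0.02 * rng.standard_normal(t.size)
lags = np.arange(-100.0, 100.1, 3.0)       # grid at half the 6-day cadence

# FR/RSS: perturb fluxes and bootstrap epochs; the resulting centroid
# distribution (CCCD) gives the lag and its 16th/84th-percentile errors
cents = []
for _ in range(200):
    pick = np.unique(rng.integers(0, t.size, t.size))
    fc = cont[pick] + 0.02 * rng.standard_normal(pick.size)
    fl = line[pick] + 0.02 * rng.standard_normal(pick.size)
    cents.append(centroid_5pt(lags, iccf(t[pick], fc, t[pick], fl, lags)))
tau_cent = float(np.median(cents))
print(round(tau_cent, 1))                  # close to the true 12-day delay
```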
The centroid computed over five points centered around the ICCF peak is used as the measured time lag $\tau_{cent}$ in each realization. This approach is slightly different from the conventional method of calculating $\tau_{cent}$ from all the data points with $r>0.8\times r_{max}$. We found that with sparse light curves, CCFs occasionally have multiple strong peaks, which causes the centroid calculated in the conventional method to be biased. With the 5-point method, we are guaranteed to calculate a centroid from the local region of the strongest peak, ignoring the impact of aliased lags from sparse light curves. Figure \ref{fig:peakdefinition} demonstrates that the 5-point method eliminates the majority of false detections while retaining similar detection efficiency (defined as the fraction of objects with a detected lag, see Section \ref{sec:criteria} for our detection criteria). Finally, the cross correlation centroid distribution (CCCD), which is the distribution of the measured $\tau_{cent}$ in all MC realizations, is used to define the final lag and its measurement uncertainty, as described in detail in Section \ref{sec:aliases}. \begin{figure} \centering \includegraphics[width=9cm]{detmap_c6xe30_peakdef_imag21p7_stat_ref.pdf} \caption{Detection efficiency of the ICCF method for simulations with a cadence of 6 days and 30 epochs, using different ICCF centroid calculation schemes. Left panel: the centroid is calculated with all points above 0.8 of the maximum $r$; right panel: the centroid is calculated using 5 points centered on the peak. The colormap represents the detection efficiency and the numbers are the detection counts (true detections in black and false detections in red) of a single down-sampling realization. The total numbers of true and false detections shown in the lower-right corner are the median and uncertainties derived from 100 down-sampling realizations. 
The grey contours show the approximate constant lags from the R-L relation from \cite{Bentz_2009b}.} \label{fig:peakdefinition} \end{figure} \subsection{$z$-Transformed Discrete Correlation Function}\label{sec:zdcf} The $z$-transformed discrete correlation function \citep[ZDCF,][]{Alexander_2013} is a modified version of the original DCF proposed by \cite{Edelson_Krolik_1988}. DCF analyzes the correlations in time series data with a conservative approach by merely calculating the cross correlation of the data points, without any interpolation. DCF calculations can avoid effects of correlated errors between continuum and line fluxes measured from the same spectrum, and yield more conservative uncertainties. However, DCF does not perform well for light curves with irregular or sparse cadences. ZDCF incorporates two improvements to the original DCF: the implementation of equal-population binning and the uncertainty calculations using the $z$-transform. For each given light curve pair, we calculate and sort the time differences between all data pairs from the two light curves. The ZDCF time lag grid is determined by requiring equal numbers of data pairs in each lag bin, i.e., the ZDCF time grid resolution is adaptive to the sampling of the light curves: when the sampling is denser, ZDCF has better resolution at certain time lags. Next, we calculate the correlation coefficient for the data pairs in each bin, and the uncertainty is calculated following \cite{Alexander_2013} using the $z$-transform method. The above procedure is repeated for 100 Monte Carlo realizations, where in each iteration the observed fluxes are randomly altered by the flux uncertainties. The final ZDCF is the {\it average} of the 100 Monte Carlo realizations. To determine peak position and its uncertainties, ZDCF calculates the maximum likelihood from the likelihood function instead of using the traditional FR/RSS method to prevent interpolation of data. 
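The equal-population binning at the core of ZDCF can be sketched as follows (a deliberate simplification: the actual ZDCF also discards zero-lag pairs from the same observation and computes the $z$-transformed correlation within each bin; the 11 pairs per bin used here is the minimum adopted by \cite{Alexander_2013}):

```python
import numpy as np

def equal_population_lag_bins(t1, t2, pairs_per_bin=11):
    """Sort all pairwise time differences t2[j] - t1[i] and cut them into
    bins holding an equal number of pairs; each bin is one ZDCF lag point,
    so the lag resolution adapts to where the sampling is densest."""
    dt = np.sort((t2[None, :] - t1[:, None]).ravel())
    n_bins = dt.size // pairs_per_bin
    bins = dt[: n_bins * pairs_per_bin].reshape(n_bins, pairs_per_bin)
    return bins.mean(axis=1), bins[:, 0], bins[:, -1]  # centre, lo, hi edges

t = np.arange(0.0, 180.0, 6.0)               # 30 epochs, 6-day cadence
centres, lo, hi = equal_population_lag_bins(t, t)
print(centres.size)                          # 900 pairs -> 81 bins of 11
```

Because many pair differences pile up at small lags for even sampling, the bins there are narrow, while bins near the ends of the baseline span a wide range of lags.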
We calculate the likelihood of point $i$ being the maximum in the final averaged ZDCF, which is approximately the product of the probabilities that point $i$ is larger than each other point $j$ in the ZDCF \citep[see][for the complete mathematical description]{Alexander_2013}. We adopt the peak position as the measured lag and the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles of the normalized likelihood function (or the fiducial distribution) as the uncertainties of the peak position. Due to the binning method, the search range of ZDCF is limited by the number of data pairs, especially with sparse light curves. \subsection{JAVELIN}\label{sec:javelin} Another approach to measure lags is to assume {a statistical quasar variability model} and model the continuum light curves, line light curves and their lags simultaneously. {\tt JAVELIN} assumes that the quasar continuum light curve can be described by the DRW model and the line light curve is the shifted, scaled continuum light curve smoothed by a transfer function (a narrow top-hat function is usually assumed in {\tt JAVELIN}, though there are other options available in the code as well). This is a more empirically motivated method to interpolate the data than simple linear interpolation as in ICCF, especially when the observations are sparse or unevenly sampled. {Linear interpolation assigns the smallest uncertainties halfway between data points, exactly where there are no actual data points and the true uncertainties are expected to be the largest. On the other hand, the DRW model (and other stochastic process models) is a model of data covariance and can interpolate unmeasured data points based on the statistical properties of the entire light curve.} For the timescales of interest here (e.g., days to months), the DRW model provides a reasonably good statistical description of stochastic quasar continuum variability \citep[e.g.,][]{Kelly_2009, Kozlowski_2010,Macleod_2010}. 
The {\tt JAVELIN} code first fits a DRW model to the continuum light curve and then fits the lag, width and scale of the transfer function. Since our mock light curves typically do not have sufficient quality (in terms of cadence and baseline) to constrain the damping timescale or the width of the transfer function, we fix these parameters to 300 days and 2 days, respectively. {Since the damping timescale is fixed, we are merely using {\tt JAVELIN} to ``interpolate'' the light curves with a DRW model.} The damping timescale is chosen to be close to the median of the assigned values in the mock sample that mimic the observed distribution for SDSS quasars \citep[][]{Kelly_2009, Macleod_2010, Macleod_2012}, and the transfer function width is chosen to be smaller than the observing cadence. {Even though the transfer function width is different from our assigned value when generating light curves (1/10 of the assigned lag), it is sufficiently close to the widths for most of our detected lags (on a scale of a few days) and the exact choice does not matter as the transfer function cannot be well constrained with our cadence.} We tested this assumption by fixing the damping timescale and the transfer function width at different values (damping timescales at 180, 300, 500 days and transfer function widths at 1, 2, 5, 10 days) in {\tt JAVELIN} and {found the lag measurements do not change with different damping timescales or transfer function widths}; we thus stress that our results are mostly insensitive to these assumptions. We ran {\tt JAVELIN} on the full length of light curves with a flat prior of lags, but only examine the posterior distribution within $\pm$100 days to match our ICCF analysis. {This practice is almost equivalent to limiting the lag search range to $\pm$100 days in {\tt JAVELIN}.
We chose not to limit the search range in {\tt JAVELIN} so that we can examine the posterior to verify the lag limit is reasonable for the length of our light curves and examine the alias effects at the lag limit. In some cases, imposing the lag limit later can effectively remove the strong peaks near the edges in the posterior, which are caused by fits with only a small overlapping segment of the light curves.} The fitting uses MCMC to sample the probability distribution of all the fitted parameters. The posterior distribution function (PDF) is used in a similar fashion as the CCCD for ICCF to calculate the measured lag and its uncertainties, which will be further discussed in Section \ref{sec:aliases}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{detmap_c6xe30_lim100_imag21p7_alias_ref.pdf} \caption{{Similar to Figure \ref{fig:peakdefinition}. Detection efficiency of the ICCF method for simulations with a cadence of 6 days and 30 epochs, with (left) and without (right) the alias removal procedure.}} \label{fig:aliasremoval} \end{figure} \subsection{Alias Removal}\label{sec:aliases} Upon examining the CCCD of the traditional ICCF and the PDF of {\tt JAVELIN}, we occasionally observe multiple peaks. These aliases may arise for various reasons, including a segment of the light curves that is coincidentally correlated, or a local minimum in the MCMC sampling in the case of {\tt JAVELIN}. Here, we follow the quantitative alias removal procedure of \cite{Grier_2017}. First, we apply a weight $P$ to each point in the CCCD/PDF using the fraction of data points included in the calculation: $P=[N(\tau)/N(0)]^{2}$, where $N(x)$ is the number of overlapping points at time lag $x$. Next, we smooth the CCCD/PDF by convolving with a Gaussian filter with a dispersion of 5 days. The 5-day kernel is determined by visual inspection of the PDF.
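The weighting and smoothing steps can be sketched as follows, assuming a regular lag grid and one common overlap convention; the function and variable names are ours:

```python
import numpy as np

def weight_and_smooth(taus, density, t1, t2, sigma_days=5.0):
    """Down-weight lags with little light-curve overlap, then smooth.

    `density` is the CCCD/PDF histogram evaluated on the lag grid `taus`.
    N(tau) counts continuum epochs (t1) that still overlap the line
    light curve (t2) shifted by tau; P = [N(tau)/N(0)]^2.
    """
    def n_overlap(tau):
        lo, hi = t2.min() - tau, t2.max() - tau
        return np.count_nonzero((t1 >= lo) & (t1 <= hi))

    n0 = n_overlap(0.0)
    p = np.array([(n_overlap(tau) / n0) ** 2 for tau in taus])
    weighted = density * p
    # Gaussian smoothing with a 5-day kernel on the regular lag grid.
    dt = taus[1] - taus[0]
    half = int(4 * sigma_days / dt)
    kern = np.exp(-0.5 * (np.arange(-half, half + 1) * dt / sigma_days) ** 2)
    kern /= kern.sum()
    return np.convolve(weighted, kern, mode="same")
```

Because $P$ falls off quadratically with the lost overlap, a secondary peak near the edge of the search range (where only a small segment of the light curves overlaps) is suppressed relative to a peak at moderate lags.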
Finally, the primary peak of the weighted and smoothed CCCD/PDF is identified and all data points beyond the range of the peak, i.e. beyond the closest local minima on both sides of the peak, are excluded. Once the primary peak is identified, we adopt the median of the truncated (but not weighted or smoothed) CCCD/PDF as the final measured lag and the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles as the lower and upper uncertainties. As discussed by \cite{Grier_2017}, the particular choice of the weights does not carry any physical significance. Instead, this empirical weighting form was found to perform well in recovering the true lags and reducing aliases. This step goes beyond the traditional ICCF and {\tt JAVELIN} approach in lag measurements, but is necessary for low-quality light curve data, specifically when the sample size is large and it is unknown whether or not a true lag will be detected. This additional alias removal step does not affect the results for good-quality light curve data where the CCCD/PDF has a well-defined primary peak. {Figure \ref{fig:aliasremoval} shows an example of measured detection efficiency with and without alias removal using ICCF. The alias removal procedure is effective in improving lag detection efficiency by doubling the number of detections in this case, despite introducing a small number of false detections.} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{lag_density_c6xe30_srange100_all_ref.pdf} \caption{Distribution of the measured lag and assigned lag {of the true detections in the uniform sample} for simulations with a cadence of 6 days and 30 epochs. The shaded area is the 2D histogram of the assigned and measured lags (in terms of number of detections in each bin; color bars are in logarithmic scale) and the solid vertical line segments are the uncertainties of the measured lags (randomly down-sampled from all detections for clarity). 
The black solid line is the 1:1 line for guidance.} \label{fig:lag_density} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{lag_rel_err_density_c6xe30_srange100_all.pdf} \caption{Distribution of the normalized measurement uncertainty ($\sigma_{\tau, mea}/\tau_{true}$) and the fractional difference between measured and assigned lags {of the true detections in the uniform sample} for simulations with a cadence of 6 days and 30 epochs. The irregular edges in the upper right corner in the 2D histograms are shaped by the detection criteria, that is, absolute difference $<$3 days (appears as the upper-right tip along the 1:1 line), $\delta$Lag$<$0.75 (cutoff in x-axis seen in ZDCF and {\tt JAVELIN}) or the normalized measured uncertainty $<$1/3 (cutoff in y-axis). {Lag measurement precision (accuracy) improves towards the lower (left) direction.}} \label{fig:lag_relerr} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{lag_assigned_err_c6xe30_srange100_all.pdf} \caption{Absolute uncertainties of lag measurements as a function of assigned lags, based on {the true detections in the uniform sample}. The small dots represent the individual measurements in the mock sample. The open circles mark the median absolute uncertainty and the error bars show the 16$^{\rm th}$ and 84$^{\rm th}$ percentile in each 10-day bin of assigned lags.} \label{fig:lag_abserr} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{err_gauss_c6xe30.pdf} \caption{Distribution of the difference between assigned and measured lags, normalized by the measurement uncertainty, of the uniform sample.
The black dashed line is a Gaussian distribution with unity dispersion.} \label{fig:norm_err} \end{figure} \subsection{Detection Criteria}\label{sec:criteria} For a measured lag to be a detection, we require that it lies more than 3$\sigma$ away from zero, is positive (i.e., the line flux lags behind the continuum flux), and that fewer than half of the CCCD points or MC realizations are rejected in the alias removal procedure (for ICCF and {\tt JAVELIN}). This approach assumes that there is no physical reason to produce a negative lag, and measured negative lags are likely to arise from aliases due to sampling properties of the light curves. We impose additional criteria for the measured lag to qualify as a ``true detection'' in our simulated data. For a true detection, the measured lag must fulfill at least one of the following criteria: \begin{enumerate} \item[1.] Absolute difference from the true lag is $<$ 3 days. \item[2.] Relative difference from the true lag is $<$ 25$\%$. \item[3.] Absolute difference from the true lag is $<3 \sigma$. \end{enumerate} {The first two criteria are introduced because the last criterion can be systematically biased against short lags: short lags are less likely to meet the 3$\sigma$ detection requirement given the same measurement error.} If none of the three criteria is satisfied, the detection is classified as a false detection. False detections are inevitable even after imposing our alias removal procedure. In addition, we require detections (including both true and false detections) to have assigned lags $<$100 days, i.e., the search range for a 180-day observation baseline. A great majority of false detections are produced by light curve pairs of longer lags with variability on shorter timescales that leads to aliases.
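A minimal per-object sketch of how these detection and true-detection criteria could be applied (the alias-rejection-fraction requirement is omitted, and we assume criterion 3 uses the measurement uncertainty):

```python
def classify_lag(tau, err_lo, err_hi, tau_true):
    """Apply the detection and 'true detection' criteria of the text.

    Returns 'true', 'false', or 'nondetection'.  err_lo/err_hi are the
    lower/upper 1-sigma uncertainties from the 16th/84th percentiles.
    """
    # Detection: positive and >3 sigma away from zero (lower error side).
    if tau <= 0 or tau - 3 * err_lo <= 0:
        return "nondetection"
    diff = abs(tau - tau_true)
    if (diff < 3.0                              # criterion 1: within 3 days
            or diff < 0.25 * tau_true           # criterion 2: within 25%
            or diff < 3 * max(err_lo, err_hi)): # criterion 3: within 3 sigma
        return "true"
    return "false"
```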
In our controlled experiment, where the true lags are known, we can simply apply a cut on the assigned true lags (which will not be detected with a search range of $\pm100$ days), and compare different lag measuring methods in Section \ref{sec:results}. Of our uniform sample, 9,942 ($\sim$10\%) of the mock quasars have assigned lags less than 100 days, thus only 10\% of the initial quasar sample can have lags detected from one season of observation. In reality, the true lags of quasars are unknown, therefore we develop a set of realistic selection criteria that can effectively remove false detections in \S\ref{sec:reallife}. By implementing these reasonable selection criteria, we can assess the reliability of observed lags in actual MOS-RM programs. Unless otherwise specified, the measured lags refer to the measured observed-frame lags in the following. \subsection{Flux-limited Down-sampling}\label{sec:downsample} In order to mimic realistic MOS-RM surveys with flux-limited samples, we also compare the lag detection results by down-sampling from our uniform sample. The redshift and $i$-mag distribution is matched to that of SDSS-RM quasars using the quasar luminosity function from \cite{Richards_2006}. {For each simulation set, we generate 100 realizations of the down-sampling, which have a median of $956^{+47}_{-26}$ sources in total and $177^{+14}_{-11}$ ($\sim$19\%) sources with lag$<$100 days (uncertainties are derived from the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles).} This sample will be referred to as the flux-limited sample. \section{Results}\label{sec:results} \subsection{Measured Lags} To evaluate the robustness of each technique, Figure \ref{fig:lag_density} shows the density distribution of the assigned lags versus the measured lags {of the true detections} for the uniform sample.
{There are very few false detections and they can be ignored for now.} The results for {\tt JAVELIN} have the lowest scatter in the distribution: {the Pearson correlation coefficients are $r_{\rm ICCF}\sim$0.984, $r_{\rm ZDCF}\sim$0.980, $r_{\rm \tt{JAVELIN}}\sim$0.993}, indicating that {\tt JAVELIN} lags are more accurate. ICCF lags are consistent with their assigned lags in general, despite the larger scatter. ZDCF is the least accurate at reproducing the assigned lags and, by design, is not capable of detecting lags shorter than the observation cadence. {For the flux-limited sample, the Pearson correlation coefficients are $r_{\rm ICCF}\sim$0.933, $r_{\rm ZDCF}\sim$0.925, $r_{\rm \tt{JAVELIN}}\sim$0.974.} Figure \ref{fig:lag_relerr} evaluates the quality of lag measurements by comparing the normalized measurement uncertainties (normalized by the value of the assigned lag) and fractional difference between the assigned and measured lags ($\delta{\rm Lag\equiv |\tau_{\rm mea}/\tau_{\rm assigned}-1|}$) for the true detections {of the uniform sample}. At low measurement quality (high normalized uncertainties and $\delta$Lag), the irregular edges are caused by the detection criteria and are similar among all three methods. At high measurement quality, ICCF and {\tt JAVELIN} are able to make lag measurements with smaller uncertainties and $\delta$Lag than ZDCF. {The normalized measurement uncertainties and $\delta$Lag (median values and uncertainties derived from the 16$^{\rm th}$/84$^{\rm th}$ percentiles) are 7.5\%($^{+12}_{-0.50}$), 3.3\%($^{+8.0}_{-2.5}$) for ICCF, 11\%($^{+12}_{-6.1}$), 3.6\%($^{+6.3}_{-2.5}$) for ZDCF and 4.3\%($^{+12}_{-0.30}$), 2.2\%($^{+7.3}_{-1.7}$) for {\tt JAVELIN}.} In addition, more {\tt JAVELIN} lags lie in the higher quality regime (low normalized uncertainties and $\delta$Lag) than ICCF lags.
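The per-object quality metrics used here can be computed as, e.g. (a sketch with our own function names):

```python
import numpy as np

def lag_quality(tau_mea, sigma_mea, tau_true):
    """Quality metrics for a sample of lag measurements.

    Returns the per-object normalized uncertainty sigma/tau_true, the
    fractional offset deltaLag = |tau_mea/tau_true - 1|, and for each
    the sample median with 16th/84th-percentile ranges.
    """
    tau_mea, sigma_mea, tau_true = map(np.asarray, (tau_mea, sigma_mea, tau_true))
    norm_err = sigma_mea / tau_true
    dlag = np.abs(tau_mea / tau_true - 1.0)

    def med_pct(x):
        lo, med, hi = np.percentile(x, [16, 50, 84])
        return med, med - lo, hi - med  # median, lower range, upper range

    return norm_err, dlag, med_pct(norm_err), med_pct(dlag)
```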
{For the flux-limited sample, the normalized measurement uncertainties and $\delta$Lag are 15\%($^{+12}_{-8.7}$), 5.8\%($^{+12}_{-4.4}$) for ICCF, 17\%($^{+11}_{-9.2}$), 4.7\%($^{+11}_{-3.5}$) for ZDCF and 9.8\%($^{+13}_{-6.6}$), 4.2\%($^{+10}_{-3.3}$) for {\tt JAVELIN}.} Figure \ref{fig:lag_abserr} demonstrates that the absolute uncertainties of {\tt JAVELIN} lags are smaller than those from ICCF and ZDCF, a result confirmed in previous works \citep[e.g.,][]{Grier_2017,Edelson_2019}. With our controlled experiment with known lags, we are able to demonstrate that {\tt JAVELIN} can provide more accurate lag measurements, as already evident in Figure \ref{fig:lag_density}. In addition, Figure \ref{fig:lag_relerr} suggests that the {\tt JAVELIN} errors are reasonable and are not an underestimation of the actual uncertainties in general. To further illustrate this point, Figure \ref{fig:norm_err} shows the distribution of the difference between assigned and measured lags, normalized by the measurement errors, {for the uniform sample}. {The distribution for {\tt JAVELIN} ($\sigma_{gauss}\sim$0.85) is most consistent with a Gaussian with unity dispersion, while the ICCF ($\sigma_{gauss}\sim$0.69) and ZDCF ($\sigma_{gauss}\sim$0.54) lag errors are more overestimated, leading to narrower distributions. ICCF also produces more outliers with underestimated lag uncertainties ($\Delta {\rm Lag}/\sigma_{\tau, mea}>3$) ($\sim$5.1\%) compared to {\tt JAVELIN} ($\sim$0.57\%) and ZDCF does not have any outliers. For the flux-limited sample, $\sigma_{gauss}\sim$0.73 for ICCF, $\sigma_{gauss}\sim$0.49 for ZDCF and $\sigma_{gauss}\sim$0.76 for {\tt JAVELIN} and the fractions of outliers with underestimated lag uncertainties are 1.1\%, 0.0\% and 0.28\%, respectively.} The reason why ICCF produces overestimated lag errors is not entirely clear \citep[e.g.,][]{Edelson_2019}. 
The flux resampling part of ICCF produces noisier light curves than the original light curve (i.e., the data points are perturbed twice by flux errors), and the random subset sampling procedure will remove epochs, which increases the uncertainty of lag detection due to the loss of temporal information and may become critical in the low-quality regime (i.e., sparse sampling and large light curve errors). {\tt JAVELIN} is a more statistically rigorous approach and does not suffer from the simplifications used in ICCF. \begin{figure}[h!] \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{hist_c6xe30_lim100_srange100_all_imag21p7.pdf} \end{tabular} \caption{Distribution of the measured lags of the uniform sample in observed frame. The grey solid histogram shows the number of detectable lags in each bin. The open histograms represent the number of true detections and the solid histograms are the number of false detections. {The number of false detections is inflated by a factor of five for clarity.}} \label{fig:hist_lim100} \end{figure} \begin{figure}[h!] \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{hist_c6xe30_lim100_srange100_all_RF_imag21p7.pdf} \end{tabular} \caption{Similar format to Figure \ref{fig:hist_lim100}. The distribution of measured lags of the uniform sample in rest frame.} \label{fig:hist_RF} \end{figure} \subsection{Distribution of Detected Lags}\label{sec:distribution_lags} Figure \ref{fig:hist_lim100} compares the distribution of measured lags to that of assigned lags in the uniform sample. Lags in the range of $\sim$10--90 days are most likely to be detected with our fiducial cadence and baseline. In this range, {all methods have similar detection efficiency in each lag bin; the median detection efficiencies are $\sim$61$\%$ for ICCF, $\sim$52$\%$ for ZDCF and $\sim$64$\%$ for {\tt JAVELIN}}, which suggests the detections are not biased towards certain lag ranges.
Interestingly, {\tt JAVELIN} detects many more short lags than ICCF and ZDCF. This behavior indicates that, by assuming the DRW model, {\tt JAVELIN} is capable of producing reasonable predictions of light curves on a grid finer than the cadence, and thus makes it possible to detect a lag below the formal cadence of the data under certain circumstances. {As shown in Figure \ref{fig:hist_lim100}, most of the false detections fall in the range of $>$60 days. ICCF is prone to producing false detections in the 60--100 days range, regardless of the input lag.} Figure \ref{fig:hist_RF} shows the same distributions but in the rest frame. A uniform and wide distribution of rest-frame lags is critical to measuring an unbiased R-L relation. With our 180-day monitoring duration, we detect mostly rest-frame lags in the range of 20--40 days. The detection rate decreases at $\lesssim20$ days due to cadence limitations, and fewer detections are made at $>40$ days because the observed-frame lags are shifted beyond our search range. We further discuss the biases in measuring the R-L relation slope under different observing conditions and lag measurement methods in Section \ref{sec:rl}. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{detmap_c6xe30_lim100_imag21p7_srange100_all_ref.pdf} \caption{Detection efficiency in the simulated grid of quasars measured with each method in a simulated program of 6-day cadence and 30 epochs. From the top to bottom panels are the results from ICCF, ZDCF and {\tt JAVELIN}. The colormap represents the detection efficiency and the numbers are the detection counts (true detections in black and false detections in red) of a single down-sampling realization. The total numbers of true and false detections shown in the lower-right corner are the median and uncertainties derived from 100 down-sampling realizations.
The grey contours show the approximate constant lags from the R-L relation from \cite{Bentz_2009b}.} \label{fig:detmap} \end{figure*} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{corr_det_imag_c6xe30_lim100_srange100_all_ds_imag21p7.pdf} \includegraphics[width=0.5\textwidth]{corr_det_z_c6xe30_lim100_srange100_all_ds_imag21p7.pdf} \end{tabular} \caption{Detection efficiency (solid line and shaded area) and true (square) and false (cross) detection counts of the three methods as functions of $i$-band magnitude (right panel) and redshift (left panel) in a simulated observation with 6 day cadence and 30 epochs. Detection counts are obtained using 100 down-sampling realizations, the median and the 16$^{\rm th}$ and 84$^{\rm th}$ percentiles are adopted as the final counts and their uncertainties. {The dotted lines show the number of sources with lags shorter than the search range (i.e. 100 days) in each magnitude or redshift bin.} For $i<$18, the detection efficiencies are not shown because there are no quasars selected in more than 80\% bootstrapping iterations.} \label{fig:corr} \end{figure*} \subsection{Detection Efficiency}\label{sec:det_eff} Figure \ref{fig:detmap} displays the detection efficiency of true detections in each redshift and $i$-band magnitude bin with simulations of 6-day cadence and 30 epochs for the uniform sample. {The overall detection fractions are $\sim$40\% for ICCF and $\sim$36\% {\tt JAVELIN}, and $\sim$23\% for ZDCF out of all the detectable sources (i.e. assigned lag$<$100 days) in the flux-limited sample.} However, as previously shown in Figure \ref{fig:hist_lim100}, ICCF has a higher false detection rate {($\sim$5.4\%)} than the other two methods {({\tt JAVELIN}$\sim$3.1\% and ZDCF$\sim$2.4\%) for the flux-limited sample}. Most of these false detections lie in the fainter quasar population, where the quasar variability is buried in the flux measurement uncertainties. 
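The per-bin efficiencies shown in such maps can be computed along these lines (a sketch; the function name and binning are illustrative):

```python
import numpy as np

def efficiency_map(z, imag, detected, z_edges, m_edges):
    """Detection efficiency on a (redshift, i-band magnitude) grid.

    `detected` flags true detections among sources whose assigned lags
    fall inside the search range; empty bins return NaN.
    """
    z = np.asarray(z, dtype=float)
    imag = np.asarray(imag, dtype=float)
    det = np.asarray(detected, dtype=float)
    n_all, _, _ = np.histogram2d(z, imag, bins=[z_edges, m_edges])
    n_det, _, _ = np.histogram2d(z, imag, bins=[z_edges, m_edges], weights=det)
    # Avoid division by zero; mark empty bins as NaN.
    return np.where(n_all > 0, n_det / np.maximum(n_all, 1.0), np.nan)
```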
The detection efficiency, like the time lags, depends on redshift and $i$-band magnitude. Observed-frame lags are time dilated by $(1+z)$, so the lags will be shifted out of the search range at high redshifts. Our 100-day search range only allows detection of lags at redshifts $z<$1.5. Similarly, lags at low redshift are difficult to detect as the lags may fall below the observing cadence. Quasar variability is more likely to be diluted by noise for dimmer sources, so the detection efficiency naturally decreases as we approach the survey flux limit. In the faintest $i$-mag and lowest $z$ bin, the detection rate is low because luminous quasars with high Eddington ratios tend to vary more on longer timescales ($>100$ days) and their lags are not detectable within our observing baseline \citep{Macleod_2010}. The detection efficiency and detection counts as functions of $i$-band magnitude and redshift are shown in Figure \ref{fig:corr}, using the downsampled simulations that mimic the SDSS-RM sample. Detection efficiency decreases with $i$-band magnitude. However, since the number of quasars increases with $i$-band magnitude, the number of detections also increases. The detection efficiencies of ICCF and {\tt JAVELIN} are roughly the same and higher than that of ZDCF in all magnitude bins. The detection efficiency is the highest for {\tt JAVELIN} and ZDCF at redshift $\sim$0.2 and decreases both towards lower and higher redshift. As the redshift increases, quasars with detectable lags tend to be fainter, thus decreasing the detection efficiency. The lags of the quasars in the lowest redshift bin are too short to detect. For ICCF, detection efficiency is relatively consistent in the range of $\sim$0.2--1.0, {because ICCF is more sensitive to lags around $\sim$100 days. Our down-sampled realizations demonstrate that most of the detected lags are from quasars around redshift $\sim$0.5 and with $i>19$.
This behavior is a selection effect due to the sample characteristics and the range of lags where the fiducial survey design is sensitive.} \begin{figure} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{detfunc_cad_lim100_imag21p7.pdf} \end{tabular} \caption{Total counts of true (square) and false (cross) detections in the flux-limited sample of simulations with 3-, 6-, and 12-day cadence.} \label{fig:cad} \end{figure} \subsection{Effects of Cadence/$N_{epoch}$}\label{sec:diss_cad} To investigate the effect of cadence on lag detection, we ran additional simulations with cadences of 3 and 12 days and the same 180-day observation baseline to compare to our fiducial cadence of 6 days. We start from the daily-sampled light curves described in Section \ref{sec:data} and resample the cadence and flux measurements based on the same mock quasars and light curves, and follow the same procedures in measuring lags as described in Section \ref{sec:measurelags}. As shown in Figure \ref{fig:cad}, using the same method, the overall detection efficiency decreases as the monitoring cadence increases. Again {\tt JAVELIN} and ICCF have higher detection efficiencies than ZDCF. The increase of detection efficiency as cadence improves is mainly a result of more data points in the light curves, since most of the expected lags will be resolved even with a 12-day cadence, but a higher cadence can lead to more lag detections on shorter timescales. These results are already confirmed in earlier simulations with ICCF \citep{Shen_2015a}. Figure \ref{fig:corr_cad_imag} and Figure \ref{fig:corr_cad_z} show the breakdown of the detection efficiency in each redshift and $i$-band magnitude bin for different cadences/$N_{epoch}$ (but with fixed baseline). The number of detections and detection efficiency decrease in all bins with increasing cadence as expected.
The only exception is the high-redshift bins in the ICCF case, which remain similar in the 3-day and 6-day cadence simulations; this result may simply be due to small number statistics. \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{corr_det_imag_CCF_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_imag_ZDCF_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_imag_Javelin_ds_imag21p7.pdf} \end{tabular} \caption{Detection efficiency (solid line and shaded area) and true (square) and false (cross) detection counts as functions of $i$-band magnitude in simulated observations with 3-, 6- and 12-day cadence. The dotted lines show the number of sources with lags shorter than the search range (i.e. 100 days) in each magnitude or redshift bin. For $i<$18, the detection efficiencies are not shown because there are no quasars selected in more than 80\% bootstrapping iterations.} \label{fig:corr_cad_imag} \end{figure*} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{corr_det_z_CCF_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_z_ZDCF_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_z_Javelin_ds_imag21p7.pdf} \end{tabular} \caption{Detection efficiency (solid line and shaded area) and true (square) and false (cross) detection counts as functions of redshift in simulated observations with 3-, 6- and 12-day cadence. The dotted lines show the number of sources with lags shorter than the search range (i.e. 100 days) in each magnitude or redshift bin.} \label{fig:corr_cad_z} \end{figure*} We also ran our simulated observations with non-uniform cadence using the first-year SDSS-RM spectroscopic observations that have an average cadence of 5.7 days (median cadence of 4 days) and 32 epochs.
For ICCF and {\tt JAVELIN}, the overall detection efficiency and number of detections after downsampling are consistent with our uniform-cadence simulations. While correlated variations in the poorly-sampled sections of the light curves may be missed by the correlation analysis for some sources, lags of other sources might be identified in more densely-sampled parts of the light curves, so the non-uniform cadence does not significantly change the results for the overall sample. However, this is not the case for ZDCF, where the detection efficiency for the non-uniform cadence case is only about half of that for the uniform-cadence case. ZDCF detects fewer lags in all bins with non-uniform cadence, but especially so at lag $\lesssim 20$ days and $\sim40$ days. This lack of detections arises because the ZDCF binning algorithm is less sensitive to lags in this range with the non-uniform cadence. Having a reasonable interpolation scheme, such as with ICCF or {\tt JAVELIN}, helps detect lags when the cadence is not uniform. \begin{figure} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{detfunc_err_lim100_imag21p7.pdf} \end{tabular} \caption{Total counts of true (square) and false (cross) detections in the flux-limited sample at different error inflating factors for the line light curve.
Continuum light curve errors are all 3.5 times higher compared to previous figures (i.e., Figures \ref{fig:lag_density} to \ref{fig:corr_cad_z}).} \label{fig:snr} \end{figure} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{corr_det_imag_CCF_err_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_imag_ZDCF_err_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_imag_Javelin_err_ds_imag21p7.pdf} \end{tabular} \caption{Detection efficiency (solid line and shaded area) and true (square) and false (cross) detection counts as functions of $i$-band magnitude in simulated observations with inflated error bars. The dotted lines show the number of sources with lags shorter than the search range (i.e. 100 days) in each magnitude or redshift bin. For $i<$18, the detection efficiencies are not shown because there are no quasars selected in more than 80\% bootstrapping iterations.} \label{fig:corr_snr_imag} \end{figure*} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{corr_det_z_CCF_err_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_z_ZDCF_err_ds_imag21p7.pdf} \includegraphics[width=0.33\textwidth]{corr_det_z_Javelin_err_ds_imag21p7.pdf} \end{tabular} \caption{Detection efficiency (solid line and shaded area) and true (square) and false (cross) detection counts as functions of redshift in simulated observations with inflated error bars. The dotted lines show the number of sources with lags shorter than the search range (i.e. 100 days) in each magnitude or redshift bin.} \label{fig:corr_snr_z} \end{figure*} \subsection{Effects of Light Curve S/N}\label{sec:diss_err} Sufficient light curve S/N is required for any lag measuring method to identify correlated variability in the presence of flux errors.
In this section, we decrease our continuum light curve S/N by a factor of 3.5, to match the S/N of the continuum light curves of \cite{Grier_2017}, and line light curve S/N by a factor of 0.5, 1.0, 1.5 (closest to those in the \cite{Grier_2017} sample), and 2.0 to investigate the performance of each method under various flux S/N. In Figure \ref{fig:snr}, the total detection count decreases as the light curve S/N decreases for ICCF and ZDCF as expected. However, for {\tt JAVELIN}, the total number of detections remains approximately constant as light curve S/N decreases. {The individual bootstrapping realizations indicate that most of the {\tt JAVELIN} lags are still detected when the light curve quality is degraded to these levels, but with slightly larger lag uncertainties.} For ICCF and ZDCF, however, the dimmer and higher-$z$ quasars are no longer detected when the light curve S/N decreases. {Measurement uncertainties for {\tt JAVELIN} are always the most reliable (i.e. $\sigma_{gauss}\sim$0.75 for all simulations) for different S/N levels, but ICCF and ZDCF measurement uncertainties become more overestimated when light curve S/N decreases.} Similar trends are observed in the detection efficiency when broken down into $i$-band magnitude and redshift bins in Figure \ref{fig:corr_snr_imag} and Figure \ref{fig:corr_snr_z}. \subsection{Effects of the Power Spectral Density (PSD) of the Driving Light Curve}\label{sec:psd} { Our mock light curves are simulated using DRW models, which is also the assumption used in {\tt JAVELIN} for lag measurements. If the actual quasar light curves are approximately described by DRW models, as observed for large samples of quasars for the timescales of interest here \citep[e.g.,][]{Macleod_2010}, then using {\tt JAVELIN} is the correct approach to interpolate the light curves between the epochs in the lag calculation. 
However, one concern is that if the actual quasar light curve significantly deviates from a DRW model, then the basic assumption in {\tt JAVELIN} is violated and the lag measurement may be problematic. } {To test this possibility, we generate long, {daily-sampled} continuum light curves with a power-law PSD$\propto f^{\alpha}$ with slope $\alpha$ of $-1$, $-2$ and $-3$ {using the astroML \citep{astroML} package in python, which follows the approach described in \citet{TK95}}. The light curve variances are scaled to match the same rms variability as for the {uniform sample} in our fiducial simulations {before adding Gaussian measurement uncertainties}. We then follow the same procedures of generating the line light curves, {assigning light curve uncertainties} and measuring the time lags with ICCF, ZDCF and {\tt JAVELIN} as described in Section \ref{sec:measurelags}. {After down-sampling to sparse, shorter light curves, there will not be sufficient data points or baseline to properly sample the frequency space and the measured PSD slope might change, which is similar to the situation in which our light curves cannot constrain DRW parameters.} Specifically, the DRW model has a broken power-law PSD with a slope of $-2$ at high frequencies and a slope of $0$ at low frequencies \citep[the characteristic timescale is about a few hundred days, e.g.,][]{Macleod_2010}. Recent PSD measurements for several AGN observed by the Kepler satellite {\citep{Mushotzky_2011, Kasliwal_2015, Kasliwal_2017, Smith_2018}} suggested a PSD slope steeper than $-2$ for timescales below a few days, indicating less variability on the shortest timescales than the DRW model. Using a single power-law slope for the PSD over all relevant timescales is an oversimplification, as the quasar variability PSD is usually a broken power-law in the optical \citep[e.g.,][]{Simm_2016, Smith_2018}, but nevertheless this allows us to test the impact of any deviations from the DRW models.
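A minimal sketch of the \citet{TK95} recipe used to generate such power-law PSD light curves (our own implementation for illustration, not the astroML call; parameter names are ours):

```python
import numpy as np

def tk95_lightcurve(n, dt, alpha, rms, seed=0):
    """Generate a light curve with PSD ~ f^alpha (Timmer & Koenig 1995).

    Draws Fourier amplitudes from the target PSD with Gaussian random
    real/imaginary parts, inverse-transforms, then rescales to the
    requested rms variability.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dt)[1:]          # skip the zero frequency
    psd = freqs ** alpha
    # Real and imaginary parts drawn as N(0, sqrt(PSD/2)).
    re = rng.normal(0.0, np.sqrt(psd / 2))
    im = rng.normal(0.0, np.sqrt(psd / 2))
    if n % 2 == 0:
        im[-1] = 0.0                              # Nyquist bin must be real
    spec = np.concatenate(([0.0], re + 1j * im))
    lc = np.fft.irfft(spec, n=n)
    return lc * rms / lc.std()
```

A steeper (more negative) slope puts more power at low frequencies, so the resulting curves are smoother and, over a 180-day baseline, closer to monotonic, which is the regime where all three methods struggle.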
} { The measured lags are most correlated with the assigned lags ($r_{\rm ICCF}\sim$0.95, $r_{\rm ZDCF}\sim$0.95, $r_{\rm \tt{JAVELIN}}\sim$0.98 for the flux-limited sample) when $\alpha=-1$ and least correlated with the assigned lags when $\alpha=-3$ ($r_{\rm ICCF}\sim$0.81, $r_{\rm ZDCF}\sim$0.75, $r_{\rm \tt{JAVELIN}}\sim$0.92 for the flux-limited sample). {\tt JAVELIN} remains the best of the three at reproducing the assigned lags, even when the input PSD is significantly different from the one assumed (i.e., DRW) in {\tt JAVELIN}. } { As shown in Figure \ref{fig:psd}, all methods detect the most lags when $\alpha=-1$ (detection efficiencies for the flux-limited sample: $\sim$38\% for ICCF, $\sim$24\% for ZDCF and $\sim$45\% for {\tt JAVELIN}) and the fewest lags when $\alpha=-3$ (detection efficiencies for the flux-limited sample: $\sim$22\% for ICCF, $\sim$9.6\% for ZDCF and $\sim$12\% for {\tt JAVELIN}). When there is more variability on short timescales ($\alpha=-1$), there are more features in the light curves for the methods to model and correlate. When $\alpha=-3$, light curves tend to be slowly varying or even monotonic for almost the entire 180-day baseline, which makes detecting lags more difficult for all methods. In all cases, ICCF has the highest false detection rate, and the false detections tend to cluster around the search limit ($\sim 60\%$ of the monitoring period), which will lead to a biased lag distribution as discussed in Section \ref{sec:distribution_lags}. {\tt JAVELIN} also has a higher false detection rate when $\alpha=-1$ compared to the two other $\alpha$ cases (but still fewer false detections than ICCF).
{\tt JAVELIN} is unable to reproduce the high-frequency variations in these light curves for this shallowest PSD slope, leading to more false detections (8\% of all sources, compared to $<$1\% when $\alpha=-2$ or $-3$), and more overestimated uncertainties as seen in the left panel in Figure \ref{fig:psd_gauss}. Overall, ICCF uncertainties are overestimated ($\sigma_{gauss}\sim$0.6) when $\alpha=-1$, but underestimated ($\sigma_{gauss}\sim$1.1) when $\alpha=-3$, ZDCF uncertainties are overestimated in all simulations, and {\tt JAVELIN} uncertainties are slightly overestimated but more consistent among all simulations. } {These additional tests demonstrate that when the actual light curve PSD is different from the DRW model, the relative performance in terms of lag detection efficiency, rate of false detections, and the reliability of reported lag uncertainties remains more or less the same among the three methods. {These tests also show that the DRW model is extremely flexible and is capable of fitting non-DRW light curves (see \cite{Kozlowski_2016} for a similar conclusion). Even though the DRW parameters cannot be constrained by our light curves, the DRW model can produce reasonable interpolation (better than linear interpolation) and thus outperforms ICCF and ZDCF in most test cases.} Of course we have not exhausted the variety of PSD shapes and it is possible that certain peculiar PSD shapes will change the relative performance among the three methods.} {Finally, we point out that we have not tested the effect of deviations in the line transfer function from the assumed top-hat function in {\tt JAVELIN}. \cite{Yu_submitted} performed a more detailed study on the impact of transfer function forms on the performance of {\tt JAVELIN}, in the regime of high quality light curves typically achieved for local RM programs. It is always a possibility that {\tt JAVELIN} will fail badly for specific cases with unusual transfer functions or variability PSD. 
However, for the bulk of typical quasar light curves, and especially for the regime of light-curve quality (e.g., S/N and sampling) of interest to most MOS-RM programs, {\tt JAVELIN} is favored over the other two methods.} \begin{figure} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{detfunc_PSD_lim100_imag21p7.pdf} \end{tabular} \caption{{Total counts of true (square) and false (cross) detections for the flux-limited sample with mock light curves generated from single power-law PSDs (as opposed to the DRW model) with different slopes. The overall detection fraction decreases for steeper PSDs, where the light curves are increasingly dominated by slowly varying (or even monotonic) trends. }} \label{fig:psd} \end{figure} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{err_gauss_c6xe30_psd1_ds.pdf} \includegraphics[width=0.33\textwidth]{err_gauss_c6xe30_psd2_ds.pdf} \includegraphics[width=0.33\textwidth]{err_gauss_c6xe30_psd3_ds.pdf} \end{tabular} \caption{{Distribution of the difference between assigned and measured lags, normalized by the measurement uncertainty, for the flux-limited sample using mock light curves generated with single power-law PSDs with different slopes. The black dashed line is a Gaussian distribution with unity dispersion.}} \label{fig:psd_gauss} \end{figure*} \section{Lag Detection in Real Surveys}\label{sec:reallife} \subsection {{Selection with light curve quality cuts}} In reality, the true lags of quasars in the MOS-RM sample are unknown. Instead of comparing with the true lag, we can use the quality of light curve fits and the properties of the light curves to evaluate the quality of the lag measurements. Traditionally, visual inspection is often invoked to assess the quality of the lag measurements.
{\cite{Grier_2017} applied cuts on the minimum ICCF correlation coefficient ($r_{max}$) and the continuum and line light curve RMS variability S/N (defined as the intrinsic variability of the light curve about a fitted linear trend, divided by the uncertainty of the estimated intrinsic variability). $r_{max}$ can be used to evaluate if the light curves are well-correlated. The continuum and line RMS variability can be used to identify short-time variability and exclude spurious correlations for noisy light curves or light curves with long, monotonic trends. The selected cutoff values strongly depend on the desired balance between completeness and purity of lag detections, for example, to achieve an acceptable false-detection rate. Figure \ref{fig:detmap_g17} shows the simulated detections by imposing the additional lag-significance criteria of \cite{Grier_2017}, with simulated light curve S/N matched to the \cite{Grier_2017} sample (continuum light curve uncertainties inflated by 3.5 times and line light curve uncertainties inflated by 1.5 times). We evaluate the robustness of the detections based on Section \ref{sec:criteria}, which is more stringent than \cite{Grier_2017} as they only require a 2$\sigma$ deviation from zero-lag for a significant lag. Roughly half of the detections from the original test in Section \ref{sec:diss_err} are removed due to low correlation or low variability amplitude. The number of detections is slightly lower than for \cite{Grier_2017} due to the more stringent detection criteria, and the false-detection rate is around 20\% to 30\%. Since these additional quality cuts remove the same objects from the detected sample in each method, they do not affect our conclusions about the relative performance of different lag measuring methods. 
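The RMS variability S/N statistic can be sketched as below. This is a simplified stand-in for the \cite{Grier_2017} estimator (the exact definition of the uncertainty on the intrinsic variability differs); the function name and the bootstrap error estimate are our own choices:

```python
import numpy as np

def rms_variability_snr(t, flux, err, n_boot=500, seed=1):
    """Intrinsic rms variability about a fitted linear trend, divided by
    a bootstrap estimate of its uncertainty (simplified sketch)."""
    rng = np.random.default_rng(seed)

    def sigma_rms(idx):
        trend = np.polyval(np.polyfit(t[idx], flux[idx], 1), t[idx])
        # excess variance: residual variance minus mean measurement variance
        excess = np.var(flux[idx] - trend) - np.mean(err[idx] ** 2)
        return np.sqrt(max(excess, 0.0))   # clip unphysical negatives

    full = sigma_rms(np.arange(t.size))
    boots = [sigma_rms(rng.integers(0, t.size, t.size))
             for _ in range(n_boot)]
    scatter = np.std(boots)
    return full / scatter if scatter > 0 else np.inf
```

Light curves whose statistic falls below the chosen threshold (together with a minimum $r_{max}$) would be excluded from the lag sample.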
} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{detmap_c6xe30_lim100_imag21p7_cerr35lerr15_sig.pdf} \caption{{Detection efficiency of simulations with a cadence of 6 days and 30 epochs, based on the lag-significance criteria of \cite{Grier_2017}. The colormap represents the detection efficiency and the numbers are the detection counts of a single down-sampling realization. The total numbers of true and false detections shown in the lower-right corner are the median and uncertainties derived from 100 down-sampling realizations, defined by the detection criteria described in Section \ref{sec:criteria} (true detections in black and false detections in red). The grey contours show the approximate constant lags from the R-L relation from \cite{Bentz_2009b}.}} \label{fig:detmap_g17} \end{figure*} \subsection {{Selection with statistical test}} Here we introduce a statistical approach to remove false detections in MOS-RM surveys without knowing the true lags or expected lags from an assumed R-L relation. Since detectable lags in a specific survey design depend on the quasar magnitude and redshift, we filter out false detections by removing all sources in a redshift-magnitude bin that are unlikely to host detectable lags. When analyzing light curve pairs with undetectable lags, statistically, all lag detection methods should have an equal chance of producing positive and negative lags, all of which are false detections. We compute the ratio of positive to negative lags in each of the magnitude-redshift bins and set a cutoff to exclude bins with a low positive-to-negative measurement ratio. If a bin has a ratio below this cutoff, we assume the time lags of all sources in that bin are unreliable and remove all of its detections (both true and false). In this work, we start with the uniform sample and impose redshift bins of 0.2 and magnitude bins of 0.5 as an example. Each bin contains $\sim$600 quasars.
{We select a cutoff positive-to-negative lag ratio of 1.5, optimized by searching for the ratio that eliminates the most false detections while keeping the most true detections.} {After the statistical selection, we apply the same down-sampling procedure as in \S \ref{sec:downsample} to mimic the SDSS-RM program.} Using this ratio criterion, we regenerate the true and false detection map in Figure \ref{fig:detmap_pntest}. We recover most ($>$90\%) of the lags found previously with the knowledge of the true lags. {In the following analysis, we still label the lags according to the true/false detection criteria, but they are not selected using the assigned lags and would be indistinguishable in real surveys.} Because we are selecting the redshift and $i$-band magnitude bins without knowing the true lags, false detections increase in the bins where $\tau_{obs}$ is $\sim$100 days, where some longer lags could be falsely detected with a smaller measured lag. {These falsely-detected long lags make up roughly a third (ICCF) to half (ZDCF and {\tt JAVELIN}) of all the false detections.} The median false detection rate is roughly 18\%, 9.0\%, and 6.7\% for ICCF, ZDCF and {\tt JAVELIN} for the flux-limited sample, again with {\tt JAVELIN} having the lowest false detection rate. These results are similar to the estimated false detection rate of \cite{Grier_2017}, roughly 10\%. Most of the sources in the eliminated $i$-band magnitude and redshift bins have lags of $\gtrsim$100~days, above the limit used in the lag search. With this statistical approach to mimic the reality of MOS-RM programs, the average lag distribution from the 100 down-sampling realizations is shown in Figure \ref{fig:hist_lag_ds}. {\tt JAVELIN} and ZDCF measure a relatively uniform lag distribution. ICCF favors lags around $\sim$60--90 days and measures more true and false detections in this range.
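The bin-wise positive-to-negative selection can be sketched as follows. The function and argument names are our own; the bin sizes and the cutoff of 1.5 follow the values quoted in the text:

```python
import numpy as np

def ratio_selection(z, imag, lag, z_bin=0.2, m_bin=0.5, cutoff=1.5):
    """Keep all lag measurements in a (redshift, magnitude) bin only if
    the bin's ratio of positive to negative measured lags exceeds
    `cutoff`; otherwise discard every detection in that bin."""
    z, imag, lag = np.asarray(z), np.asarray(imag), np.asarray(lag)
    zi = np.floor(z / z_bin).astype(int)
    mi = np.floor(imag / m_bin).astype(int)
    keep = np.zeros(lag.size, dtype=bool)
    for zb, mb in set(zip(zi, mi)):
        in_bin = (zi == zb) & (mi == mb)
        n_pos = np.count_nonzero(lag[in_bin] > 0)
        n_neg = np.count_nonzero(lag[in_bin] < 0)
        if n_neg == 0 or n_pos / n_neg > cutoff:
            keep |= in_bin
    return keep
```

A bin dominated by spurious measurements has roughly equal numbers of positive and negative lags, so it fails the cut and all of its detections are discarded.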
{This result suggests that the R-L relation derived with ICCF lags is more biased, especially for samples with a narrow redshift distribution, where the limited observed-frame lag distribution would correspond to a limited rest-frame lag distribution.} In Figure \ref{fig:corr_pntest}, ICCF has a higher detection rate in the low-luminosity and high-redshift bins compared to {\tt JAVELIN} and ZDCF, but the false detection rate is also high in those bins. The total number of detections decreases significantly beyond $z\sim$1 as the light curves have lower S/N and the lags are closer to the $\sim$100 day search limit. Overall these results are similar to those in Section \ref{sec:diss_cad} and Section \ref{sec:diss_err}. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{detmap_c6xe30_pntest_imag21p7_srange100_all_ref.pdf} \caption{Detection efficiency of simulations with a cadence of 6 days and 30 epochs, selected by the statistical test described in Section \ref{sec:reallife}. The colormap represents the detection efficiency and the numbers are the detection counts (true detections in black and false detections in red) of a single down-sampling realization. The total numbers of true and false detections shown in the lower-right corner are the median and uncertainties derived from 100 down-sampling realizations. The grey contours show the approximate constant lags from the R-L relation from \cite{Bentz_2009b}. {The bins removed by the statistical test are labeled with a red cross.} } \label{fig:detmap_pntest} \end{figure*} \begin{figure} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{hist_c6xe30_pntest_srange100_all_imag21p7.pdf} \end{tabular} \caption{Distribution of the measured lags of the uniform sample in the observed frame, selected by the statistical test described in Section \ref{sec:reallife}. The grey solid histogram shows the number of detectable lags in each bin.
The open histograms represent the number of true detections and the solid histograms are the number of false detections. The number of false detections is inflated by a factor of five for clarity.} \label{fig:hist_pntest} \end{figure} \begin{figure} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{hist_lag_c6xe30_pntest_srange100_all.pdf} \end{tabular} \caption{Median distribution of the detected lags in the 100 down-sampling realizations. The open histograms show the number of true detections and the solid histograms indicate the number of false detections. The grey shaded area represents the median assigned lag distribution.} \label{fig:hist_lag_ds} \end{figure} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{corr_det_imag_c6xe30_pntest_srange100_all_ds_imag21p7.pdf} \includegraphics[width=0.5\textwidth]{corr_det_z_c6xe30_pntest_srange100_all_ds_imag21p7.pdf} \end{tabular} \caption{Detection efficiency (solid line and shaded area) and true (square) and false (cross) detection counts of the three methods as functions of $i$-band magnitude (right panel) and redshift (left panel) in a simulated observation with a 6-day cadence and 30 epochs. The detections are selected with the statistical approach described in Section \ref{sec:reallife}, i.e., assuming no knowledge of the true lags. The dotted lines show the number of sources with lags shorter than the search range (i.e. 100 days) in each magnitude or redshift bin.
For $i<$18, the detection efficiencies are not shown because there are no quasars selected in more than 80\% of the bootstrapping iterations.} \label{fig:corr_pntest} \end{figure*} \section{Discussion}\label{sec:discussion} \subsection{The R-L relation}\label{sec:rl} Now we examine how the selection effects from the sample and survey design, as well as the uncertainties in the measured lags, can affect the slope of the R-L relation, as compared to the R-L relation and scatter used to assign lags to our simulated quasars as described in \S \ref{sec:data}. We use the 100 down-sampled realizations of the flux-limited sample with the statistical approach described in Section \ref{sec:reallife}. The observed-frame lags are shifted to the rest frame by dividing by a factor of $(1+z)$, and we then fit the slope of the $\tau-L$ relation with the linear regression code \texttt{LINMIX} \citep{Kelly_2007}. \texttt{LINMIX} uses a Bayesian approach to perform linear regression with measurement errors in both coordinates and produces more consistent fitting results than traditional regression methods when the data have large intrinsic scatter or are poorly measured. Since the R-L relation is derived with H$\beta$ lags, which are only measured at $z<0.9$, we exclude all measured lags with $z>0.9$ during the fitting. The fitting results are shown in Figure \ref{fig:rl}. We first examine the effects of selection bias due to sample/survey design by fitting the R-L relation with the assigned lags of the true detections (top row in Figure \ref{fig:rl}). Due to our limited observation period, we cannot detect observed-frame lags longer than 100 days with any method and most short lags (less than the cadence) with ICCF and ZDCF. This constraint limits the dynamic range of luminosity and time lag in the R-L relation fitting, resulting in the fitted R-L relation slope being shallower than the nominal slope.
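The rest-frame conversion and slope fit can be sketched as follows, with ordinary least squares standing in for the Bayesian \texttt{LINMIX} regression (which additionally propagates measurement errors in both coordinates). The normalization of 1.527 in the toy example is illustrative; the slope of 0.519 matches the input relation of our simulations:

```python
import numpy as np

def fit_rl_slope(lag_obs_days, z, logL):
    """Shift observed-frame lags to the rest frame via (1+z) and fit
    log10(tau_rest) against log10 luminosity with plain least squares
    (a simplified stand-in for the error-aware LINMIX fit)."""
    tau_rest = np.asarray(lag_obs_days) / (1.0 + np.asarray(z))
    slope, intercept = np.polyfit(np.asarray(logL), np.log10(tau_rest), 1)
    return slope, intercept

# consistency check on a noiseless sample drawn exactly from the relation
logL = np.linspace(-1.0, 1.0, 50)           # log10(L / 1e44 erg/s)
z = np.full(50, 0.5)
tau_rest = 10 ** (1.527 + 0.519 * logL)     # illustrative R-L relation
slope, _ = fit_rl_slope(tau_rest * (1 + z), z, logL)
# slope recovers the input value of 0.519 on noiseless data
```

On real measurements, the selection effects and lag uncertainties discussed in this section bias the recovered slope away from the input value.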
Next, we examine the effects of lag measurement uncertainties by fitting the R-L relation with the measured lags (incorporating the lag uncertainties) for the true detections only (middle row in Figure \ref{fig:rl}) and for all detections (bottom row in Figure \ref{fig:rl}). When only considering the true detections (which cannot be identified in real surveys), the fitted R-L slopes are slightly shallower but still consistent with the previous values. When including both true and false detections, the fitted R-L slopes are the same within the uncertainties for {\tt JAVELIN} compared to the fitting with only true detections. However, the fitted R-L slopes for ICCF and ZDCF are biased by the false detections at $\sim$100 days (observed-frame), as indicated by the larger scatter in Figure \ref{fig:slope_hist}. In some realizations, the measured lags of false detections deviate significantly from the nominal R-L relation for ICCF and lead to a highly biased R-L slope ($\sim$0). Therefore, in practice, it is important to examine questionable lag measurements and establish criteria to discard them from the sample. In general, R-L slopes measured from {\tt JAVELIN} are more robust and accurate than those from ICCF and ZDCF, which is due to the combined benefits of having more detected lags (especially the short lags) and higher lag measurement quality. 
\begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{rl_sub_CCF_c6xe30_srange100_all_true.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_ZDCF_c6xe30_srange100_all_true.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_Javelin_c6xe30_srange100_all_true.pdf} \\ \includegraphics[width=0.33\textwidth]{rl_sub_CCF_c6xe30_srange100_all_measure.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_ZDCF_c6xe30_srange100_all_measure.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_Javelin_c6xe30_srange100_all_measure.pdf}\\ \includegraphics[width=0.33\textwidth]{rl_sub_CCF_c6xe30_srange100_all_all.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_ZDCF_c6xe30_srange100_all_all.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_Javelin_c6xe30_srange100_all_all.pdf}\\ \end{tabular} \caption{H$\beta$ R-L relation derived from one down-sampling realization. In each panel, the grey contours represent the uniform quasar sample, and the blue and orange points are the true and false detections. The top row shows the R-L relation derived using the assigned lags of the true detections, the middle row presents the result using measured lags with error bars of the true detections, and the bottom row displays the result using measured lags of both true and false detections. {The black solid line is the input R-L relation \citep{Bentz_2009b}} used to generate the uniform sample and the blue lines are 50 random realizations drawn from the posterior of the Bayesian regression fit to the R-L relation. {The black points are the \cite{Bentz_2013} local RM AGN sample for reference.}} \label{fig:rl} \end{figure*} Figure \ref{fig:slope_hist} presents the histograms of the fitted R-L slope {from the measured lags (including false detections)} in the 100 down-sampled realizations. 
In the 6-day cadence simulations, fitted slopes are $\sim$0.4 for {\tt JAVELIN} and $\sim$0.3 for ICCF and ZDCF, {and the normalized median absolute deviations (NMAD) are $\sim$0.08 for {\tt JAVELIN} and $\sim$0.14 for ICCF and ZDCF.} When the cadence decreases (number of epochs increases), there are fewer false detections in ICCF, so the fitted slope approaches $\sim$0.4, where most of the remaining bias is due to the limited lag range. With the 12-day cadence, the detections from ICCF and ZDCF decrease and false detections increase, causing the R-L relation fitting to become unreliable, as indicated by the broader range of the slope distribution {(NMAD $\sim$0.51 for ICCF and $\sim$1.35 for ZDCF). For {\tt JAVELIN}, the fitted slope converges around 0.4 for all three cadences and the NMAD only increases slightly to 0.14 at a cadence of 12 days, comparable to the NMAD for the ICCF and ZDCF simulations at a cadence of 6 days.} For the light curve S/N dependence, the median of fitted R-L slopes is consistent at different S/N for all three methods, because the number of detections and their distribution in the R-L plane do not change drastically as light curve S/N varies. When the S/N is degraded, the R-L slope uncertainties increase for ICCF and ZDCF, but not for {\tt JAVELIN} --- this is because the scatter in the R-L plane primarily originates from the increased lag uncertainties as light curve S/N decreases, which is not the case for {\tt JAVELIN} (see \S \ref{sec:diss_err}). \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{RL_beta_hist_c3xe60_pntest_srange100_all.pdf} \includegraphics[width=0.33\textwidth]{RL_beta_hist_c6xe30_pntest_srange100_all.pdf} \includegraphics[width=0.33\textwidth]{RL_beta_hist_c12xe15_pntest_srange100_all.pdf} \end{tabular} \caption{Distribution of the best-fit R-L slopes from 100 down-sampling realizations. From left to right are the simulations with 3-day, 6-day and 12-day cadences.
The vertical dashed lines indicate the median of each distribution and the solid vertical lines mark the slope of the input R-L relation ($\beta$=0.519).} \label{fig:slope_hist} \end{figure*} \subsection{Scatter of the R-L relation} The slope of the R-L relation derived from our simulation is consistently shallower than the assigned value. {However, the \cite{Grier_2017} R-L relation shows more scatter than the \cite{Bentz_2009b} and \cite{Bentz_2013} R-L relations (the \cite{Bentz_2013} R-L relation is an updated version of the \cite{Bentz_2009b} R-L relation that includes more low-luminosity sources).} There are many possible reasons for this discrepancy. For example, \cite{Grier_2017} used spectral decomposition to correct for host galaxy light in the estimation of quasar-only luminosity instead of high-resolution imaging decomposition as in \cite{Bentz_2009b, Bentz_2013}. There may also be intrinsic differences in the R-L relations due to the difference in samples (e.g., the SDSS-RM H{$\beta$}\ lag sample is at substantially higher redshift and spans a broader range of quasar parameter space than the Bentz et al. sample). {\cite{Du_2016} suggested that the R-L relation might depend on quasar luminosity and accretion rate \citep[also see ][]{Loli_2019}.} This discrepancy motivates us to investigate how the observed R-L relation changes with different assumptions of the intrinsic scatter. We produce another set of simulations following the same procedures described in Section \ref{sec:data}, but doubling the scatter in the input R-L relation to generate a new set of mock quasars and light curves. With the larger scatter in the input R-L relation, the fitted R-L slope becomes less constrained, as demonstrated in the top rows of Figure \ref{fig:rlx2}. The observed R-L relation slopes are shallower compared to the original simulation for all three methods.
In addition, false detection rates increase for all techniques, as it is more difficult {to statistically eliminate false lags by rejecting quasars in certain magnitude and redshift bins} due to the increased scatter in the lags in each bin. These false detections are located near the edge of our search range, mostly in the range of 60--80 days. As a result, the deduced R-L relation is flatter because the fit is skewed by these false detections (see the bottom left panel in Figure \ref{fig:rlx2} for an example). The distributions of the fitted R-L relation slopes are presented in Figure \ref{fig:slope_hist_rlx2}: for all three lag measuring methods the slopes are shallower than those in the original simulations {($\sim$0.2 for ICCF, $\sim$0.1 for ZDCF and $\sim$0.3 for {\tt JAVELIN})} and the NMAD increases compared to the original simulations {(NMAD $\sim$0.27 for ICCF, $\sim$0.46 for ZDCF and $\sim$0.11 for {\tt JAVELIN})}. The slope from {\tt JAVELIN} lags is the least biased among the three methods. If the intrinsic scatter in the R-L relation is indeed larger for the SDSS-RM sample than for the local RM sample, then it is likely that we will measure a shallower slope using the measured lags. The shallower measured slopes are primarily a consequence of the limited dynamic range in the measured lags, and should be mitigated with additional lags measured over a broader range in luminosity. {The lags from \cite{Grier_2017} also on average fall below the \cite{Bentz_2009b, Bentz_2013} R-L relation \citep[also see, e.g.,][]{Du_2016}. From our simulation, there is no evidence that selection effects can cause a vertical offset from the input R-L relation.
However, long lags ($>80$ days) tend to be measured with smaller values than the assigned values using the \cite{Grier_2017} lag-significance criteria, which can partially contribute to the shallower slope in the R-L relation.} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.33\textwidth]{rl_sub_CCF_c6xe30_srange100_RLx2_true.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_ZDCF_c6xe30_srange100_RLx2_true.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_Javelin_c6xe30_srange100_RLx2_true.pdf}\\ \includegraphics[width=0.33\textwidth]{rl_sub_CCF_c6xe30_srange100_RLx2_all.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_ZDCF_c6xe30_srange100_RLx2_all.pdf} \includegraphics[width=0.33\textwidth]{rl_sub_Javelin_c6xe30_srange100_RLx2_all.pdf} \end{tabular} \caption{H$\beta$ R-L relation derived from one down-sampling realization in the simulation with a more scattered R-L relation. In each panel, the grey contours represent the uniform quasar sample, and the blue and orange points are the true and false detections. The top row shows the R-L relation derived using the assigned lags of the true detections, and the bottom row displays the result using measured lags of both true and false detections. {The black solid line is the input R-L relation \citep{Bentz_2009b}} used to generate the uniform sample and the blue lines are 50 random realizations drawn from the posterior of the Bayesian regression fit to the R-L relation. {The black points are the \cite{Bentz_2013} local RM AGN sample for reference.}} \label{fig:rlx2} \end{figure*} \begin{figure} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{RL_beta_hist_c6xe30_pntest_srange100_RLx2.pdf} \end{tabular} \caption{Distribution of the best-fit R-L slopes from 100 down-sampling realizations in the simulation with a more scattered R-L relation.
The vertical dashed lines indicate the median of each distribution and the solid vertical lines mark the slope of the input R-L relation ($\beta$=0.519).} \label{fig:slope_hist_rlx2} \end{figure} \subsection{Multi-year observations}\label{sec:multi-yr} Following the SDSS-RM survey design, we run a 5-year simulation (with 30 observing epochs for the first year, 15 epochs for years 2 and 3, and 6 epochs for years 4 and 5) and examine the lag measurements using ICCF and {\tt JAVELIN} on the flux-limited sample. Since the ZDCF method consistently underperforms relative to the other two methods, we do not consider ZDCF further in this section. Similar to our 100-day search range criteria, we set the search range to $\sim$800 days in order to avoid strong CCCD/PDF signals produced with fewer overlapping points. The grid size of the ICCF is set to 15 days, the median of the cadence, which results in smoother ICCFs for light curve pairs with larger lags. The MCMC parameters are set to be the same as for our 180-day simulations (see \S\ref{sec:measurelags}), as these values are sufficient for the results to converge. For the alias removal procedure, the width of the Gaussian smoothing kernel was increased to 7.5 days to improve the ability to capture longer lags. We scale the CCCD/PDFs as a function of the number of overlapping points in each of the 6-month observing seasons. Both ICCF and {\tt JAVELIN} interpolate within the 6-month seasonal gaps, and these lag ranges are down-weighted in the alias removal procedure. Finally, we perform the statistical selection as in \S\ref{sec:reallife} to remove unlikely detections. This approach removes $\sim$10\% of the false detections and $<$1\% of the true detections for ICCF, and $\sim$20\% of the false detections and $\sim$1\% of the true detections for {\tt JAVELIN}. Figure \ref{fig:detmap_multiyr} presents the detection map of the 5-year simulations.
The shaded area is the detection efficiency calculated for the flux-limited sample, instead of the uniform quasar sample as in the previous figures (e.g., Figure \ref{fig:detmap}). The overall detection efficiency is $\sim$45\% for ICCF and $\sim$56\% for {\tt JAVELIN} and false detection rates are $\sim$16\% for ICCF and $\sim$6.9\% for {\tt JAVELIN} with the 5-year baseline. Lag detections are limited by the observing baseline, redshift, and light curve S/N. Below redshift $\sim$2, the detection efficiency follows trends similar to the single-season simulations. Compared to the 180-day simulation, the detection efficiency increases for lags of $<100$ days with additional seasons of observation, especially for {\tt JAVELIN}. At longer lags ($>100$ days), lag detection efficiency increases at 2$<z<$2.5 for faint objects. This behavior arises because, with the seasonal gaps and the chosen baseline, our survey will be most sensitive to lags $<$100 days and 250--400 days. These trends are observed in Figure \ref{fig:corr_multiyr}. In our 5-year simulation, detection efficiency peaks at $z\sim$0.5 and $z\sim$2.5 and falls off sharply at $z>$3. Detections mostly fall in the range of 0.5$<z<$2.5 due to the redshift distribution of our sources. The distribution of detected lags (Figure \ref{fig:hist_multiyr}) reveals gaps for ICCF, which correspond to the seasonal gaps in the observations. For {\tt JAVELIN}, however, these gaps are less obvious, indicating that {\tt JAVELIN} is interpolating reasonably well within long seasonal gaps and measures lags more accurately in multi-year projects than ICCF. In addition, {\tt JAVELIN} has fewer and more evenly-distributed false detections throughout the lag ranges. For $>$600 day lags, there are as many false detections as true detections for ICCF, suggesting that it will be very difficult to identify true detections with ICCF in this lag range.
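The down-weighting of lag ranges that fall inside the seasonal gaps can be illustrated with a simple overlap count. This is our own simplified sketch, not the exact weighting used in the analysis; the 15-day tolerance mirrors the ICCF grid size adopted above:

```python
import numpy as np

def overlap_weights(t_cont, t_line, trial_lags, tol=15.0):
    """For each trial lag, count how many epochs of the lag-shifted line
    light curve land within `tol` days of a continuum epoch.  Lags that
    push the line light curve into a seasonal gap overlap few continuum
    epochs and therefore receive a low weight when scaling the
    CCCD/posterior samples."""
    weights = np.empty(len(trial_lags), dtype=float)
    for i, lag in enumerate(trial_lags):
        shifted = t_line - lag
        # distance from each shifted line epoch to its nearest continuum epoch
        dmin = np.min(np.abs(shifted[:, None] - t_cont[None, :]), axis=1)
        weights[i] = np.count_nonzero(dmin < tol)
    return weights / weights.max()
```

Multiplying the CCCD or lag posterior by such weights before identifying the primary peak suppresses aliases that sit inside the gaps.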
Figure \ref{fig:rl_multiyr} displays the fit of the R-L relation in one down-sampled realization. Since the detected lags cover a wide range in luminosity, the slopes are less biased by the limited dynamic range than in the 180-day simulation. The ICCF R-L relation is still skewed by the false detections clustered at $\sim$600--800 days. Since the distribution of false detections in {\tt JAVELIN} is more uniform over the range of lags, the fitted slope of the R-L relation is more accurate. However, the derived slopes are still somewhat shallower than the assigned value due to the false detections at longer lags and the lower detection rate at short lags (in the rest frame). Figure \ref{fig:slope_hist_multiyr} shows the distribution of the fitted slopes from the 100 down-sampling realizations. The measured slopes from the {\tt JAVELIN} lags {(slope $\sim$ 0.42, NMAD $\sim$ 0.03)} are again more consistent with the input slope than those from the ICCF lags {(slope $\sim$ 0.35, NMAD $\sim$ 0.05)}. {Expanding the lag sample to a wider AGN luminosity range appears to be necessary to recover the true slope of the R-L relation. The low-luminosity end of the R-L relation can be filled in by measuring short lags in low-luminosity sources at rapid cadence with short baselines. However, the high-luminosity end of the R-L relation is a more difficult problem: it requires continuous monitoring for years or even decades. The longest measurable lags will always be limited by the total baseline. Understanding the lag detection limit and the false detection rate can help us understand biases in the high-luminosity end of the R-L relation.} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{detmap_multiyr_pntest_imag21p7.pdf} \caption{Similar format to Figure \ref{fig:detmap}. Detection maps for the 5-year simulation following the statistical selection described in \S\ref{sec:reallife}.
The colormap represents the detection efficiency and the numbers are the detection counts (true detections in black and false detections in red) of a single down-sampling realization. The total numbers of true and false detections shown in the lower-right corner are the median and uncertainties derived from 100 down-sampling realizations. The grey contours show approximately constant lags from the R-L relation from \cite{Bentz_2009b}. The detection efficiency is calculated for the selected sources in the flux-limited sample, instead of using the uniform sample as in Figure \ref{fig:detmap}.} \label{fig:detmap_multiyr} \end{figure*} \begin{figure*} \centering \begin{tabular}{@{}cc@{}} \includegraphics[width=0.5\textwidth]{corr_det_imag_multiyr_ds_imag21p7.pdf} \includegraphics[width=0.5\textwidth]{corr_det_z_multiyr_ds_imag21p7.pdf} \end{tabular} \caption{Detection efficiency (solid lines and shaded area) and true (square) and false (cross) detection counts of the three methods as functions of $i$-band magnitude (left panel) and redshift (right panel) of the flux-limited sample from the 5-year simulation. The dotted lines show the number of sources with lags shorter than the search range (i.e. 800 days) in each magnitude or redshift bin. For $i<$17 and $z>4$, the detection efficiencies are not shown because there are no quasars selected in more than 95\% of bootstrapping realizations.} \label{fig:corr_multiyr} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{hist_lag_multiyr_pntest_srange800.pdf} \caption{Median distribution of the detected lags in the 100 down-sampling realizations of the 5-year simulation. The open histograms show the number of true detections and the solid histograms indicate the number of false detections. 
The grey shaded area represents the median assigned lag distribution.} \label{fig:hist_multiyr} \end{figure} \begin{figure*} \includegraphics[width=0.5\textwidth]{rl_fdet_CCF_multiyr_srange800_bs00.pdf} \includegraphics[width=0.5\textwidth]{rl_fdet_Javelin_multiyr_srange800_bs00.pdf} \caption{H$\beta$ R-L relation derived from one down-sampling realization in the 5-year simulation. The grey contours represent the uniform sample, and the blue and orange points are the true and false detections. The black points are the Bentz et al. (2013) local RM AGN sample for reference. The black solid line is the input R-L relation used to generate the uniform sample and the blue lines are 50 random realizations drawn from the posterior of the Bayesian regression fit to the R-L relation.} \label{fig:rl_multiyr} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{RL_hist_pntest_multiyr_multiyr_srange800.pdf} \caption{Distribution of the best-fit R-L slopes from 100 down-sampling realizations in the 5-year simulation. The vertical dashed lines indicate the median of each distribution and the solid vertical lines mark the slope of the input R-L relation ($\beta$=0.519).} \label{fig:slope_hist_multiyr} \end{figure} \section{Conclusions}\label{sec:con} In this work, we used simulated MOS-RM observations to test the strengths and weaknesses of three popular time lag measuring methods: ICCF, ZDCF and {\tt JAVELIN}. We examined lag detections for a uniform mock quasar sample and down-sampled it to mimic flux-limited samples in real surveys. Among the three methods, ZDCF has the lowest detection efficiency and detection quality, indicating that the interpolation between data points in the other two methods enhances the probability of lag detection. 
{\tt JAVELIN} performs better than ICCF in essentially all major benchmarks we tested: \begin{enumerate} \item[$\bullet$] {\tt JAVELIN} can recover more lags that are shorter than the cadence, which we ascribe to the more empirically motivated interpolation scheme based on the DRW model used to describe stochastic quasar continuum variability. \item[$\bullet$] Overall, {\tt JAVELIN} produces both more accurate and more precise lag measurements for typical MOS-RM programs. The formal lag errors from {\tt JAVELIN} are also {the most reliable (compared with the deviations from the true lags)} among the three methods. \item[$\bullet$] {\tt JAVELIN} in general produces fewer false detections than ICCF, and its detection efficiency and quality are less sensitive to degradation of the S/N of light curves {(Fig. 16)}. \item[$\bullet$] {\tt JAVELIN} is less affected by large, seasonal gaps in the light curves, resulting in detections of lags near the seasonal gaps that would otherwise be missed by ICCF. This is again the result of the more physically motivated interpolation scheme by {\tt JAVELIN} {(Section \ref{sec:multi-yr})}. \item[$\bullet$] The advantages of {\tt JAVELIN} in lag measurements lead to less bias in the measured slope in the R-L relation than ICCF {(Section \ref{sec:rl} and Figure \ref{fig:slope_hist})}. \item[$\bullet$] {{\tt JAVELIN} performs at least as well as ICCF in all the aforementioned tests even when the continuum light curves deviate from the DRW model assumed by {\tt JAVELIN}, as in the single power-law PSD models we tested }{(Section \ref{sec:psd})}. \end{enumerate} These results demonstrate the clear preference for {\tt JAVELIN} over the other two methods as the primary method of lag measurements for MOS-RM surveys, where the quality of light curves is generally worse than that achieved for traditional RM programs targeting local low-luminosity AGN. 
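For reference, the core of the ICCF technique against which {\tt JAVELIN} is compared can be sketched in a few lines of Python (a simplified, one-way-interpolation version for illustration only; production ICCF implementations average both interpolation directions and add flux-randomization error estimates):

```python
import numpy as np

def iccf(t_cont, f_cont, t_line, f_line, lags):
    """Interpolated cross-correlation function (one-way interpolation).

    For each trial lag, shift the line epochs back by the lag, linearly
    interpolate the continuum onto those epochs, and compute the Pearson
    correlation coefficient.
    """
    r = np.empty(len(lags))
    for i, lag in enumerate(lags):
        t_shift = t_line - lag
        # Use only epochs whose shifted times fall inside the continuum baseline.
        ok = (t_shift >= t_cont[0]) & (t_shift <= t_cont[-1])
        if ok.sum() < 3:
            r[i] = -1.0
            continue
        c_interp = np.interp(t_shift[ok], t_cont, f_cont)
        r[i] = np.corrcoef(c_interp, f_line[ok])[0, 1]
    return r

def centroid_lag(lags, r, frac=0.8):
    """r-weighted centroid of all points with r > frac * r_max
    (the conventional ICCF lag estimate)."""
    sel = r > frac * r.max()
    return np.sum(lags[sel] * r[sel]) / np.sum(r[sel])

# Toy example: a smooth continuum and a line light curve delayed by 30 days.
t = np.arange(0.0, 180.0, 3.0)
cont = np.sin(t / 25.0)
line = np.sin((t - 30.0) / 25.0)
lags = np.arange(-50.0, 80.0, 1.0)
r = iccf(t, cont, t, line, lags)
```

On this toy input the correlation function peaks at the assigned 30-day delay and the centroid recovers it; the limitations discussed above arise when gaps, noise, and finite baselines distort $r(\tau)$.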
We further developed a statistical approach to efficiently eliminate false detections in MOS-RM surveys, without knowing the true lags of the sample. Using this statistical approach, we can recover 90\% of the true (detectable) lags while retaining a reasonably low false detection rate ($\sim 18\%$ for ICCF and $<10\%$ for ZDCF and {\tt JAVELIN}). {{\tt JAVELIN} recovers the most accurate R-L relation slope compared to the fiducial slope measured for the low-$z$ RM sample, and the recovered R-L relation slope from ICCF and ZDCF is shallower. When the intrinsic scatter in the R-L relation increases, the recovered R-L relation becomes even shallower.} This is mainly because our 180-day mock observation is not capable of detecting long lags (and lags much shorter than the cadence), thus limiting the dynamic range in the R-L relation fitting. Indeed, when we include long lags from multi-year observations, this discrepancy in the R-L relation slope is reduced. The deficiency of short lags in the low-luminosity regime still limits the recoverability of the true slope. However, only {\tt JAVELIN} is capable of producing consistent slope measurements when the cadence is reduced or the light curve S/N is degraded. Our investigations have not explored the entire parameter space of RM and other less common methods of lag measurements, and it is possible that {\tt JAVELIN} may perform worse than ICCF in special circumstances. However, for large-scale MOS-RM programs, the recently developed, more statistically robust methods (such as {\tt JAVELIN} and \texttt{CREAM}) convincingly produce results superior to those of the traditional ICCF and are needed to utilize the full power of these MOS-RM data. \bigskip {We thank the referee for a thorough report and useful comments, and Zhefu Yu and Brad Peterson for helpful discussions.} JIL and YS acknowledge support from an Alfred P. Sloan Research Fellowship (YS) and NSF grant AST-1715579. 
LCH was supported by the National Science Foundation of China (11721303) and the National Key R\&D Program of China (2016YFA0400702). JRT and YH acknowledge support from NASA grant HST-GO-15260. WNB and CJG acknowledge support from NSF grants AST-1517113 and AST-1516784. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, University of Cambridge, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
\section*{Introduction} \rm About 35 years ago, the author was asked by two colleagues in physics what restrictions there were on the cup-product cubic form on $H^2 (X, {\bf Z})$ given by $D \mapsto D^3$ for a Calabi--Yau threefold $X$. Until now, we were only able to make rather weak statements in response to this question, but recent work now enables us to give a more revealing answer. The question becomes interesting when $b_2 (X) \ge 3$, and in this paper we concentrate on the case $b_2(X) =3$; the intervening three decades have suggested the following Motivating Question to the author: \vspace{2 mm} \noindent \bf Motivating Question. \rm Is the following true? If $X$ is a simply connected Calabi--Yau\ threefold with $b_2 (X)=3$ and the cubic form defines a smooth real elliptic curve $C \subset {\bf P}^2 ({\bf R} )$, then either $C$ has two real connected components and the K\"ahler\ cone ${\mathcal K} (X)$ is contained in the cone on the bounded component of $C$ on which the cubic is positive, or $C$ has one real component and we are in the special case where the Hessian curve consists of three non-concurrent real lines. \begin{rem} By considering a quintic in ${\bf P} ^4 ({\bf C} )$ with two singularities, each of which is analytically a cone on a del Pezzo surface, and its desingularization $X$, we note that the case with the real elliptic curve having one component and the singular Hessian noted above does occur (and the cubic is a Fermat cubic $x^3 + y^3 + z^3 =0$ in appropriate real coordinates). In the other case where the cubic is smooth but the Hessian is singular, the Hessian curve consists of one real line and two complex lines --- the author does not know of any examples where this case occurs, nor any examples where the elliptic curve $C$ has one real component and the Hessian is smooth. 
In the real coordinates introduced in Section 1, the Fermat cubic corresponds to $k=0$, the other cubic with singular Hessian corresponds to $k=-2$, the cubics with one real component and smooth Hessian correspond to $k<1$ and $k\ne -2$ or $0$, and those with two real components correspond to $k>1$. \end{rem} We recall that the \it movable cone \rm of a smooth threefold $X$ is the closure of the cone in $H^2(X, {\bf R})$ generated by the classes of mobile divisors and that the Hessian is non-negative on the movable cone by Lemma 3.2 of \cite{WilBd}. As we saw in the earlier papers (\cite{WilBd}, \cite{WilPic3}), the \it rigid non-movable \rm surfaces $E$ on a Calabi--Yau\ threefold $X$, which were defined as the irreducible surfaces on $X$ that deform with any small deformation of the complex structure on $X$ but for which no multiple moves in the threefold (see Section 2 of \cite{WilBd} for a discussion of these), play a crucial role in understanding possible Calabi--Yau\ structures on a compact 6-manifold. In \cite{WilPic3}, we made no assumptions concerning these rigid non-movable surfaces and this caused some of the arguments to be rather intricate. In this paper, we shall make the following simplifying assumptions: \vspace{2 mm} \noindent \bf Simplifying Assumptions. \rm We assume throughout the paper that $X$ is a simply connected Calabi--Yau\ threefold with $b_2(X)=3$, that the cubic form on $H^2 (X , {\bf Z} )$ defines a smooth cubic curve in ${\bf P}^2 ({\bf R})$ with smooth Hessian, and that there are no rigid non-movable surfaces $E$ on $X$ with $E^3 >0$ --- by the results of Section 2 from \cite{WilBd}, any such $E$ would have $E^3 \le 9$. 
\vspace{2 mm} Recall the result of Wall (\cite{Wall}, Theorem 5) that, under an assumption ($H$) that the compact simply connected 6-manifolds studied have torsion free homology and class $w_2 (M) =0$ (the latter assumption holding if $M$ supports a Calabi--Yau\ structure), the diffeomorphism classes of compact oriented manifolds $M$ satisfying ($H$) correspond bijectively to isomorphism classes of invariants: two free abelian groups $H$ and $G$ (corresponding to $H^i (M, {\bf Z})$ for $i=2,3$) with the rank of $G$ being even, a symmetric trilinear map $\mu : H \times H \times H \to {\bf Z}$, a homomorphism $p_1 : H \to {\bf Z}$, subject to: for all $x, y \in H$, $$\mu (x,x,y) \equiv \mu (x,y,y)\quad (\hbox{\rm mod\ } 2)\quad \hbox{\rm and} \quad p_1 (x) \equiv 4\mu (x,x,x) \quad (\hbox{\rm mod\ } 24).$$ We shall be interested in the case when there is a Calabi--Yau\ structure $X$ on the manifold, in which case the linear form on the given free abelian group $H$ may be identified as $p_1 (X) = -2c_2(X)$. Wall first constructs the relevant 6-manifold $M_0$ when $G=0$, and he then forms a connected sum of $M_0$ with $b_3/2$ copies of $S^3 \times S^3$, and so at the smooth level the information on $H^3 (M, {\bf Z})$ and the invariants on $H^2 (M, {\bf Z} )$ are independent. This is not true for Calabi--Yau\ threefolds, since for $b_2 (X)\le 2$ we know that the invariants on $H^2(X, {\bf Z})$ bound $b_3(X)$ (for $b_2 =1$ this follows using Hilbert schemes, and see \cite{WilBd} for the $b_2 =2$ case). When $M$ supports an almost complex structure with $c_1 =0$, it is unique up to homotopy by Theorem 9 of \cite{Wall}. If we take an appropriate integral multiple of the invariants $\mu$ and $p_1$, the required congruences still hold, and we see that the assumption that there are no rigid non-movable surfaces $E$ with $1 \le E^3 \le 9$ is automatically satisfied. 
In this paper, we prove the Motivating Question true under the above simplifying assumptions, and the author suspects that it is probably a matter of detail to prove it in general. For each of the nine possible values of $E^3 >0$, we saw in Proposition 2.2 of \cite{WilBd} that there are only finitely many possible values for $c_2 \cdot E$, and we can then deduce (under very weak assumptions on the cubic and linear forms, for instance as explained in the first paragraph of the proof of Lemma 3.7 of \cite{WilPic3} it is sufficient that the line defined by $c_2$ does not meet the elliptic curve at an inflexion point, or as remarked in Remark 4.4 of the same paper it is also sufficient that the line does not meet the elliptic curve in a rational point) that there are only finitely many classes $E$ with these invariants. The main result of this paper is the following: \begin{main} Let $X$ be a simply-connected Calabi--Yau\ threefold with $b_2 (X)=3$ satisfying the above simplifying assumptions and let $F =0$ denote the real elliptic curve corresponding to the cubic form and $H=0$ its (smooth) Hessian curve. Then the real elliptic curve determined by $F$ has two connected components and the K\"ahler\ cone of $X$ is contained in the cone on the interior of the bounded component on which the cubic is positive. Moreover the linear form $c_2$ is positive on the (positive) open cone in $H^2(X, {\bf R})$ on the bounded component. \end{main} In particular, we note using Wall's result that, for any given even $b_3 >0$, we have an abundance of examples of smooth compact 6-manifolds which support no Calabi--Yau structures, both in the case when the cubic defines a real elliptic curve with one component and in the case of two components --- for the latter we shall also need to choose the linear form appropriately, so that it is negative somewhere on the (positive) open cone on the bounded component. 
The last sentence of the theorem will also be relevant for boundedness questions --- cf. Section 3 of \cite{WilPic3}. \section{Components of the positive index cone for real ternary cubics} In \cite{WilBd}, a central role was played by the positive index cone corresponding to the cubic on $H^2 (X, {\bf Z} ) = {\bf Z} ^\rho$, namely the real classes $L$ for which $L^3 >0$ and the quadratic form given by $D \mapsto L\cdot D^2$ has index $(1,\rho -1)$. For $\rho =3$ the cubic defines a curve in the real projective plane, and our assumptions say that this is a real elliptic curve. To study real elliptic curves, the Hesse normal form for the curve will be useful, as we saw in \cite{WilPic3}, the theory of which may be found in Theorem 6.3 of \cite{BM} or Section 3 of \cite{Dolg}. Normally we might take real coordinates so that the real elliptic curve takes the symmetric Hesse normal form $x^3 + y^3 + z^3 = 3kxyz$ with parameter $k$, but for our purposes it will be more convenient to make a change of coordinates so that the cubic takes the form $$ F(x,y,z) = -x^3 - y^3 - (z-x-y)^3 + 3kxy(z-x-y), \eqno{(1)}$$ so that the `triangle of reference' of $F$ is now in the affine plane $z=1$ with vertices $ (0,0), (1,0)$ and $ (0,1)$. We shall write $F_k$ if we wish to indicate the dependence on $k$. Recall that if $k>1$, then the real curve $F=0$ has two components, the bounded component (lying in the triangle of reference) and the unbounded component. The cone in ${\bf R}^3$ corresponding to the bounded component has two connected components when one removes the origin, a positive part inside which $F>0$ and a negative part inside which $F<0$, whilst the cone on the unbounded component only has one connected component in ${\bf R}^3$, even after removing the origin. 
In the case $k>1$ it is easily checked that the unbounded component has three affine branches, one of which lies in the negative quadrant $x<0,\ y<0$, one in the sector $ y<0,\ x+y >1$ and the third in the sector $x<0,\ x+y >1$. The (real) inflexion points of the cubic are at $B_1 = (0:1:0)$, $B_2 =(1:0:0)$ and $B_3 = (1:-1:0)$, i.e. the intersection of the line at infinity $z=0$ with the curve (a further reason why the chosen change of coordinates is helpful). The asymptotes for the affine branches of the unbounded component may be found by calculating the tangents to the curve at the inflexion points, and are $$ x = - {1\over {k-1}}, \quad y = - {1\over {k-1}} \quad \hbox{\rm and}\quad x+y = {k\over {k-1}}. \eqno{(2)}$$ Noting that $x^3 + y^3 + z^3 -3xyz = (x+y+z)(x+\omega y + \omega^2 z)(x+ \omega ^2 y + \omega z)$, where $\omega$ is a primitive cube root of unity, when $k=1$ the cubic (1) splits into the real line $z=0$ and two complex lines (meeting at the centroid $({1\over 3}:{1\over 3}:1)$ of the triangle of reference). When $k <1$, the cubic $F=0$ is smooth but with only one real component, with three affine branches, one in the region $x>0,\ y>0, \ x+y >1$, one in the region $x>0, \ y<0, \ x+y <1$ and one in the region $x<0, \ y>0, \ x+y < 1$. The asymptotes are calculated as before and are given by the equations (2). By Remark 2.11 of \cite{BM}, the Hessian of the cubic $-x^3 -y^3 -z^3 + 3kxyz$ is given by $$27\bigl(2k^2(x^3 +y^3 +z^3 )- ( 8-2k^3) xyz\bigr).$$ Thus if $H_k$ denotes the Hessian of the cubic $F_k$, the fact that our change of coordinates was unimodular shows for $k\ne 0$ that $H_k = -54k^2F_{k'}$, with parameter $k' = {{4-k^3}\over {3k^2}}$. In particular we see that if $k>1$, then $k' <1$, and so the Hessian curve of a real elliptic curve with two components has only one component. 
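The identity $H_k = -54k^2F_{k'}$ with $k' = {{4-k^3}\over {3k^2}}$ may be verified directly by computer algebra. The following sketch (in Python, assuming the \texttt{sympy} library is available) performs the check in the coordinates of equation (1), where the unimodular coordinate change leaves the Hessian determinant unchanged:

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k')

def F(kk):
    """The cubic of equation (1), with parameter kk."""
    return -x**3 - y**3 - (z - x - y)**3 + 3*kk*x*y*(z - x - y)

# Hessian determinant of F_k.
H = sp.hessian(F(k), (x, y, z)).det()

# The claimed identity H_k = -54 k^2 F_{k'}, with k' = (4 - k^3)/(3 k^2).
kp = (4 - k**3) / (3 * k**2)
assert sp.simplify(H + 54 * k**2 * F(kp)) == 0
```

In particular this gives $F_k(0,0,1) = -1$ and $H_k(0,0,1) = 54k^2 > 0$ for $k \ne 0$, consistent with the sign conventions used below.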
For $k<1$, we have two notable values: $k=0$ for which the Hessian curve is the three real lines given by $-xy(z-x-y) =0$, and $k=-2$ for which the Hessian curve is the three lines (two of them complex) corresponding to $k' =1$ described before. Apart from these two values, for any real elliptic curve with one component, the Hessian curve is a real elliptic curve with two components, the bounded component lying in the triangle of reference. In order to keep track of signs, we note that $F_k(0,0,1) =-1$ for all $k$ and that $H_k(0,0,1) >0$ for $k\ne 0$. Given the simplifying assumptions made in the Introduction, we have however assumed that the Hessian cubic is also smooth, namely that $k \ne -2$ or $0$. The case of the curve having two components is illustrated in Figure 1 (which shows $F_5$, $H_5$ and the three asymptotes for $F_5$). The picture for one component when $k \ne -2$ or $0$ is not dissimilar, where the roles of the cubic and its Hessian are interchanged. For $k \ne -2$, $0 $ or $1$, we note that the asymptotes to the cubic $F_k =0$ are tangent to the affine Hessian curve $H_k =0$; this is just a special case of a classical result that the double polar with respect to the cubic at a point on its Hessian is tangent to the Hessian at the image of the point under the Steinian involution (see \cite{Dolg}, Section 3.2 and Exercise 3.8) --- the Steinian involution on the Hessian will be explained at the start of Section 2. When the point is an inflexion point of the cubic, the double polar is just the tangent line to the cubic, in our case the asymptote --- the corresponding points on the Hessian are labelled $Q_i$, $i=1,2,3$. This gives more precise information about the affine regions where the Hessian curve can lie. \begin{figure} \centering \includegraphics[width=10cm]{Figure1} \caption{Cubic with asymptotes and Hessian for $k=5$} \end{figure} As was explained in Section 1 of \cite{WilPic3}, we can now identify the components of the positive index cone. 
Taking $A= (a,b,1)$ in the affine plane $z=1$, we wish to know the index of the quadratic form defined by the homogeneous quadratic $G_A(D) = A\cdot D^2$; if $D = (x,y,z)$ in the above coordinates, this is explicitly given by $$-ax^2 -b y^2 - (1-a-b)(z-x-y)^2 +kay(z-x-y) +kbx(z-x-y) + k(1-a-b)xy. $$ If $F_k =0$ has two real components, i.e. $k>1$, then this quadratic form at $A$ has index $(1,2)$ if $H(A) >0$, index $(2,1)$ if $H(A) <0$ and index $(1,1)$ if $H(A)=0$. This may be easily verified by considering sample points such as $A= (0,0,1)$, where the index is plainly $(1,2)$ and points $A = (t,t,1)$ for $t\gg0$ where the index is $(2,1)$. One is therefore looking for the regions for which either $F>0$ and $H>0$, or $F<0$ and $H<0$, the latter being relevant since for $D$ in such a region, both $F$ and $H$ will be positive at $-D$. The cubic curve then bounds precisely four (convex) regions of the affine plane $z=1$ on which both $F>0$ and $H>0$ including the bounded component inside which $F>0$ (contained in the triangle of reference), where the index of the corresponding quadratic form is $(1,2)$. Moreover the Hessian curve bounds precisely three (convex) regions of the affine plane on which both $F$ and $H$ are negative (where the index of the associated quadratic form is $(2,1)$). Each unbounded component defining a region on which $F>0$ will together with the negative of the appropriate region defined by $H<0$ give rise to a connected component of the positive index cone in ${\bf R}^3$, part of whose boundary is contained in $F=0$ and part of whose boundary is contained in $H=0$, with the two parts meeting along rays corresponding to two of the inflexion points of the curve $F=0$. For each of these three resulting \it hybrid \rm cones in ${\bf R}^3$, we have $F>0$, $H>0$ and a continuity argument verifies that the index is $(1,2)$ on each cone. 
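These index computations are also easy to check numerically: since $D^{T}\,{\rm Hess}\,F(A)\,D = 6\,A\cdot D^2$, the index of the quadratic form $G_A$ can be read off the eigenvalue signs of the Hessian matrix of $F$ at $A$. A sketch (Python with \texttt{numpy} and \texttt{sympy}; the choice $k=5$ and the sample points are illustrative):

```python
import numpy as np
import sympy as sp

x, y, z = sp.symbols('x y z')
k_val = 5  # a cubic whose real locus has two components (k > 1)
F = -x**3 - y**3 - (z - x - y)**3 + 3*k_val*x*y*(z - x - y)
hess = sp.lambdify((x, y, z), sp.hessian(F, (x, y, z)), 'numpy')

def index_at(a, b, c):
    """(#positive, #negative) eigenvalues of the quadratic form
    D -> A.D^2 at A = (a, b, c); since D^T HessF(A) D = 6 A.D^2,
    the two signatures agree."""
    ev = np.linalg.eigvalsh(np.array(hess(a, b, c), dtype=float))
    return int((ev > 1e-9).sum()), int((ev < -1e-9).sum())

print(index_at(0.0, 0.0, 1.0))    # inside the bounded component: index (1, 2)
print(index_at(10.0, 10.0, 1.0))  # far along the diagonal: index (2, 1)
```

The first point lies in the triangle of reference where $H>0$, the second in the region $x>0,\ y>0,\ x+y>1$ where $H<0$, matching the sample-point argument above.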
The other component of the positive index cone corresponds to the bounded component, and its boundary is contained in $F=0$. In the case when $F=0$ only has one connected component, i.e. $k<1$, we have that the condition $F>0$ defines three (convex) regions of the affine plane $z=1$, and in these regions the Hessian is positive and the index is $(1,2)$. Under the assumption that the Hessian is also smooth, there are four (convex) regions of the affine plane on which the Hessian is negative (where the cubic is also negative), three unbounded regions on which the index is $(2,1)$, and the region determined by the bounded component of the Hessian, on which $F<0$ and $H<0$. We therefore as before obtain three \it hybrid \rm components of the positive index cone, obtained from unbounded affine regions on which $F>0,\ H>0$ together with the (negative of) unbounded affine regions on which $F<0,\ H<0$; a continuity argument again ensures that the index is $(1,2)$. For the Fermat cubic, i.e. $k=0$, we check easily that the index at any interior point of the triangle of reference is in fact $(0,3)$. Thus by a continuity argument, for any $-2 < k < 1$, the index of the quadratic form for any $A$ inside the bounded component of the Hessian remains $(0,3)$, since as $k\to 0$ the bounded component of the Hessian of $F_k$ tends to the triangle of reference. For points actually on this bounded component of the Hessian the index is $(0,2)$, apart from the Fermat cubic and the vertices of the triangle of reference, where the index is $(0,1)$. For $k<-2$, we check that for $A$ inside the bounded component, the index is $(2, 1)$ (check it at a suitable such point for $k \ll -2$ and use continuity again), and for $A$ actually on the bounded component the index is $(1,1)$. Thus in this case, points in the negative of the corresponding cone have index $(1,2)$, and we obtain a further connected component of the positive index cone. 
In both cases under consideration, namely the real elliptic curve has one or two components with smooth Hessian, the components of the positive index cone are all convex, and their closures are strictly convex. \begin{rem} Suppose now that in the coordinates chosen, we have a rigid non-movable class $E= (a,b,c)$; if $c\le 0$ and $E$ does not represent an inflexion point of the cubic curve, then the fact that $H(E) \ge 0$ by the Hodge Index theorem implies that $F(E) = E^3 >0$, contrary to assumption. Thus one convenient consequence of the simplifying assumptions from the Introduction is that the classes of all rigid non-movable surfaces $E= (a,b,c)$ lie in the upper half-space $c\ge 0$, and $c=0$ is only possible if $E$ represents an inflexion point on the cubic. Note that because we have only chosen a \it real \rm coordinate system, it is unclear when a point $(a,b,c)$ represents a class in $H^2 (X, {\bf Z})$; this however will not be a problem in our arguments. \end{rem} When $-2 < k < 0$ or $0<k<1$, we know from Hodge index considerations and the above calculations that the negative closed convex cone on the bounded component of the Hessian cannot contain the class of any surface, and in particular the cone cannot contain the K\"ahler\ cone of $X$. Under our assumptions, we show below that this latter fact continues to be true when $k<-2$. Recall that for a convex body $V$ in ${\bf R}^n$, a point $D$ on the boundary $\partial V$ is said to be \it visible \rm (with respect to $V$) from a point $A$ if the line segment $AD$ does not meet the interior of $V$ --- see Introduction to \cite{WilPic3} for further discussion of this concept. \begin{prop} Under the simplifying assumptions from the Introduction, if the cubic form corresponds to the case $k<-2$, and so there is a component of the positive index cone $P^\circ$ corresponding to the bounded component of the Hessian, then the K\"ahler\ cone of $X$ is not contained in $P^\circ$. 
\begin{proof} We suppose that the K\"ahler\ cone is contained in $P^\circ$. Suppose first that there are at least three rigid non-movable surfaces on $X$, say $E_i$ for $i = 1,2, 3$, and let $L\in P^\circ$ denote an integral ample class. Since these four integral classes are linearly dependent in $H^2 (X, {\bf Z})$, and we cannot have $L$ being a rational convex combination of the $E_i$ since they lie in different half-spaces, and no $E_i$ is a rational convex combination of the other three since it is non-movable, we deduce without loss of generality that some integral convex combination of say $E_1$ and $E_2$ is an integral convex combination of $E_3$ and $L$, and hence is mobile. As however the cone generated by mobile classes lies inside the closed cone $P$ by index considerations, and hence lies in the open lower half-space $z<0$, this is a contradiction. We saw in Proposition 1.1 from \cite{WilPic3} that the Proposition is true when there is at most one rigid non-movable surface, and in our case we show this proof may be strengthened to work when there are precisely two rigid non-movable surfaces $E_1, E_2$ on $X$. We have already noted that the interior of the movable cone is contained in $P^\circ$; assume there is a point $D$ on the boundary of the movable cone which is visible from both $-E_1$ and $-E_2$, then by the second proof of Theorem 0.1 in \cite{WilBd}, ${\rm vol} (D) \ge D^3 >0$, and this remains true for nearby rational points $D'$ not in the movable cone. Writing $D'$ in terms of its movable part $\Delta$ and rational multiples of the $E_i$ yields a contradiction, since our assumption that $D$ is visible from both $-E_1$ and $-E_2$ ensures that $\Delta$ cannot lie in the movable cone. Thus no point $D$ on the boundary of the movable cone can be visible from both $-E_1$ and $-E_2$, and so by convexity of the movable cone we deduce that some convex combination of $-E_1$ and $-E_2$ must lie in the interior of the movable cone. 
This is an obvious contradiction, since a convex combination of $E_1$ and $E_2$ is effective and so its negative cannot be effective, and hence cannot lie in the interior of the movable cone. \end{proof} \end{prop} We now consider the case where the real elliptic curve has two components, and the K\"ahler\ cone is contained in the (positive) cone on the bounded component. We shall need an elementary lemma in convexity theory, the idea of the proof given being suggested to the author by a colleague Imre Leader. \begin{lem} Let $V \subset {\bf R}^2$ be an open bounded convex body with a smooth boundary curve, and $x_1 , x_2 , \ldots $ be a (perhaps infinite) collection of points in ${\bf R}^2$ not in $V$. We let $W$ denote the points of the boundary $\partial V$ which cannot be seen from any of the $x_i$, i.e. $W$ consists of points $x \in \partial V$ such that the line segments $xx_i$ all meet $V$. Then $V$ is contained in the closure $Z$ of the convex hull of $W$ and $x_1, x_2 , \ldots $. \begin{proof} We suppose that the result is not true, and so in particular by the convexity of $V$ there is a point $x$ in the boundary of $V$ which is not in the closure $Z$ of the convex hull of $W$ and $x_1, x_2 , \ldots$. Standard convexity results imply the existence of a line $L$ through $x$ such that $Z$ is strictly on one side of $L$, and in particular all points of $W$ and all the $x_i$ are in the corresponding open half-plane. If $L$ is tangent to the boundary $\partial V$ at $x$, then $V$ itself must be on one side of $L$. If $W$ and the $x_i$ are contained in the open half-plane disjoint from $V$, we get an immediate contradiction by considering $y$ the other point on $\partial V$ whose tangent is parallel to $L$, which therefore cannot be seen from any of the $x_i$ and hence by definition lies in $W$, contrary to assumption. 
If however $W$ and the $x_i$ are contained in the same open half-plane that contains $V$, then $x$ cannot be seen from any of the $x_i$ and so $x\in W$, contrary to assumption. If however $L$ is not tangent to the boundary $\partial V$ at $x$, we consider the point $y\in \partial V$ with tangent parallel to $L$ such that $y$ is on the other side of $L$ to $W$ and the $x_i$. Then $y$ cannot be seen from any of the $x_i$ and hence $y\in W$, a contradiction. \end{proof} \end{lem} \begin{prop} Suppose that $X$ is general in moduli and that the real elliptic curve has two components, with the K\"ahler\ cone of $X$ contained in the open positive cone $P^\circ$ on the bounded component. Then $P^\circ$ is contained in the interior of the effective cone of $X$. \begin{proof} Consider now the affine plane $z=1$ and let $V$ be the open convex body given by the bounded component of the above real elliptic curve, so that $P^\circ$ is the cone on $V$. Suppose that $E = (a,b,c)$ represents the class of a rigid non-movable surface; by assumption $c\ge 0$, and if $c>0$ there is a unique point $A$ of the affine plane which is a positive multiple of $E$; moreover $A\not\in V$. The points of the boundary of $V$ which cannot be seen from $E$ with respect to $P^\circ$ are precisely those points $D$ with $E\cdot D ^2 >0$; the points at which $E\cdot D^2 =0$ correspond to the tangent lines which pass through $A$, with an appropriate interpretation for a point $E$ at infinity. We let $\tilde Q$ denote the (convex) subset of $V$ defined by the inequalities $E_i \cdot D^2 >0$ for all $i$, which contains in its closure the set of all $D_0 \in \partial V$ which cannot be seen from any $E_i$ (the cone $Q$ on $\tilde Q$ contains the K\"ahler\ cone). Given such a $D_0$, for any ample class $L$, note that any strictly convex combination of $D_0$ and $ L$ lies in $\tilde Q$. This enables us to find rational points $D_i \in \tilde Q$ for $ i>0$ with $D_i \to D_0$. 
By the argument in Proposition 4.1 and Lemma 4.3 from \cite{WilBd}, each $D_i$ is in the effective cone (the quoted results here use the assumption that $X$ is general in moduli), and so the point $D_0$ is in the pseudoeffective cone. Thus, from Lemma 1.3, it follows that the closure $P$ of $P^\circ$ is in the pseudoeffective cone, and hence the open cone $P^\circ$ is contained in the interior of the effective cone (i.e. the \it big \rm cone). \end{proof} \end{prop} \begin{cor} Under the simplifying assumptions of the Introduction, when the K\"ahler\ cone of $X$ is contained in the open positive cone $P^\circ$ on the bounded component of the elliptic curve, the linear form $c_2$ is strictly positive on $P^\circ$. \begin{proof} Without loss of generality, we may assume that $X$ is general in moduli. We noted in Section 2 of \cite{WilBd} that if $c_2 \cdot E <0$ for some rigid non-movable surface, then $E^3 >0$. Thus our assumptions imply that $c_2 \cdot E \ge 0$ for all rigid non-movable surfaces $E$ on $X$. Moreover we also noted in Remark 1.2 of \cite{WilBd} that $c_2 \cdot L \ge 0$ for any movable class $L$. Since by the previous result any element $D \in P^\circ$ is effective and $X$ is general in moduli, it follows that $c_2 \cdot D \ge 0$ for any $D\in P^\circ$. A well-known fact due to S.-T. Yau \cite{Yau} is that if $c_2$ is numerically trivial, then $X$ is an \'etale quotient of an abelian threefold; thus since $X$ is assumed simply connected, we know that $c_2$ cannot be numerically trivial and so $c_2\cdot L >0$ for any class $L\in P^\circ$.\end{proof} \end{cor} In the light of these results, for the Main Theorem we are reduced to proving the following. \begin{thm} Under the assumptions of the Main Theorem, the K\"ahler\ cone is not contained in a hybrid component of the positive index cone.
\end{thm} For the examples of 6-manifolds not supporting any Calabi--Yau\ structure, we use Wall's Theorem: for any free abelian group $H^3 (M, {\bf Z} )$ of even rank, we may take the cubic form on $H^2 (M, {\bf Z} ) = {\bf Z}^3$ to be an appropriate integral multiple of $F$ as in equation (1), with $k<1$ an integer other than $-2$ or $0$, and any suitable integral linear form satisfying the congruence conditions. We may also take the cubic form on $H^2 (M, {\bf Z} ) = {\bf Z}^3$ to be an appropriate integral multiple of $F$ as in equation (1) with any integer $k>1$ and any suitable integral linear form satisfying the congruence conditions but which is not positive on the open positive cone on the bounded component of the real elliptic curve. In these cases, if the integral multiple of $F$ has been chosen so that there are no integral classes $E$ with $1\le E^3 \le 9$, our Main Theorem rules out the possibility of any Calabi--Yau\ structures. \begin{notation} We shall now fix on the notation that will be used in the rest of this paper to describe a hybrid component $P^\circ$ of the positive index cone, with $P$ denoting its closure. Taking the affine slice $z=1$, where the cubic is of the form described above, with slight abuse of notation concerning the points at infinity, we denote the projectivised boundary of $P$ by $C = C_1 \cup C_2$, where $C_1$ is an affine branch of $F=0$ and $C_2$ is an associated affine branch of $H=0$, with $C_1$ and $C_2$ meeting at two inflexion points (at infinity). When $k>1$, without loss of generality we may by symmetry take $C_1$ in the negative quadrant and then $C_2$ in the region $x>0, \ y>0, \ x+y >1$. The branches $C_1$ and $C_2$ meet (at infinity) at the inflexion points $B_1 = (0:1:0)$ and $B_2 = (1:0:0)$. More specifically, the cone $P$ then has boundary the positive cone on $C_1$ together with the negative cone on $C_2$, the two parts meeting in two rays corresponding to positive multiples of $(0, -1, 0)$ and $(-1,0,0)$.
When $k<1$ and $k \ne 0, -2$, we take $C_1$ in the region $x>0,\ y>0,\ x+y >1$ and $C_2$ in the negative quadrant. The branches $C_1$ and $C_2$ again meet (at infinity) at the inflexion points $B_1$ and $B_2$. The cone $P$ then has boundary the positive cone on $C_1$ together with the negative cone on $C_2$, the two parts meeting in the two rays corresponding to positive multiples of $(0, 1, 0)$ and $(1,0,0)$. In both cases, we shall denote by $V_1$ the open convex subset of the affine plane bounded by $C_1$ and by $V_2$ the open convex subset bounded by $C_2$. \end{notation} \begin{prop} Suppose that $P^\circ$ is the hybrid component of the positive index cone as described above, $E_1 , E_2 , \ldots $ are the (perhaps infinitely many) rigid non-movable classes in $H^2 (X, {\bf Z} )$ and $Q$ a connected component of the subcone of $P^\circ$ defined by the inequalities ${E_i \cdot D^2 >0}$ for all $i$. If $Q$ contains the K\"ahler\ cone then there cannot exist a non-trivial open arc of points $-D \in C_2$ which are visible with respect to the cone on $V_2$ from every $E_i$, with each $D$ representing a point of the boundary $\partial Q$. \begin{proof} Again, we may without loss of generality assume that $X$ is general in moduli. We have that $Q = \bigcap _{i\ge 1} Q(i)$, where $Q(i)$ is the component of the subcone of $P^\circ$ defined by $E_i \cdot D^2 >0$ which contains the K\"ahler\ cone. Were such an arc of points in $C_2$ to exist, we choose an interior point $-D_0$ of the arc; note that no point $E_i$ lies on the tangent plane to the cone on $C_2$ along the ray ${\bf R}_- D_0$, since otherwise some points of the arc would not be visible from $E_i$. Thus $D_0$ is not visible from any of the $E_i$ with respect to $Q$. For any real ample divisor $L$, we note that any strictly convex combination of $D_0$ and $L$ lies in each $Q(i)$, and hence in $Q$.
In this way we show, using the argument in Proposition 4.1 and Lemma 4.3 from \cite{WilBd} as in the proof of Proposition 1.4, where the quoted results assume that $X$ is general in moduli, that $D_0$ is a limit of effective rational divisors $D_j \in Q$; therefore $D_0$ is pseudoeffective. For any prime divisor $\Gamma$, we have a function $\sigma_\Gamma$ on the pseudoeffective cone as in Definition 1.6 of Chapter III from \cite{Nak} --- all the following references to \cite{Nak} will be from Chapter III. Moreover by Proposition 1.10 and Corollary 1.11 of the given Chapter, there are at most three $\Gamma$ with $\sigma _\Gamma (D_0)>0$, whose classes are moreover linearly independent in $H^2 (X, {\bf R} )$. Under our assumption that $X$ is general in moduli these will be surfaces $E_i$, without loss of generality $E_1 , \ldots ,E_r$ with $r\le 3$. Thus in the $\sigma$-{\it decomposition} $D_0 = P + N$ of $D_0$ (see Definition 1.12 from the Chapter), we have $N = \Sigma _{i=1}^r \sigma _{E_i} (D_0) E_i$. In particular, by Lemma 1.8 of the Chapter, we have $\sigma _{E_i} (P) = 0$ for $i=1,\ldots ,r$, and indeed also that $\sigma _{\Gamma} (P) = 0$ for any other surface $\Gamma$. If we knew that $D_0$ is big, then by Lemma 1.4 (4) of the Chapter, applied $r$ times, we would have that $D_0 = \Delta + \Sigma _{i=1}^r \sigma _{E_i} (D_0) E_i$, with $\Delta$ also big and in particular pseudoeffective, and $\sigma _\Gamma (\Delta) = 0$ for all surfaces $\Gamma$. Thus by Lemma 1.14 (1) of the Chapter, we would deduce that $\Delta$ is movable. Suppose first that $r=0$; then by Lemma 1.14 (1) of \cite{Nak} Chapter III, we deduce that $D_0 $ is movable. We saw then in the second proof of Theorem 0.1 in Section 4 of \cite{WilBd} that ${\rm vol} (D_0) \ge D_0^3 >0$ and hence $D_0$ is big.
If there are no rigid non-movable surfaces $E$ on $X$, we noted in \cite{WilBd} that this gives an immediate contradiction, since then there would exist (rational) points $D$ near $D_0$ at which the Hessian is negative but which are big and hence (recalling that $X$ is assumed general in moduli) in this case also movable. However, given any rigid non-movable surface $E$, we deduce that $D_0 -\epsilon E$ is also big for $0<\epsilon \ll1$, and hence (using a previous argument) of the form $\Delta + {\mathcal E}'$, with $\Delta$ movable and ${\mathcal E}'$ supported on at most three of the $E_i$. Therefore $D_0 = \Delta + {\mathcal E}' + \epsilon E$. We suppose now that $r>0$ and verify that $D_0$ is big --- cf. Proposition 0.1 of \cite{WilPic3}; it then follows from the above that $D_0 = \Delta + {\mathcal E}$, for some class $\Delta$ in the movable cone and some real non-zero convex combination ${\mathcal E}$ of at most three of the $E_i$. We know that there is an $E= E_i$ with $\sigma _E (D_0)>0$; choose $\delta = \sigma _E (D_0)/2$. Thus for all $0<\epsilon \ll 1$, the big divisor $D_0 + \epsilon L$ has $\sigma _E (D_0 + \epsilon L)> \delta$, straight from the definition of $\sigma _E$ as a limit in Definition 1.6 of Chapter III from \cite{Nak}. Then by Lemma 1.4 (4) of the Chapter, we deduce that $D_0 + \epsilon L -\delta E $ is big for all $0 <\epsilon \ll1$. Thus $D_0 -\delta E$ is pseudoeffective. From our choice of $D_0$, we know that $D_0$ is not visible from $E$ with respect to $Q$, so that $D_0 +tE \in Q$ for $0<t \ll 1$, and hence effective (as we saw above). Since \it all \rm points $-D$ of the original arc in $C_2$ have $D$ pseudoeffective, we must have that $D_0$ lies in the interior of the pseudoeffective cone, and hence is big as claimed.
Summing up therefore, we can in all cases write $D_0 = \Delta + {\mathcal E}$, for some class $\Delta$ in the movable cone and some real non-zero convex combination ${\mathcal E}$ of finitely many of the $E_i$. We remark that $\Delta \ne 0$ follows immediately from our simplifying assumptions, since the classes $E_i$ all lie in the upper half-space $z \ge 0$ and $D_0$ lies in the open lower half-space. Thus some positive multiple of $-\Delta = -D_0 + {\mathcal E}$ is in another of the regions in the affine plane where $H\le 0$ and hence $F<0$, and in particular we deduce that $\Delta ^3 >0$. If $L$ denotes an ample divisor in $P^\circ$, then $\Delta +tL$ is movable for all $t\ge 0$, and $(\Delta +tL)^3 > 0$ for all $t\ge 0$, from which it follows by connectedness that $\Delta \in P$. Recall that $C_2$ has been assumed smooth; since $-D_0 \in C_2$ is visible from every $E_i$ with respect to $V_2$, it follows that the Hessian at $-D_0 + \varepsilon {\mathcal E}$ is strictly positive for $0< \varepsilon \ll 1$, and hence the Hessian at $D_0 - \varepsilon {\mathcal E}$ is strictly negative; this contradicts the convexity of $P$. \end{proof} \end{prop} Our technique for ruling out such a hybrid component $P^\circ$ containing the K\"ahler\ cone is as follows; in the remaining sections of the paper, we prove the following result in the various cases for the elliptic curve. In the light of this combined with Proposition 1.7, the only conclusion then is that $P^\circ$ cannot contain the K\"ahler\ cone, and so Theorem 1.6 will be proven. The proof of Proposition 1.8 is rather technical when written out in full, but the basic mathematical input consists just of classical facts about the Steinian involution on the Hessian of an elliptic curve; the basic properties of the $E_i$ we use are that they lie in the upper half-space and that the index at $E_i$ is $(1,q_i)$ with $q_i \le 2$. 
\begin{prop} Under the simplifying assumptions from the Introduction, let $P^\circ$ be a hybrid component of the positive index cone. Suppose $E_1, E_2, \ldots $ denote the (perhaps infinitely many) rigid non-movable surface classes in $H^2(X, {\bf R})$ and let $Q$ be a connected component of the subcone of $P^\circ$ defined by the inequalities $E_i\cdot D^2 >0$ for all $i$, where $Q$ has non-empty interior. Then there exists a non-trivial open arc in $C_2$ of points $-D$ which are visible from all the $E_i$ with respect to the cone on $V_2$, with each $D$ representing a point of the boundary $\partial Q$. \end{prop} \section{Hybrid components when elliptic curve has two real components} In this section, we study the case when $k>1$, i.e. the real elliptic curve $F=0$ has two components, and so in particular the Hessian is smooth. For simplicity, from now on we often use $H$ to denote both the Hessian and the Hessian curve. With notation as in the previous section, we let $P^\circ$ denote a hybrid component of the positive index cone, with closure $P$. Without loss of generality, we may by symmetry adopt the notation explained in the previous section, where the component is determined by the curves $C_1$ and $C_2$ in the affine plane given by $z=1$. Following the theory from Section 2 of \cite{WilPic3}, we now specify, for any given (possible) rigid non-movable surface class $E= (a,b,c)$ with $c\ge 0$, where the homogeneous quadratic given by $G_E(D) = E\cdot D^2$ vanishes on the boundary of $P$. We recall from Remark 1.1 that the only possible rigid non-movable surface classes $E = (a,b,c)$ with $c=0$ must define one of the inflexion points of the cubic, and for such points in ${\bf R}^3$ the function $G_E$ is easy to understand explicitly. The main case to consider therefore is when $c>0$ and so some positive real multiple $A$ of $E$ lies in the affine plane $z=1$. We therefore study real classes $A$ in the affine plane which have index $(1,q)$ for $q\le 2$. 
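Although the precise normalization of equation (1) is fixed earlier in the paper, the relation between $k$ and the Hessian parameter $k'$ may be checked in the Hesse normal form, to which the smooth cubic is projectively equivalent; since ${\rm He} (F\circ A) = (\det A)^2 \, {\rm He} (F) \circ A$ for any linear change of coordinates $A$, the parameter relation is independent of the model, and we assume here that the parameter $k$ of equation (1) matches the Hesse parameter under such an equivalence (as the stated formula for $k'$ suggests). With $F_k = x^3 + y^3 + z^3 - 3k\, xyz$ we have
$$ {\rm He} (F_k) = \det \left( \begin{array}{ccc} 6x & -3kz & -3ky \\ -3kz & 6y & -3kx \\ -3ky & -3kx & 6z \end{array} \right) = -54 k^2 \Big( x^3+y^3+z^3 - {{4-k^3}\over {k^2}}\, xyz \Big) = -54 k^2 \, F_{k'}, $$
with $k' = {{4-k^3}\over {3k^2}}$ as before. Moreover $k' - 1 = -{{(k-1)(k+2)^2}\over {3k^2}}$, so for real $k$ the Hessian is singular precisely when $k \in \{1, -2\}$ (and $k=0$ gives the degenerate Hessian $216\, xyz$); this is consistent with the values of $k$ excluded above, and in particular for $k>1$ the Hessian is smooth, as asserted.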
In the light of the simplifying assumptions from the Introduction, we can also assume that $E^3 \le 0$; indeed it was this assumption which told us that, apart from when $E$ defines an inflexion point on the cubic, the real classes we are interested in all lie in the open half-space $c>0$, and so give rise to points $A$ in the affine plane; moreover no $E$ lies in $P^\circ$. There is a simple answer to the question of where on $C_1$ we have vanishing of $G_A$, namely points of $C_1$ for which the tangent passes through $A$ (including maybe inflexion points at infinity). There will be two such points if $A$ is in the quadrant $x\le -{1\over {k-1}},\ y\le -{1\over {k-1}}$ (including the possibilities of the inflexion points $B_1$ and $B_2$, or a point on $C_1$ taken twice), no such points if $A$ is in the quadrant $x> -{1\over {k-1}}, \ y> -{1\over {k-1}}$ and one point otherwise. Let us consider therefore the branch $C_2$ of the Hessian $H$ passing through $B_1, B_2$ and $R = Q_3$, the affine point with coordinates $( {k\over {2(k-1)}}, {k\over {2(k-1)}})$. As $B_3 = (-1:1:0)$ is the third inflexion point, we have already observed that the tangent to $F=0$ at $B_3$ is tangent to $H=0$ at $R$, i.e. that if we take $B_3$ to be the zero of the group law, then $R$ is the unique real 2-torsion point of the Hessian. Recall the classical fact that there is a well-defined base-point free involution $\alpha$ on the Hessian curve, known as the \it Steinian map \rm or \it Steinian involution \rm (\cite{Dolg}, Section 3.2, noting a misprint in Corollary 3.2.5), where the polar conic of $F$ with respect to a point $U$ on $H$ is a line pair with singularity at $U' =\alpha (U)$. We note that this says that the linear form $U\cdot U' \cdot D$ is identically zero.
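The statements here may be made explicit in terms of polars: with $F(D) = D^3$ denoting the cubic form, the standard polarization identities (via Euler's relation) give
$$ E\cdot D^2 = {1\over 3} \sum _{i=1}^3 E_i\, {{\partial F}\over {\partial x_i}} (D), \qquad \qquad E\cdot E' \cdot D = {1\over 6} \sum _{i,j} E_i E'_j\, {{\partial ^2 F}\over {\partial x_i \partial x_j}} (D), $$
so that $G_E$ is (up to a factor $3$) the polar conic of $F$ with respect to $E$, and the second polar of $F$ with respect to $E$ is (up to a factor $6$) the linear form $E\cdot E\cdot D$. In particular, the singularity of the conic $\{ U\cdot D^2 =0 \}$ at $U' = \alpha (U)$ is precisely the identical vanishing of the linear form noted above.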
In the case currently under consideration where the Hessian has only one real component, corresponding to a choice of inflexion point for the zero of the group law, there is a unique real 2-torsion point, and $\alpha$ is given by translation in the group law by this point. The Steinian involution has the property that for any point $U' \in H$, the second polar of $F$ with respect to the point $U'$ is the tangent to the Hessian $H$ at $U= \alpha (U')$ (\cite{Dolg}, Exercise 3.8, again noting a misprint). Let $Q_1$ be the point on the branch of $H$ in the region $x<0, \ y>0, \ x+y <1$, which is the 2-torsion point when we take $B_1$ as the zero in the group law, $Q_2$ the point on the branch of $H$ in the region $x>0, \ y<0, \ x+y <1$ corresponding to $B_2$, and $R = Q_3$ the point on the branch of $H$ in the region $x>0,\ y>0, \ x+y >1$ corresponding to $B_3$. It is left to the reader to check that $\alpha (B_1) = Q_1$, $\alpha (B_2) =Q_2$ and $\alpha (R) = B_3$. Thus the second polar of $F$ with respect to the inflexion point $B_1$ is the tangent to the Hessian at $Q_1$, namely given affinely by $x= -{1\over {k-1}}$, the second polar of $F$ with respect to the inflexion point $B_2$ is the tangent to the Hessian at $Q_2$, namely given affinely by $y= - {1\over {k-1}}$, and the second polar of $F$ with respect to the point $R$ is the tangent to the Hessian at $B_3$, namely the asymptote $x+y = {k'\over {(k'-1)}}$, where as before $k' = {{4-k^3}\over {3k^2}}$. Under the Steinian involution, the arc $Q_1 B_3$ of the Hessian corresponds to the arc $B_1 R$ of $C_2$, whilst the arc $B_3 Q_2$ corresponds to the arc $RB_2$ of $C_2$. Setting $A =(a,b,1)$, it is easily checked from this that $G_A (B_1) >0 $ if and only if $a< -{1\over {k-1}}$, that $G_A (B_2) >0$ if and only if $b < -{1\over {k-1}}$ and that $G_A (R)>0$ if and only if $a+b < {{k'}\over {(k'-1)}}$. Moreover if $A=Q_i$, then $G_A (B_i) =0$ for $i=1,2$ and if $A= B_3$ then $G_A(R)=0$. 
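These explicit lines may be verified in coordinates. All the data quoted above is consistent with taking, as a working model of equation (1), the Hesse cubic moved so that its three real inflexions $B_1, B_2, B_3$ lie on the line $z=0$, namely
$$ F(x,y,z) = x^3 + y^3 + (z-x-y)^3 - 3k\, xy (z-x-y) ; $$
we offer this only as a consistency check, the normalization of equation (1) being fixed earlier in the paper. For this model, the second polar of $F$ with respect to $B_1 = (0:1:0)$ is
$$ {{\partial ^2 F}\over {\partial y^2}} (D) = 6 \big( z + (k-1) x \big) , $$
which in the affine plane $z=1$ is the line $x = -{1\over {k-1}}$, as asserted; by the symmetry in $x$ and $y$, the second polar with respect to $B_2$ is $y = -{1\over {k-1}}$. Similarly, the polar conic with respect to the point $(-1,1,0) \in {\bf R}^3$ representing $B_3$ is the line pair
$$ {1\over 3} \big( \partial _y F - \partial _x F \big) (D) = (y-x) \big( (1-k)(x+y) + kz \big) = 0 , $$
consisting of the line of symmetry $y=x$ and the line given affinely by $x+y = {k\over {k-1}}$, which meet precisely at the 2-torsion point $R = \big( {k\over {2(k-1)}}, {k\over {2(k-1)}} \big)$.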
We are interested in the cases of $B_1$ and $B_2$ since we want to know the sign of $G_A$ on points of $C_2$ where either $y \gg 0$ or $x\gg 0$. This gives us a dictionary as to how many points of $C_2$ there are at which the function $G_A$ vanishes, cf. Lemma 2.1 and Corollary 2.2 of \cite{WilPic3}; the reader should refer to Figure 1 to clarify the geometric arguments used below. Recall that $V_1$ denotes the open convex subset of the affine plane bounded by the curve $C_1$, and $V_2$ the open convex subset of the affine plane bounded by the curve $C_2$. Let $A \not\in V_1$ be a point in the affine plane satisfying the above condition on the index; given the technique described at the end of Section 1, we are interested in the points $D$ of $C_2$ for which $G_A (D) >0$; the answers here are provided by the classical facts on the Steinian involution, in particular that the tangent line to the Hessian at a point $U$ is defined by the second polar of $F$ with respect to the point $U' = \alpha (U)$ on the Hessian, where $\alpha$ denotes the Steinian involution. So for any point $A$ on the tangent line at $U$, the conic $G_A =0$ contains the point $U'$; moreover if $A\ne U$, the conic is non-singular at $U'$ with tangent line $L$ at $U'$ independent of $A$, which may be seen to be defined by the linear form $W\cdot U'\cdot D$, where $W$ is any point ($\ne U$) on the tangent line through $U$. In this case, $\{ G_A >0 \} \cap V_2$ has a component whose boundary contains $U'$ and has tangent $L$ there; moreover $U'$ is the unique point where $L$ meets the conic. We recall that when $A$ has index $(1,q)$ with $q\le 2$, it follows from Proposition 3.4 of \cite{WilBd} that any component of $V_2 \cap \{ D \ :\ A\cdot D^2 >0 \}$ is convex, and any component $Q$ of the cone $P^\circ \cap \{ D \ :\ A\cdot D^2 >0 \}$ is also convex. We can however describe the above common tangent line $L$ more explicitly.
Assuming without loss of generality that $U$ lies on the arc $Q_1B_3$ of the Hessian, the tangent line at $U$ meets the Hessian again at a point $Z$ on the branch $B_3B_1$, and in the above characterization of $L$ we may take $W=Z$. The conic $Z\cdot D^2 =0$ is a line pair for which $U'$ is a smooth point. The line $L$ is therefore the line from this pair which contains $U'$. In more detail, if $Z$ lies on the open arc $B_3Q_2$, then $L$ is the line joining $U'$ to the singular point $Z' = \alpha (Z)$ (on the arc $RB_2$). If $Z= Q_2$, then the line pair has a singularity at $B_2$ and $L$ is the line joining $U'$ to $B_2$. If $Z$ lies on the open arc $Q_2B_1$, then the line pair has a singularity at $Z' = \alpha (Z)$ on the arc $B_2Q_1$, one line of which is disjoint from $V_1 \cup V_2$, and the other line (which is therefore $L$) joins $U'$ to the point on $C_1$ whose tangent line contains $Z$. With $A = (a,b,1)$, if $a\le -{1\over {k-1}}$ and $b \le -{1\over {k-1}}$, then the function $G_A$ is strictly positive on both $V_2$ and the affine curve $C_2$. Moreover all of $C_2$ is visible (with respect to $V_2$) from $A$. The other extreme is when $a\ge -{1\over {k-1}}$, $b \ge -{1\over {k-1}}$ and $a+b \ge {k'\over {k'-1}}$; here $G_A$ is negative on all of $P^\circ$ and so $P^\circ \cap \{ G_A >0 \}$ is empty. The interesting affine regions for $A$ are therefore as follows: \vspace{0.2cm} \it Region \rm 1 : $a< -{1\over {k-1}}$ and $a+b \ge {k'\over {k'-1}}$, and the corresponding \it Region $1'$ \rm with the roles of $a$ and $b$ reversed. For a point $A$ in \it Region \rm 1, there is a unique point $U$ on the open arc $Q_1 B_3$ of the Hessian for which $A$ lies on the tangent line to the Hessian at $U$. Here $G_A$ will vanish at $U' = \alpha (U)$ on $C_2$, and if $a+b = {k'\over {k'-1}}$ twice more at $R$, and vanishes also at the point of $C_1$ for which the tangent passes through $A$. 
Note that $U'$ is the same for all $A$ on the tangent line to the Hessian at $U$, as is the tangent line to the conic $G_A =0$ at $U'$ when $A\ne U$. We deduce that $\{ G_A >0\}$ defines a unique component in both $V_2$ and $P^\circ$, the latter containing $(0,-1,0)$ in its boundary. \vspace{0.2cm} \it Region \rm 2 : $a< -{1\over {k-1}}$, and $a+b < {k'\over {k'-1}}$ and $A$ lies above the arc $Q_1 B_3$ of the Hessian curve, and the corresponding \it Region $2'$ \rm with the roles of $a$ and $b$ reversed. If $A$ in \it Region \rm 2 is strictly above the relevant arc of the Hessian, there are two points $U_1$, $U_2$ on this arc (with $U_2$ between $U_1$ and $B_3$) and one point $U_3$ on the arc $B_3 Q_2$ of the Hessian curve, where the tangents to the Hessian at the $U_i$ all contain $A$. If $U _i' = \alpha (U_i)$ for $i=1,2,3$, we then have that $G_A$ vanishes on $C_2$ at $U_1'$, $U_2'$ and $U_3'$, where the first two are on the arc $B_1 R$ and the third on $RB_2$; $G_A$ also vanishes at the point on the arc of $C_1$ from $B_1$ to the halfway point of the branch, determined by the tangent to $C_1$ passing through $A$. We deduce that $\{ G_A >0\}$ defines two components in both $V_2$ and $P^\circ$; one component in $V_2$ will have the part of its boundary in $C_2$ being unbounded, with the corresponding component of $P^\circ$ containing $(0,-1,0)$ in its boundary, and the other component in $V_2$ has boundary intersecting $C_2$ in the bounded arc $U_2' U_3'$, with corresponding component in $P^\circ$ containing $-R$ in its boundary. 
The special case when $A =U$ lies on the relevant arc of the Hessian gives rise to $G_A =0$ being a real line pair with singularity at $U'= \alpha (U)$, one line passing through the point $\beta (U)$ on the arc of $C_1$ between $B_1$ and the halfway point, determined by the tangent to $C_1$ at $\beta (U)$ also passing through $U$, and the other line passing through the point $\gamma (U)$ on the arc $RB_2$ of $C_2$ which is the image under the Steinian involution of the point on the arc $B_3 Q_2$ for which the tangent to the Hessian passes through $U$. For future use, we denote by $\Omega_1 (U)$ the part of $V_1\cup V_2$ which lies above the first line on $V_2$ and below it on $V_1$, and by $\Omega _2 (U)$ the part of $V_2$ lying below the second line --- we shall refer to these as the \it unbounded \rm and \it bounded \rm components. We note that if we let the point $U$ move from $Q_1$ towards $B_3$, the point on the arc $B_3Q_2$ for which the tangent contains $U$ moves from $Q_2$ towards $B_3$, and so $\gamma (U)$ on $C_2$ moves from $B_2$ towards $R$; moreover the point $\beta (U)$ moves on $C_1$ from $B_1$ towards the midpoint of $C_1$. Thus for distinct points $U_1$, $U_2$ on the arc $Q_1B_3$, the `upper' line corresponding to $U_1$ only intersects the `upper' line corresponding to $U_2$ at a point outside $V_1 \cup V_2$ and the `lower' lines corresponding to $U_1$ and $U_2$ do not meet in $V_2$. If moreover $U_1$ lies in the arc $Q_1U_2$, it follows that $\Omega _1 (U_1) \subset \Omega _1 (U_2 )$ and $\Omega _2 (U_1) \supset \Omega _2 (U_2)$; so as $U$ on the branch $Q_1B_3$ of the Hessian moves towards $B_3$, the unbounded component of $P^\circ \cap \{ U\cdot D^2 >0 \}$ gets larger and the bounded component gets smaller. By symmetry, a similar statement holds for $U$ on the branch $Q_2B_3$ of the Hessian as $U$ moves towards $B_3$. \vspace{0.2cm} \it Region \rm 3 : $a\ge -{1\over {k-1}}$, $b \ge -{1\over {k-1}}$ and $a+b < {k'\over {k'-1}}$.
Here $A$ lies on two tangents to the Hessian, one at a point on the arc $Q_1B_3$, including possibly $B_3$, and one at a point on the arc $B_3 Q_2$, including possibly $B_3$, and we label these points $U_1$, $U_2$, with images under the Steinian involution being $U_1'$, $U_2'$, where $R$ is in the arc $U_1'U_2'$ of $C_2$. In this case the inequality $G_A >0$ defines a unique component in $V_2$, whose boundary has intersection $U_1'U_2'$ with $C_2$, and there is a corresponding unique component in $P^\circ$. \vspace{0.4cm} \it Region \rm 4 : $a < -{1\over {k-1}}$, $b > -{1\over {k-1}}$ and $A$ lies below the arc $ B_2 Q_1$ of the Hessian curve, and the corresponding \it Region $4'$ \rm with the roles of $a$ and $b$ reversed. If $A$ in \it Region \rm 4 is strictly below the relevant arc of the Hessian, then there is a unique point $U$ on the arc $B_3 Q_2$ where the tangent line contains $A$, and so $U' = \alpha (U)$ in the arc $RB_2$ is the unique point of $C_2$ where $G_A$ vanishes, with $G_A$ also vanishing once on $C_1$ at the point whose tangent passes through $A$. In this case the inequality $G_A >0$ defines a unique component in both $V_2$ and $P^\circ$, the latter containing $(0,-1,0)$ in its boundary. The special case where $A$ lies on the arc $B_2Q_1$ of the Hessian gives rise to $G_A =0$ being a real line pair (with singularity $A' = \alpha (A)$ on the arc $Q_2B_1$ of the Hessian), one of the lines of which meets neither $C_1$ nor $C_2$ and the other joining $U'= \alpha (U)$ (where again $U$ is the unique point on the arc $B_3 Q_2$ whose tangent line contains $A$) with the point on $C_1$ as described before. Thus in this case too, the inequality $G_A >0$ defines unique components in $V_2$ and $P^\circ$, the latter containing $(0,-1,0)$ in its boundary. \vspace{0.4cm} \noindent \bf Summary: \rm Summing up therefore, suppose that $U$ is a point of the arc $Q_1B_3$ of the Hessian, with $U' = \alpha (U)$ the corresponding point of the arc $B_1R$ of $C_2$.
The tangent line to the Hessian at $U$ will intersect the Hessian again somewhere on the branch $B_3B_1$. Recall that for any point $A$ of the tangent line, the conic $A\cdot D^2 =0$ contains $U'$ and that if $A\ne U$, then the conic is smooth with tangent line $L$ at $U'$ independent of $A$. Moreover, for any point $A =(a,b,1)$ on the tangent line at $U$, above the third point of intersection with the Hessian, the index at $A$ is $(1,q)$ with $q\le 2$. Thus the components of $V_2 \cap \{A\cdot D^2 >0 \}$ are convex. In what follows, we shall call a component of $V_2 \cap \{A\cdot D^2 >0 \}$ \it bounded \rm if its boundary intersects $C_2$ in a bounded arc, and \it unbounded \rm otherwise (in which case its boundary contains $B_1$ or $B_2$). For $A$ on the tangent line in \it Region \rm 1 (and so furthest away from the above third point of intersection), we know that there is a unique component of $V_2 \cap \{A \cdot D^2 >0 \}$, and that this (unbounded) component lies above $L$, with $L$ tangent to its boundary at $U'$. For brevity, we leave the reader to formulate the appropriate analogous statements about $V_1 \cap \{A \cdot D^2 >0 \}$. When we reach the line $a+b = {k'\over {k'-1}}$, we have that $G_A$ also vanishes at $R$, and for $A$ between here and $U$ on the tangent line, the open subset $V_2 \cap \{A \cdot D^2 >0 \}$ has two components, an unbounded one (with boundary passing through $U'$ with $L$ tangent there) lying above the line $L$ and a bounded one lying below $L$. When $A=U$, the conic is a line pair with singularity at $U'$, as detailed in the description of \it Region \rm 2 above. In particular, we note by continuity that $\Omega_1 (U)$ and $\Omega_2 (U)$ lie on opposite sides of the line $L$ (for points in $V_2$, this means `above', respectively `below', the line $L$), since for all $A\ne U$ on the tangent line and in particular for $A$ near $U$ on the tangent line, the quadratic $A\cdot D^2$ is negative on $L\cap V_2$. 
Between $U$ and the point where $a= -1/(k-1)$, we again get an unbounded and a bounded component, but this time it is the bounded component which passes through $U'$ with tangent $L$ there; the unbounded component is still above $L$ and the bounded component below $L$. As we move to points on the tangent line with $a\ge -1/(k-1)$, we lose the unbounded component and just have a bounded component (lying below $L$). As we move further down to points with $b< -1/(k-1)$, one of two things can happen (until we reach the third point of intersection with the Hessian). Either we are in \it Region $2'$ \rm and we pick up in addition an unbounded component of $V_2 \cap \{A \cdot D^2 >0 \}$, where both components lie below $L$ (which is tangent to the boundary of the bounded component at $U'$), or we are in \it Region $4'$ \rm and the bounded component becomes unbounded, still however lying below $L$ with its boundary tangent to $L$ at $U'$. The above classification will be pivotal in the proof of Proposition 2.2. Given a class $A$ in the upper half-space whose index is $(1,q)$ with $q\le 2$, the subcone of $P^\circ$ defined by $A\cdot D^2 >0$ has at most two components. For the purposes of the results in this Section, we can ignore the case where $A= (a,b,1)$ has $a\ge -{1\over {k-1}}$, $b\ge -{1\over {k-1}}$ and $a+b \ge {k'\over {k'-1}}$ (where it is easily checked that $G_A$ is negative on $P^\circ$); we can ignore points $E$ which are positive multiples of $(0,1,0)$ or $(1,0,0)$ for the same reason. We can also ignore the case when $a\le -{1\over {k-1}}$, $b\le -{1\over {k-1}}$ (where $G_A$ is positive on the whole of $V_2$ and $C_2$, and all of $C_2$ may be seen from $A$), and we can ignore points $E$ which are positive multiples of $(0,-1,0)$ or $(-1,0,0)$ for the same reasons.
The case of $E$ representing $B_3$ is more interesting; we denote by $\tilde B_3$ the point $(-1,1,0) \in {\bf R}^3$. The conic $\tilde B_3 \cdot D^2 =0$ consists of a line pair with singularity at $R$, one line of which is tangent to $C_2$ there and one line of which is the line of symmetry $y=x$. Thus the subset of $P^\circ$ given by $\tilde B_3 \cdot D^2 >0$ lies on one side of the plane corresponding to the line of symmetry, and the arc in $C_2$ given by the same inequality is just the upper half of $C_2$, which we note is also the half which is visible from $\tilde B_3$. Clearly there is a corresponding statement for the point $-\tilde B_3$. For future use, we also introduce the notation that $\tilde B_1 = (0,1,0)$ and $\tilde B_2 =(1,0,0)$. We shall need to consider connected components of $P^\circ \cap \{G_A >0 \}$ for $A$ lying in the various regions as listed above. We first check that we have the required property that the corresponding points of $C_2$ are visible from $A$ with respect to $V_2$. The crucial classical result used here is that if $U$ is a point on the Hessian curve, with image $U'= \alpha (U)$ under the Steinian involution, with $U''$ being the third point of intersection of the line $UU'$ with the Hessian, then the tangent lines to the Hessian at $U$ and $U'$ intersect at the point $\alpha (U'')$ of the Hessian (and the line $UU'$ is one of the lines of the line pair $\alpha (U'') \cdot D^2 =0$, with $U''$ being the singularity) --- see \cite{Dolg}, Proposition 3.2.7. The common intersection point is therefore just the third point of intersection of the tangent line to the Hessian at $U$ (or $U'$) with the Hessian. \begin{prop} Given a component $Q$ of $P^\circ \cap \{ E\cdot D^2 >0 \}$, associated to a real class $E$ with $E^3 \le 0$ and index $(1,q)$ where $q\le 2$, all points $S$ of $C_2$ whose negative multiples are on the boundary of $Q$ are visible (with respect to the cone on $V_2$) from $E$.
\begin{proof} Given that we have checked this statement explicitly on points in ${\bf R}^3$ defining the inflexion points of the cubic, we may take the positive multiple $A= (a,b,1)$ of $E$ which lies in the affine plane. We prove the Proposition first when $A$ lies in \it Region \rm 1. Then $A$ lies on a tangent line at a point $U$ on the arc $Q_1B_3$ of the Hessian with $A$ lying above (and to the left of) $U$, and the corresponding arc on $C_2$ where $A\cdot D^2 \ge 0$ has as its lowest point $U' = \alpha (U)$. We show that $U'$ is visible from $A$; for this we note that the tangent at $U$ intersects the Hessian again at one further point, on the branch $B_1B_3$, and from the above quoted classical result it is this point where the tangent lines at $U$ and $U'$ intersect. Thus $U'$ is visible from all points on the tangent line to the Hessian at $U$ lying above (and to the left of) this third intersection point, and this in particular is true for $A$. We now consider the case when $A$ lies in \it Region \rm 2; this is the case when there are two components defined by intersecting $P^\circ$ with $G_A >0$, as described in the above Summary. Therefore we have that $A$ lies on the tangent line to the Hessian at some point $U$ on the arc $B_3Q_2$, which intersects the Hessian again at a point on the arc $Q_1B_3$ (on the other side of $A$ to $U$ on the tangent line), and from the above quoted classical result it is this point where the tangent lines at $U$ and $U'$ intersect. This ensures that $U'$ is visible from $A$ as claimed, and thus the same is true for \it all \rm points of the arc $B_1 U'$ (not just the points of $C_2$ where $G_A \ge 0$). A similar argument shows that when $A=(a,b,1)$ is in \it Region \rm 3, and so $G_A \ge 0$ defines just a single arc in $C_2$ (bounded unless $A$ is on either of the lines $a = -1/(k-1)$ or $b = -1/(k-1)$), then both endpoints are visible from $A$, as is the arc in between. 
The final case (modulo symmetry) that we need to consider is when $A$ lies in \it Region \rm 4 above. Here the intersection of $C_2$ with $G_A \ge 0$ is a single arc $B_1 U'$ containing $R$, where $U' = \alpha (U)$, with $U$ the point on the arc $B_3 Q_2$ of the Hessian whose tangent line passes through $A$. This tangent line contains one more point of the Hessian (on the other side of $A$ to $U$ on the tangent line), and so this must be the point where the tangent to $C_2$ at $U'$ meets the tangent to the Hessian at $U$. This ensures that $U'$ is visible from $A$, and hence the same is true for all points on the arc $B_1 U'$ of $C_2$. The proof is valid also when $A$ lies on the arc $B_3Q_2$. \end{proof} \end{prop} Crucial for the next proof will be the fact noted before that for any $U$ on the arc $Q_1B_3$ of the Hessian, and $W$ on the tangent line through $U$, not only does the conic $W\cdot D^2 =0$ always intersect $C_2$ at $U' = \alpha (U)$, but also when $W\ne U$, the conic is non-singular at $U'$ and the tangent line to the conic at $U'$ does not depend on the choice of $W$. However a simple-minded argument just involving consideration of tangent lines to the conics at points $U_i' \in C_2$ may be seen not to work, since using our explicit description of these tangent lines from earlier in the Section, two different such tangent lines to the conics at distinct points $U_i'$ may well have a point of intersection within $V_2$ say, even though we can prove that the corresponding components are disjoint. The other ingredient that we shall need in the proof below is the explicit description of the two lines when the conic is singular. \begin{prop} Let $E_1$, $E_2$ be real classes with $E_i^3 \le 0$ and index $(1,q_i)$, where $q_i \le 2$, for $i=1,2$, and suppose there are components $Q(i)$ of $P^\circ \cap \{G_{E_i}>0\}$ for $i=1,2$ whose intersection is non-empty; then some non-trivial open arc in $C_2$ is in the boundaries of both $-Q(1)$ and $-Q(2)$. 
\begin{proof} We prove the contrapositive: if there is no arc of $C_2$ as described, then $Q(1) \cap Q(2)$ is empty. Suppose $Q(1)$ is a component of $P^\circ \cap \{D\ : \ E_1 \cdot D^2 >0\} $ and $Q(2)$ is a component of $P^\circ \cap \{ D \ :\ E_2\cdot D^2 >0 \}$. Corresponding to these components, we have components of $C_2 \cap \{E_1 \cdot D^2 >0 \}$ and $C_2 \cap \{E_2 \cdot D^2 >0\}$, open arcs $\Gamma _1$ and $\Gamma_2$ in $C_2$, where we assume that $\Gamma_1 \cap \Gamma_2$ is empty. We deduce that at least one component has corresponding arc $\Gamma _i$ in $C_2$ not containing $R$. We comment that if one of these arcs has $R$ in its closure but not in its interior, then this is the case when $E_i$ is a positive multiple of $\pm \tilde B_3$ and the conic $E_i \cdot D^2 =0$ consists of a line pair with singularity at $R$, one line of which is tangent to $C_2$ there and one line of which is the line of symmetry $y=x$. Thus the corresponding $Q(i)$ lies on one side of the plane $\Lambda$ in ${\bf R}^3$ determined by this line of symmetry. The arc corresponding to the other component cannot by our assumption then contain $R$; of course it may be that the other component corresponds to taking a negative multiple of $\tilde B_3$, in which case the result is obvious. Otherwise, the argument given in the first basic case below implies that the other component is contained in the complementary half-space, and the result follows. This enables us to assume that both $E_1$ and $E_2$ lie strictly above the plane $z=0$, and we can let $A_1$ and $A_2$ denote the corresponding points in the affine plane. We can then reduce to considering two basic cases: when neither $\Gamma _i$ contains $R$, and when one doesn't and one does. If $R \not\in \Gamma _i$, we may assume by the above comment that it is not in the closure. Using symmetry, the above (contrapositive) assertion is proved in these cases by the arguments below. 
We shall without further reference repeatedly use the facts detailed in the above Summary. We first deal with the case of $Q(1)$ corresponding to an unbounded component of $V_2 \cap \{ G_{A _1} > 0 \}$ with the corresponding arc $\Gamma _1 \subset C_2$ containing $B_1$ but not $R$ (corresponding to a point $A_1$ in \it Regions \rm 1 or 2), and a component $Q(2)$ corresponding to an unbounded component of $V_2 \cap \{ G_{A_2} > 0\}$ with the corresponding arc $\Gamma_2 \subset C_2$ containing $B_2$ but not $R$ (corresponding to a point $A_2$ in \it Regions $1'$ \rm or $2'$). We consider the affine picture; if $\bar A_1$ denotes the point of the arc $Q_1 B_3$ of the Hessian vertically below $A_1$, we have two lines defined by $\bar A_1 \cdot D^2 = 0$, with one line joining $\alpha(\bar A_1)$ to the point $\beta (\bar A_1)$ (below the midpoint) on $C_1$ where the tangent contains $\bar A_1$, and this corresponds to a plane in ${\bf R}^3$. Recalling that we defined $\tilde B_1 = (0,1,0)$, on the corresponding half-space containing $-\tilde B_1$ we have $\bar A_1 \cdot D^2 >0$. Since $\tilde B_1 \cdot D^2 <0$ at all points $D$ of $P^\circ$, the component $Q(1)$ is contained in this half-space. A similar statement holds for the component $Q(2)$; we take $\bar A_2$ to be the point on the arc $B_3 Q_2$ of the Hessian horizontally to the left of $A_2$; one of the two lines given by $\bar A_2 \cdot D^2 =0$ (namely the one joining $A_2'$ to the appropriate point of $C_1$, this point being above the midpoint) corresponds to a half-space in ${\bf R}^3$ containing $(-1,0,0)$, and $Q(2)$ is contained in this half-space. We note that the point of intersection of the two affine lines under consideration is a point of the affine plane not in $V_1 \cup V_2$. Therefore the planes we have constructed via $\bar A_1$ and $\bar A_2$ meet in a line disjoint from $P^\circ$, from which it follows that $Q(1) \cap Q(2)$ is empty. 
In fact, since the above two lines do not intersect the line of symmetry $y=x$ inside $V_1\cup V_2$, the two components lie on opposite sides of the plane of symmetry $\Lambda$ defined above. The second possibility to consider is when $Q(1)$ corresponds to a component of $V_2 \cap \{ G_{A_1} > 0 \}$, with an associated open arc $\Gamma_1$ in $C_2$ containing $B_1$ but not $R$ (corresponding to a point $A_1$ in \it Regions \rm 1 or 2), and $Q(2)$ corresponds either to a component of $V_2 \cap \{ G_{A_2} > 0 \}$, with an associated open arc $\Gamma_2$ in $C_2$ which is bounded and therefore contains $R$ (corresponding to $A_2$ lying in \it Regions \rm 2, 3 or $2'$), or an unbounded component of $V_2 \cap \{ G_{A_2} > 0 \}$ with an associated arc $\Gamma_2$ containing both $B_2$ and $R$ (corresponding to a point $A_2$ in \it Region \rm $4'$). Such components $Q(1)$ and $Q(2)$ will be described as Type I and II components respectively. Suppose the component of Type I has $\Gamma _1 = B_1U_1'$ in $C_2$, and so $A_1$ lies on the tangent line to the Hessian at $U_1 = \alpha (U_1')$, above (and to the left of) $U_1$; suppose the other (Type II) component has arc $\Gamma _2$ with endpoint $U_2'$ in the arc $U_1'R$, and so $A_2$ lies on the tangent line to the Hessian at $U_2 = \alpha (U_2')$, below (and to the right of) $U_2$. Then either $U_1' = U_2' = U'$, say, or $\partial (-Q(1)) \cap C_2$ and $\partial (-Q(2)) \cap C_2$ are disjoint. In the special case when $U_1' = U_2' =U'$, there is a common tangent line $L$ to the conics $A_i \cdot D^2 =0$ at $U'$. The points of $-Q(1)\cap V_2$ all lie above $L$ and those of $Q(1)\cap V_1$ lie below $L$. A similar statement holds for $Q(2)$, replacing above/below by below/above. By continuity, we saw in the above Summary that these statements remain true if one or both of the $A_i = U_i$. Then $L$ corresponds to a plane in ${\bf R}^3$, with the $Q(i)$ on opposite sides of this plane, and in particular $Q(1)\cap Q(2)$ is empty. 
\begin{figure} \centering \includegraphics[width=12cm]{Figure2.png} \caption{Second possibility in proof of Proposition 2.2} \end{figure} In the general case, where the closures of $\Gamma_1$ and $\Gamma_2$ are disjoint, we have corresponding points under the Steinian involution $U_1$ and $U_2$ as shown in the schematic diagram in Figure 2, with $A_1$ on the tangent line $L_1$ at $U_1$ above (and to the left of) $U_1$ and $A_2$ on the tangent line $L_2$ at $U_2$ below (and to the right of) $U_2$. When $L_1$ and $L_2$ intersect at a point $B$ in the segment $U_1A_1$ as shown in Figure 2, the point $A$ is defined to be the point on $L_2$ vertically below $A_1$. In order not to complicate the diagram, we have not included the arc $Q_1B_3$ of the Hessian, which is tangent to $L_1$ at $U_1$ and to $L_2$ at $U_2$, but we note in this case that both the possibilities of $A$ in the arc $U_1 U_2$ (shown in the diagram) and of $U_2$ in the arc $U_1 A$ (when the $x$-coordinate of $A_1$ is less than that of $U_2$) may occur. Also the point of intersection $B$ of $L_1$ and $L_2$ may lie on the segment $U_2A_2$ of $L_2$ (which is certainly the case for instance when $A_2$ lies in \it Regions \rm 3, $2'$ or $4'$), or $A_2$ may be contained in the segment of $L_2$ with endpoints $U_2$ and $B$ (which happens only when $A_2$ lies in \it Region \rm 2). We prove the result first for the configuration of $A_1$ and $A_2$ shown in Figure 2, and then observe how the proof gets modified for the other configurations. The smaller subdiagram in Figure 2 illustrates the relevant regions in $V_2$. We let $L$ denote the common tangent at $U_2'$ to the affine conics $W\cdot D^2 =0$ for $W\ne U_2$ on the tangent line $L_2$ to the Hessian at $U_2$. 
Thus whatever the location of $A_2$ below (and to the right of) $U_2$, the convex component of $V_2\cap \{ A_2 \cdot D^2 >0\}$ we are interested in lies below $L$, illustrated in the smaller subdiagram of Figure 2 (with `below' replaced by `above' for $Q(2)\cap V_1$). Recall that $A$ is the point of $L_2$ lying vertically below $A_1$, as shown. Since $\tilde B_1\cdot D^2 <0$ on $P^\circ$, the components of $P^\circ \cap \{ A_1 \cdot D^2 >0 \}$ are strictly contained in those of $P^\circ \cap \{ A \cdot D^2 >0 \}$. We note that the line $L$ only intersects the conic $A\cdot D^2 =0$ at the one point, namely the point of tangency $U_2'$. Moreover $V_2\cap \{ A \cdot D^2 >0\}$ has two components, one below $L$ and one strictly above $L$ --- it is this latter component which contains $B_1$ and hence contains the relevant component of $V_2\cap \{ A_1 \cdot D^2 >0\}$. As argued before, there is then a plane in ${\bf R}^3$ corresponding to the line $L$; the component of $P^\circ \cap \{ A_1 \cdot D^2 >0 \}$ we are interested in therefore lies in the associated open half-space containing $-\tilde B_1$, and hence is disjoint from the given component of $P^\circ \cap \{ A_2 \cdot D^2 >0 \}$ (which lies in the complementary half-space). The same proof however holds equally well if $A_1$ has smaller $x$-coordinate than that of $U_2$, so that $A \in L_2$ lies to the left of $U_2$; in that case it is the component of $V_2 \cap \{ A\cdot D^2 >0 \}$ passing through $U_2'$, with tangent $L$ there, which lies above $L$ and which therefore contains the relevant component of $V_2 \cap \{ A_1 \cdot D^2 >0 \}$; so the two components still lie in complementary half-spaces with respect to the plane determined by the line $L$. Suppose now that $A_1 \in L_1$ lies between $U_1$ and the point of intersection $B$ of $L_1$ and $L_2$. Note that $L_2$ intersects the Hessian curve again on the branch $B_3B_1$. 
We let $M$ denote the common tangent at $U_1'$ to the affine conics $W\cdot D^2 =0$ for $W\ne U_1$ on the tangent line $L_1$ to the Hessian at $U_1$. If $A_2$ lies to the right of (and below) the point $B$, then it lies to the left of (and above) this third intersection point of $L_2$ with the Hessian. When $A_2$ lies in \it Regions \rm 2, 3 or $4'$, an entirely symmetric argument to that used above, by projecting $A_2$ horizontally to the left onto a point $A \in L_1$ (at which the Hessian is also positive) and using the fact that the tangent to the conic $W\cdot D^2 =0$ at $U_1'$ for points $W\ne U_1$ is $M$, shows that the two components in question are still disjoint. In fact the argument still works for $A_2$ in \it Region $2'$ provided \rm that the $y$-coordinate of the point of intersection of $L_1$ with the branch $B_1B_3$ of the Hessian is not greater than the $y$-coordinate of $A_2$. We recall that when $A_2$ is in \it Region $2'$\rm, we need only consider $Q(2)$ corresponding to the bounded component, since the other component has been dealt with in the second paragraph of this proof. In the case when the $y$-coordinate of the point of intersection of $L_1$ with the branch $B_1B_3$ of the Hessian is strictly greater than the $y$-coordinate of $A_2$, we have to modify the argument, as the Hessian at the point $A\in L_1$ will then be negative. In this case, let $Z_1$ denote the point where $L_1$ meets the given branch of the Hessian. The first subcase is when $Z_1$ lies on the open arc $Q_2B_3$; then $Z_1\cdot D^2 =0$ consists of two lines meeting at the point $Z_1' = \alpha (Z_1)$ of the arc $RB_2$, the upper (bounded) one joining $Z_1'$ to $U_1'$, which we previously identified to be $M$, and the lower (unbounded) one joining $Z_1'$ to a point of $C_1$; then $V_2 \cap \{ Z_1\cdot D^2 >0\} $ consists of two components, with the bounded one (below the line $M$) containing $R$ in its boundary and the unbounded one containing $B_2$ in its boundary. 
We now move the point $Z$ along the arc $Q_2B_3$ from $Z_1$ towards $B_3$ until we reach the point $Z_2$ horizontally to the left of $A_2$. As we saw above in our discussion of \it Region \rm 2, the bounded component of $V_2 \cap \{ Z\cdot D^2 >0 \}$ gets smaller, and then passing from $Z_2$ to $A_2$ shrinks the bounded component further. Putting all this together, we deduce that the bounded component of $V_2 \cap \{ A_2 \cdot D^2 >0 \}$ lies below the line $M$ and hence is disjoint from the component corresponding to $Q(1)$, which lies above $M$ (with boundary tangent to $M$ at $U_1'$). For the second subcase, we have that $Z_1$ is on the open arc $Q_2B_1$ of the Hessian. The line pair $Z_1\cdot D^2 =0$ has a singularity on the branch $B_2Q_1$ of the Hessian and only intersects the projectivised boundary of $P$ in two points, with one line being disjoint from $V_1\cup V_2$ (and therefore not relevant for our purposes) and the other being the line we previously identified as $M$; hence the unique component of $V_2 \cap \{ Z_1 \cdot D^2 >0\}$ consists of the points of $V_2$ lying below $M$. We now consider the point $Z_3$ of the arc $B_3Q_2$ horizontally to the right of $Z_1$. Thus the conic $Z_3 \cdot D^2 =0$ has singularity on the arc $RB_2$ of $C_2$, and the two components of $V_2 \cap \{ Z_3 \cdot D^2 >0 \}$ are both below the line $M$. This is true in particular for the bounded component, and arguing as in the previous subcase shows that it is true also for the bounded component of $V_2 \cap \{ A_2 \cdot D^2 >0\} $. Thus again we deduce that $Q(1)$ and $Q(2)$ are disjoint. We are therefore left with the case where $A_1 \in L_1$ lies to the right of (and below) the intersection point $B$ of $L_1$ and $L_2$, and $A_2 \in L_2$ lies to the left of (and above) this point. 
We let $\bar B$ denote the vertical projection of $B$ onto the Hessian, with $\bar A_1$, $\bar A_2$ already defined similarly earlier in the proof; so in the case under consideration $\bar B$ is between $\bar A_1$ and $\bar A_2$, where $\bar A_1$ lies between $Q_1$ and $\bar B$. Here the conics $\bar A_2 \cdot D^2 =0$, $\bar A_1 \cdot D^2 =0$ and $\bar B \cdot D^2 =0$ are line pairs, where the lines have been explicitly described before in the description of what happens for points $A$ in \it Region \rm 2, in particular those on the arc $Q_1B_3$. With these descriptions (and notation), we have that $\Omega _1 (\bar A_1) \subset \Omega _1 (\bar B)$ and $\Omega _2 (\bar A_2 ) \subset \Omega_2 (\bar B)$. Since the two open sets $\Omega _1 (\bar B)$ and $\Omega _2 (\bar B)$ corresponding to $G_{\bar B} >0$ are disjoint (as the two lines meet only at the point $\alpha (\bar B )$), we deduce that the convex subcone of $P^\circ$ (containing $-\tilde B_1$ in its boundary) on which $G_{\bar A_1} > 0$ and the relevant convex subcone of $P^\circ$ on which $G_{\bar A_2} > 0$ are disjoint. As moving from $\bar A_i$ to $A_i$ (for $i = 1,2$) only shrinks the components, we deduce the required disjointness for the two components under consideration. \end{proof} \end{prop} When $X$ contains only finitely many rigid non-movable surfaces, the previous result suffices to prove Proposition 1.8. In general, we shall need the following limiting argument. \begin{cor} Suppose that the real elliptic curve $F=0$ has two connected components; then the statement of Proposition 1.8 holds. \begin{figure} \centering \includegraphics[width=12cm]{Figure3.png} \caption{Diagram for last part of the proof of Corollary 2.3} \end{figure} \begin{proof} For each component $Q(i)$ (with associated class $E_i$) we have an open arc $\Gamma _i \subset C_2$. The corollary is clearly true if each $\Gamma _i$ contains $R$. 
Without loss of generality, we may assume that $Q(1)$ has $\Gamma _1$ unbounded and not containing $R$, and furthermore by symmetry we may assume that $B_1 \in \bar\Gamma_1$. We now extend our previous terminology from the proof of Proposition 2.2: we say that a $Q(i)$ is of Type I if $B_1 \in \bar \Gamma _i$ but $R\not\in \Gamma _i$; here we are now allowing for the possibility that a Type I component $Q(i)$ corresponds to $\tilde B_3$. As before, in the second possibility in the proof of Proposition 2.2, we say that a $Q(i)$ is of Type II if $\Gamma _i$ is either bounded (and hence contains $R$), or unbounded containing both $R$ and $B_2$. If there were a component, say $Q(2)$, which was not of Type I or Type II, then it follows from Proposition 2.2 that $Q(1)\cap Q(2)$ is empty, contrary to our assumption about $Q$. Thus every $Q(i)$ is either Type I or Type II. We now show that they cannot all be of Type I. If they were, then we can assume without loss of generality that none of the $E_i$ are positive multiples of $\tilde B_3$ and thus each $E_i$ will give rise to a corresponding point $A_i$ in the affine plane. To each $Q(i)$ we have an arc $B_1U_i'$ in $C_2$; unless some subsequence of the $U_i'$ tends to $B_1$, we have that $\bigcap _{i\ge 1} \Gamma _i$ is an arc of the form $B_1U$ in $C_2$ and the result is proved. Without loss of generality therefore, we assume that the sequence $U_i'$ tends to $B_1$. To each $U_i'$, we have a corresponding point $U_i = \alpha (U_i')$ in the arc $B_3Q_1$ of the Hessian, with the $U_i$ tending to $Q_1$, and so the $A_i$ defining $Q(i)$ lies on the tangent line to the Hessian at $U_i$, above (and to the left of) $U_i$. Moreover for each $i$, we have a common tangent line $M_i$ to the conic $A\cdot D^2 =0$ at $U_i'$ for $A \ne U_i$ on the tangent line at $U_i$. 
Corresponding to $M_i$, there is a plane in ${\bf R}^3$, and the component $Q(i)$ lies in the associated open half-space that also contains $- \tilde B_1$; as usual we note that this is also true when $A_i = U_i$. We let $\tilde Q(i) \supset Q(i)$ denote the open subcone of $P^\circ$ given by all the points in $P^\circ$ on the same side of the plane corresponding to $M_i$ as $-\tilde B_1$. If $U_i \to Q_1$, then the tangent lines $M_i$ tend to the line $M$ corresponding to $Q_1$, which may be checked to be the tangent line to $C_1$ at $B_1$, namely the asymptote given affinely by $x =-1/(k-1)$; it intersects the projectivised boundary of $P$ just at $B_1$. This implies that $\bigcap _{i\ge 1} \tilde Q(i)$ is empty, and hence so too is $Q = \bigcap _{i\ge 1} Q(i)$, contrary to assumption. Hence there must be at least one component of each type. We put an ordering on the points of $C_2$ by specifying that $S \le S'$ if $S$ is in the (closed) arc $B_1 S'$. Let $S_1 \in C_2$ denote the infimum of the right-hand ends of the arcs $\Gamma _i = B_1U_i'$ corresponding to Type I components, and let $S_2 \in C_2$ denote the supremum of the left-hand ends of arcs in $C_2$ corresponding to Type II components. If $S_2 < S_1$, then the required arc of $C_2$ is $S_2S_1$. If $S_2 > S_1$, then we can find a component of Type I, say $Q(1)$, and a component of Type II, say $Q(2)$, with $\Gamma _1 \cap \Gamma_2$ empty. From Proposition 2.2, we then have that $Q(1) \cap Q(2)$ is empty, therefore contradicting the assumption that $Q$ has non-empty interior. Finally, we need to exclude the remaining possibility, that for any pair of a Type I component $Q(1)$ and a Type II component $Q(2)$ amongst the given $Q(i)$, there is an arc $U_2' U_1'$ of $C_2$ in the boundary of the intersection, but the intersection of such arcs is a single point $U' = \alpha (U)$ on $C_2$. 
Given such a pair $Q(1)$ and $Q(2)$, we have a schematic diagram as in Figure 3, with the smaller subdiagram illustrating the regions of $V_2$ corresponding to $-Q(1)$ and $-Q(2)$. For $i=1, 2$, we saw that for any $A_i$ on the tangent to the Hessian at $U_i$, the conic $A_i \cdot D^2 =0$ has a zero at $U_i' = \alpha (U_i) \in C_2$ and the tangent $T_i$ to the conic at $U_i'$ is independent of the choice of $A_i \ne U_i$. It follows that all the points of $V_2$ which are negative multiples of elements in $Q(1)$, respectively $Q(2)$, lie above $T_1$, respectively below $T_2$, whilst the points of $V_1$ which are positive multiples of elements in $Q(1)$, respectively $Q(2)$, lie below $T_1$, respectively above $T_2$. This ensures that $Q(1)\cap Q(2)$ is contained in the subcone of $P^\circ$ lying between the planes corresponding to $T_1$ and $T_2$. By the continuity argument from the Summary, this continues to be true even if one or both of the $A_i = U_i$. If now $L$ is the common tangent line to the conics $A\cdot D^2 =0$, where $A\ne U$ lies on the tangent to the Hessian at the point $U = \alpha (U')$ defined above, then we can find a sequence of Type I components amongst the $Q(i)$ for which the corresponding tangent lines $T_1$ tend to $L$, and similarly a sequence of Type II components amongst the $Q(i)$ for which the corresponding tangent lines $T_2$ tend to $L$. From this we deduce that the points of $Q = \bigcap _{i\ge 1} Q(i)$ must lie in the plane in ${\bf R}^3$ corresponding to $L$, and so $Q$ would have empty interior, contrary to assumption. \end{proof} \end{cor} Therefore in the case when the real elliptic curve has two components, we have completed (via Proposition 1.7) the proof of Theorem 1.6, and hence we have proved (via Corollary 1.5) the relevant part of our Main Theorem. 
\section{Hybrid components when elliptic curve has one real component} We now wish to study the case when the elliptic curve has only one real component, and so the Hessian, assumed smooth, has two components. Our objective will be to prove the analogous results to Proposition 2.1, Proposition 2.2 and Corollary 2.3 in this case, and in this way complete (via Proposition 1.7) the proof of Theorem 1.6, and hence prove the relevant part of our Main Theorem. The three hybrid components of the positive index cone are as described in Section 1. Recall that there are two special values for $k<1$ where changes occur, namely $k =0$ and $k=-2$. Away from these two values, we wish to describe the Steinian map $\alpha$. If as before the inflexion points of the cubic (and hence of the Hessian) are denoted $B_1, B_2, B_3$, then the tangents there to the cubic $F$ (which we saw are just the asymptotes to the three affine branches) will be tangent to the Hessian at three distinct points $Q_1, Q_2, Q_3$. Having chosen one of the $B_i$ as the zero of the group law, the corresponding point $Q_i$ where the tangent to $F$ at $B_i$ is tangent to the Hessian is just one of the 2-torsion points of the Hessian. Moreover the second polar of $F$ with respect to each of these three points $B_i$ will be the tangent to the Hessian at the corresponding point $Q_i$. As we saw in Section 4 of \cite{WilPic3}, it is easiest to understand what is going on dynamically. For $k>1$, we found a precise description of $\alpha$, where the tangent to $F$ at each $B_i$ is tangent to an affine branch of the unique connected component of $H$. 
If we consider the corresponding points of the upper half-sphere in $S^2$, we note that as $k\to 1$, the affine branches of both the real curves given by $F$ and $H$ tend to arcs of the equator $z=0$ between representatives of the relevant inflexion points, whilst the bounded component shrinks to a point, so that for $k=1$ both $F$ and $H$ vanish on the equator plus an isolated point on the upper half-sphere corresponding to the centroid $({1\over 3}: {1\over 3}:1)$ of the triangle of reference. Deforming away from $k=1$ towards zero, the singular point then expands to become the bounded component of the Hessian, and the arcs on the equator that were limits as $k\to 1^+$ of the affine branches of $F$ deform to affine branches of $H$, and the arcs that were limits as $k\to 1^+$ of the affine branches of $H$ deform to affine branches of $F$. Recall here that $H_k = -54k^2 F_{k'}$ from Section 1, and so one does expect the regions of the affine plane occupied by the unbounded affine branches of $F$ and $H$ to switch over. In particular, for $0<k<1$, the tangents to the cubic at each inflexion point are tangents to the unbounded affine branches of $H$, similar therefore to the case $k>1$. Thus in this case, the Steinian map $\alpha$ sends each inflexion point $B_i$ to a point on the unbounded component of $H$, and hence gives an involution on both connected components of $H$ individually. The next change occurs at $k=0$, where the bounded component and the three affine branches of $H$ just tend to the three real lines determined by the triangle of reference. To see what happens to the Steinian map, it is probably easiest to look at the value $k= -2$; here the Hessian is just the line at infinity together with the isolated point $({1\over 3}: {1\over 3} :1)$ and all three asymptotes of the cubic pass through this point. 
As one deforms in either direction away from $k=-2$, this point expands to give the bounded component of the Hessian and each asymptote of $F$ will now be tangent to the bounded component of the Hessian, which by continuity will also be the case for all $k<-2$ and $-2 < k <0$. Thus for $k < -2$ and $-2 < k < 0$, the Steinian map $\alpha$ interchanges the two components of $H$. The bounded component will be contained in (and tangent to) the asymptotic triangle given by the lines $x ={1\over {1-k}}$, $y = {1\over {1-k}}$ and $x + y = {{k}\over {k-1}}$. For $k >-2$, the asymptotic triangle will be given by the inequalities $x \le{1\over {1-k}}$, $y \le {1\over {1-k}}$ and $x + y \ge {{k}\over {k-1}}$, whilst for $k<-2$, it will be given by $x \ge {1\over {1-k}}$, $y \ge {1\over {1-k}}$ and $x + y \le {{k}\over {k-1}}$. What is occurring here is that for each value of $k' > 1$, there are three possible values of $k$ for which $H_k$ is a multiple of $F_{k'}$, one with $k<-2$, one with $-2 < k < 0$ and one with $0< k <1$. If we choose an inflexion point $B_3$ say, the tangent to $F_k$ at $B_3$ will be tangent at one of the three 2-torsion points of $H_k$ and hence $F_{k'}$, the one on the unbounded component if $0< k <1$, and the ones on the bounded component in the other two cases. For any given point $A= (a,b,1)$ in the affine plane $z=1$ at which the index is $(1,q)$ with $q\le 2$, we let $G_A$ denote the homogeneous quadratic given by $A\cdot D^2$: explicitly in coordinates $$-ax^2 -b y^2 - (1-a-b)(z-x-y)^2 +kay(z-x-y) +kbx(z-x-y) + k(1-a-b)xy, $$ and we wish to understand how $G_A =0$ intersects not only the affine branches of $F$ but also the unbounded affine branches of the Hessian. We will therefore need to understand this in all the three cases detailed above, as the Steinian map will be different in the three cases. 
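Writing $w = z-x-y$ and $c = 1-a-b$ (so that $a+b+c=1$), the displayed quadratic collects into a more symmetric shape; this is purely a regrouping of the formula just given, introducing no new assumptions:

```latex
$$
G_A \;=\; -\bigl(a\,x^2 + b\,y^2 + c\,w^2\bigr) \;+\; k\bigl(a\,yw + b\,xw + c\,xy\bigr),
\qquad w = z-x-y,\quad c = 1-a-b.
$$
```

In this form the invariance of $G_A$ under simultaneously permuting $(x,y,w)$ and $(a,b,c)$ is immediately visible.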
In all three cases, we let $C_1$ denote the unbounded branch of $F=0$ which lies in the region $x>0, \ y>0$ and $x+y >1$, and $C_2$ the unbounded branch of the Hessian lying in the sector $x<0,\ y<0$. There is then a hybrid component $P^\circ$ of the positive index cone whose boundary consists of the positive cone on $C_1$ together with the negative cone on $C_2$, the two parts meeting along rays generated by $(0,1,0)$ and $(1,0,0)$, and without loss of generality we may assume that this is the hybrid component which we study. For the case when the cubic is $F_k$ with $0<k<1$, the proofs of the results analogous to those in Section 2 are essentially identical to the arguments given there for $k>1$, modulo the fact that the regions of the affine plane occupied respectively by the unbounded branches of $F$ and $H$ have switched over. These arguments therefore prove Proposition 1.8 in the case $0<k<1$. For $k>1$, the bounded component of $F$ essentially played no role in the proof of Proposition 1.8, whilst for $0 < k <1$, it is the bounded component of $H$ that essentially plays no role in the proof. We shall therefore not give any further details in this case, and from now on concentrate on the other two cases. Even in these other two cases, although the Steinian involution looks very different, the arguments we use are very similar to those in Section 2, with no novel ideas being introduced, and so we shall not need to explain the proofs in the same detail. \vspace{0.5cm} \it For the rest of this Section, we shall assume that $-2 < k <0$, and in the final Section we shall study the remaining case when $k<-2$. We shall in this Section prove Proposition 1.8 when $-2<k<0$.\vspace{0.5cm} \rm For $-2 < k <0$, we have a (schematic) picture as in Figure 4, showing all the affine branches of the Hessian and the branch $C_1$ of the cubic. 
We note that the tangent line to $F$ at $B_3$ is the line $x+y = e_2 = {k\over {k-1}} = {{-k}\over {1-k}}$, and this is tangent to the bounded component of the Hessian at the point $Q_3 = (e_2/2 , e_2/2)$. If we take $B_3$ as the zero of the group law, this is just a 2-torsion point of $H$. Moreover the bounded component of the Hessian in this case lies above (and touches) the asymptote ${x+y }= {k\over {k-1}}$. Also playing a role will be the other two lines through $B_3$ that are tangent to the Hessian and yield 2-torsion points of the Hessian; these have the form $x+y = e_3$, corresponding to the other tangent to the bounded component, and $x+y = e_1$, corresponding to the tangent to the unbounded component of $H$. Explicitly, if the Hessian is (up to a multiple) the Hessian of $F_{k_i}$, where $0 < k_1 <1$, $-2 < k_2 = k <0$ and $k_3 < -2$, then $e_i = k_i/(k_i -1)$; moreover $e_1 < e_2 < e_3$. \begin{figure} \centering \includegraphics[width=12cm]{Figure4.png} \caption{Schematic picture when $-2 <k<0$} \end{figure} We have the Steinian involution on the Hessian which in the case being studied interchanges the two components; explicitly we let $Q_1$ be the point on the bounded component of the Hessian whose tangent is also the tangent to $F$ at $B_1$, and $Q_2, Q_3$ defined similarly with respect to the inflexion points $B_2, B_3$. The latter we saw above to be the point $(e_2/2 , e_2/2)$, with tangent line $x+y =e_2$, where $e_2 = {k\over {k-1}}$. Under the Steinian map, the branch $C_2$ (going from $B_1$ to $B_2$) of the Hessian corresponds to the `upper' arc (i.e. not containing $Q_3$) on the bounded component going from $Q_1$ to $Q_2$. We now argue similarly to Section 2. 
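The ordering $e_1 < e_2 < e_3$ asserted above can be verified directly from the stated ranges of the $k_i$ alone, without using the pencil relation between them: the function $t \mapsto t/(t-1) = 1 + 1/(t-1)$ is strictly decreasing on $(-\infty,1)$, and so

```latex
$$
e_1 = \frac{k_1}{k_1-1} \in (-\infty,\,0), \qquad
e_2 = \frac{k_2}{k_2-1} \in \Bigl(0,\,\tfrac{2}{3}\Bigr), \qquad
e_3 = \frac{k_3}{k_3-1} \in \Bigl(\tfrac{2}{3},\,1\Bigr),
$$
```

for $0 < k_1 < 1$, $-2 < k_2 = k < 0$ and $k_3 < -2$ respectively, whence $e_1 < e_2 < e_3$.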
For a given class $A = (a,b,1)\not \in P^\circ$, it is clear how many times the conic $G_A = A\cdot D^2 =0$ cuts $C_1$ --- it will cut it twice if $a\ge {1\over {1-k}}$ and $b\ge {1\over {1-k}}$ (since there will be two tangents to $C_1$, including maybe at points at infinity), it will not cut $C_1$ at all if $a< {1\over {1-k}}$ and $b< {1\over {1-k}}$, and will cut it once in the other cases. We now ask how many times and where the conic cuts $C_2$. To answer this question, we are looking for the tangents from $A$ to the upper arc (i.e. not containing $Q_3$) from $Q_1$ to $Q_2$ on the bounded component of the Hessian. Here the answer is twice (with multiplicity) if $A$ is in the region bounded by $a = {1\over {1-k}}$, $b ={1\over {1-k}}$ and by the specified arc $Q_1 Q_2$, none for any other points with $a< {1\over {1-k}}$ and $b< {1\over {1-k}}$ or with $a> {1\over {1-k}}$ and $b> {1\over {1-k}}$, and precisely once otherwise as there is exactly one tangent to the given arc $Q_1 Q_2$. Moreover, the midpoint of the arc is the point $R = (e_3/2 , e_3/2)$, and under the Steinian map this point corresponds to the midpoint $R' = \alpha (R) = (e_1/2 , e_1/2)$ of $C_2$, namely the intersection of $C_2$ with $x=y$. So being more precise still, $G_A =0$ will cut $C_2$ in the part given by $y\le x$ if and only if $a \le {1\over {1-k}}$, $b \ge {1\over {1-k}}$ and $a+b \ge e_3$ \it or \rm $a \ge {1\over {1-k}}$, $b \le {1\over {1-k}}$ and $a+b \le e_3$. One difference from the first two cases considered is that there, the points $E = \pm \tilde B_3$ were isolated, in that they did not represent the point at infinity on any tangent to the affine Hessian curve. In the remaining two cases, this is no longer true, in that they represent the point at infinity of the tangent line at $R$, the midpoint of the arc $Q_2Q_1$. 
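For reference, the counting for $C_1$ just described may be summarised in display form (this merely restates the case analysis above):

```latex
\#\bigl( \{ G_A = 0 \} \cap C_1 \bigr) \;=\;
\begin{cases}
2 & \text{if } a \ge \frac{1}{1-k} \text{ and } b \ge \frac{1}{1-k},\\[3pt]
0 & \text{if } a < \frac{1}{1-k} \text{ and } b < \frac{1}{1-k},\\[3pt]
1 & \text{otherwise,}
\end{cases}
```

with the corresponding counting on $C_2$ governed in addition by the arc $Q_1Q_2$ and the threshold $a+b = e_3$, as detailed above.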
It is still the case that $E\cdot D^2 =0$ is a line pair, with singularity at $R' = \alpha (R) \in C_2$, one line being tangent there and the other being the line of symmetry; in the two cases now being studied, the points $A$ on the affine tangent line give rise to conics $A\cdot D^2 =0$ containing $R'$, and for $A\ne R$ the conic is smooth at $R'$ with tangent line $L$ the line of symmetry. The special cases $E = \pm \tilde B_3$ are not then as special as previously, in that now they represent a limit of points on the affine tangent line at $R$. The proofs can therefore invoke continuity when dealing with these cases. Let us consider a point $U$ on the open arc $RQ_1$; the tangent at $U$ will intersect the Hessian again on the branch $B_1B_3$. We assume that $A$ lies on this tangent line and the index at $A$ is $(1,q)$ with $q\le 2$. Thus $A$ lies below (and to the right of) the intersection point with the Hessian, and for any such $A$, the conic $A\cdot D^2 =0$ passes through the point $U' = \alpha (U)$ on the arc $B_1R'$ of $C_2$. We have in this case that $Q = P^\circ \cap \{ G_A >0 \}$ is always connected. At the third intersection point $Z$ of this tangent line with the Hessian, the line pair $Z\cdot D^2 =0$ has one line not intersecting the projectivised boundary of $P$, with the other line being the common tangent line $L$ at $U'$ of the conics $A\cdot D^2 =0$ for $A\ne U$ on the tangent line to the Hessian at $U$ (this characterisation of $L$ having been explained early in Section 2). Moreover in this case $ P^\circ \cap \{ G_Z >0 \}$ is given by intersecting with a half-space corresponding to the plane determined by the line $L$. If $A= (a,b,1)$ with $b \ge {1\over {1-k}}$, then the closure of $Q$ contains $(1,0,0)$, and in the case of strict inequality it contains not only $(1,0,0)$ but also points of $C_1$. When $b <{1\over {1-k}}$, we have that $(1,0,0)$ is no longer in the closure of $Q$.
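The line pairs occurring here (and throughout) reflect a classical fact about polar conics, which it may help to recall: in coordinates, the polar conic of a class $A$ with respect to the cubic $F$ has discriminant equal, up to a scalar, to the Hessian evaluated at $A$, so that $A\cdot D^2 =0$ degenerates to a line pair precisely when $A$ lies on the Hessian:

```latex
A\cdot D^2 \;=\; \tfrac{1}{6}\sum_{j,k} \bigl(\partial_j \partial_k F\bigr)(A)\, D_j D_k ,
\qquad
\det\Bigl( \bigl(\partial_j \partial_k F\bigr)(A) \Bigr) \;=\; H(A)
\quad \text{(up to a scalar multiple).}
```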
When $A$ is between the point with $b ={1\over {1-k}}$ and $U$, there is a second point $U_1$ on the arc $Q_1Q_2$ of the Hessian for which $A$ also lies on the tangent line at $U_1$; thus there is a point $U_1'$ in the open arc $B_2 U'$ at which $G_A$ vanishes. In this case $\partial (-Q) \cap C_2$ consists of the finite arc $U_1'U'$. For $A= U$, the conic is a pair of \it complex \rm lines with singular point $U'$, and $G_A <0$ on $P^\circ$. As $A = (a,b,1)$ passes to the other side of $U$ but still with $a < {1\over {1-k}}$, we again obtain a second zero $U_2'$ of $G_A$ on $C_2$, this time in the arc $U'B_1$, corresponding to the other point $U_2$ in the arc $U Q_1$ where the tangent contains $A$. When $a = {1\over {1-k}}$, then the closure of $Q$ contains $\tilde B_1 = (0,1,0)$, and then for all further points $A$ we have that $Q$ contains points of $C_1$ in addition to $(0,1,0)$ in its boundary. We now use the same methods as in the previous section to prove analogous results to Proposition 2.1, Proposition 2.2 and Corollary 2.3. \begin{prop} Suppose the elliptic curve has one component, with invariant $-2 < k<0$. Given a component $Q$ of $P^\circ \cap \{ E\cdot D^2 >0 \}$, associated to a real class $E$ with $E^3 \le 0$ and index $(1,q)$ where $q\le 2$, all points $S$ of $C_2$ whose negative multiples are on the boundary of $Q$ are visible (with respect to the cone on $V_2$) from $E$. \begin{proof} Since visibility is a closed condition, we may assume that $E$ represents a point $A=(a,b,1)$ in the affine plane. If $A$ lies on the tangent to the Hessian at $R$, strictly above (and to the left of) the point $R$, then it is the upper half $B_1 R'$ of $C_2$ on which $G_A$ is positive, and all these points are visible from $A$ (it is only when $E = \tilde B_3$ that this is precisely the set of points visible from $E$). 
Otherwise, we may assume without loss of generality that $A$ lies on the tangent line at a point $U$ in the open arc $RQ_2$, since the case when $U = Q_2$ then follows by continuity. We noted above that for such points $A$, the tangent line at $U$ intersects the Hessian at a point on the branch $B_1B_3$. We note that $U' = \alpha (U)$ lies on the arc $B_2R'$ of $C_2$. By the classical result used repeatedly in Section 2 (\cite{Dolg}, Proposition 3.2.7), the point where the tangents to the Hessian at $U$ and $U'$ meet is precisely the third point of intersection of the tangent at $U$ (or $U'$) with the Hessian. Given that the Hessian at $A$ is non-negative, $A$ lies below (and to the right of) this point of intersection, from which it follows that $U'$ is visible from $A$. As the point $A= (a,b,1)$ moves down the tangent line from the intersection point of the tangent line with the branch $B_1B_3$ towards $U$, the part of $C_2$ on which $G_A >0$ is initially just the arc $B_2 U'$, but a larger arc of points is visible from $A$, which will eventually be all of $C_2$. When we reach points $A$ with $b < {1\over {1-k}}$, the part of $C_2$ on which $G_A >0$ is then a bounded arc $U_1' U'$, whilst all of $C_2$ is visible from $A$. The case $A=U$ is not relevant here, and as $A$ passes to the other side of $U$, the part of $C_2$ on which $G_A >0$ is then initially a bounded arc $U'U_2'$, and when $a \ge {1\over {1-k}}$ this is all of $U' B_1$. With $A$ moving further downwards on the tangent line, initially all of $C_2$ is visible from $A$, and the set of visible points always remains an arc containing $R'B_1$; hence all points of the arc $U'B_1$ are visible from $A$, which verifies the claimed result. \end{proof} \end{prop} \begin{prop} Suppose the elliptic curve has one component, with invariant $-2 < k<0$.
Let $E_1$, $E_2$ be real classes (for $i=1,2$) at which $E_i^3 \le 0$ and the index is $(1,q_i)$ for $q_i \le 2$, and suppose there are components $Q(i)$ of $P^\circ \cap \{G_{E_i}>0\}$ for $i=1,2$ whose intersection is non-empty; then some non-trivial open arc in $C_2$ is in the boundaries of both $-Q(1)$ and $-Q(2)$. \begin{proof} As in the proof of Proposition 2.2, we show the contrapositive, and in particular assuming that the arcs on $C_2$ corresponding to the $Q(i)$ are disjoint, we show that $Q(1) \cap Q(2)$ is empty. Suppose $Q(1)$ is a component of $P^\circ \cap \{D\ : \ E_1 \cdot D^2 >0\} $ and $Q(2)$ is a component of $P^\circ \cap \{ D \ :\ E_2\cdot D^2 >0 \}$. Corresponding to these components, we have components of $C_2 \cap \{E_1 \cdot D^2 >0 \}$ and $C_2 \cap \{E_2 \cdot D^2 >0\}$, namely open arcs $\Gamma _1$ and $\Gamma_2$ in $C_2$. The assumption that $\Gamma _1$ and $\Gamma_2$ are disjoint means that for the appropriate endpoints $U_1'$ and $U_2'$ of $\Gamma _1$ and $\Gamma _2$ on $C_2$, we may without loss of generality assume that $\Gamma _1$ is a subarc of $B_2U_1'$ and that $\Gamma _2$ is a subarc of $U_2'B_1$ with $U_1'$ in the arc $B_2U_2'$. We then have corresponding points $U_1$, $U_2$ on the arc $Q_2 Q_1$ of the bounded component ($U_1$ in the arc $Q_2 U_2$). We shall also assume that the $E_i$ define points $A_i$ in the affine plane --- as we note below, the limit cases with $E_i $ a positive multiple of $\pm \tilde B_3$ (when the corresponding $U_i =R$) will follow by an essentially unchanged argument. Thus $Q(1)$ corresponds to a point $A_1$ on the tangent line $L_1$ at $U_1$, with $A_1$ strictly above (and to the left of) $U_1$, whilst $Q(2)$ corresponds to a point $A_2$ on the tangent line $L_2$ at $U_2$, with $A_2$ strictly below (and to the right of) $U_2$. With these conventions, we show that $Q(1)\cap Q(2)$ is empty.
Recall that for all $A\in L_1$ the conic $A\cdot D^2 =0$ passes through $U_1' \in C_2$, and if $A \ne U_1$ the conic is smooth there with tangent line $L$ independent of the choice of $A$. Moreover $-Q(1) \cap V_2$ lies above $L$ in $V_2$ (and $Q(1)\cap V_1$ lies below $L$ in $V_1$). We consider various possibilities for $U_1$ and $U_2$; the easy case is when $U_1 = U_2$, for then $-Q(2) \cap V_2$ lies below the line $L$ in $V_2$ (and $Q(2)\cap V_1$ lies above $L$ in $V_1$), and disjointness of the two components is clear. If both points $U_i$ lie on the arc $RQ_1$, then both $L_1$ and $L_2$ meet the Hessian again on the branch $B_1B_3$. We denote by $A$ the point on $L_1$ vertically above $A_2$, and hence below (and to the right of) $U_1$ on $L_1$. Our assumptions ensure that the Hessian is positive also at $ A$. If we consider the subcone of $P^\circ$ given by $A \cdot D^2 >0$, this (projectively) lies on the opposite side of $L$ to that given by $A_1 \cdot D^2 >0$; thus the subcones of $P^\circ$ given by $A_1 \cdot D^2 >0$ and $A \cdot D^2 >0$ lie on opposite sides of the plane in ${\bf R}^3$ corresponding to $L$, and hence in particular are disjoint. Since $\tilde B_1\cdot D^2 >0$ for all $D \in P^\circ$, we know that the open connected subcone of $P^\circ$ corresponding to $A_2$ is strictly smaller than the relevant subcone corresponding to $A$, and hence the result follows in this case. By symmetry, the result is also true if both $U_1$ and $U_2$ lie on the arc $Q_2R$. In the case where one of the $U_i =R$, without loss of generality $U_1 =R$ and $U_2$ is in the open arc $RQ_1$, the same proof works, even for the limit case when $E_1 = \tilde B_3$, where $Q(1)$ consists of the points of $P^\circ$ lying on the appropriate side of the plane of symmetry. We are left with the case when $U_1$ lies on the open arc $Q_2 R$ and $U_2$ lies on the open arc $RQ_1$. If the point $A$ defined above has the Hessian positive there, then the previous argument works.
If not, we let $Z_1$ denote the point where the line $L_1$ intersects the branch $B_2 B_3$ of the Hessian. The previous proof therefore works so long as the $x$-coordinate of $A_2$ is not greater than the $x$-coordinate of $Z_1$. Let us now consider an arbitrary point $Z$ on the branch $B_2B_3$ of the Hessian; it lies on the tangent at some (unique) point $U$ in the arc $Q_2R$ of the bounded component of the Hessian, and hence corresponds to a point $U'$ of $C_2$. As $Z$ moves from $B_2$ towards $B_3$, the point on $Q_2R$ moves from $Q_2$ towards $R$, and the corresponding point $U'$ on $C_2$ moves from $B_2$ towards $R$. With $Z$ we also associate a point on $C_1$ at which the tangent line to $C_1$ contains $Z$; as $Z$ moves from $B_2$ towards $B_3$, this point moves from $B_2$ towards the midpoint of $C_1$. To understand why these statements are true, the reader should consult Figure 4. The conic defined by $Z\cdot D^2 =0$ is a line pair with singularity on the branch $B_1B_2$ of the Hessian and $G_Z$ vanishes only twice on the projectivised boundary of $P$, namely at $U'$ and the point on $C_1$ where the tangent contains $Z$. Thus one line of the line pair $Z\cdot D^2 =0$ does not meet the projectivised boundary of $P$, and the other line $L_Z$ joins the above two given points. We shall be interested in the points of $V_1\cup V_2$ which lie below the line $L_Z$ in $V_2$ and above the line $L_Z$ in $V_1$, and the above analysis shows that the set of such points shrinks as we move from $B_2$ towards $B_3$, noting that two distinct such lines $L_Z$ meet at an affine point outside $V_1\cup V_2$. There is a plane in ${\bf R}^3$ corresponding to $L_Z$, and the half of $P^\circ$ in which we shall be interested (lying on one side of this plane) becomes smaller as $Z$ moves towards $B_3$. If now we take $Z=Z_1$, the intersection of $L_1$ with the branch $B_2B_3$ of the Hessian, we have already observed that $L_{Z_1} =L$.
To $L$ we have an associated plane in ${\bf R}^3$, with $Q(1)$ lying on one side of this plane and $P^\circ \cap \{ Z_1 \cdot D^2 >0 \}$ lying on the other. In the remaining case, we now let $Z_2$ denote the first point on the branch $B_2B_3$ of the Hessian vertically above $A_2$, a point on the arc between $Z_1$ and $B_3$. The above analysis shows that $P^\circ \cap \{ Z_2 \cdot D^2 >0 \}$ is a subcone of $P^\circ \cap \{ Z_1 \cdot D^2 >0 \}$. Since $\tilde B_1\cdot D^2 >0$ for all $D \in P^\circ$, we note that $Q(2)$ is contained in the subcone corresponding to $Z_2$. We deduce therefore that $Q(2)$ is on the opposite side to $Q(1)$ of the plane corresponding to $L$, and hence $Q(1)\cap Q(2)$ is empty as required. \end{proof} \end{prop} \begin{cor} Suppose that the real elliptic curve has $-2 < k <0$; then the statement of Proposition 1.8 holds. \begin{proof} The argument here may be reconstructed from the proof of Corollary 2.3, using the result we have just proved, and is left as an exercise for the reader. \end{proof} \end{cor} \section{The case of the elliptic curve having invariant $k<-2$} Let us now consider the remaining possibility with smooth Hessian, namely $k<-2$; here the bounded component of the elliptic curve is tangent to the asymptotic line $x+y = k/(k-1)$ but in this case lies below the line. We illustrate this situation schematically by Figure 5, where again we show the affine branches of the Hessian together with the affine curve $C_1$. As usual, we assume that the hybrid component $P^\circ$ under consideration has projectivised boundary corresponding to $C_1 \cup C_2$. If the index at a point $E \not\in P^\circ$ in the open upper half-space is $(1,q)$ with $q\le 2$, we let $A=(a,b,1)$ denote the point in the affine plane $z=1$ determined by $E$. Let $Q$ denote a connected component of the subcone of $P^\circ$ given by $E\cdot D^2 >0$; by Lemma 3.3 of \cite{WilBd}, this is a convex subcone of $P^\circ$.
We now list, as was done in Section 4 of \cite{WilPic3}, the points on $C_1$ and $C_2$ where the quadratic $G_A$ vanishes. \begin{figure} \centering \includegraphics[width=12cm]{Figure5.png} \caption{Schematic picture when $k<-2$} \end{figure} As before it is clear for $A = (a,b,1)$ how many times (and where) $G_A=0$ intersects $C_1$. It will intersect $C_1$ twice if $a \ge {1\over {1-k}}$ and $b \ge {1\over {1-k}}$, at no points if $a < {1\over {1-k}}$ and $b < {1\over {1-k}}$, and once otherwise. For $a < {1\over {1-k}}$, $b < {1\over {1-k}}$, there are no zeros of $G_A$ on $C_2$ either, and the previous argument shows that $G_A$ would be negative on all of $P$, since this is true for $A\in V_2$. Thus if $a \le {1\over {1-k}}$, $b \le {1\over {1-k}}$, then $G_A$ would be negative on $P^\circ$ and so the subcone defined by $G_A >0$ is empty. If for instance $a > {1\over {1-k}}$, $b = {1\over {1-k}}$ then $G_A$ has a zero on the affine branch $C_1$ and at the point $B_2$ at infinity, and the whole of $C_2$ is visible from $A$. We note that $G_A(B_1) >0$ if and only if $a > {1\over {1-k}}$ and $G_A(B_2) >0$ if and only if $b>{1\over {1-k}}$. An additional feature compared with the previous case is that $G_A$ can intersect the projectivised boundary of $P$ at two points on $C_1$ and two on $C_2$, and that will happen when $A$ is in the open region with boundary consisting of a segment of the line $x = {1\over {1-k}}$, a segment of the line $y = {1\over {1-k}}$, and the `lower' arc (i.e. not containing $Q_3$) of the bounded component of the Hessian between $Q_1$ and $Q_2$; as before $Q_i$ denotes the point on the bounded component of the Hessian where the asymptote to the cubic at $B_i$ is tangent. In contrast to the previous case, this time the arc of the Hessian between $Q_1$ and $Q_2$ lies in the quadrant $x \ge {1\over {1-k}}$, $y\ge {1\over {1-k}}$. 
We now need to understand what happens when $A$ lies on the tangent line at some point $U$ in the lower arc $Q_1Q_2$. We assume by symmetry that $U$ lies in the open arc $Q_1 R$, leaving it to the reader to check what happens when $U$ is $Q_1$ or $R$, including the limit case when $E$ is a positive multiple of $\pm \tilde B_3$. Under the given assumption, the tangent line meets the branch $B_1B_3$ of the Hessian. We note that $A = (a,b,1)$ is below (and to the right of) this intersection point, with $a < {1\over {1-k}}$. We know that $G_A$ vanishes at $U'$ and is positive at $B_2$, and there is a unique component of $P^\circ \cap \{ G_A > 0 \}$, whose closure contains $\tilde B_2$; the second point on the projectivised boundary of $P$ corresponds to the point on $C_1$ where the tangent line contains $A$. Thus the part of the boundary of $P^\circ \cap \{ G_A > 0 \}$ corresponding to points in $C_2$ is just the arc $U'B_2$, where as usual $U'=\alpha (U)$. When $a = {1\over {1-k}}$, we know that $G_A$ vanishes (twice) at $B_1$, and for $A$ between this point and $U$, there is a point $U_1$ on the arc $Q_1U$ whose tangent also contains $A$, and there are two components of $P^\circ \cap \{ G_A > 0 \}$, one of which has its boundary points corresponding in $C_2$ to the arc $B_2 U'$ and the other with boundary points corresponding in $C_2$ to the arc $U_1'B_1$. For $A= U$, we get two real lines meeting at $U'$, each line joining $U'$ to one of the two points of $C_1$ for which the tangent contains $U$. Moving now to $A$ lying below $U$ but with $b > {1\over {1-k}}$, we obtain a point $U_2$ in the arc $UQ_2$ where the tangent contains $A$, and we still have two components, but with the relevant arcs on $C_2$ now being $B_2 U_2'$ and $U' B_1$. 
By the time we reach the point with $b = {1\over {1-k}}$, the first of these components of $P^\circ \cap \{ G_A > 0 \}$ has shrunk to the empty set (with $G_A$ vanishing along the ray generated by $\tilde B_2$), and from then on we just have one component of the intersection, and the relevant arc in $C_2$ is now $U'B_1$. With this description in hand, the required proposition about visibility follows easily. \begin{prop} Suppose the elliptic curve has one component, with invariant $ k<-2$. Given a component $Q$ of $P^\circ \cap \{ E\cdot D^2 >0 \}$, associated to a real class $E$ with $E^3 \le 0$ and index $(1,q)$ where $q\le 2$, all points $S$ of $C_2$ whose negative multiples are on the boundary of $Q$ are visible (with respect to the cone on $V_2$) from $E$. \begin{proof} Since visibility is a closed property, we may assume that $E$ is represented by a point $A$ in the affine plane. We now just observe, in the various possibilities for $A$ lying on the tangent line at $U$, which points of $C_2$ are visible from $A$. By symmetry (and noting that visibility is a closed condition) we may assume that $U$ lies in the open arc $Q_1R$. By the classical result used in previous sections, the point of intersection of the tangent at $U$ with the branch $B_1B_3$ of the Hessian is also just the intersection with the tangent line to $C_2$ at $U'$. Thus the whole closed arc $B_2 U'$ is visible from $A$. As $A$ moves down the tangent line, the visible points from $A$ constitute a larger arc containing the arc $B_2U'$, until when we reach the point with $a = {1\over {1-k}}$, all of $C_2$ is visible from $A$. This then remains true until $A$ has $b < {1\over {1-k}}$, at which stage the visible points of $C_2$ form an arc containing $U'B_1$. Thus we have verified the Proposition in all cases. \end{proof} \end{prop} \begin{prop} Suppose the elliptic curve has one component, with invariant $ k<-2$. 
Let $E_1$, $E_2$ be real classes (for $i=1,2$) at which $E_i^3 \le 0$ and the index is $(1,q_i)$ for $q_i \le 2$, and suppose there are components $Q(i)$ of $P^\circ \cap \{G_{E_i}>0\}$ for $i=1,2$ whose intersection is non-empty; then some non-trivial open arc in $C_2$ is in the boundaries of both $-Q(1)$ and $-Q(2)$. \begin{proof} As before we show the contrapositive, and in particular assuming that the arcs on $C_2$ corresponding to the $Q(i)$ are disjoint, we show that $Q(1) \cap Q(2)$ is empty. Analogously to the proof of Proposition 3.2, we then have points $U_1$, $U_2$ on the lower arc $Q_1 Q_2$ of the bounded component (with $U_2$ in the arc $Q_1 U_1$), with corresponding images $U_1'$, $U_2'$ on $C_2$ (with $U_2'$ in the arc $U_1'B_1$), where without loss of generality $U_1'$ is an endpoint of the arc $-Q(1)\cap C_2$ and $U_2'$ an endpoint of the arc $-Q(2)\cap C_2$. As in the proof of Proposition 3.2, we shall assume that the $E_i$ define points $A_i$ in the affine plane, since as we note below the limit cases $E_i = \pm \tilde B_3$ follow by only a minimal change to the argument. Thus $Q(1)$ corresponds to a point $A_1$ on the tangent line $L_1$ at $U_1$, with $A_1$ above (and to the left of) $U_1$, whilst $Q(2)$ corresponds to a point $A_2$ on the tangent line $L_2$ at $U_2$, with $A_2$ below (and to the right of) $U_2$. Thus $Q(1)$ contains $\tilde B_2$ and $Q(2)$ contains $\tilde B_1$ in their boundaries. Under these conventions, we show that $Q(1)\cap Q(2)$ is empty. \begin{figure} \centering \includegraphics[width=11cm]{Figure6.png} \caption{Diagram for the proof of Proposition 4.2} \end{figure} Recall that for all $A\in L_1$ the conic $A\cdot D^2 =0$ passes through $U_1' \in C_2$, and if $A \ne U_1$ the conic is smooth there with tangent line $L$ independent of choice of $A$.
Moreover $-Q(1) \cap V_2$ corresponds to some $A_1$ above (and to the left of) $U_1$ on $L_1$, with $-Q(1)\cap V_2$ lying above $L$ in $V_2$ (and $Q(1)\cap V_1$ lying below $L$ in $V_1$); in the case where $V_2 \cap \{ A_1 \cdot D^2 >0 \}$ has two components, $Q(1)$ corresponds to the upper component, that is the one containing $B_2$ in its boundary, with $Q(1)$ containing $\tilde B_2$ in its boundary. A similar statement holds for $A \in L_2$, where there is a common tangent $M$ at $U_2'$ for all the conics $A\cdot D^2 =0$ where $A\ne U_2$ is in the tangent line at $U_2$. Moreover $-Q(2) \cap V_2$ corresponds to some $A_2$ below (and to the right of) $U_2$ on $L_2$, with $-Q(2) \cap V_2$ lying below $M$ in $V_2$; in the case where $V_2 \cap \{ A_2 \cdot D^2 >0 \}$ has two components, $Q(2)$ corresponds to the lower component, and in particular contains $\tilde B_1$ in its boundary. The proof is now analogous to those of Propositions 2.2 and 3.2. By symmetry, we may assume that $U_2$ lies on the arc $Q_1R$, and we denote by $B$ the intersection point of $L_1$ and $L_2$. Let us first consider the case when $A_1$ is not in the segment $U_1B$, with $A_2$ on $L_2$ below (and to the left of) $U_2$, as shown schematically in Figure 6, where the argument we give below does not care whether or not $A_2$ is in the segment $U_2B$. Again for clarity in the diagram, we have omitted the arc $Q_1Q_2$ of the Hessian, which is tangent to $L_1$ at $U_1$ and $L_2$ at $U_2$. The smaller subdiagram in Figure 6 again illustrates the picture in $V_2$. In this case we can for instance let $A$ be the point on $L_2$ horizontally to the right of $A_1$. We assume first that the Hessian at $A$ is non-negative. 
Since $\tilde B_2 \cdot D^2 >0$ on $P^\circ$, we have that $V_2 \cap \{ A_1 \cdot D^2 >0\}$, or the upper component of this (corresponding to $Q(1)$) if there are two components, is contained in the upper component of $V_2 \cap \{ A \cdot D^2 >0\}$, which in turn lies above the line $M$ in $V_2$, whilst $V_2 \cap \{ A_2 \cdot D^2 >0\}$, or the lower component of this if there are two components, lies below $M$. From this it follows that $Q(1)$ and $Q(2)$ are disjoint. We note that this argument remains valid in the limit case when $U_2 =R$ and $E_2 = -\tilde B_3$, when $Q(2)$ is just the intersection of $P^\circ$ with a half-space. If however the Hessian at $A$ is negative, we modify the argument in a similar way to the proof of Proposition 3.2; we let $Z_1$ denote the point where $L_2$ meets the branch $B_1B_3$ of the Hessian, and so the $y$-coordinate of $Z_1$ is not greater than the $y$-coordinate of $A_1$. We note that for $Z$ on the branch $B_1B_3$ of the Hessian, only one line (denoted $L_Z$) from the line pair $Z\cdot D^2 =0$ meets the projectivised boundary of $P$, and it meets it in two points, one being the image on $C_2$ under the Steinian involution of the point on the arc $Q_1R$ where the tangent line contains $Z$, and the other being the point on $C_1$ where the tangent line contains $Z$. We now argue as in the proof of Proposition 3.2; by consulting Figure 5, the reader will check that as $Z$ moves along the branch from $B_1$ towards $B_3$, the first of these points moves from $B_1$ towards $R'$ on $C_2$ and the second of them moves on $C_1$ from $B_1$ towards the midpoint of $C_1$. From this it follows that two such distinct lines will always meet at an affine point not in $V_1\cup V_2$, and that the regions in $V_2$ above $L_Z$ and in $V_1$ below $L_Z$ decrease as one moves $Z$ towards $B_3$. For $ Z= Z_1$, the by now familiar argument shows that $L_{Z_1} =M$. We now mimic the argument from Proposition 3.2. 
We let $Z_2$ denote the point on the branch $B_1B_3$ of the Hessian immediately to the right horizontally of $A_1$ --- recall that we have assumed that the $y$-coordinate of $Z_1$ is not greater than the $y$-coordinate of $A_1$. We have that the points of $V_2$ corresponding to $-Q(1)$ are contained in the region of $V_2$ above the line $L_{Z_2}$, which in turn as argued above is contained in the region of $V_2$ above the line $L_{Z_1} =M$. Similarly, the points of $V_1$ corresponding to $Q(1)$ are contained in the region of $V_1$ below the line $L_{Z_2}$, which in turn is contained in the region of $V_1$ below the line $L_{Z_1} =M$. Thus $Q(1)$ and $Q(2)$ lie on opposite sides of the plane corresponding to $M$ and hence are disjoint. By symmetry the result is also true when $A_2$ is not in the segment $U_2B$, with $A_1$ on $L_1$ above (and to the left of) $U_1$, when we can mimic the previous argument, for instance taking $A$ to be the point of $L_1$ vertically above $A_2$ and arguing in an analogous fashion to before. The final case that we need to consider is therefore when $A_1$ lies on the segment $BU_1$ of $L_1$ and $A_2$ lies on the segment $BU_2$ of $L_2$. Let $U$ be any point on the arc $Q_1Q_2$; then $U\cdot D^2 =0$ consists of a line pair with singularity at $U' = \alpha (U)$, each line joining $U'$ to one of the two points on $C_1$ where the tangent passes through $U$. In this way we have an upper and a lower line in $V_2$, and an upper and lower component of $V_2 \cap \{ U\cdot D^2 >0 \}$ (both unbounded). As $U$ moves along the arc towards $Q_2$, we check that the upper component in $V_2$ becomes smaller and the lower component bigger.
Letting $\bar A_2$, $\bar B$ and $\bar A_1$ in the arc $Q_1Q_2$ denote the horizontal projections of $A_2$, $B$ and $A_1$, we check (as in the proof of Proposition 2.2) that the lower component of $V_2 \cap \{ \bar A_2 \cdot D^2 >0 \}$ is contained in the lower component of $V_2 \cap \{ \bar B \cdot D^2 >0 \}$, whilst the upper component of $V_2 \cap \{ \bar A_1 \cdot D^2 >0 \}$ is contained in the upper component of $V_2 \cap \{ \bar B \cdot D^2 >0 \}$. Since the upper and lower components of $V_2 \cap \{ \bar B \cdot D^2 >0 \}$ are disjoint, and the relevant component of $V_2 \cap \{ A_i \cdot D^2 >0 \}$ is contained in the relevant component of $V_2 \cap \{\bar A_i \cdot D^2 >0 \}$ for $i=1,2$, we deduce that the relevant components of $V_2 \cap \{ A_1 \cdot D^2 >0 \}$ and $V_2 \cap \{ A_2 \cdot D^2 >0 \}$ are disjoint as claimed, as are the corresponding components in $V_1$ by a similar argument, thus verifying that the convex open cones $Q(1)$ and $Q(2)$ are indeed disjoint. \end{proof} \end{prop} \begin{cor} Suppose that the real elliptic curve has $k < -2$; then the statement of Proposition 1.8 holds. \begin{proof} The argument may again be reconstructed from the proof of Corollary 2.3, using the result we have just proved, and is left as an exercise for the reader. \end{proof} \end{cor} Therefore in the case when the real elliptic curve has one component, we have completed (via Proposition 1.7 and Corollaries 3.3 and 4.3) the proof of Theorem 1.6, and hence we have proved (via Proposition 1.2) the relevant part of our Main Theorem. \bibliographystyle{amsalpha}
\section{Introduction} The prediction of the trajectories followed by small spherical inclusions embedded in a given flow has been, and remains, a topic of intense interest. This problem indeed has obvious applications in many fields of physics, as well as in engineering science. In order to determine accurately these trajectories, a detailed understanding of the force acting on the particles is required. In the case where the non-linear convective terms, involved in the Navier-Stokes equations written in a frame of reference moving with the particle, can be totally neglected, the force acting on an isolated solid sphere immersed in a Newtonian fluid is provided by the well-known Basset-Boussinesq-Oseen (BBO) equation (Boussinesq 1885; Basset 1888). The various terms appearing in this equation are the drag, the added-mass force and the history force. An equivalent equation for fluid inclusions immersed in an unsteady uniform flow (in the absence of inertia effects) has been obtained by Gorodtsov (1975) (see also Yang \& Leal 1991; Galindo \& Gerbeth 1993). However, the creeping-flow assumption is no longer valid in many situations. For example, when an inclusion is released into a quiescent fluid, the unsteadiness of the velocity perturbation eventually vanishes while convective terms are no longer negligible far from the inclusion (Oseen's problem). The determination of the particle-induced flow therefore requires us to solve a steady equation, in which linearised convective terms are involved (see Proudman \& Pearson 1957). As a result, it is found that the drag on the sphere is enhanced. Another striking example is provided by the lift force which appears when a particle is embedded in a pure shear flow, owing to convective inertia effects (Saffman 1965, 1968; McLaughlin 1991; Asmolov \& McLaughlin 1999; Candelier \& Souhar 2007), as well as a shear-induced drag correction (see Harper \& Chang 1968; Miyazaki, Bedeaux \& Bonet Avalos 1995). 
Similarly, when particles are immersed in a solid-body rotation flow, convective inertia effects induce lift forces, as well as drag corrections in both the axial and the radial directions (Childress 1964; Herron, Davis \& Bretherton 1975; Gotoh 1990; Candelier, Angilella \& Souhar 2005; Candelier \& Angilella 2006; Candelier 2008). In order to determine the influence of inertia terms on the force acting on a particle, the method of matched asymptotic expansions is generally used, and more specifically, that devised by Childress (1964) (see also Saffman 1965) and generalized to fluid inclusions by Legendre \& Magnaudet (1997). In this paper, the classical method is first presented in a general way; an alternative matching procedure is then proposed, based on a series expansion of the far-field solution of the problem, performed in the sense of generalized functions. \section{Description of the classical method} Formally, in the problems mentioned in the introduction, the non-dimensional perturbed fluid motion equations, in which, in particular, lengths are scaled by the radius of the particle, can be written as follows \begin{eqnarray} \boldsymbol{\nabla} \cdot \vec{w} &=& 0 \:,\label{div}\\ - \boldsymbol{\nabla} p +\boldsymbol{\triangle} \vec{w} &=& \vec{q}_\epsilon\:, \label{eq_gene2}\\ \vec{w} = \vec{u}_r \mbox{ on } r=1\:, \quad &\mbox{and}& \quad \vec{w} \to 0 \mbox{ as } r \to \infty\:, \label{eq_gene3} \end{eqnarray} where $\vec{u}_r$ is the relative velocity of the particle (i.e. $\vec{u} - \vec{v}$, where $\vec{u}$ is the velocity of the particle, $\vec{v}$, that of the fluid, and where the rotation of the particle has been neglected). In this kind of problem, the creeping flow equations are perturbed by a term $\vec{q}_\epsilon$, which is such that \begin{equation} \lim_{\epsilon \to 0} \vec{q}_\epsilon = 0\:, \label{vanish} \end{equation} and whose analytical expression naturally depends on the case considered.
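One concrete instance, given here purely for illustration (and with the sign depending on the frame convention adopted for the relative velocity), is Oseen's problem of a particle moving steadily through an otherwise quiescent fluid: the perturbing term is then the convective derivative linearised about the uniform stream $\vec{U} = -\vec{u}_r$ seen in the particle frame,

```latex
\vec{q}_\epsilon \;=\; \epsilon \,\bigl(\vec{U}\cdot\boldsymbol{\nabla}\bigr)\vec{w}\:,
\qquad \vec{U} = -\,\vec{u}_r\:, \qquad \epsilon = \mathit{Re}\:,
```

which indeed satisfies (\ref{vanish}) as $\epsilon \to 0$.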
In a region close to the particle, i.e. characterized by $r \sim 1$ , and which is usually called the inner zone, the solution of equations (\ref{div}) to (\ref{eq_gene3}) is expanded formally as \begin{equation} \vec{w}= \vec{w}_0 + \epsilon \vec{w}_1 + O(\epsilon^2) \quad \mbox{and} \quad p = p_0 + \epsilon \:p_1 + O(\epsilon^2) \label{expansion_inner} \end{equation} where the zeroth-order velocity and pressure satisfy the creeping flow equations (i.e. $\vec{q}_\epsilon = 0$). As shown by Saffman (1965), the inner problem is generally not regular since the boundary condition at infinity cannot be satisfied by the term $\vec{w}_1$ which is found to be of order $O(r)$ for large $r$ in the great majority of cases. As a consequence, the first-order correction terms have to be matched to an outer solution, i.e. a solution which is valid far from the particle (i.e. $r \gg 1$). The far-field solution is obtained by considering that in this region, the inclusion is seen by the fluid as a point force, modelled by a Dirac source term whose strength corresponds to that of a Stokes drag (with a minus sign), and which leads us to \begin{equation} - \boldsymbol{\nabla} p + \boldsymbol{\triangle} \vec{w} + 6\:\pi\: \vec{u}_r \delta(\vec{x}) = \vec{q}_\epsilon \:. \label{eq_far_field} \end{equation} In terms of stretched coordinates $\tilde{\vec{x}} = \epsilon \vec{x}$, and after noticing that $\delta(\vec{x}) = \epsilon^3 \delta(\tilde{\vec{x}})$, equation (\ref{eq_far_field}) can be re-written as follows \begin{equation} - \tilde{\boldsymbol{\nabla}} p' + \tilde{\boldsymbol{\triangle} } \vec{w}' + 6\:\pi\: \vec{u}_r \epsilon\:\delta(\tilde{\vec{x}})= \vec{q}'_{1} \:, \label{eq_far_field2} \end{equation} where the fluid velocity, the pressure and the perturbation term are now denoted with a prime, in order to distinguish them from the variables written in normal coordinates. 
Let us denote by $\epsilon \: \vec{w}_{\mbox{\scriptsize out}}'$ the (normalized) solution of equation (\ref{eq_far_field2}). For later convenience, let us also introduce $\epsilon \: \vec{w}_{St}'$, the well-known Stokeslet solution (here written in terms of stretched coordinates) which simply corresponds to the solution of (\ref{eq_far_field2}) in the particular case $\vec{q}'_{1}=0$. The last step of the method consists in matching the inner and the outer solutions in a region where both solutions are supposed to be valid. Such a region is actually defined by $r \sim 1/\epsilon$, where we should have \begin{equation} \vec{w}_0 + \epsilon \vec{w}_1 \sim \epsilon \vec{w}_{\mbox{\scriptsize out}}'\:. \label{raccordement} \end{equation} In view of the fact that in the matching region $r \gg 1$, the velocity $\vec{w}_0$ naturally tends to the Stokeslet solution, i.e. $\vec{w}_0 \to \epsilon \vec{w}'_{St}$, it can be inferred, after simplifying the parameter $\epsilon$, that $$ \vec{w}_1(\vec{x}) \sim \vec{w}_{\mbox{\scriptsize out}}'(\tilde{\vec{x}}) - \vec{w}'_{St}(\tilde{\vec{x}})\:. $$ After rewriting $\vec{w}_1(\vec{x})$ and $\vec{w}_{\mbox{\scriptsize out}}'(\tilde{\vec{x}}) - \vec{w}'_{St}(\tilde{\vec{x}})$ in terms of an intermediate variable (see for instance Hinch 1991) $$ \boldsymbol{\eta} = \frac{\vec{\tilde{x}}}{\epsilon^{1-\alpha}} = \epsilon^{\alpha} \vec{x}\:, \quad \mbox{where} \quad 0 < \alpha < 1\:, $$ and by taking the limit when $\epsilon \to 0$ for a fixed value of $\boldsymbol{\eta}$, we are led to the following matching condition \begin{equation} \lim_{|\vec{x}| \to \infty} \vec{w}_1(\vec{x}) = \lim_{|\tilde{\vec{x}}|\to 0} (\vec{w}_{\mbox{\scriptsize out}}'(\tilde{\vec{x}}) - \vec{w}'_{St}(\tilde{\vec{x}}))\:. \label{mathching} \end{equation} Note that in general, the solution of (\ref{eq_far_field2}) is obtained by using Fourier transforms, in order to deal with the Dirac-source term involved in it. 
Thus, by defining the Fourier transform as follows \begin{equation} \vec{\hat{f}}(\vec{k}) = \int_{\mathbb{R}^3} \vec{f}(\tilde{\vec{x}}) \exp(-i\:\vec{k}\cdot \tilde{\vec{x}} ) \mbox{d} \tilde{\vec{x}} \:, \label{def_Fourier} \end{equation} where $i$ is the imaginary unit (i.e. $i^2=-1$), and the inverse Fourier transform by $$ \vec{f}(\tilde{\vec{x}}) = \frac{1}{(2\pi)^3} \int_{\mathbb{R}^3}\vec{\hat{f}}(\vec{k}) \exp(i\:\vec{k}\cdot \tilde{\vec{x}} )\mbox{d} \vec{k} \:, $$ equation (\ref{mathching}) generally reads as \begin{equation} \lim_{|\vec{x}| \to \infty} \vec{w}_1(\vec{x}) = \frac{1}{(2\pi)^3} \int_{\mathbb{R}^3} ( \hat{\vec{w}}_{\mbox{\scriptsize out}}'(\vec{k}) - \hat{\vec{w}}'_{St}(\vec{k}))\:\mbox{d} \vec{k} \:. \label{matching2} \end{equation} Physically, (\ref{matching2}) means that in the matching region, the perturbation term $\vec{w}_1$ matches a uniform velocity stream. In practice, this uniform flow is linked to the relative velocity of the particle by a linear relation of the form $$ \frac{1}{(2\pi)^3} \int_{\mathbb{R}^3} ( \hat{\vec{w}}_{\mbox{\scriptsize out}}'(\vec{k}) - \hat{\vec{w}}'_{St}(\vec{k}))\:\mbox{d} \vec{k}= -\tens{M} \cdot \vec{u}_r $$ where $\tens{M}$ defines a mobility-like tensor. In return, this outer uniform flow exerts on the particle an additional Stokes drag, so that the force acting on the particle finally reads as $$ \vec{f}_1 = -6 \pi (\tens{I} + \epsilon \tens{M}) \cdot \vec{u}_r\:, $$ where $\tens{I}$ is the identity tensor. As discussed in the introduction of the paper, the method of matched asymptotic expansions has been applied successfully in many physical situations. However, in some cases, the integral involved in the matching procedure (\ref{matching2}) may be difficult to evaluate, owing to the complexity of the analytical expression of its integrand. 
In what follows, an alternative matching procedure is proposed, which can help to perform the matching between the inner and the outer expansions in the cases where the classical procedure fails. \section{Series expansions of generalized functions} The alternative matching procedure is based on series expansions of generalized functions. Let us then first recall basic definitions and fundamental results concerning distributions. Suppose that $f(\vec{x}):\:\mathbb{R}^3 \to \mathbb{R}$ is a locally integrable function, and let $\phi(\vec{x}):\:\mathbb{R}^3 \to \mathbb{R}$ be a test function in the Schwartz space $\mathcal{S}(\mathbb{R}^3)$ (function space of all infinitely differentiable functions that are rapidly decreasing at infinity). The tempered distribution $\mathcal{T}_f$ which corresponds to the function $f$ is defined by $$ \left< \mathcal{T}_f \:,\phi\right> = \int_{\mathbb{R}^3} f(\vec{x}) \phi(\vec{x}) \mbox{d} \vec{x} \:. $$ According to this definition, several other mathematical tools can be defined, as for instance, the partial derivative of a distribution with respect to a spatial coordinate, say $x_i$, \begin{equation} \left< \frac{\partial \mathcal{T}_f}{\partial x_i} \:,\phi\right> \triangleq - \left< \mathcal{T}_f\:,\: \frac{\partial \phi}{\partial x_i}\right> \:. \end{equation} Also, the Fourier transform of a distribution, that we shall denote by $\mathcal{F}(\mathcal{T}_f) = \hat{\mathcal{T}}_{f}$, can be defined by \begin{equation} \left< \hat{\mathcal{T}}_{f}\:,\:\phi \right> \triangleq \left< \mathcal{T}_f\:,\:\hat{\phi} \right> \:. \end{equation} Note that the symbol $ \triangleq$ used in these last two equations stands for 'equal by definition'. \vspace{11pt} Now in the case where the function $f(\vec{x})$ is integrable over the whole space, i.e. 
\begin{equation} \int_{\mathbb{R}^3} f(\vec{x}) \mbox{d} \vec{x} = C\:, \quad \mbox{where $C$ is a constant} \label{int_f} \end{equation} a fundamental result is that, in the sense of generalized functions, \begin{equation} \lim_{\epsilon \to 0} \frac{1}{\epsilon^3} \:f\left(\frac{\vec{x}}{\epsilon}\right) \to C\:\delta(\vec{x})\:. \label{prop1} \end{equation} The demonstration of this result is rather simple and it is instructive to examine it (see for instance Boccara 1997). Briefly, to demonstrate (\ref{prop1}), stretched coordinates $\tilde{\vec{x}} = \epsilon \vec{x}$ are generally introduced, which allows the function $$ g_\epsilon(\tilde{\vec{x}} ) = \frac{1}{\epsilon^3} \:f\left(\frac{\vec{\tilde{x}}}{\epsilon}\right)\:, $$ to be defined. By a change of variable in (\ref{int_f}), one can readily verify that, \begin{equation} \int_{\mathbb{R}^3} g_{\epsilon} \:\mbox{d} \tilde{\vec{x}} = C\:, \end{equation} so that the effect of the distribution $\mathcal{T}_{g_\epsilon}$ on a test function $\phi$ can be arbitrarily re-written as $$ \left< \mathcal{T}_{g_\epsilon} \:,\phi\right> = \int_{\mathbb{R}^3} g_\epsilon(\tilde{\vec{x}}) \Big(\phi(\tilde{\vec{x}})- \phi(0) \Big) \mbox{d} \tilde{\vec{x}} + C \phi(0) \:. $$ Taking the limit when $\epsilon \to 0$, and re-introducing unstretched coordinates therefore yields $$ \lim_{\epsilon \to 0} \left< \mathcal{T}_{g_\epsilon} \:,\phi\right> = \lim_{\epsilon \to 0} \int_{\mathbb{R}^3} f(\vec{x}) \Big(\phi(\tilde{\vec{x}})- \phi(0) \Big) \mbox{d} \vec{x} + C \phi(0) \:. $$ For a fixed value of $\vec{x}$, and because $f$ is integrable, the integral term vanishes when $\epsilon \to 0$, so that $$ \lim_{\epsilon \to 0} \left< \mathcal{T}_{g_\epsilon} \:,\phi\right> = C \phi(0)\quad \Longleftrightarrow \quad \lim_{\epsilon \to 0} \mathcal{T}_{g_\epsilon} = C \delta\:. 
$$ To take the analysis one step further, let us now consider a generalized function, say $\mathcal{T}$, which is defined by \begin{equation} \mathcal{T} = \lim_{\epsilon\to 0} \frac{1}{\epsilon^{3+k}} \:f\left(\frac{\vec{x}}{\epsilon}\right) \:, \label{def_T} \end{equation} where $k$ is an arbitrary integer. Let us further consider three other integers, $\ell$, $m$ and $n$, which are such that $\ell+m+n=k$. According to (\ref{prop1}), it can be inferred that, for any combination of $\ell$, $m$ and $n$, we should have \begin{equation} x_1^\ell \:x_2^m \:x_3^n \mathcal{T} = C \: \delta(\vec{x}) \:, \quad \mbox{where} \quad C = \int_{\mathbb{R}^3} x_1^\ell\: x_2^m\: \:x_3^n\:f(\vec{x}) \mbox{d} \vec{x} \:. \label{eq_distribution} \end{equation} In terms of generalized functions, solving equation (\ref{eq_distribution}) leads us to \begin{equation} \mathcal{T} = C \frac{(-1)^k}{\ell ! m! n!} \frac{\partial^k \delta}{\partial x_1^\ell \partial x_2^m \partial x_3^n}\:. \end{equation} Note that any derivative of the delta distribution of lower order than $k$ is also a solution of (\ref{eq_distribution}); however, such terms are not compatible with (\ref{def_T}) so that they should be zero. 
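As a side note, the basic limit (\ref{prop1}) is easily illustrated numerically. The following sketch is our own illustration, not part of the original analysis: it takes the (assumed) Gaussian $f(\vec{x}) = \mbox{e}^{-r^2}$, for which $C = \pi^{3/2}$, and evaluates $\left< \mathcal{T}_{g_\epsilon}\:,\phi\right>$ for a radial test function after the same change of variable $u = r/\epsilon$ used in the demonstration above.

```python
import numpy as np
from scipy.integrate import quad

# f(x) = exp(-r^2), so C = integral of f over R^3 = pi^(3/2)
C = np.pi**1.5

def action(eps, phi):
    """<T_{g_eps}, phi> for a radial test function phi, computed with
    the change of variable u = r/eps used in the proof of (prop1)."""
    integrand = lambda u: 4.0 * np.pi * u**2 * np.exp(-u**2) * phi(eps * u)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

phi = lambda r: np.exp(-r**2 / 2.0)   # rapidly decreasing, phi(0) = 1
for eps in (0.5, 0.1, 0.01):
    print(eps, action(eps, phi))      # tends to C*phi(0) = pi^(3/2) as eps -> 0
```

As $\epsilon$ decreases, the printed values increase monotonically toward $C\,\phi(0) \approx 5.568$, in line with (\ref{prop1}).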
By taking into account every possible combination, we are finally led to the following results: \begin{itemize} \item in the case $k=1$ $$ \lim_{\epsilon\to 0} \frac{1}{\epsilon^{4}} \:f\left(\frac{\vec{x}}{\epsilon}\right) = \sum_{i=1}^3 C_i \frac{\partial \delta}{\partial x_i} \quad \mbox{with} \quad C_i = -\int_{\mathbb{R}^3} x_i\:f(\vec{x}) \mbox{d} \vec{x} \:, $$ \item in the case $k=2$ $$ \lim_{\epsilon\to 0} \frac{1}{\epsilon^{5}} \:f\left(\frac{\vec{x}}{\epsilon}\right) = \sum_{i=1}^3 \sum_{j\geq i}^{3} C_{ij} \frac{\partial^2 \delta}{\partial x_i \partial x_j} \:, $$ $$ \mbox{with } C_{ij} = \int_{\mathbb{R}^3} x_i x_j\:f(\vec{x}) \mbox{d} \vec{x} \mbox{ if $i \neq j$, and else } C_{ii} = \frac{1}{2} \int_{\mathbb{R}^3} x_i^2 \:f(\vec{x}) \mbox{d} \vec{x} \:, $$ \item and so on $$ \lim_{\epsilon \to 0} \frac{1}{\epsilon^{3+k}} \:f\left(\frac{\vec{x}}{\epsilon}\right) \to \underbrace{\sum_{i_1=1}^{3}\sum_{i_2\geq i_1}^{3} \ldots \sum_{i_k\geq i_{k-1}}^{3}}_{k \:\mbox{\scriptsize times}} C_{i_1i_2 \ldots i_k}\:\frac{\partial^k \delta(\vec{x})}{\partial x_{i_1}\partial x_{i_2} \ldots \partial x_{i_k}} \:, $$ $$ \mbox{with } C_{i_1i_2 \ldots i_k}= \frac{(-1)^k}{\ell ! m! n!} \int_{\mathbb{R}^3} x_1^\ell\: x_2^m \:x_3^n\:f(\vec{x}) \mbox{d} \vec{x} \:, $$ where $\ell$, $m$ and $n$ are determined by the number of occurrences, respectively, of the indices $1$, $2$ and $3$ in the sequence $i_1i_2 \ldots i_k$. \end{itemize} \subsection{A simple example} In order to illustrate how such results can be used to approximate a perturbed form of the Green function of a differential equation, let us first consider a very simple example based on a steady Schr{\"o}dinger-like equation of the form: $$ \triangle f - \epsilon^2\:f= \delta \:. 
$$ To determine the Green function of this equation, the (spatial) Fourier transform defined in (\ref{def_Fourier}) can be used, which yields \begin{equation} \hat{f} = -\frac{1}{k^2+\epsilon^2} \:, \label{pot_pertube} \end{equation} and then, calculating the inverse Fourier transform of $\hat{f}$ leads us to $$ \mathcal{F}^{-1}(\hat{f}) = f(\vec{x}) = -\frac{\exp(-\epsilon \:r) }{4\pi r} \quad \mbox{where} \quad r=|\vec{x}|\:. $$ This solution can be expanded, with respect to $\epsilon$, as follows \begin{equation} -\frac{\exp(-\epsilon \:r) }{4\pi r} = -\frac{1}{4\pi\:r} + \frac{\epsilon}{4\pi} - \frac{r \epsilon^2}{8\pi} + \frac{r^2 \epsilon ^3}{24 \pi} + O(\epsilon^4)\:. \label{expansion} \end{equation} Obviously, such a series could not have been retrieved directly from a naive expansion of (\ref{pot_pertube}) (i.e. performed in the sense of classical functions) since only even powers of $\epsilon$ would be involved in it. In contrast, (\ref{expansion}) can be retrieved if the series is performed in the sense of generalized functions. Similarly to classical functions, such a series reads as $$ -\frac{1}{k^2+\epsilon^2} = \hat{\mathcal{T}}_0 + \epsilon \: \hat{\mathcal{T}}_1 + \epsilon^2 \: \hat{\mathcal{T}}_2 + \ldots + \epsilon^n \: \hat{\mathcal{T}}_n + O(\epsilon^{n+1}) $$ where \begin{equation} \hat{\mathcal{T}}_n = \frac{1}{n! }\lim_{\epsilon \to 0} \frac{\mbox{d}^n }{\mbox{d} \epsilon^n} \left( -\frac{1}{k^2+\epsilon^2}\right) \:. \label{eq_det_tn} \end{equation} In our simple example, by calculating the first term, we are led to $$ \hat{\mathcal{T}}_0 = \lim_{\epsilon \to 0} -\frac{1}{k^2+\epsilon^2} = -\frac{1}{k^2} \quad \mbox{and} \quad \mathcal{F}^{-1}(\hat{\mathcal{T}}_0) = -\frac{1}{4\pi\:r}\:. $$ For the second term, $$ \hat{\mathcal{T}}_1 = \lim_{\epsilon \to 0} \frac{\mbox{d}}{\mbox{d} \epsilon} \left( -\frac{1}{k^2+\epsilon^2} \right) = \lim_{\epsilon \to 0} \frac{1}{\epsilon^3} \frac{2}{((k/\epsilon)^2 + 1)^2}\:. 
$$ According to the fact that $$ \int_{\mathbb{R}^3} \frac{2}{(k^2 + 1)^2} \mbox{d}\vec{k} = 2\pi^2\:, $$ this yields $$ \hat{\mathcal{T}}_1 = 2 \pi^2 \delta(\vec{k}) \quad \mbox{and} \quad \mathcal{F}^{-1}(\hat{\mathcal{T}}_1) = \frac{1}{4\pi}\:, $$ since $\mathcal{F}^{-1}(\delta) = 1/(8\pi^3)$. By pursuing the expansion in a similar way, we are finally led to $$ \hat{\mathcal{T}}_2 = \frac{1}{k^4} \quad \mbox{and} \quad \mathcal{F}^{-1}(\hat{\mathcal{T}}_2) = - \frac{r}{8\pi}\:, $$ $$ \hat{\mathcal{T}}_3 = - \frac{\pi^2}{3} \Delta_k \delta(\vec{k}) \quad \mbox{and} \quad \mathcal{F}^{-1}(\hat{\mathcal{T}}_3) = \frac{r^2}{24\pi}\:, \quad \mbox{etc} \ldots $$ so that (\ref{expansion}) is indeed recovered. \section{The alternative matching procedure} According to the results presented in the previous section, an alternative matching procedure can now be proposed. Indeed, by considering the fact that the parameter $\epsilon$ is small compared to unity, the Fourier transform of the solution of the (unstretched) outer equation (\ref{eq_far_field}) can be expanded in terms of generalized functions $$ \vec{\hat{w}} = \hat{\mathcal{T}}_0 + \epsilon \hat{\mathcal{T}}_1+ \epsilon^2 \: \hat{\mathcal{T}}_2 + \ldots + \epsilon^n \: \hat{\mathcal{T}}_n $$ where, as in the previous case, the generalized functions $\hat{\mathcal{T}}_n$ are determined by \begin{equation} \hat{\mathcal{T}}_n = \frac{1}{n! }\lim_{\epsilon \to 0} \frac{\mbox{d}^n\vec{\hat{w}} }{\mbox{d} \epsilon^n} \:. \label{eq_det_tn_w} \end{equation} According to (\ref{vanish}), it is readily found that $\hat{\mathcal{T}}_0$ simply corresponds to the Fourier transform of a Stokeslet, so that, in the matching zone, its inverse (spatial) Fourier transform naturally matches the leading order term $\vec{w}_0$ of the inner expansion (\ref{expansion_inner}). 
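The expansion (\ref{expansion}) and the normalisation constant $2\pi^2$ entering $\hat{\mathcal{T}}_1$ in the simple example of the previous section can be checked symbolically. The following sketch is an illustration added here, not part of the original derivation; it uses SymPy to verify both identities.

```python
import sympy as sp

eps, r, k = sp.symbols('epsilon r k', positive=True)

# Series (expansion) of the Green function -exp(-eps*r)/(4*pi*r) in eps
g = -sp.exp(-eps * r) / (4 * sp.pi * r)
series = sp.series(g, eps, 0, 4).removeO().expand()
expected = (-1 / (4 * sp.pi * r) + eps / (4 * sp.pi)
            - r * eps**2 / (8 * sp.pi) + r**2 * eps**3 / (24 * sp.pi))
assert sp.simplify(series - expected) == 0

# Normalisation used for T_1: the radial integral of 2/(k^2+1)^2 over R^3
I = sp.integrate(4 * sp.pi * k**2 * 2 / (k**2 + 1)**2, (k, 0, sp.oo))
assert sp.simplify(I - 2 * sp.pi**2) == 0
```

Both assertions pass, confirming the odd-power terms of (\ref{expansion}) and the value $\int_{\mathbb{R}^3} 2/(k^2+1)^2\,\mbox{d}\vec{k} = 2\pi^2$.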
In our problems, the second term is always found to be of the form $$ \hat{\mathcal{T}}_1 = \lim_{\epsilon\to 0} \frac{1}{\epsilon^3} \:\vec{f}\left(\frac{\vec{k}}{\epsilon}\right)\:, $$ which implies that $$ \hat{\mathcal{T}}_1 = \vec{C} \:\delta(\vec{k}) \quad \mbox{where} \quad \vec{C} = \int_{\mathbb{R}^3} \frac{\mbox{d} \vec{\hat{w}} }{\mbox{d}\epsilon} \Big|_{\epsilon=1} \: \mbox{d} \vec{k}\:. $$ By noticing that the inverse spatial Fourier transform of $\delta(\vec{k})$ is given by $1/(2\pi)^3$, and similarly to the classical method, it is observed that in the matching zone $r\sim 1/\epsilon$, the perturbation term $\vec{w}_1$ of the inner expansion (\ref{expansion_inner}) matches a uniform velocity stream given by $\vec{C}/(2\pi)^3$, and we are led to the same conclusions as for the classical matching procedure. \section{Concluding remarks} It is worth mentioning that formally $$ \int_{\mathbb{R}^3} \frac{\mbox{d} \vec{\hat{w}} }{\mbox{d}\epsilon} \Big|_{\epsilon=1} \: \mbox{d} \vec{k} = \int_{\mathbb{R}^3} ( \hat{\vec{w}}_{\mbox{\scriptsize out}}'(\vec{k}) - \hat{\vec{w}}'_{St}(\vec{k}))\:\mbox{d} \vec{k} $$ which means that the two matching procedures obviously provide us with the same result. In some ways, the difference between the two approaches can be viewed as an inversion between taking the limit when $\epsilon \to 0$, and then performing the integration, or conversely, performing first the integration, and then taking the limit. Also, let us mention that the alternative matching procedure proposed here has been tested in many configurations where the classical method applies well, such as the problem considered by Oseen (Proudman \& Pearson 1957) or that considered by Herron {\it et al.} (1975). 
Let us finally mention that this alternative method has been specifically developed to allow us to determine the drag correction induced by the flow perturbation on a particle in the problem recently addressed by Ardekani \& Stocker (2010). These authors have investigated the flow produced by a point force, intended to represent a settling particle, in a stratified fluid, at small Reynolds and Péclet numbers. In particular, in this study, the creeping flow solution is perturbed by buoyancy effects, and therefore, it can be cast into the formalism described in \S 2, except that the integral involved in the classical matching procedure (\ref{matching2}) cannot be solved analytically, owing to the complexity of the analytical expression of its integrand. These results, which have also been generalized to the unsteady case, have been the subject of a companion paper (Candelier, Mehaddi \& Vauquelin 2013).
\section{Introduction} The notion that collisionless plasma shocks can accelerate charged particles to high energies, resulting in the observed cosmic ray spectrum, has been around for many years and by now it is widely accepted that this process can account for the majority of the energetic, non-thermal particle distributions that we infer in various astrophysical environments. The work of the late 1970s by a number of authors (e.g. \cite{Krymsky77}, \cite{Bell78a}, \cite{Bell78b}) established in principle the basic mechanism of particle diffusive acceleration in non-relativistic shocks. Since then, considerable work has been done analytically and numerically on the subject; however, questions still need to be answered about this acceleration mechanism, especially at relativistic shock speeds. In the present paper we will present simulations from non-relativistic up to relativistic shock speeds, in order to test the acceleration efficiency of the first order Fermi mechanism. More precisely, these simulations include sub-luminal and super-luminal shock acceleration with an aim to draw conclusions on the primary spectra of the relevant astrophysical sources such as Supernova Remnants, Active Galactic Nuclei hot spots and Gamma Ray Bursts. \section{Numerical simulations} The purpose of using a Monte Carlo code is to simulate the particle transport equation, in our case for mildly relativistic and relativistic shock velocities. The appropriate time-independent Boltzmann equation is given by \begin{displaymath} \Gamma(V+\upsilon\mu )\frac{\partial f}{\partial x}=\frac{\partial f}{\partial t}\arrowvert_{c} \end{displaymath} where $V$ is the fluid velocity, $\upsilon$ the velocity of the particle, $\Gamma$ the Lorentz factor of the fluid frame, $\mu$ the cosine of the particle's pitch angle and $\frac{\partial f}{\partial t}|_{c}$ the collision operator. The frames of reference used in the code are the local fluid frame, the normal shock frame and the de Hoffman-Teller frame. 
Pitch angle is measured in the local fluid frame, while the coordinate $x$ is the distance of the particles to the shock front, where the shock is assumed to be placed at $x=0$. Specifically, in order for the above equation to be solved by the Monte Carlo technique, we assume that i) the collisions represent scattering in pitch angle and ii) the scattering is elastic in the fluid frame. \begin{figure}[t] \begin{center} \includegraphics [width=6.8cm]{spectra0.01-6G.89-40.ps} \end{center} \caption{Two sub-luminal shock spectra, one for a non-relativistic shock of a gamma factor equal to 1.0001 (i.e. $V$=0.01c) and the other for a gamma equal to 5.0 (i.e. $V$=0.98c) with an inclination angle of 89 and 39 degrees respectively. One can see the almost perfect superposition of the spectra, showing the equivalence of acceleration efficiency between the non-relativistic \textit{perpendicular} shock and the relativistic sub-luminal oblique one. } \end{figure} \begin{figure}[h] \begin{center} \includegraphics [width=7cm]{datsuper10.89.50000test2.ps} \caption{Super-luminal shock spectrum for a gamma factor of 10 at 89 degrees. } \end{center} \end{figure} We begin the simulation by injecting $10^{5}$ particles far upstream. First order Fermi (diffusive) acceleration can be simulated provided that the particles' guidance center undergoes multiple scatterings with the assumed magnetized media and that in each shock crossing the particles gain an amount of energy. The mean free path $\lambda$ is calculated in the respective fluid frames (i.e. upstream or downstream) assuming a momentum dependence of the mean free path of the particle such that $\lambda=\lambda_o \cdot p$, where $\lambda_o=10 \cdot r_g$, which is related to the spatial diffusion coefficient, $\kappa$. 
Specifically for oblique shocks, along the shock normal, which lies in the $x$ direction, the diffusion coefficient $\kappa$ equals $\kappa_{\|}\cos^{2} \psi$, where $\kappa_{\|}=\lambda v/3$ and $\psi$ is the inclination of the shock to the magnetic field lines. A pitch angle diffusion algorithm is used (e.g. see \cite{MeliQb}). All results are given at the downstream side of the shock frame. In general, flow into and out of the shock discontinuity is not along the shock normal, but a transformation is possible into the shock frame to render the flows along the normal (e.g. \cite{BegelKirk90}) and for simplicity we assume that such a transformation has already been made. During our simulations continuous Lorentz transformations are performed from the local fluid rest frames to the shock frame to check for shock crossings.\\ Furthermore, for the super-luminal shock conditions (see \cite{MeliQb}) we consider a helical trajectory motion of each test-particle of momentum $p$, in the fluid frames upstream or downstream respectively, where the velocity coordinates ($v_x, v_y, v_z$) of the particle are calculated in a three dimensional space. In principle, we follow the helix trajectory of each particle in time $t$, where $t$ is the time from detecting the shock intersection at $x$, $y$, $z$. Further details of this method can be found in the work mentioned above. All particles leave the system either when they escape far downstream through the spatial boundary $r_b$ or, for computational convenience, when they reach a specified maximum energy $E_{max}$, even though in more realistic situations other physical parameters describing particle escape or energy loss would need to be taken into account. 
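The elementary scattering step underlying such a pitch-angle diffusion algorithm can be sketched as follows. This is a generic illustration of the class of algorithm described above, not the authors' actual code; the deflection bound `dtheta_max` and the random seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def pitch_angle_step(p, dtheta_max):
    """One small-angle elastic scattering in the local fluid frame:
    rotate the momentum vector by a random angle theta <= dtheta_max
    about a random axis perpendicular to p, so that |p| is conserved."""
    n = p / np.linalg.norm(p)
    a = rng.normal(size=3)
    a -= a.dot(n) * n                 # project out the component along p
    a /= np.linalg.norm(a)            # random unit axis with a . p = 0
    th = dtheta_max * rng.random()
    # Rodrigues' rotation about axis a; the a*(a.p) term vanishes here
    return p * np.cos(th) + np.cross(a, p) * np.sin(th)

p = np.array([0.0, 0.0, 1.0])         # unit momentum along the field
for _ in range(2000):
    p = pitch_angle_step(p, dtheta_max=0.1)
# after many steps the pitch angle has diffused while |p| stays 1
```

Because each rotation is orthogonal, the scattering is elastic in the fluid frame by construction, as required by assumption ii) above; energy changes arise only from the Lorentz transformations between the upstream and downstream frames.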
The downstream spatial boundary required can be initially estimated from the solution of the convection-diffusion equation in a non-relativistic, large-angle scattering approximation in the downstream plasma, which gives the chance of return to the shock as $\exp(-V_{2}r_{b}/x_i)$. In fact, we have performed many runs with different spatial boundaries to investigate the effect of the size of the acceleration region on the spectrum, as well as to find a region where the spectrum is size independent. We note that in the pitch angle diffusion case, the inherent anisotropy due to the high downstream sweeping effect may greatly modify this analytical estimate. \begin{figure}[t] \begin{center} \includegraphics [width=7cm]{datsuper500.50.89test2.ps} \caption{Two super-luminal shock spectra for a gamma factor of 500, one for an inclination angle of 50 degrees and the other for an inclination of 89 degrees. The spectra reach a maximum of $10^{14}$eV for protons. This cut-off in the energy is due to the stream flow, sweeping the particles downstream, limiting their chance of returning to the shock. } \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics [width=7.0cm] {datsuper1000.76test2.ps} \caption{A super-luminal shock spectrum for a gamma factor of 900 at 75 degrees. One sees (simulated spectra for other shock inclinations are comparable) the smooth power-law formation of the spectrum. Nevertheless, even for such a high Lorentz factor the super-luminal shocks do not seem to be as efficient as the sub-luminal ones (e.g. 
\cite{MeliQb}), since the simulations showed that almost all of the particles are advected downstream just after a shock cycle.} \end{center} \end{figure} \section{Results} In this section we present results of the simulations shown in figures 1-4.\\ Two sub-luminal shock spectra are shown in figure 1, one for a non-relativistic shock of a gamma factor equal to 1.0001 (i.e.~$V$=0.01c) and the other for a gamma equal to 5.0 (i.e.~$V$=0.98c) with an inclination angle of 89 and 39 degrees respectively. One can see the almost perfect superposition of the spectra, showing the equivalence of the acceleration efficiency between the non-relativistic \textit{perpendicular} shock and the relativistic sub-luminal oblique one. The above correlation of efficiency between non-relativistic perpendicular shocks and mildly relativistic oblique ones applies to many other shock spectra we simulated, as long as the non-relativistic shocks are nearly perpendicular. For the case shown here, the maximum energy corresponds to $\sim 10^{7.5}$GeV for protons. Adding to the above, \cite{MeliBierm06} have shown that non-relativistic perpendicular shocks are efficient accelerators for cosmic rays reaching energies as high as $\sim 10^{17}$eV, since the acceleration in those shocks seems to be faster under certain diffusion conditions \cite{Jok87}. Also, as \cite{MeliQb} have shown, sub-luminal highly relativistic shocks are the most efficient accelerators compared to the above, resulting in an average cosmic ray energy gain of $\sim \Gamma^{3.4}$ after two complete shock cycles. For this case, $\Gamma$ is the boost factor of the shock, while in the simulations the test-particles are considered already relativistic with an initial Lorentz gamma, $\gamma=\Gamma$+100. \\ One further result of the simulations is that for oblique shocks below a Lorentz factor of $\Gamma$=5 (i.e. 
$V$=0.98c) the higher the inclination of the shock, the greater its efficiency in accelerating particles up to $10^{17}$eV (i.e. protons, iron nuclei). \\ In figure 2 we show a super-luminal shock spectrum for a gamma factor of 10 at 89 degrees. In figure 3 one sees two super-luminal shock spectra for a gamma factor of 500, one for an inclination angle of 50 degrees and the other for an inclination of 89 degrees. Simulations show that the inclination of the shock in the super-luminal cases does not change the results. The cut-off is due to the flow, sweeping the particles downstream, limiting their chance to return to the shock, diminishing the chances for these shocks to be efficient cosmic ray accelerators. In figure 4, a super-luminal shock spectrum for a shock gamma factor of 900 at 75 degrees is shown. One sees the smooth power-law formation of the above spectra (in contrast to the sub-luminal spectra formations in \cite{MeliQb}). Nevertheless, even for a Lorentz gamma as high as 900, the accelerated particles do not reach very high energies (e.g. $10^{14}$eV for protons) since almost all of them are advected downstream, after their helix trajectory performs a shock-cycle crossing. All the simulated super-luminal shock spectra can be well fitted by a power-law with a spectral index ranging between $\sim 2.0-2.3$ in contrast to the work of \cite{MeliQb} concerning sub-luminal relativistic shocks. The latter spectra appear flatter at the highest speeds, with a characteristic plateau-like structure, since, first, a highly relativistic shock catches up with the particles within less than a shock cycle, which does not allow sufficient time for isotropisation and, second, because relativistic shocks are sensitive to the applied particle scatter model \cite{MeliBecker07}. 
\section{Conclusions} We discussed the mechanism of diffusive shock acceleration, presenting simulations in order to test the efficiency of sub-luminal and super-luminal relativistic shocks in a quantitative comparison to non-relativistic ones. It is certain that super-luminal shocks are not as efficient first order Fermi cosmic ray accelerators as the sub-luminal ones. Nevertheless, the super-luminal shock spectra can be well fitted by a power-law with a spectral index ranging between $\sim 2.0-2.3$ in contrast to the relativistic sub-luminal ones which give flatter spectra. Specifically, the super-luminal shocks are efficient in accelerating particles up to a maximum energy of around $10^{14}$eV (i.e. protons or iron nuclei). On the other hand, the sub-luminal relativistic oblique shocks are better accelerators, reaching maximum energies above $10^{18}$eV, as shown in similar works as well, but the spectral features are not as smooth as in super-luminal ones (which are given by a 'clean' power-law), indicating a strong connection to the kinematical details of the particle diffusion while crossing the relativistic shock front. Interestingly, it is shown that perpendicular non-relativistic shocks seem to be as efficient accelerators as sub-luminal mildly relativistic ones. There is work under way concerning the initial parameters which could affect the shock behaviour at highly relativistic speeds. \section{Acknowledgements} The project is funded by the European Social Fund and National Resources (EPEAEK II) PYTHAGORAS.
\section{Introduction} Complex organic molecules are thought to be formed primarily on dust grains in dense cores, see reviews by \citet{dishoeckherbst2009} and \citet{caselliceccarelli2012}. Before the onset of star formation, the atomic and molecular reservoir is contained in large dark clouds. Due to the high densities ($\geq$ 10$^{4}$ cm$^{-3}$) and low temperatures (10 K) reached in these environments, gas-phase species will freeze out on sub-micron sized grains forming ice mantles on timescales shorter than the lifetime of the cloud. It is here that atoms and molecules can potentially react with each other to form the zeroth order ice species like ammonia, methane, water and methanol. UV radiation interacts with these ice mantles by dissociating molecules to produce radicals and by photodesorbing species back to the gas phase. If these radicals are sufficiently mobile, they can find each other on the grain and react to form even more complex first generation (organic) species \citep{garrodherbst2006}. However, it is not entirely clear if UV radiation is essential to form these complex molecules or whether they can also be formed just by thermal processing and atom bombardment of solid CO with C, N and O atoms \citep{tielenscharnley1997}. In this context methylamine, CH$_{3}$NH$_{2}$, is a particularly interesting molecule, since its formation is hypothesised by \citet{garrod2008} to be completely dependent on radicals produced by UV photons, and is one of the few molecules that can definitely not be produced in the routes starting from solid CO: \\ \\ CH$_{4} + h\nu \rightarrow $CH$_{3}^{\bullet} + $H \\ \\ NH$_{3} + h\nu \rightarrow $NH$_{2}^{\bullet} + $H \\ \\ CH$_{3}^{\bullet} + $NH$_{2}^{\bullet} \rightarrow $CH$_{3}$NH$_{2}$ \\ \\ These radicals can form in the ice mantles in the dark cloud or in the protostellar phase through cosmic-ray induced photons and/or UV photons from the protostar. 
After gravitational collapse of the cloud and formation of a protostar, the dust around it will start to warm up. The increased temperature will cause the radicals to become mobile on the grains and react with each other, forming methylamine. Further heating will evaporate the formed methylamine from the grain and raise its gas-phase abundance. Another interesting amine-containing molecule is formamide, NH$_{2}$CHO. This is so far the most abundantly observed amine-containing molecule \citep[e.g.,][]{halfen2011,bisschop2007}, making it an interesting molecule to compare with other amines like methylamine. In contrast with CH$_3$NH$_2$, this molecule can possibly be produced by reactions of H and N with solid CO. The comparison of the abundances of these two species could potentially give more information about the relative importance of UV-induced versus thermal grain surface reactions. Hot cores are particularly well-suited to study methylamine. These high mass star-forming regions reach high temperatures between 100 and 300 K and are known for their rich complex organic chemistry \citep{walmsley1992,dishoeckblake1998, tielenscharnley1997, ehrenfreund2000,caselliceccarelli2012}. The ice covered grains move inwards to the protostar and will heat up. When sufficient temperatures are reached, molecules will start to desorb depending on their respective binding energies. Less abundant molecules mixed with water ice will desorb together with water around 100 K. Previous detections of methylamine have all been made toward the galactic center. \citet{kaifu1974} first detected CH$_{3}$NH$_{2}$ in Sagittarius B2 and Orion A. Later that same year \citet{fourikis1974} reported the detection of methylamine in the same sources, but with a different telescope. 
Much more sensitive surveys by \citet{turner1991}, \citet{nummelin2000}, \citet{halfen2013}, \citet{belloche2013} and \citet{neill2014} also all detected methylamine lines toward SgrB2, with typical inferred abundance ratios with respect to NH$_{2}$CHO between 0.5 and 3. No detections of methylamine have been reported in sensitive surveys with modern detectors toward Orion, however \citep{blake1987,turner1991,sutton1995,schilke1997,crockett2014}. To study the importance of UV processing of ice-covered dust grains, we present the results of searches for methylamine in a number of hot cores (see Table \ref{sourcelist}). These results are combined and compared with data from \citet{bisschop2007} and \citet{isokoski2013}, which were taken toward the same hot cores with the same telescope and analysis method and include detections of NH$_{2}$CHO and other nitrogen-containing species. In Section \ref{obs} the observational details are given, followed by the analysis method in Section \ref{data}. Section \ref{result} summarizes all the results of our analysis and these are discussed in Section \ref{discus}. Finally, conclusions are drawn in Section \ref{con}. 
\section{Observations} \label{obs} \begin{table*} \caption[]{Source list and source parameters} \label{sourcelist} $$ \begin{tabular}{l r r r r r r r r r r} \hline \noalign{\smallskip} Source & RA & Dec & $\theta_{\rm{S}}^{a}$ & $\theta_{\rm{B}}^{a}$ & $L^{a}$ & $d^{a}$ & $V_{\rm{LSR}}$ & $\Delta V$ & $\delta \nu$ & $RMS$ \\ & J2000 & J2000 & AU & AU & $L_{\odot}$ & (kpc) & (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) & (mK) \\ \noalign{\smallskip} \hline \noalign{\smallskip} AFGL 2591 & 20:29:24.60 & $+$40:11:18.9 & 1800 & 21000 & 2.0E+04 & 1.0 & -5.5 & 4.0 & 1.28 & 10 \\ G24.78 & 18:36:12.60 & $-$07:12:11.0 & 13000 & 162000 & 7.9E+05 & 7.7 & 111.0 & 6.3 & 1.28 & 9 \\ G31.41+0.31 & 18:47:34.33 & $-$01:12:46.5 & 7840 & 166000 & 2.6E+05 & 7.9 & 98.7 & 7.3 & 1.28 & 7 \\ G75.78 & 20:21:44.10 & +37:26:40.0 & 5600 & 86100 & 1.9E+05 & 4.1 & -0.04 & 5.6 & 1.28 & 9 \\ IRAS 18089-1732 & 18:11:51.40 & $-$17:31:28.5 & 2750 & 49000 & 3.2E+04 & 2.3 & 33.8 & 4.5 & 1.28 & 9 \\ IRAS 20216+4104 & 20:14:26.40 & +41:13:32.5 & 1753 & 34400 & 1.3E+04 & 1.6 & -3.8 & 6.0 & 1.28 & 10 \\ NGC 7538 IRS1 & 23:13:45.40 & +61:28:12.0 & 4900 & 58800 & 1.3E+05 & 2.8 & -57.4 & 4.0 & 1.28 & 10 \\ W3(H$_{2}$O) & 02:27:04.60 & +61:52:26.0 & 2400 & 42000 & 2.0E+04 & 2.0 & -46.4 & 5.0 & 1.28 & 11 \\ W 33A & 18:14:38.90 & $-$17:52:04.0 & 4500 & 84000 & 1.0E+05 & 4.0 & 37.5 & 4.9 & 1.28 & 11 \\ \noalign{\smallskip} \hline \end{tabular} $$ $^{a}$ Data taken from \cite{bisschop2007} and \cite{isokoski2013}.\\ \end{table*} Observations were performed with the James Clerk Maxwell Telescope (JCMT) \footnote{The James Clerk Maxwell Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the National Research Council of Canada, and (until 31 March 2013) the Netherlands Organisation for Scientific Research.} on the sources listed in Table \ref{sourcelist} between July 2010 and August 2011. 
The sources were selected based on their particularly rich chemistry, their isolation, their narrow line widths (to prevent line confusion) and their relatively nearby distance \citep{bisschop2007, fontani2007, rathborne2008, isokoski2013}. \cite{nummelin1998} detected methylamine emission lines between 218 and 263 GHz toward Sgr B2N. Therefore the RxA3 front-end double side band receiver, functioning between 210 and 276 GHz, was chosen to observe the hot cores. The 250 and 1000 MHz wide back-end ACSIS configurations were used. A number of methylamine transitions covering a range of excitation energies were selected in this frequency range based on high Einstein $A$ coefficients and lack of line confusion (Table \ref{methylamine}). However, not all transitions were observed for all sources. The 235735 MHz transition was only recorded for W3(H$_{2}$O) and the 260293 MHz transition only toward W3(H$_{2}$O) and NGC 7538 IRS1. Because double side band spectra were obtained, our spectra contain transitions from two different frequency regimes superposed. To disentangle lines from the two side bands, each source was observed twice with an 8 MHz shift in the local oscillator setting between the two observations. This allows each transition to be uniquely assigned to either of the two side bands. In the 230 GHz band, the JCMT has a beam size ($\theta_{B}$) of 20-21$''$. Spectra were scaled from the antenna temperature scale, $T_{\rm{A}}^{*}$, to main beam temperature, $T_{\rm{MB}}$, by using the main beam efficiency of 0.69 at 230 GHz. Integration times were such that $T_{\rm{RMS}}$ is generally better than 10 mK for data binned to 1.3 km s$^{-1}$ velocity bins. Noise levels were improved by adding the shifted spectra together in a narrow frequency region around the CH$_{3}$NH$_{2}$ lines, effectively doubling the integration time. 
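The temperature scaling and noise averaging described above can be sketched numerically. This is a minimal illustration (in Python; the function names are ours, not part of any reduction package):

```python
import math

def antenna_to_main_beam(T_A_star, eta_mb=0.69):
    """Scale antenna temperature to main-beam temperature,
    T_MB = T_A* / eta_MB, with the main-beam efficiency of 0.69 at 230 GHz."""
    return T_A_star / eta_mb

def coadded_rms(rms_single, n_spectra=2):
    """Co-adding n equal-noise spectra doubles (for n=2) the effective
    integration time, reducing the RMS noise by sqrt(n)."""
    return rms_single / math.sqrt(n_spectra)

print(round(antenna_to_main_beam(0.0069), 4))  # 0.01 (K)
print(round(coadded_rms(0.014), 4))            # 0.0099 (K)
```

For example, co-adding the two shifted spectra lowers a 14 mK single-spectrum RMS to about 10 mK, consistent with the noise levels quoted above.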
\begin{table} \caption[]{Methylamine transitions observed in this study$^{a}$} \label{methylamine} $$ \begin{tabular}{c c r c r} \hline \noalign{\smallskip} Transition & Freq & $E_{\rm{up}}$ & $A$ & $g_{\rm{up}}$ \\ & (MHz) & (K) & (s$^{-1}$) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} 4$_{2}$ $\rightarrow$ 4$_{1}$$^{b}$ & 229310.298 & 36.9 & 1.32E-05 & 108 \\ 7$_{2}$ $\rightarrow$ 7$_{1}$$^{b}$ & 229452.603 & 75.5 & 5.88E-06 & 60 \\ 8$_{2}$ $\rightarrow$ 8$_{1}$$^{c}$ & 235735.037 & 92.8 & 6.13E-05 & 204 \\ 6$_{2}$ $\rightarrow$ 6$_{1}$$^{b}$ & 236408.788 & 60.8 & 5.94E-05 & 52 \\ 2$_{2}$ $\rightarrow$ 2$_{1}$$^{b}$ & 237143.530 & 22.0 & 3.82E-05 & 60 \\ 10$_{2}$ $\rightarrow$ 10$_{1}$$^{d}$ & 260293.536 & 132.7 & 2.26E-05 & 52 \\ \noalign{\smallskip} \hline \end{tabular} $$ $^{a}$ Data from JPL database for molecular spectroscopy.\\ $^{b}$ Transition observed in all sources.\\ $^{c}$ Only observed in W3(H$_{2}$O).\\ $^{d}$ Only observed in W3(H$_{2}$O) and NGC 7538 IRS1.\\ \end{table} \section{Data analysis} \label{data} \begin{figure*} \includegraphics[width=\hsize]{Spec2edit2.png} \caption{JCMT spectra of the massive hot cores G31.41+0.31, G24.78, NGC 7538 IRS1 and W3(H$_{2}$O). The 229310 and 229452 MHz transitions in the lower sideband are indicated in red and that at 236408 MHz in the upper sideband in blue. In green is the baseline, obtained by fitting line free portions of the spectrum. In all spectra the H$_{2}$CS 7$_{1}$ $\rightarrow$ 6$_{1}$ transition at 236726 MHz is fitted to determine the typical linewidth in the sources, as listed in Table \ref{sourcelist}. } \label{spechc} \end{figure*} To analyse the data, exactly the same method as described by \cite{bisschop2007} and \cite{isokoski2013} was used. It is briefly reiterated here. 
The hot core spectra corrected for source velocity were analysed with the "Weeds" extension \citep{maret2011} of the Continuum and Line Analysis Single-dish Software (CLASS\footnote{\textbf{http://www.iram.fr/IRAMFR/GILDAS}}) coupled with the Jet Propulsion Laboratory (JPL\footnote{\textbf{http://spec.jpl.nasa.gov}}) database for molecular spectroscopy \citep{pickett1998}. Focus was on identifying the transitions of methylamine listed in Table \ref{methylamine}, but other lines in the spectra were measured as well (see Table \ref{fulltrans} Appendix). After each positive identification the integrated main-beam temperature, \textit{$\int T_{ \rm{MB}}dV$}, was determined by gaussian fitting of the line. From the integrated main-beam intensity the column density \textit{$N_{\rm{up}}$} and thus the \textit{beam-}averaged total column density \textit{$N_{\rm{T}}$} could be determined, assuming Local Thermodynamic Equilibrium (LTE) at a single excitation temperature $T_{\rm{rot}}$:\\ \begin{eqnarray} \label{colden} \frac{3k\int T_{\rm{MB}}dV}{8\pi^{3}\nu\mu^{2}S} = \frac{N_{\rm{up}}}{g_{\rm{up}}} = \frac{N_{T}}{Q(T_{\rm{rot}})}e^{-E_{\rm{up}}/T_{\rm{rot}}} \end{eqnarray} \\ where \textit{$g_{\rm{up}}$} is the level degeneracy, $k$ the Boltzmann constant, \textit{$\nu$} the transition frequency, \textit{$\mu$} the dipole moment and \textit{S} the line strength. $Q(T_{\rm{rot}})$ is the rotational partition function and $E_{\rm{up}}$ is the upper state energy in Kelvin. 
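As an illustration, Eq.~\ref{colden} can be evaluated numerically. The sketch below (Python; all helper names are ours) uses the equivalent Einstein-$A$ form $N_{\rm up} = 8\pi k\nu^{2}/(hc^{3}A)\int T_{\rm MB}dV$, so that the $A$ coefficients of Table \ref{methylamine} can be used directly; the adopted partition-function value is a placeholder, not the tabulated JPL value for CH$_{3}$NH$_{2}$:

```python
import math

# CGS constants
K_B = 1.380649e-16    # Boltzmann constant [erg K^-1]
H   = 6.62607015e-27  # Planck constant [erg s]
C   = 2.99792458e10   # speed of light [cm s^-1]

def upper_state_column(int_TdV_K_kms, freq_MHz, A_ul):
    """Optically thin upper-state column density [cm^-2]:
    N_up = 8*pi*k*nu^2 / (h c^3 A_ul) * Int T_MB dV  (dV in cm s^-1),
    equivalent to the mu^2 S form of the LTE column-density equation."""
    nu = freq_MHz * 1e6            # MHz -> Hz
    int_cgs = int_TdV_K_kms * 1e5  # K km/s -> K cm/s
    return 8.0 * math.pi * K_B * nu**2 * int_cgs / (H * C**3 * A_ul)

def total_column(N_up, g_up, E_up_K, T_rot_K, Q_rot):
    """LTE total column: N_T = (N_up / g_up) * Q(T_rot) * exp(E_up / T_rot)."""
    return N_up / g_up * Q_rot * math.exp(E_up_K / T_rot_K)

# 2_2 -> 2_1 line (237143.530 MHz, A = 3.82e-5 s^-1, g_up = 60, E_up = 22.0 K)
# with an integrated intensity of 0.1 K km/s; Q_rot = 1e4 is a placeholder.
N_up = upper_state_column(0.1, 237143.530, 3.82e-5)
N_T = total_column(N_up, g_up=60, E_up_K=22.0, T_rot_K=120.0, Q_rot=1.0e4)
```

Dividing the resulting beam-averaged $N_{\rm T}$ by the beam-dilution factor $\eta_{BF}$ then yields the source-averaged column density.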
In case of a non-detection, $3\sigma$ upper limits were determined from the Root Mean Square (RMS) of the base line of the spectra in combination with the velocity resolution \textit{$\delta \nu$} and line width \textit{$\Delta V$}: \\ \begin{eqnarray} \label{sigma} \sigma = 1.2\sqrt{\delta \nu \Delta V}\cdot RMS \end{eqnarray} \\ \textit{$\Delta V$} is estimated from other transitions (see Table \ref{sourcelist}) in the spectra, for example from the nearby H$_{2}$CS 7$_{1} \rightarrow$ 6$_{1}$ transition, and assumed to be the same for all transitions in the spectral range. A telescope flux calibration error of 20\% is taken into account in the 1.2 factor. The $3\sigma$ value is then used in the same way as the main-beam intensity of detected lines to obtain the upper limit on the total column density through Eq.~\ref{colden}. Since no rotational temperature can be determined for a non-detection, this has to be estimated. In the models of \citet{garrod2008} the peak abundance temperatures for methylamine range from 117 to 124 K depending on the model used. \citet{oberg2009} determined that methylamine forms in CH$_{4}$/NH$_{3}$ UV irradiation experiments and sublimates at 120 K. There is a small difference between laboratory and hot core desorption temperatures, because of the pressure difference between the two. Also, if CH$_{3}$NH$_{2}$ is embedded in water ice the desorption temperature will probably be limited to roughly 100 K, when water desorbs in space. Therefore $T_{\rm{rot}}$ is assumed to be 120 K when methylamine lines could not be identified, but the effects of lower and higher rotation temperatures are explored as well. 
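As a numerical check of Eq.~\ref{sigma}, the sketch below (Python; the helper name is ours) reproduces the typical $3\sigma$ limit of 0.100 K km s$^{-1}$ found for W3(H$_{2}$O), using the RMS and line width listed in Table \ref{sourcelist}:

```python
import math

def three_sigma_limit(rms_K, delta_nu_kms, delta_V_kms):
    """3-sigma upper limit on the integrated intensity [K km/s]:
    sigma = 1.2 * sqrt(delta_nu * delta_V) * RMS, where the factor 1.2
    absorbs the 20% flux-calibration uncertainty."""
    return 3.0 * 1.2 * math.sqrt(delta_nu_kms * delta_V_kms) * rms_K

# W3(H2O): RMS = 11 mK, channel width 1.28 km/s, line width 5.0 km/s
print(round(three_sigma_limit(0.011, 1.28, 5.0), 3))  # 0.1 (K km/s)
```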
Correction for beam dilution is done in the same way as \citet{bisschop2007}: \begin{eqnarray} \label{eta} \eta_{BF} = \frac{\theta_{S}^{2}}{\theta_{S}^{2}+\theta_{B}^{2}} \end{eqnarray} resulting in the \textit{source-}averaged column density: \begin{eqnarray} \label{ns} N_{S} = \frac{N_{T}}{\eta_{BF}} \end{eqnarray} The beam diameter $\theta_{B}$ is set at 21". For the source diameter, $\theta_{S}$, values have been taken from \citet{bisschop2007} and \citet{isokoski2013} and constitute the area where the temperature is 100 K or higher and hot gas-phase molecules are present. Both beam and source diameters are listed in AU in Table \ref{sourcelist}. Using the CASSIS line analysis software \footnote{CASSIS has been developed by IRAP-UPS/CNRS (http://cassis.irap.omp.eu).} it was verified that the source-averaged column densities are still small enough that the observed lines are optically thin. \section{Results and comparison with astrochemical models} \label{result} \subsection{CH$_3$NH$_2$ limits} Figure \ref{spechc} presents examples of spectra obtained for our sources, whereas Figure~\ref{2-2} in the Appendix shows the $2_2-2_1$ line in all sources. In general, no transitions of CH$_{3}$NH$_{2}$ are detected. Only one possible methylamine transition is identified in G31.41+0.31 coincident with the 6$_{2}$ $\rightarrow$ 6$_{1}$ line at 236408 MHz, with an integrated intensity of 0.44 K kms$^{-1}$. Following the procedure summarized in Section \ref{data}, a column density of 3.4 $\times$ 10$^{17}$ cm$^{-2}$ is inferred from this line assuming $T_{\rm rot}=120$~K. However, modelling of the spectrum shows that the other targeted CH$_3$NH$_2$ lines, 4$_{2}$ $\rightarrow$ 4$_{1}$ and 2$_{2}$ $\rightarrow$ 2$_{1}$, should have comparable or even higher intensities if this identification is correct (Figure \ref{tmb}). The 8$_{2}$ $\rightarrow$ 8$_{1}$ line should be readily detected but was not observed toward G31.41+0.31. 
This makes it unlikely that the detected feature belongs to methylamine, since we would expect to see at least two other CH$_3$NH$_2$ transitions in our spectrum. In Figure \ref{coldens} upper limit column densities of the six investigated transitions of methylamine are plotted versus rotational temperature taking a typical 3$\sigma$ = 0.100 K kms$^{-1}$. At 120 K the 8$_{2}$ $\rightarrow$ 8$_{1}$, 235735 MHz transition gives the lowest limits on the column densities, see Figure \ref{coldens}. However, since this particular transition was only included in the observations for one source, the second most sensitive transition at 120 K, 2$_{2}$ $\rightarrow$ 2$_{1}$, will be used (see Figure \ref{2-2} in the Appendix for a blow-up of this particular spectral region in all investigated sources). All following molecular ratios are based on CH$_{3}$NH$_{2}$ column densities obtained from this line, assuming $T_{\rm rot}$ = 120 K. The corresponding upper limits are presented in Table \ref{ratiosmeth}. \begin{figure} \centering \includegraphics[width=\hsize]{coldens8.png} \caption{Column densities for the six methylamine transitions plotted versus temperature. This plot is made for a 3$\sigma$ limit of 0.1 K kms$^{-1}$, as found for W3(H$_{2}$O). This figure demonstrates that the $8_2-8_1$ transition (green) gives the most sensitive limits on column density for the relevant range of excitation temperatures in hot cores, when observed. The other five transitions (4$_{2}$ $\rightarrow$ 4$_{1}$, black; 7$_{2}$ $\rightarrow$ 7$_{1}$, gold; 10$_{2}$ $\rightarrow$ 10$_{1}$, cyan; 2$_{2}$ $\rightarrow$ 2$_{1}$, blue and 6$_{2}$ $\rightarrow$ 6$_{1}$, red) clearly imply higher column densities. Only below 40 K does the 2$_{2}$ $\rightarrow$ 2$_{1}$ line give lower column density limits. 
} \label{coldens} \end{figure} \subsection{Abundance ratio comparison} Combined with NH$_{2}$CHO and CH$_{3}$OH column densities from \citet{bisschop2007} and \citet{isokoski2013} derived in the same way, abundance ratios for methylamine and formamide with respect to each other and to methanol are calculated. These ratios are listed in Table \ref{ratiosmeth}. Methanol is chosen as a reference since it is the most readily observed complex organic molecule. Its disadvantage is that some of the transitions have high optical depth and that a cold component may be present \citep{isokoski2013}, but this is circumvented by only taking the warm methanol column density derived from optically thin lines. Abundances relative to methanol rather than H$_2$ are preferred since the H$_2$ column depends on extrapolation of dust models to smaller scales than actually observed \citep{bisschop2007}. Another point that needs to be taken into account is that the models of \citet{garrod2008} do show a slight overproduction of CH$_{3}$OH, which could influence the comparison between the ratios. Overall, the abundance ratios are estimated to be accurate to a factor of a few. It should be noted that methylamine and formamide have significantly different dipole moments (1.31 and 3.73 Debye respectively) and could therefore be excited in different ways. Formamide has a larger critical density than methylamine, so the situation could arise where the critical density is not reached for formamide or even both molecules. The corresponding excitation temperatures will then be lower. In particular, the situation in which the critical density is not reached for formamide but is for methylamine, could affect the inferred ratios. As can be seen from Figure \ref{coldens}, if $T_{\rm rot}$ drops from 120 to 50 K, the column density drops by a factor of a few, depending on transition. 
If $T_{\rm rot}$ were 120 K for methylamine but 50 K for formamide, the observed column density of formamide would be lower than that listed here and thus result in a higher CH$_{3}$NH$_{2}$/NH$_{2}$CHO ratio. We note, however, that there is no observational evidence that $T_{\rm rot}$ is systematically lower than 100 K for formamide \citep{bisschop2007}. Table~\ref{ratiosmeth} includes the observational results toward Sgr B2, the only source where methylamine is firmly detected, from \citet{turner1991}, \citet{belloche2013} and \citet{neill2014}. These results, obtained over the course of more than two decades, agree well with each other within the estimated uncertainties due to slightly different adopted source sizes. \citet{nummelin2000} also detect methylamine in their Sgr B2 survey but find a surprisingly small beam filling factor and consequently very large column density compared with most other complex organic molecules. If their beam filling factor for CH$_3$NH$_2$ is taken to be the same as for NH$_2$CHO, the \citet{nummelin2000} ratios are more in line with those derived by \citet{turner1991}, \citet{belloche2013} and \citet{neill2014}. The non-detections of methylamine toward the chemically rich and well studied Orion hot core imply abundance limits that are at least a factor of 5 lower than for SgrB2 \citep{neill2014,crockett2014}. Table~\ref{ratiosmeth} also contains the model results from \cite{garrod2008}, who present three hot core models which differ from each other by their warm-up timescale from 10 to 200 K. The timescales for F(ast), M(edium) and S(low) are $5\times10^{4}$, $2\times10^{5}$ and $1\times10^{6}$ years, respectively, and start after the cold collapse phase. In the slow models more time is spent in the warm-up phase where radicals are mobile. 
Values used in this comparison are taken from the so-called reduced ice composition, where cold phase methane and methanol abundances were modified to match observations of these ices toward W33A, NGC 7538 IRS9 and Sgr A*, see \citet{gibb2000b}. Another comparison can be made with the gas-phase abundances in protoplanetary disk models of \citet{walsh2014}, which have densities and temperatures similar to or higher than those in protostellar cores. Their ratios range from 7.2 $\times$ 10$^{-1}$ to 6.5 $\times$ 10$^{-2}$ for CH$_{3}$NH$_{2}$/CH$_{3}$OH, 4.2 $\times$ 10$^{-1}$ to 1.5 for CH$_{3}$NH$_{2}$/NH$_{2}$CHO and 1.7 to 8.8 $\times$ 10$^{-2}$ for the NH$_{2}$CHO/CH$_{3}$OH ratio. These ratios are close to the predicted values of \citet{garrod2008} listed in Table~\ref{ratiosmeth}, which may be partly due to using the same surface-chemistry network. From Table \ref{ratiosmeth} several trends become apparent for our results. The CH$_{3}$NH$_{2}$/NH$_{2}$CHO limits lie about an order of magnitude above model values whereas the CH$_{3}$NH$_{2}$/CH$_{3}$OH limit approximately matches theoretical predictions. Because the observed values are actually 3$\sigma$ upper limits, this suggests that models overproduce CH$_3$NH$_2$. For the sources with the most stringent limits, such as G31.41+0.31 and the $8_2-8_1$ line in W3(H$_{2}$O), the CH$_{3}$NH$_{2}$/CH$_{3}$OH limits are comparable to or even lower than the abundance ratios for Sgr B2. The third ratio, NH$_{2}$CHO/CH$_{3}$OH, is also found to be lower than the models by up to one order of magnitude. Close inspection of the \citet{bisschop2007} data shows that the NH$_2$CHO column densities may have larger uncertainties than quoted in their figures and tables. We have therefore re-analysed all NH$_2$CHO data from that paper taking larger uncertainties into account. In general, this leads to lower NH$_2$CHO column densities. 
Even using the upper limits from this re-analysis as well as those from \citet{isokoski2013} (which were obtained with generous error bars), the NH$_{2}$CHO$_{upper}$/CH$_{3}$OH ratios are significantly lower than the models. This suggests that both the methylamine and formamide abundances are too high in the models. The Sgr B2 detections tend to have lower CH$_3$NH$_2$/NH$_2$CHO and CH$_3$NH$_2$/CH$_3$OH ratios than our upper limits and are also somewhat below the models, but generally do not differ more than a factor of a few. The SgrB2 NH$_2$CHO/CH$_3$OH ratios are also closer to the models results, at least for the faster models. However, the Orion Compact Ridge NH$_2$CHO/CH$_3$OH value from \citet{crockett2014} is comparable to that found for our sources and clearly lower than the models. Further observations are needed to determine to what extent Sgr B2 is a special case. To further elucidate the differences between theory and our and the Sgr B2 observations, an additional analysis was made of the CH$_{3}$NH$_{2}$/CH$_{3}$CN ratio. These results are listed in Table \ref{ratioscya}. Acetonitrile is produced in the gas-phase, but more abundantly on grains: an important route to its formation is via CH$_{3}^{\bullet}$ + CN$^{\bullet} \rightarrow$ CH$_{3}$CN, according to \citet{garrod2008}. This would mean that both molecules compete for the methyl radical on the surface, thus relating the two molecules. Our observed ratios involving CH$_3$CN are clearly at odds with the theoretical predictions. The observed CH$_3$NH$_2$/CH$_3$CN ratios are in most cases an order of magnitude lower than theory and approach the observed ratios for Sgr B2. However, the observed CH$_{3}$CN/CH$_{3}$OH ratios are 1-2 orders of magnitude higher than theoretical predictions. Both these cases point to CH$_3$CN being underproduced in the models. 
Finally, abundance ratios, with some notable exceptions, do not vary more than an order of magnitude between different sources, as also found by \citet{bisschop2007} for other species. \begin{table*} \caption[]{Upper limit column densities and abundance ratios for methylamine.} \label{ratiosmeth} $$ \begin{tabular}{l l l l l l} \hline \noalign{\smallskip} Source & $N_{\rm S,CH_{3}NH_{2}}$ & CH$_{3}$NH$_{2}$/NH$_{2}$CHO & CH$_{3}$NH$_{2}$/CH$_{3}$OH & NH$_{2}$CHO/CH$_{3}$OH & NH$_{2}$CHO$_{upper}$/CH$_{3}$OH \\ & cm$^{-2}$ & & & & \\ \noalign{\smallskip} \hline \noalign{\smallskip} Model F & & 1.1 & 3.4E-02 & 3.1E-02 & 3.1E-02 \\ Model M & & 1.7 & 1.0E-01 & 7.3E-02 & 7.3E-02 \\ Model S & & 1.3 & 1.3E-01 & 1.0E-01 & 1.0E-01 \\ \hline \noalign{\smallskip} AFGL 2591 & <1.9E+16 & - & - & <3.9E-01 & - \\ G24.78 & <2.4E+16 & <3.3E+01 & <8.5E-02 & 2.6E-03 & 9.0E-04 \\ G31.41+0.31 & <5.8E+16 & <2.8E+01 & <4.9E-02 & 1.8E-03 & 3.8E-03 \\ G75.78 & <3.5E+16 & <1.7E+02 & <3.1E-01 & 1.8E-03 & 2.6E-02\\ IRAS 18089-1732 & <4.2E+16 & <5.0E+01 & <1.9E-01 & 3.8E-03 & 7.9E-03 \\ IRAS 20216+4104 & <6.4E+16 & - & <2.2 & - & -\\ NGC 7538 IRS1 & <2.0E+16 & <3.5E+01 & <1.8E-01 & 4.8E-03 &2.1E-04 \\ W3(H$_{2}$O) & <5.0E+16 & <3.9E+01 & <5.0E-02 & 1.3E-03 &6.4E-04 \\ W3(H$_{2}$O)* & <1.7E+16 & <1.3E+01 & <1.7E-02 & 1.3E-03 & 6.4E-04 \\ W 33A & <5.7E+16 & <2.7E+01 & <2.9E-01 & 1.1E-02 & 4.6E-03\\ \hline \noalign{\smallskip} Sgr B2$^{a}$ & 1.2E+14 & 5.7E-01 & 1.7E-02 & 1.3E-02 \\ Sgr B2(M)$^{b}$ & 4.5E+16 & 3.2 & 1.7E-02 & 5.2E-03 \\ Sgr B2(N)$^{b}$ & 6.0E+17 & 4.3E-01 & 3.3E-02 & 7.8E-02 \\ Sgr B2(N)$^{c}$ & 5.0E+17 & 2.1 & 1.0E-01 & 4.8E-02 \\ Orion Compact Ridge$^{d}$ & - & - & - & 1.6E-03 \\ \noalign{\smallskip} \hline \end{tabular} $$ \tablefoot{Column densities for the assumed source size and upper limit abundance ratios for methylamine, derived from the $2_2-2_1$ line assuming T$_{\rm rot}$ = 120 K. 
The values for NH$_{2}$CHO and CH$_{3}$OH were taken from \cite{bisschop2007} and \cite{isokoski2013}. The upper limits of NH$_{2}$CHO were determined by our own re-analysis of the \cite{bisschop2007} data and taken from the appendix of \cite{isokoski2013}. \textbf{*} Column density calculated for the $8_2-8_1$ line. \textbf{References.} $^{a}$ \citet{turner1991}, beam sizes between 65" and 107", assuming no beam dilution; $^{b}$ \citet{belloche2013}, beam sizes between $\sim$25" and $\sim$30", assuming a $3''$ source size for (N) and $5''$ source size for (M); $^{c}$ \citet{neill2014}, beam sizes between $\sim$10" and $\sim$40", assuming source size of 2.5$''$; and $^{d}$ \citet{crockett2014}, beam sizes between 11" and 44" and assuming a $10''$ size of the Compact Ridge.} \end{table*} \begin{table} \caption[]{Abundance ratios of methylamine and acetonitrile.} \label{ratioscya} $$ \begin{tabular}{l l l} \hline \noalign{\smallskip} Source & CH$_{3}$NH$_{2}$/CH$_{3}$CN & CH$_{3}$CN/CH$_{3}$OH \\ \noalign{\smallskip} \hline \noalign{\smallskip} Model F & 5.5E+01 & 6.3E-04 \\ Model M & 3.8E+01 & 2.6E-03 \\ Model S & 1.5E+01 & 8.6E-03 \\ \hline \noalign{\smallskip} AFGL 2591 & <5.3 & <7.5E-02\\ G24.78 & <5.1E-01 & 2.1E-01\\ G31.41+0.31 & <3.6 & 5.9E-02*\\ G75.78 & <1.9E+01 & 1.6E-02\\ IRAS 18089-1732 & <8.9 & 1.1E-02*\\ IRAS 20216+4104 & <4.3E+01 & 5.2E-02\\ NGC 7538 IRS1 & <2.5 & 6.8E-02\\ W3(H$_{2}$O) & <7.2 & 7.0E-03\\ W 33A & <2.1 & 1.4E-01\\ \hline \noalign{\smallskip} Sgr B2$^{a}$ & 1.2 & 1.5E-02\\ Sgr B2(N)$^{b}$ & 3.0E-01 & 1.1E-01 \\ Sgr B2(M)$^{b}$ & 2.5E-01 & 6.7E-02 \\ Sgr B2(N)$^{c}$ & 5.9E+01 & 1.7E-02 \\ Orion Compact Ridge$^{d}$ & - & 1.1E-02 \\ \noalign{\smallskip} \hline \end{tabular} $$ \tablefoot{Abundance ratios for methylamine. The values for CH$_{3}$OH and CH$_{3}$CN were taken from \cite{bisschop2007} and \cite{isokoski2013}. \\ *Ratio derived from optically thin $^{13}$C isotope. 
\textbf{References.} $^{a}$ \citet{turner1991}, $^{b}$ \citet{belloche2013}, $^{c}$ \citet{neill2014} and $^{d}$ \citet{crockett2014}.} \end{table} \section{Discussion} \label{discus} Despite a significant number of successfully identified molecules (see Table \ref{fulltrans} in the Appendix for examples of W3(H$_{2}$O) and G31.41+0.31), only upper limits were found for methylamine in the various hot cores, limiting the conclusions that can be drawn. Nevertheless, trends are seen in our abundance ratios. The results suggest that theoretically predicted abundances for both methylamine and formamide are too high. In contrast, acetonitrile is found to be underproduced in the models. In the following, each of these species is discussed individually. \subsection{CH$_3$NH$_2$} \citet{garrod2008} suggest that methylamine is primarily formed by grain-surface chemistry using UV to create the CH$_3$ and NH$_2$ radicals from photodissociation of primarily CH$_4$ and NH$_3$. Perhaps the amount of UV processing is overestimated in these models. An alternative route is hydrogen atom addition to solid HCN, proposed by \citet{theule2011} and found to lead to both CH$_{2}$NH (methanimine) and CH$_3$NH$_2$. \citet{walsh2014} find in their models that methylamine is indeed efficiently formed on grains at 10 K by atom addition reactions to solid CH$_{2}$NH. \citet{burgdorf2010} have detected HCN ice on Triton, but so far no detection of solid HCN has been made in the ISM. Methanimine is actually readily observed in the gas-phase \citep{turner1991,nummelin1998,belloche2013}, so the presence of both species makes the H-atom addition scheme probable. However, \citet{halfen2013} detect CH$_2$NH in Sgr B2(N) at a rotational temperature of 44 K, which is distinctly colder than the 159 K observed for CH$_3$NH$_2$, suggesting that the two molecules may not co-exist. 
An alternative route would therefore be to form these molecules by two different gas-phase reaction pathways (CH$^{\bullet}$(g) + NH$_{3}$(g) $\rightarrow$ CH$_{2}$NH + H and CH$_{3}^{\bullet}$(g) + NH$_{3}$(g) $\rightarrow$ CH$_{3}$NH$_{2}$ + H), with CH being present primarily in the colder outer envelope and CH$_3$ in the warmer center. Further modeling is needed to determine whether these gas-phase reactions can reproduce the observed abundances quantitatively. \subsection{NH$_2$CHO} Formamide also appears to be overproduced in the hot core model. Since \citet{garrod2008} use both gas-phase, radical and atom addition reactions to form formamide, it is difficult to pin down where the discrepancies could come from. It is known that NH$_{2}$CHO is formed in CO:NH$_{3}$ mixtures after UV and electron irradiation \citep{grim1989, demyk1998, jones2011} and it has also been proposed that it can form from H- and N-atom addition to solid CO \citep{tielenscharnley1997}. Gas-phase formation from CO and NH$_3$ is viable as well \citep{hubbard1975}, although these experiments were conducted under high-pressure conditions, not the low pressures applicable in the ISM. Further quantification of both gas-phase and solid phase routes through laboratory experiments is needed. Recent laboratory experiments by \citet{fedoseev2014} do not find NH$_2$CHO production in H- and N-atom bombardment studies of solid CO, consistent with a large barrier for H- addition to HNCO found in ab initio calculations \citep{nguyen1996}, so perhaps the efficiency of this route has been overestimated in the models. An alternative solution would be that the high-mass sources studied here have not gone through a long (pre-stellar) phase in which the dust temperature was low enough for CO to be frozen out and turned into other molecules. 
\subsection{CH$_3$CN} The clear mismatches between theory and observations for the ratios involving CH$_3$CN point toward an underproduction of acetonitrile by more than an order of magnitude in the models. As with formamide, gas-phase, radical and atom addition reactions contribute to the formation of CH$_{3}$CN in the models, making it difficult to determine the cause. The main formation route in the models by radical addition of solid CH$_{3}^{\bullet}$ and solid CN$^{\bullet}$ has never experimentally been investigated. It would therefore be useful to determine if this is a viable solid state formation route and if it potentially has a higher efficiency than assumed. Alternatively it is possible that photodestruction of solid acetonitrile is not as efficient as assumed in the models. \citet{gratier2013} find high gas-phase CH$_{3}$CN abundances in the Horsehead PDR, indicative of a high photodesorption rate and slow destruction of CH$_{3}$CN in the ice. \citet{bernstein2004} indeed find slower photolysis of solid CH$_{3}$CN compared with other organic molecules. If such a slower photodissociation rate would also hold for gas-phase CH$_3$CN, it would be an attractive explanation why the CH$_3$CN rotational temperatures are generally higher than those of other complex molecules (e.g., \citet{bisschop2007} and many other hot core studies), since the molecule could then approach the protostar closer before being destroyed. However, the gas phase photoabsorption cross sections of CH$_3$CN are well determined and if the bulk of these absorptions lead to dissociation this would result in a photodissociation rate of gaseous CH$_3$CN at least as fast as that of CH$_3$OH \citep{vandishoeck2006a}. Another important parameter for all molecules studied here is the mobility of radicals and neutral molecules on the surface assumed in the gas-grain models. For many species no experimental data are available on diffusion barriers, only theoretically-inspired guesses. 
Observational evidence suggests that at least parts of the ice mantles are segregated in CO-rich and CO-poor layers \citep{tielens1991,pontoppidan2008}. Therefore, more knowledge of the structure of ice mantles and the mobility of radicals and neutral molecules as a function of surface temperature and in various chemical environments is necessary to determine if addition reactions are likely to happen and at which rates. \subsection{Prospects for ALMA} In the near future, much deeper searches for CH$_3$NH$_2$ can be carried out by ALMA (Appendix \ref{app_alma}). Figure \ref{almaspec1} shows that the strongest transitions within Band 6 are mainly located between 240 and 275 GHz and in Band 7 around 310, 340 and 355 GHz. In Table \ref{almameth} the strongest transitions in ALMA Bands 6 and 7 are listed. It becomes apparent that lines covered by Band 7 are more intense, but at the cost of a lower line density. Estimates made for the W 33A source with the CASSIS line analysis software and the ALMA Sensitivity Calculator show that ALMA should be able to reach the 3$\sigma$ detection limits for the CH$_3$NH$_2$ lines around 236 GHz in less than 1 hour of integration time, assuming a methylamine column density of 1.2$ \times 10^{14}$ cm$^{-2}$, as found by \citet{turner1991} in a large beam, which is two orders of magnitude lower than the values inferred here for a small source size. This estimate assumes a spectral resolution of 0.64 km\,s$^{-1}$ as used in our JCMT data, the number of ALMA antennas set to 34 (as in Cycle 2) and a synthesized beam of $1.1''$, appropriate for the W 33A hot core (100 K radius). \section{Conclusions} \label{con} We have analysed nine hot core regions in search of methylamine. The molecule has not been convincingly detected, so upper limit abundances are determined for all the sources. From these limits, ratios of methylamine to other molecules (NH$_2$CHO, CH$_3$OH, CH$_3$CN) have been determined and compared with theory and Sagittarius B2 surveys.
Our conclusions are as follows: \begin{enumerate} \item Trends in our results indicate that both methylamine and formamide are overproduced in the models of \citet{garrod2008}. Acetonitrile is underproduced with respect to these models. This is especially true for the slow models. \item Abundance ratios do not differ by more than an order of magnitude between the various sources, suggesting that the (nitrogen) chemistry is very similar between hot cores, as has been found previously for other species. \item More (laboratory) studies are needed to clarify the formation pathway of methylamine and to determine differences and similarities with formamide, methanimine and, to a lesser extent, acetonitrile formation. \item The upper limits determined for CH$_{3}$NH$_{2}$ here can guide future, more sensitive observations, especially with ALMA. Based on the ratios found in the Sgr B2 observations it is very likely that ALMA will reach the detection limit for methylamine in the sources studied here. Particularly strong transitions and spectral regions to target with ALMA are given. \end{enumerate} \begin{acknowledgements} We would like to thank C. Walsh, I. San Jose Garc\'{i}a, M. Drozdovskaya, N. van der Marel, M. Kama, M. Persson, J. Mottram, G. Fedoseev, and H. Linnartz for their support and input on this project. Thoughtful comments by the referee are much appreciated. Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA), by a Royal Netherlands Academy of Arts and Sciences (KNAW) professor prize, and by the European Union A-ERC grant 291141 CHEMPLAN. \end{acknowledgements} \bibliographystyle{aa}
\section{INTRODUCTION} Massive stars ($\rm M \gtrsim 8 M_{\odot}$) play a vital role in the evolution of the universe given their radiative, mechanical and chemical feedback. They dictate the energy budget of galaxies through powerful radiation, strong winds and supernova events. Despite this, most aspects of the processes involved in their formation are far less understood than in the low-mass regime. A universal theory elucidating the formation mechanism across the mass range, though much sought after, is still not well established. Tremendous efforts have been made over the past decade or so to investigate whether high-mass star formation can be understood as a `scaled-up' version of the processes involved in the low-mass domain via the {\it Core Accretion} hypothesis. This posits that high-mass stars form from pre-stellar cores that collapse into single or binary protostars, with enhanced accretion via a rotationally supported disk that also launches protostellar outflows. This model adequately circumvents the `radiation pressure problem', while leaving many questions open regarding the timescales of collapse and fragmentation in massive cores. Alternate theories, like {\it Competitive Accretion} and {\it Protostellar Merger}, have also been proposed as viable mechanisms. The debate on the preferred mechanism, and on the influence of the prevailing conditions on each, is not yet settled. On the observational front, probing the early stages of massive star formation, in particular, remains a challenge. The rarity of sources (owing to fast evolutionary time scales), large distances, complex and embedded environments, and high extinction are factors that hinder the building up of a proper observational database crucial for validating the proposed theories.
The current status of the theoretical and observational scenarios of high-mass star formation can be found in the excellent reviews by \citet{2014prpl.conf..149T} and \citet{2018NatAs...2..478M}, which also give an update on the literature in this field. \begin{figure*} \hspace*{-0.6cm} \centering \includegraphics[scale=0.47]{fig1.pdf} \caption{(a) Colour composite image of the region around {G12.42+0.50} using IRAC 3.6~{$\rm \mu m$} (blue), 4.5~{$\rm \mu m$} (green) and 8.0~{$\rm \mu m$} (red) bands. (b) A zoom-in showing the EGO {G12.42+0.50}. IRDCs are shown with the `$\times$' symbol and the position of IRAS~18079-1756 associated with {G12.42+0.50} is indicated with a diamond mark. The cross marks the position of the 2MASS point source, J18105109-1755496. The location of the $\rm H_2O$ maser is shown as a blue circle. (c) and (d) are colour composites created from the UKIDSS $J$ (1.25~{$\rm \mu m$}), $H$ (1.63~{$\rm \mu m$}) and $K$ (2.20~{$\rm \mu m$}) band data, covering the same area as in (a) and (b), respectively.} \label{irac_ukidss_rgb} \end{figure*} A step towards strengthening the observational domain was taken when the large-scale {\it Spitzer} Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) \citep{2003PASP..115..953B} revealed the presence of a significant population of objects displaying enhanced and extended emission in the IRAC 4.5~{$\rm \mu m$} band. Following the conventional colour coding of GLIMPSE colour-composite images, these objects were christened `green fuzzies' or `extended green objects' (EGOs) by \citet{2009ApJS..181..360C} and \citet{2008AJ....136.2391C}.
Post detection, several studies focussed on ascertaining the nature of these objects, and research towards the identification of the spectral carriers responsible for the enhanced 4.5~{$\rm \mu m$} emission was initiated \citep{{2004ApJS..154..333M},{2004ApJS..154..352N},{2005ApJ...630L.181R},{2005MNRAS.357.1370S}, {2006AJ....131.1479R},{2007MNRAS.374...29D},{2009ApJS..181..360C},{2009ApJ...702.1615C},{2010AJ....140..196D},{2011ApJ...743...56C},{2011ApJ...729..124C}, {2012ApJ...748....8T},{2012ApJS..200....2L},{2013ApJS..206...22C}}. Some of the above studies have associated EGOs with shock-excited $\rm H_2$ line and/or CO bandhead emission in protostellar outflows. In addition, observations show that the majority of EGOs are co-located with infrared dark clouds (IRDCs) and with Class II methanol masers, which are distinct signposts of massive star formation \citep{{2006ApJ...641..389R}, {2007ApJ...662.1082R},{2005A&A...434..613S},{2006ApJ...638..241E}}. Studies to date support a picture wherein EGOs can be regarded as candidates for outflows from massive YSOs (MYSOs) and hence, they offer a unique sample of sources to investigate the early phases of massive star formation. \par In this paper, we focus on the EGO, {G12.42+0.50}, catalogued as a ``possible'' outflow candidate and associated with the luminous infrared source, IRAS~18079-1756 \citep{2008AJ....136.2391C}. The kinematic distance ambiguity towards {G12.42+0.50} has been resolved by \citet{2012ApJS..202....1H}. Following them and \citet{2010ApJ...710..150C}, we adopt a distance of 2.4~kpc in our study. \citet{2013ApJ...765..129V} estimate the far-infrared (FIR) luminosity, from the IRAS fluxes, to be $\rm \sim 10^{4} L_{\odot}$. In the literature, {G12.42+0.50} is designated as an ultracompact (UC) {\mbox{H\,{\sc ii}}} region \citep{{1984ApJ...281..225J},{2007ApJ...669L..37W}}.
{G12.42+0.50} has been observed as part of several surveys such as the Millimeter Astronomy Legacy Team 90~GHz (MALT90) survey \citep{{2013PASA...30...57J},{2011ApJS..197...25F}} and the 6~cm Red MSX Source survey by \citet{2009A&A...501..539U}. The latter was aimed at identifying candidate MYSOs. Apart from this, a millimeter study of southern IRAS sources by \citet{1997ApJS..110...71O} reports IRAS~18079-1756 as an outflow candidate from the red- and blueshifted molecular outflow features observed in the $\rm CO~(2-1)$ transition and a redshifted line dip in the $\rm CS~(2-1)$ transition. $\rm H_2O$ maser emission is detected towards {G12.42+0.50} \citep{{1981ApJ...250..621J},{2013ApJ...764...61C}}. \citet{2011ApJS..196....9C}, in their study, have identified a 95~GHz Class I methanol maser towards {G12.42+0.50}. In addition, a few molecular line surveys also include {G12.42+0.50} \citep{{2003ApJS..149..375S},{2013ApJ...764...61C}}. \par In {Fig.~}\ref{irac_ukidss_rgb}, we present the near-infrared (NIR) and the mid-infrared (MIR) colour-composite images of the field of {G12.42+0.50} developed from the UKIDSS (Section \ref{ukidss}) and {\it Spitzer}-IRAC (Section \ref{mir}) data, respectively. The images not only reveal the characteristic, extended and enhanced 4.5~{$\rm \mu m$} emission defining the EGOs, but also show extended $K$-band nebulosity associated with {G12.42+0.50}. The $K$-band emission is largely confined to a narrow north-east--south-west stretch with a distinct dark lane in-between. A network of filamentary structures is seen towards the south-west and west, being more prominent in the NIR colour-composite image. These filaments seem to converge towards {G12.42+0.50}, suggesting a `hub-filament' scenario. Such systems have been detected in other star-forming complexes and discussed in various studies \citep{{2013A&A...555A.112P},{2018ApJ...852...12Y}}.
Two IRDCs, SDC 12.427+0.502 and SDC 12.408+0.512 from the catalogue of {\it Spitzer} dark clouds by \citet{2009A&A...505..405P}, are seen to lie on either side of {G12.42+0.50} and are marked on the images in {Fig.~} \ref{irac_ukidss_rgb}. \par In presenting the multiwavelength study towards {G12.42+0.50}, we have organized the paper in the following way. Section \ref{obs} outlines the observations and data reduction details, along with the archival databases used for this study. Section \ref{results} deals with the various results obtained. In Section \ref{discussion} we discuss the results, where we explore different scenarios to explain the nature of the radio continuum emission and elaborate on the gas kinematics. The summary of this comprehensive study is compiled in Section \ref{summary}. \section{OBSERVATIONS AND ARCHIVAL DATA} {\label{obs}} \subsection{Radio continuum observation} The ionized emission associated with {G12.42+0.50} is probed at the low radio frequencies of 610 and 1390~MHz using the Giant Metrewave Radio Telescope (GMRT) located at Pune, India. GMRT consists of an array of 30 antennas, each of diameter 45~m, arranged in a Y-shaped configuration. The central square consists of 12 antennas spread randomly over an area of 1~km$^{2}$ with the shortest baseline being $\sim$100~m. The remaining 18 antennas are uniformly stretched along the three arms ($\sim$14~km each) providing the longest baseline of $\sim$25~km. This hybrid configuration enables radio mapping of small-scale structures at high resolution, along with large-scale, diffuse emission at low resolution. \par Our GMRT observations were carried out on 2017 August 22 and 2017 July 21 at 1390 and 610~MHz, respectively, with a bandwidth of 32~MHz over 256 channels. We selected the radio sources 3C286 and 3C48 as primary flux calibrators.
The phase calibrators, 1911-201 (at 1390~MHz) and 1822-096 (at 610~MHz), were observed after each 40-min scan of the target to calibrate the phase and amplitude variations over the full observing run. The details of the observations are given in {Table~}\ref{radio_obs}. Data reduction is performed using the NRAO Astronomical Image Processing Software (AIPS). \begin{table} \caption{Details of GMRT observations towards {G12.42+0.50}} \begin{center} \centering \begin{tabular}{l l l} \hline \hline \ Details & 610~MHz & 1390~MHz \\ \hline \ Date of Obs. & 21 July 2017 & 22 August 2017 \\ Flux calibrators & 3C286 & 3C286, 3C48 \\ Phase calibrators & 1822-096 &1911-201 \\ Integration time & $\sim 5$~hrs & $\sim 5$~hrs \\ Synthesized beam & $\sim7.6'' \times 4.8''$ & $\sim 3.0'' \times 2.4''$ \\ {\it rms} noise ($\mu $Jy/beam) & 94 & 29.7 \\ \hline \ \end{tabular} \label{radio_obs} \end{center} \end{table} The data sets are carefully examined to identify bad data (non-working antennas, bad baselines, RFI, etc.) using the tasks {\tt UVPLT} and {\tt TVFLG}. Subsequent flagging of the bad data is performed using the tasks {\tt UVFLG}, {\tt TVFLG} and {\tt SPFLG}. After flagging, the gain and bandpass calibrations are carried out following standard procedure. Channel averaging was restricted to keep the bandwidth smearing negligible. The calibrated and channel-averaged data are cleaned and deconvolved using the task {\tt IMAGR} by adopting the wide-field imaging procedure (``3D'' imaging) to account for the w-term effect. Primary beam correction is done using the task {\tt PBCOR}. \par Galactic diffuse emission contributes to the system temperature, which becomes relevant at low frequencies (especially at 610~MHz). Since our target source is close to the Galactic plane and the flux calibrators are located away from this plane, rescaling of the final images becomes essential.
The scaling factor is estimated under the assumption that the Galactic diffuse emission follows a power-law spectrum. The sky temperature, ${T_{sky}}$, at frequency $\nu$ was determined using the equation \begin{equation} T_{sky} = T_{sky}^{408}\bigg(\frac{\nu}{408~\textrm{MHz}}\bigg)^\gamma \end{equation} \noindent where $\gamma$ is the spectral index of the Galactic diffuse emission, taken to be $-2.55$ \citep{1999A&AS..137....7R}, and $\it{T_{sky}^{\rm 408}}$ is the sky temperature at 408~MHz obtained from the all-sky 408~MHz survey of \citet{1982A&AS...47....1H}. We estimate the scaling factors to be 1.25 and 2.46 at 1390 and 610~MHz, respectively. These values are used to rescale our images. \subsection{Infrared observations} \subsubsection{Spectroscopy} \label{spectroscopy_steps} NIR spectroscopic observations towards {G12.42+0.50} were carried out with the 3.8-m United Kingdom Infrared Telescope (UKIRT), Hawaii. Observations were taken with the UKIRT 1-5~{$\rm \mu m$} Imager Spectrometer (UIST, \citealt{2004SPIE.5492.1160R}). UIST consists of a 1024$\times$1024 InSb array. In spectroscopy mode, the camera with a plate scale of 0.12{$\arcsec$}~pixel$^{-1}$ was used. The observations were made using the 4-pixel-wide ($\sim$0.48$\arcsec$) and 120{$\arcsec$}-long slit. Spectra were obtained in two grism set-ups, namely $HK$ and $KL$, that cover the spectral ranges of $1.395-2.506$~{$\rm \mu m$} and $2.229-2.987$~{$\rm \mu m$}, with spectral resolutions of 500 and 700, respectively. Flat-field and Argon arc lamp observations were made ahead of the target observations on each night. The slit was oriented at an angle of 55$^{\circ}$ east of north centred on {G12.42+0.50} ($\rm \alpha_{J2000}= 18^{h}10^{m}51.1^s, \delta_{J2000} = -17\degree 55\arcmin 50\arcsec$) so as to also sample the outflow-like, extended feature towards the south-west of the central bright source.
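The power-law sky-temperature extrapolation used above to rescale the radio maps can be evaluated in a few lines of Python. This is a minimal sketch: the 408 MHz sky temperature used below is a placeholder, not the value actually read from the \citet{1982A&AS...47....1H} map, and the function name is ours.

```python
def t_sky(nu_mhz, t_sky_408, gamma=-2.55):
    """Galactic sky temperature at nu_mhz, extrapolated from the
    408 MHz all-sky map assuming a power law T ~ nu^gamma."""
    return t_sky_408 * (nu_mhz / 408.0) ** gamma

# Hypothetical 408 MHz sky temperature towards the target (K)
t408 = 50.0
for nu in (610.0, 1390.0):
    print(f"T_sky({nu:.0f} MHz) = {t_sky(nu, t408):.2f} K")
```

The rescaling factor for each image then follows from comparing this extrapolated sky temperature with the system-temperature contribution at the calibrator field.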
The telluric standard, SAO 160915, an A0V-type star, was observed for telluric and instrumental corrections. Since the target has an extended morphology, nodding along the slit would result in overlapping features. Thus, the science target observation was performed by nodding between the target and a blank sky position. However, the standard star was nodded in an ABBA pattern between two positions, A and B, along the slit \citep{{2009MNRAS.397..849P},{2009MNRAS.399.2165R}}. Details of the observations are given in {Table~}\ref{UKIRT_obs_spec}. \begin{table} \caption{Details of UKIRT-UIST spectroscopic observations towards {G12.42+0.50}} \begin{center} \centering \begin{tabular}{c c c c c} \hline \hline \ Date~&Grism~& Exposure & Integration &Standard star \\ (yyyymmdd) & &time (s) &time (s) \\ \hline \ \ 20150402 &HK &120 &720 & SAO 160915 \\ ~~20150405 &KL &50 &300 & SAO 160915 \\ \hline \ \end{tabular} \label{UKIRT_obs_spec} \end{center} \end{table} \par The initial data reduction is carried out by the {\tt ORAC-DR} pipeline at UKIRT. Subsequent reductions are carried out using suitable tasks from the Starlink packages, {\tt FIGARO} and {\tt KAPPA} \citep{2008ASPC..394..650C}. The spectra from the two nodded beams of the reduced spectral image of the standard star are extracted and averaged after bad-pixel masking and flat-fielding. The averaged spectrum is then wavelength calibrated using the observed Argon arc spectrum. Photospheric lines are removed from the standard star spectrum after a careful interpolation of the continuum across these lines. The standard star spectrum is then divided by a blackbody spectrum of a temperature similar to the photospheric temperature of the standard star. Since our target involves extended emission, sky subtraction is done by subtracting the off-slit sky from the on-target spectral image, followed by the FIGARO task {\tt POLYSKY} that subtracts the residual sky.
Subsequent to this, correction for telluric lines is achieved by dividing the bad-pixel masked, flat-fielded, sky-subtracted and wavelength calibrated target spectral image by the standard star spectral image. As the sky conditions were not photometric, we have not performed flux calibration. \subsubsection{Imaging} \label{cont_sub} We imaged {G12.42+0.50} in the broad-band {\it H} filter and the narrow-band filter centred on the {\mbox{[Fe\,{\sc ii}]}} line at 1.644~{$\rm \mu m$} using the UKIRT Wide-Field Camera (WFCAM, \citealt{2007A&A...467..777C}). The WFCAM consists of four 2048 $\times$ 2048 HgCdTe Rockwell Hawaii-II arrays, each with a field of view of 13.65$\arcmin$ $ \times $ 13.65$\arcmin $ and a pixel scale of 0.4{$\arcsec$} pixel$^{-1}$. Details of the imaging observations are included in Table \ref{UKIRT_obs_img}. The data reduction was carried out by the Cambridge Astronomical Survey Unit (CASU). \begin{table} \caption{Details of UKIRT-WFCAM imaging observations made towards {G12.42+0.50}} \begin{center} \centering \begin{tabular}{c c c c } \hline \hline \ Date~& Filter~& Exposure & Integration \\ (yyyymmdd) & &time (s) &time (s) \\ \hline \ \ 20170705 &{\it H} & 5 & 180 \\ ~~20170705 &{\mbox{[Fe\,{\sc ii}]}} & 40 & 1440 \\ \hline \ \end{tabular} \label{UKIRT_obs_img} \end{center} \end{table} \par Continuum subtraction of the narrow-band {\mbox{[Fe\,{\sc ii}]}} image is performed following the steps described in \citet{2005MNRAS.359....2V}, employing multiple Starlink packages. The sky background was fitted and removed from the images using the KAPPA tasks {\tt SURFIT} and {\tt SUB}. The {\mbox{[Fe\,{\sc ii}]}} and {\it H}-band images are aligned using the task {\tt WCSALIGN}. Since the seeing conditions were different for the {\mbox{[Fe\,{\sc ii}]}} and {\it H}-band observations, the image with the smaller point spread function (PSF) was smoothed to the full width at half maximum (FWHM) of the image with the larger PSF.
For scaling the broad-band image, sky-subtracted flux counts of discrete point sources in both the narrow-band and broad-band images were measured. The average value of the ratio of the fluxes ($H/${\mbox{[Fe\,{\sc ii}]}}) was computed and used to scale the {\it H}-band image. Subsequently, the scaled {\it H}-band image was subtracted from the {\mbox{[Fe\,{\sc ii}]}} image to construct the continuum-subtracted image. \subsection{NIR data from UWISH2 and UKIDSS survey} \label{ukidss} The UKIRT Widefield Infrared Survey for {H$_2$} (UWISH2) is a 180 square degree survey of the Galactic Plane to probe the 1-0 S(1) ro-vibrational line of {H$_2$} ($\lambda$ = 2.122~{$\rm \mu m$}) \citep{2011MNRAS.413..480F}. This survey used the WFCAM at UKIRT. CASU-processed data of the region associated with {G12.42+0.50} were retrieved. We have also used the {\it K}-band image obtained as a part of the UKIRT Infrared Deep Sky Survey Galactic Plane Survey (UKIDSS-GPS, \citealt{2008MNRAS.391..136L}) from the WFCAM Science Archive. Continuum subtraction of the {H$_2$} image is carried out following the same procedures detailed in Section \ref{cont_sub}. \subsection{MIR data from the Spitzer Space Telescope and the Midcourse Space Experiment} \label{mir} In order to probe the emission at the MIR bands, we made use of the images of the region around {G12.42+0.50} from the archives of the {\it Spitzer Space Telescope} and the images from the Midcourse Space Experiment (MSX) survey \citep{2001AJ....121.2819P}. The Infrared Array Camera (IRAC) is one of the instruments on the {\it Spitzer Space Telescope}, with simultaneous broadband imaging capability at 3.6, 4.5, 5.8 and 8.0~{$\rm \mu m$} and angular resolutions of $\sim$ 2.0{$\arcsec$} \citep{2004ApJS..154...10F}. The MSX survey mapped the Galactic plane in four mid-infrared spectral bands, 8.28, 12.13, 14.65, and 21.3~{$\rm \mu m$}, at a spatial resolution of $ \sim $ 18.3{$\arcsec$}.
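The scale-and-subtract continuum removal described above for the narrow-band {\mbox{[Fe\,{\sc ii}]}} and {H$_2$} images can be sketched with numpy. This is an illustrative toy with made-up numbers, not the Starlink implementation; the function name and star-flux lists are ours, and the images are assumed already aligned, sky-subtracted and PSF-matched.

```python
import numpy as np

def subtract_continuum(narrow, broad, star_flux_narrow, star_flux_broad):
    """Scale the broad-band image by the mean stellar flux ratio
    (broad/narrow) measured on discrete point sources, then subtract
    the scaled broad-band image from the narrow-band image."""
    ratio = np.mean(np.asarray(star_flux_broad) / np.asarray(star_flux_narrow))
    return narrow - broad / ratio

# Toy example: a pure-continuum field should subtract to ~zero
narrow = np.full((4, 4), 1.0)   # counts in the narrow-band filter
broad = np.full((4, 4), 5.0)    # counts in the broad-band filter
line_free = subtract_continuum(narrow, broad, [2.0, 1.0], [10.0, 5.0])
```

Any residual in `line_free` would then trace line emission rather than stellar continuum.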
In order to investigate the physical properties of the dust core associated with {G12.42+0.50}, we use the 12.13 and 14.65~{$\rm \mu m$} MSX band images, and the level-2 PBCD 8.0~{$\rm \mu m$} image of the GLIMPSE survey. \subsection{FIR data from Hi-GAL survey} FIR data used to study the nature of the cold dust emission were retrieved from the archives of the {\it Herschel Space Observatory}. This is a 3.5-m telescope that covers the spectral regime of $55-671$~{$\rm \mu m$} \citep{2010A&A...518L...1P}. We use the level-2 processed images from the Photodetector Array Camera and Spectrometer (PACS, \citealt{2010A&A...518L...2P}) and the Spectral and Photometric Imaging Receiver (SPIRE, \citealt{2010A&A...518L...3G}) observed as a part of the Herschel infrared Galactic plane survey (Hi-GAL, \citealt{2010PASP..122..314M}). The Hi-GAL observations were carried out in `parallel mode' covering 70 and 160~{$\rm \mu m$} (PACS) as well as 250, 350 and 500~{$\rm \mu m$} (SPIRE). The images have resolutions of 5, 13, 18.1, 24.9 and 36.4{$\arcsec$} at 70, 160, 250, 350, and 500~{$\rm \mu m$}, respectively. \subsection{APEX+Planck data} The APEX+Planck image is a combination of the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) \citep{2009A&A...504..415S} at 870~{$\rm \mu m$}, which used the LABOCA bolometer array, and the 850~{$\rm \mu m$} map from the Planck/HFI instrument. The combined data cover emission at larger angular scales, thus revealing the structure of the cold Galactic dust in more detail \citep{2016A&A...585A.104C}. The combined image has an angular resolution of 21{$\arcsec$}. \subsection{SMA observation} {G12.42+0.50} was observed using the Submillimeter Array (SMA) on 2008 July 1 and 8 in its extended configuration. The phase reference center was $\rm \alpha_{J2000}= 18^{h}10^{m}51.8^s, \delta_{J2000} = -17\degree 55\arcmin 56\arcsec$. In both observations, QSO 1924-292 was observed for gain correction and Callisto was used for flux-density calibration.
The absolute flux level is accurate to about 15\%. The bandpass was corrected by observing QSO 3C454.3. The 345~GHz receivers were tuned to 267~GHz for the lower sideband and 277~GHz for the upper sideband. The frequency spacing across the spectral band is 0.812~MHz or $\rm \sim0.9~km~s^{-1}$. The 1.1~mm continuum data were acquired by averaging all the line-free channels over both the upper and lower spectral bands in the two datasets. The visibility data are calibrated with the IDL superset MIR package and imaged with the MIRIAD\footnote{\url{http://admit.astro.umd.edu/miriad/}} package. The MIRIAD task {\tt SELFCAL} is employed to perform self-calibration on the continuum data. The synthesized beam size and {\it rms} noise of the continuum emission from combining both compact and extended configuration data are $\rm \sim 1.5'' \times 1.0''$ and $\rm \sim 3~mJy/beam$, respectively. The lines are not imaged due to low signal-to-noise levels. \begin{figure*} \centering \includegraphics[scale=0.17]{fig2a.pdf} \quad\includegraphics[scale=0.17]{fig2b.pdf} \caption{(a) The grey scale shows the high-resolution radio continuum map of {G12.42+0.50} at 1390~MHz with the contour levels 3, 6, 9, 18, 63, 150 and 172 times $\sigma$ ($\sigma \sim 29.7~\mu$Jy/beam). The beam size is $\sim 3.0'' \times 2.4''$. Positions of R1 and R2 are also labelled. The contours of the 6~cm radio map are overlaid in cyan with the contour levels 3, 4, 6, 12, 21 and 24 $\sigma$ ($\sigma \sim 0.15$ mJy/beam); the beam size is $\sim 2.2'' \times 1.1''$. (b) The radio continuum map of {G12.42+0.50} at 610~MHz with contour levels 3, 6, 18, 38 and 60 times $\sigma$ ($\sigma \sim 94~\mu$Jy/beam). The beam size is $\sim7.6'' \times 4.8''$. The positions of the two radio peaks detected in the 1390~MHz map are indicated by `x'.
The restoring beams in the 1390 and 610~MHz bands are represented as open ellipses towards the bottom-left of each image, and that of the 6~cm map is represented as an open cyan ellipse towards the bottom-right in (a). } \label{radio} \end{figure*} \subsection{Molecular line data from MALT90 survey} To understand the gas kinematics in our region of interest, molecular line data were obtained from the MALT90 survey \citep{{2013PASA...30...57J},{2011ApJS..197...25F}}. The survey, carried out using the ATNF Mopra 22-m telescope, has simultaneously mapped the transitions of 16 molecules near 90~GHz with a spectral resolution of $\rm 0.11~km~s^{-1}$. The Mopra Telescope is a 22-m single-dish radio telescope operated by the Commonwealth Scientific and Industrial Research Organisation's Astronomy and Space Science division. The data reduction was performed using CLASS90 (Continuum and Line Analysis Single-dish Software), part of the GILDAS\footnote{\url{http://www.iram.fr/IRAMFR/GILDAS}} (Grenoble Image and Line Data Analysis Software) suite. \subsection{JCMT archival data} The molecular line data for the $J=3-2$ transition of $\rm ^{12}CO$, $\rm ^{13}CO$ and $\rm C^{18}O$ were downloaded from the archives of the Heterodyne Array Receiver Program (HARP) mounted on the {\it James Clerk Maxwell Telescope} (JCMT) operated by the East Asian Observatory. JCMT is a 15~m telescope and is the largest single-dish astronomical telescope operating in the submillimetre wavelength region of the spectrum. HARP is a single-sideband array receiver that can be tuned between 325 and 375~GHz and has an instantaneous bandwidth of $\sim$2~GHz and an intermediate frequency of 5~GHz. It comprises 16 detectors laid out on a 4$\times$4 grid, with an on-sky projected beam separation of 30{$\arcsec$}. At 345~GHz the beam size is 14{$\arcsec$} \citep{2009MNRAS.399.1026B}.
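The velocity resolutions quoted for these spectral setups (for instance, the SMA channel spacing of 0.812 MHz at 267 GHz quoted earlier as roughly 0.9 km/s) follow from the radio approximation dv = c * dnu / nu. A minimal sketch, with a function name of our own choosing:

```python
C_KMS = 299792.458  # speed of light in km/s

def channel_velocity_width(dnu_mhz, nu_ghz):
    """Velocity width of one spectral channel of spacing dnu (MHz)
    at observing frequency nu (GHz), using dv = c * dnu / nu."""
    return C_KMS * (dnu_mhz * 1e-3) / nu_ghz  # MHz -> GHz conversion

# SMA lower sideband: 0.812 MHz channels at 267 GHz -> ~0.9 km/s
dv = channel_velocity_width(0.812, 267.0)
```

The same relation gives the velocity coverage of a full band by substituting the total bandwidth for the channel spacing.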
\subsection{TRAO observation} The molecular line data for the $J=1-0$ transition of $\rm ^{13}CO$ were obtained from the Taeduk Radio Astronomy Observatory (TRAO). TRAO is a 14-m radio telescope with a single-horn receiver system operating in the frequency range of 86 to 115~GHz, located on the campus of the Korea Astronomy and Space Science Institute (KASI) in Daejeon, South Korea. The main FWHM beam sizes for the $\rm ^{12}CO~(1-0)$ and $\rm ^{13}CO~(1-0)$ lines are 45{$\arcsec$} and 47{$\arcsec$}, respectively. The system temperature ranges from 150~K for $\rm 86-110$~GHz to 450~K at 115~GHz ($\rm ^{12}CO$) \citep{2018ApJS..234...28L}. \section{RESULTS} \label{results} \subsection{Emission from Ionized Gas} \label{radio_text} \begin{figure} \centering \includegraphics[scale=0.20]{fig3a.pdf} \quad\includegraphics[scale=0.205]{fig3b.pdf} \caption{(a) Spectral index map of {G12.42+0.50} between 1390 and 610~MHz. Black curves represent the spectral index levels. The blue contour shows the 5$\sigma$ ($\sigma \sim 0.4 \times 10^{-4}$ Jy/beam) level of the 610~MHz map used to construct the spectral index map. The red `x's mark the positions of the radio components, R1 and R2. The dashed purple line indicates the possible direction of the ionized jet. The spectral index varies from 0.3 to 0.7 along the possible jet axis. The error map is shown in (b).
The errors involved are $\lesssim$ 0.15, barring a few pixels at the edges.} \label{specind} \end{figure} \begin{table*} \centering \small \caption{Peak coordinates, peak and integrated flux densities, and deconvolved sizes of the components R1 and R2 associated with {G12.42+0.50}.} \begin{center} \hspace*{-0.8cm} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \ & \multicolumn{2}{c}{~~~~~~~~~~Peak Coordinates~~~~} & \multicolumn{3}{c}{~~~Deconvolved size ($\arcsec$$\times$$\arcsec$)}~~~ &\multicolumn {3}{c}{ ~~~~~~~~~~~Peak flux (mJy/beam)~~~~~~~~~~~} & \multicolumn{3}{c}{~~~~~~~~~~~~Integrated flux (mJy)~~~~~~~~~~~} \\ \hline \ Component & RA (J2000) & Dec (J2000) &610 MHz & 1390 MHz & 5 GHz $^{\ast}$ & 610 MHz & 1390 MHz & 5 GHz $^{\ast}$ & 610 MHz & 1390 MHz & 5 GHz $^{\ast}$ \\ \hline \ R1 &18 10 51.10 &-17 55 49.30 & 2.6 $\times$ 0.6 & 1.9 $\times$ 1.7 & 1.4 $\times$ 1.2 & 4.4 & 5.3 & 3.5 & 4.7 & 7.9 & 6.2 \\ R2 &18 10 50.76 & -17 55 52.80 & - & 1.5 $\times$ 1.2 $^{\dagger}$ & 1.1 $\times$ 0.6 $^{\dagger}$ &- & 0.6 & 1.1 &- & 0.7 & 1.1 \\ \hline \end{tabular} $^{\dagger}$ Upper limits, taken to be half the FWHM of the restoring beam.\\ $^{\ast}$ Values for R1 are from \citet{2009A&A...501..539U} and those for R2 are estimated from the available map. \label{peak_position} \end{center} \end{table*} The radio continuum maps at 1390 and 610~MHz, probing the ionized gas emission associated with {G12.42+0.50}, are shown in {Fig.~} \ref{radio}. The 1390~MHz map reveals the presence of a linear structure in the north-east--south-west direction comprising extended emission with two distinct, compact components, labelled R1 and R2 in the figure. The component R1 is well resolved, whereas R2 is barely resolved. In this figure, we also plot the contours of the high-resolution 6~cm (5~GHz) map obtained with the VLA by \citet{2009A&A...501..539U} as part of the RMS survey towards candidate massive YSOs. Both components are also visible in the 6~cm map.
In comparison, the lower-resolution 610~MHz map shows a single, almost spherical emission region with the peak position coinciding with R1. However, a discernible elongation is evident towards R2. In addition, the 1390~MHz map shows a narrow extension in the north-west and south-east direction. Given that the maps (especially at 1390~MHz) have low-level stripes along this direction, it is difficult to assess whether this feature is genuine. Table \ref{peak_position} compiles the coordinates and the peak and integrated flux densities of R1 and R2. The deconvolved sizes and integrated flux densities are estimated by fitting 2D Gaussians using the task {\tt IMFIT} from the Common Astronomy Software Application (CASA)\footnote{\url{https://casa.nrao.edu}} \citep{2007ASPC..376..127M}. At 610~MHz, the components are not resolved, so the values obtained are assigned to R1 and hence should be treated as upper limits. For the component R1, the 5~GHz values are quoted from \citet{2009A&A...501..539U}. As for the component R2, which is barely resolved in both the 1390~MHz and 5~GHz maps, we have set an upper limit to its size at both frequencies. This is taken to be the FWHM of the respective restoring beams \citep{2009A&A...501..539U}. Further, at 5~GHz, we take the peak flux density to be the same as the integrated flux density. In order to get an in-depth understanding of the nature of the observed radio emission, we generate a spectral index map using our 1390 and 610~MHz maps. The spectral index, $\alpha$, is defined as $S_\nu \propto \nu^\alpha$, where $S_\nu$ is the flux density at frequency $\nu$. GMRT is not a scaled array; hence, each frequency is sensitive to different spatial scales. To circumvent this, we generate new maps in the {\it uv} range ($0.7-47$~k$\lambda$) common to both frequencies. Keeping in mind the requirement of the same pixel size and resolution, pixel- and beam-matching is taken into account while generating the new maps.
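The per-pixel operation behind such a two-frequency spectral-index map is alpha = ln(S1/S2)/ln(nu1/nu2), with low-significance pixels blanked and the error following from standard propagation of the per-map noise. The sketch below is a minimal numpy illustration of that arithmetic, not the AIPS implementation; the function name and 5-sigma threshold choice are ours.

```python
import numpy as np

def spectral_index(s1, s2, nu1, nu2, sigma1, sigma2, nsig=5.0):
    """Per-pixel spectral index alpha (S_nu ~ nu^alpha) from two
    matched flux-density maps s1 (at nu1) and s2 (at nu2).
    Pixels below nsig*sigma in either map are blanked with NaN."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    mask = (s1 > nsig * sigma1) & (s2 > nsig * sigma2)
    log_ratio = abs(np.log(nu1 / nu2))
    alpha = np.full(s1.shape, np.nan)
    alpha[mask] = np.log(s1[mask] / s2[mask]) / np.log(nu1 / nu2)
    # error propagation, assuming independent Gaussian noise per map
    err = np.full(s1.shape, np.nan)
    err[mask] = np.sqrt((sigma1 / s1[mask]) ** 2
                        + (sigma2 / s2[mask]) ** 2) / log_ratio
    return alpha, err

# Synthetic check: pixels drawn from S ~ nu^0.6 should recover alpha = 0.6
nu1, nu2 = 1390.0, 610.0  # MHz
s2 = np.array([[10.0, 20.0]])
s1 = s2 * (nu1 / nu2) ** 0.6
alpha, err = spectral_index(s1, s2, nu1, nu2, sigma1=0.03, sigma2=0.09)
```

The modest frequency lever arm (a factor of ~2.3) is why per-map noise of a few percent already translates into spectral-index errors of order 0.1.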
The spectral index map is then constructed using the task {\tt COMB} in AIPS. Further, to ensure reliable estimates of the spectral index, we retain only those pixels with flux density greater than 5$\sigma$ ($\sigma$ being the {\it rms} noise of the map) in both maps. The generated spectral index map and the corresponding error map, which have the same resolution as the 610~MHz map ($\sim7.6'' \times 4.8''$), are presented in {Fig.~}\ref{specind}. As seen from the figure, the spectral index values vary between 0.3 and 0.9, with the estimated errors being less than $\sim$ 0.15, barring a few pixels at the edges. These values indicate that the region is dominated by thermal bremsstrahlung emission of varying optical depth \citep{{1993RMxAA..25...23R},{1993ApJ...415..191C},{1999ApJ...527..154K},{2016ApJS..227...25R}}. Moreover, spectral index values in the range of $0.4-0.9$ are also typically seen in regions associated with thermal jets \citep[e.g.][]{{1975A&A....39....1P},{1986ApJ...304..713R},{2016MNRAS.460.1039P},{2016A&A...596L...2S}}. We will revisit these results in a later section, where we explore various scenarios to adequately explain the nature of the radio emission. \subsection{Emission from shock indicators} {\label{NIR_spectroscopy}} As discussed in the introduction, there is growing evidence in the literature associating EGOs with MYSOs, notwithstanding the ongoing debate regarding their exact nature. Several mechanisms, such as shocked emission in outflows, fluorescent emission, or scattered continuum from MYSOs \citep{{2004ApJS..154..352N},{2010AJ....140..196D},{2012ApJ...748....8T}}, are invoked to identify the spectral carriers of the enhanced 4.5~{$\rm \mu m$} emission. The picture of shocked emission from outflows suggests the spectral carriers to be molecular and atomic shock indicators like {H$_2$} and {\mbox{[Fe\,{\sc ii}]}}, as well as the broad CO bandhead.
All of these have distinct features within the 4.5~{$\rm \mu m$} IRAC band. However, \citet{2012MNRAS.419..211S}, while investigating the population of MYSOs in the G333.2-0.4 region, opine that the excess 4.5~{$\rm \mu m$} emission could not be attributed to the {H$_2$} lines, as these would be too faint to be detected at this wavelength. Instead, they support a scattered continuum or CO bandhead origin. From the $L$- and $M$-band spectra of two EGOs, \citet{2010AJ....140..196D} show the {H$_2$} line hypothesis to be consistent with one of them (G19.88-0.53), while in the other target (G49.27-0.34), the spectrum shows only continuum emission. So far, spectroscopic studies of EGOs at 4.5~{$\rm \mu m$} and in the NIR are few \citep{{2010AJ....140..196D},{2015A&A...573A..82C},{2016ApJ...829..106O}}, thus keeping the debate on their nature open. In the NIR domain, a few studies have focussed on narrow-band imaging \citep{{2012ApJS..200....2L},{2013ApJS..208...23L}}. Based on the UWISH2 survey images, \citet{{2012ApJS..200....2L},{2013ApJS..208...23L}} present a complete {H$_2$} line emission census of EGOs in the Northern Galactic Plane. \subsubsection{Narrow-band Imaging}\label{NIR_imaging} \begin{figure} \centering \includegraphics[scale=0.16]{fig4.pdf} \caption{(a) Continuum-subtracted {H$_2$} image made using the UWISH2 survey data. (b) Continuum-subtracted {\mbox{[Fe\,{\sc ii}]}} image made using UKIRT-WFCAM observations towards {G12.42+0.50}. The positions of the identified radio components R1 and R2 are indicated. The blue contours represent the 4.5~{$\rm \mu m$} emission with the levels 3, 60, 120, and 220$\sigma$ ($\sigma \sim 1.5$~MJy/sr). The red rectangles show the orientation of the slit and denote the apertures used for spectra extraction (see text in Section \ref{NIR_spectra}).} \label{NIR_narrowband} \end{figure} \par {H$_2$} line emission towards {G12.42+0.50} has been investigated in \citet{2012ApJS..200....2L}.
They ascribe the extended emission seen in the continuum-subtracted image to residuals of the continuum subtraction rather than real {H$_2$} line emission. In order to carefully scrutinize the NIR picture of {G12.42+0.50}, we revisit the {H$_2$} line emission using images retrieved from the UWISH2 survey. In addition, we also probe the {\mbox{[Fe\,{\sc ii}]}} line image, which is a more robust indicator of shocks than the {H$_2$} lines \citep{2014ApJS..214...11S}. \par Following the procedure outlined in Section \ref{cont_sub}, we construct the continuum-subtracted {H$_2$} and {\mbox{[Fe\,{\sc ii}]}} line images, which are presented in {Fig.~}\ref{NIR_narrowband}. In the continuum-subtracted {H$_2$} image, the morphology is similar to that obtained by \citet{2012ApJS..200....2L}. Extended emission is seen towards the peak of the 4.5~{$\rm \mu m$} emission, coinciding with the location of the radio component R1. Ideally, a narrow-band continuum filter would enable a better continuum subtraction but, in the absence of the same, we have ensured PSF matching and proper scaling of the broad $K$-band image. Contrary to the suggestion by \citet{2012ApJS..200....2L}, we believe that the extended {H$_2$} line emission detected in the continuum-subtracted image is genuine. This finds strength in the spectra obtained and discussed in the next section. In addition, diffuse line emission is seen towards the north-east and east of R1 as well as towards the south-west. The continuum-subtracted {\mbox{[Fe\,{\sc ii}]}} image shows weak, extended emission coinciding with the brighter part of the {H$_2$} line emission. \subsubsection{NIR spectroscopy}\label{NIR_spectra} \begin{figure*} \centering \includegraphics[scale=0.60]{fig5.pdf} \caption{The $HK$ spectrum of {G12.42+0.50} extracted over the apertures A1, A2 and A3.
The aperture A1 covers the radio component R1 and the extended {H$_2$} emission seen towards the north-east, A2 covers the second radio component R2 and A3 samples the detached, extended emission seen towards the south-west. The shaded area marks the region of poor sky transparency. The identified spectral lines along aperture A1 are marked over the spectrum with the details given in Table \ref{spectral_lines}. No emission lines above the noise level are detected in the spectra extracted over A2 and A3.} \label{spechk_A2A3} \end{figure*} \begin{figure*} \hspace{0.2cm} \includegraphics[scale=0.56]{fig6.pdf} \caption{The $KL$ spectrum of {G12.42+0.50} extracted over the apertures A1, A2 and A3. The regions covered by all the three apertures are the same as given in {Fig.~}\ref{spechk_A2A3}. The identified spectral lines along aperture A1 are marked over the spectrum with the details given in Table \ref{spectral_lines}. No emission lines above the noise level are detected in the spectra extracted over A2 and A3.} \label{spec-l} \end{figure*} \begin{table} \caption{Lines detected in the spectra extracted from aperture A1 towards {G12.42+0.50}.} \begin{center} \begin{tabular}{c c} \hline \hline \ Line & Wavelength ($\mu$m) \\ \hline \ {\mbox{[Fe\,{\sc ii}]}} &1.644 \\ {H$_2$} 1-0 S(3) &1.958 \\ He I &2.059 \\ {H$_2$} 1-0 S(1) &2.122 \\ {H$_2$} 1-0 S(0) &2.224 \\ {H$_2$} 1-0 Q(1) &2.407 \\ {H$_2$} 1-0 Q(2) &2.413 \\ {H$_2$} 1-0 Q(3) &2.424 \\ \hline \ \end{tabular} \label{spectral_lines} \end{center} \end{table} As is clear from earlier discussions, studies towards identifying the spectral carriers of the 4.5~{$\rm \mu m$} emission are crucial in understanding the nature of EGOs and confirming their association with MYSOs. Given the lack of sensitive spectrometers in the 4.5~{$\rm \mu m$} region, spectroscopy in the NIR becomes indispensable. We probe {G12.42+0.50} with NIR spectroscopy to understand further the results obtained from narrow-band imaging. 
From the continuum-subtracted line images shown in {Fig.~}\ref{NIR_narrowband} and the UKIDSS $K$-band image shown in {Fig.~}\ref{irac_ukidss_rgb}, the presence of faint nebulosity around the peak position (that coincides with the 4.5~{$\rm \mu m$} peak) and towards the south-west is clearly visible. The slit orientation shown in {Fig.~}\ref{NIR_narrowband} ensures that the regions harbouring the radio components, the extended {H$_2$} line emission towards the north-east of the peak, and the detached elongated nebulosity towards the south-west are probed. \par The $HK$ spectra extracted over the three identified apertures (marked in {Fig.~}\ref{NIR_narrowband}) are shown in {Fig.~}\ref{spechk_A2A3}. The top panel of {Fig.~}\ref{spechk_A2A3} shows the spectrum over aperture A1, with the line details listed in Table \ref{spectral_lines}. This aperture covers the radio component R1 and portions of the extended {H$_2$} emission seen towards the north-east of the 4.5~{$\rm \mu m$} peak. The spectrum shows clear detection of three emission lines of molecular {H$_2$}, with the most prominent feature being the $\rm 1-0~S(1)$ line at 2.122~{$\rm \mu m$}. No {H$_2$} line is detected in the blue part ($\rm 1.5 - 1.8~\mu m$) of the spectrum, but there is a weak {\mbox{[Fe\,{\sc ii}]}} line detected at 1.644~{$\rm \mu m$}. These lines of {H$_2$} and {\mbox{[Fe\,{\sc ii}]}} are commonly observed in outflows/jets. In addition, He I at 2.059~{$\rm \mu m$} is also seen in the extracted spectrum. Apart from the emission lines, the continuum slope is seen rising towards the red, thus indicating a highly reddened source. {Fig.~}\ref{spechk_A2A3} also plots the extracted spectra over the apertures A2 and A3 in the middle and lower panels, respectively. Aperture A2 covers the second radio component R2 and aperture A3 samples the detached, extended emission seen towards the south-west. No emission lines above the noise level are detected in these, and the spectra displayed are flat.
\par In {Fig.~}\ref{spec-l}, we present the extracted spectra in the $KL$ band. The displayed spectra have been truncated at 2.45~{$\rm \mu m$} due to the poor signal-to-noise ratio resulting from less than optimal sky transparency. In aperture A1, three additional emission lines of molecular {H$_2$} are prominent. The other two apertures do not show the presence of any spectral feature. The detected lines are listed in {Table~}\ref{spectral_lines}. \par The {H$_2$} line emission seen in the spectra of {G12.42+0.50} can be attributed to either thermal or non-thermal excitation. The thermal emission mostly originates from shocked neutral gas in outflows/jets that is heated up to a few thousand K, whereas the non-thermal emission is understood to be due to UV fluorescence by non-ionizing UV photons. These two competing mechanisms populate different energy levels, thus yielding different line ratios \citep{{2003MNRAS.344..262D},{2015A&A...573A..82C},{2016MNRAS.456.2425V}}. UV fluorescence excites higher vibrational levels. The {H$_2$} lines detected in {G12.42+0.50} originate from the upper vibrational level $\nu = 1$, suggesting a low level of excitation. The absence of high vibrational state transitions supports a shock-excited origin of the detected lines. The lack of fluorescent {H$_2$} line emission in {G12.42+0.50} may also be due to high extinction veiling the UV photons from the central star. Nevertheless, given the association with an outflow source, a shock-excited origin is most likely the case. \subsection{Emission from the dust component} \label{dust} \begin{figure*} \hspace*{0.3cm} \includegraphics[scale=0.45]{fig7.pdf} \caption{Dust emission in the region associated with {G12.42+0.50} at mid- and far-infrared wavelengths ($ \rm 3.6~\mu m-1.1~mm$). All the images from (a)$-$(j) have the same field of view. Skeletons of six clearly identified filaments are overlaid on the 8.0 and 350~{$\rm \mu m$} maps.
The position of the EGO, {G12.42+0.50}, is shown within a purple circle on the 3.6, 4.5, 5.8 and 8.0~{$\rm \mu m$} IRAC maps. The locations of the two infrared dust bubbles (MWP1G012417+005383 and MWP1G012419+005399) are indicated by white `x's on the 8.0~{$\rm \mu m$} map, (d). (d) and (j) show the retrieved aperture of clump C1. Another clump detected towards the south-east of {G12.42+0.50} is shown in the 870~{$\rm \mu m$} map, (j). (k) shows the SMA 1.1~mm map with the contour levels 3, 12, 21, 30 and 39 times $ \sigma $ ($ \sigma \sim 3 $~mJy/beam). The blue `x's on the 1.1~mm map mark the positions of the radio components R1 and R2.} \label{FIR} \end{figure*} The dust emission at MIR and FIR wavelengths, sampled in the IRAC, Hi-Gal, ATLASGAL-Planck and SMA bands (3.6~{$\rm \mu m$}$-$1.1~mm), in the region associated with {G12.42+0.50} is shown in {Fig.~}\ref{FIR}. In the IRAC bands, various emission mechanisms come into play and contribute towards the warm dust component \citep{2008ApJ...681.1341W}. Thermal emission from the circumstellar dust heated by the stellar radiation and emission from UV-excited polycyclic aromatic hydrocarbons in the photodissociation regions are known to be the dominant contributors. At the shorter IRAC wavelengths (3.6, 4.5~{$\rm \mu m$}), where mostly the stellar sources are sampled, emission from the stellar photosphere would also be appreciable. Apart from this, shock-excited {H$_2$} line emission and diffuse emission in the Br$ \alpha $ and Pf$ \beta $ lines would also exist. Further, in the case of {\mbox{H\,{\sc ii}}} regions, one expects a significant contribution from Ly$ \alpha $-heated dust \citep{1991MNRAS.251..584H}. The morphology in the IRAC bands is similar, and the emission becomes more prominent at 8.0~{$\rm \mu m$}. Dark filamentary features (bright in the negative images shown) are seen in silhouette towards the south-west in the 8.0~{$\rm \mu m$} map.
The skeletons of the six clearly identified filamentary features are overlaid on the 8.0~{$\rm \mu m$} map. In addition, an extended emission feature is seen towards the north-west of {G12.42+0.50}, being prominent in the 5.8, 8.0, and 70~{$\rm \mu m$} images. Two infrared dust bubbles (MWP1G012417+005383 and MWP1G012419+005399) are found to be associated with this feature and are marked in {Fig.~}\ref{FIR}(d). No further literature is available on these bubbles, so we do not discuss them further. \par As we move towards longer wavelengths, the cold dust emission associated with {G12.42+0.50} becomes enhanced and more extended. From the ATLASGAL-Planck combined 870~{$\rm \mu m$} map, we identify two clumps using the 2D {\it Clumpfind} algorithm \citep{1994ApJ...428..693W} with a 2$\sigma$ ($\sigma$ = 0.3 Jy/beam) threshold and optimum contour levels. The apertures of the identified clumps are overlaid on the 870~{$\rm \mu m$} map in {Fig.~}\ref{FIR}(j). While one of the clumps, hereafter C1, is associated with {G12.42+0.50}, the other clump lies towards the south-east of {G12.42+0.50} at an angular distance of $\sim 6'$. From the $\rm H^{13}CO^+$ molecular line data (Section \ref{molecular-line}), we estimate the LSR velocity of this clump to be $\rm 31.5~km~s^{-1}$. Comparing this with the estimated LSR velocity of {G12.42+0.50} ($\rm 18.3~km~s^{-1}$), it is unlikely that the clump has any association with {G12.42+0.50}. The identified filaments now appear in emission and are shown on the 350~{$\rm \mu m$} map. Interestingly, these filaments seem to converge towards clump C1. As mentioned in the introduction, the morphology bears an uncanny resemblance to a hub-filament structure, a detailed discussion of which is presented in Section \ref{hub_filament}. Furthermore, in the high-resolution 1.1~mm SMA map, the inner region of the cold dust clump C1 associated with {G12.42+0.50} is seen to harbour two dense and bright compact cores, labelled on the map as SMA1 and SMA2.
Additionally, a few bright emission knots are detected in the SMA map, including the one highlighted as SMA3, which coincides with the radio component R2. \subsubsection{Properties of SMA cores} In the SMA 1.1~mm map shown in {Fig.~}\ref{FIR}(k), SMA1 and SMA2 show up as dense, compact cores, possibly in a binary system. SMA3, on the other hand, looks more like a clumpy region of density enhancement. Following the method described by \citet{2008A&A...487..993K}, the masses of the SMA components are computed using the equation \begin{eqnarray} M & = & \displaystyle 0.12 \, M_{\odot} \left( {\rm e}^{1.439 (\lambda / {\rm mm})^{-1} (T / {\rm 10 ~ K})^{-1}} - 1 \right) \nonumber \\ & & \displaystyle \left( \frac{\kappa_{\nu}}{0.01 \rm ~ cm^2 ~ g^{-1}} \right)^{-1} \left( \frac{F_{\nu}}{\rm Jy} \right) \left( \frac{d}{\rm 100 ~ pc} \right)^2 \left( \frac{\lambda}{\rm mm} \right)^{3} \label{sma_mass_eqn} \end{eqnarray} \noindent where the opacity is \begin{equation} \kappa_{\nu}=0.1({\nu}/\textrm{1000~GHz})^{\beta}~{\rm cm}^{2} {\rm g}^{-1} \end{equation} and $\beta$ is the dust emissivity spectral index, which is fixed at 2.0 \citep{{1983QJRAS..24..267H},{1990AJ.....99..924B},{2010A&A...518L.102A}}. $F_\nu$ is the integrated flux density of each component, $d$ is the distance to the source, and $\lambda$ is the wavelength, taken as 1.1~mm. The temperature, $T$, is taken to be 26.8~K for SMA1 and SMA2 and 22.7~K for SMA3, from their positions in the dust temperature map (Section \ref{cold_dust}). The peak positions and flux densities, integrated flux densities, deconvolved sizes, and masses of the 1.1~mm SMA cores are presented in {Table~}\ref{sma_params}. The deconvolved sizes and integrated flux densities of the cores are evaluated by fitting 2D Gaussians to each component using the 2D fitting tool of the CASA viewer.
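Equation \ref{sma_mass_eqn} can be evaluated numerically. The sketch below is illustrative: it assumes a source distance of 2.4~kpc (a value adopted here only for demonstration, chosen because it reproduces the tabulated core masses) and $\beta=2$:

```python
import numpy as np

def kauffmann_mass(flux_jy, temp_k, dist_pc, lam_mm=1.1, beta=2.0):
    """Dust mass (Msun) following the mm-continuum relation of
    Kauffmann et al. (2008), as written in Eq. (sma_mass_eqn)."""
    nu_ghz = 299792.458 / (lam_mm * 1e3)          # c/lambda in GHz
    kappa = 0.1 * (nu_ghz / 1000.0)**beta         # opacity in cm^2 g^-1
    return (0.12 * np.expm1(1.439 / (lam_mm * temp_k / 10.0))
            / (kappa / 0.01) * flux_jy
            * (dist_pc / 100.0)**2 * lam_mm**3)

# Integrated 1.1 mm flux densities and temperatures from the text/tables;
# d = 2.4 kpc is an assumed, illustrative distance.
for name, f_jy, t_k in [("SMA1", 0.190, 26.8),
                        ("SMA2", 0.221, 26.8),
                        ("SMA3", 0.057, 22.7)]:
    print(f"{name}: {kauffmann_mass(f_jy, t_k, 2400.0):.1f} Msun")
```

With these inputs the function returns 14.8, 17.2 and 5.5~M$_\odot$ for SMA1, SMA2 and SMA3, matching Table \ref{sma_params}.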
From the mass and size estimates, SMA1 and SMA2 qualify as potential high-mass star-forming cores, satisfying the criterion $ m(r) > 870{\rm M_\odot}(r/\rm pc)^{1.33} $ \citep{2010ApJ...716..433K}. \begin{figure} \centering \includegraphics[scale=0.40]{fig8.pdf} \caption{Spectral energy distribution of the dust core associated with {G12.42+0.50} in the wavelength range of 3.6 to 870~{$\rm \mu m$}. Assumed 15\% errors are indicated. The solid curve represents the best-fit two-component model with a warm component at 183~K and a cold envelope at 25~K.} \label{Dust_SED} \end{figure} \begin{table*} \caption{Physical parameters of the 1.1~mm continuum emission near {G12.42+0.50}.} \begin{center} \begin{tabular}{c c c c c c c } \hline \hline Component &\multicolumn{2}{c}{Peak position} & Deconvolved size &Integrated flux &Peak flux &Mass \\ &RA (J2000) $({^h}~{^m}~{^s})$ &Dec (J2000) $(\degree~\arcmin~\arcsec)$ & ($\arcsec$$\times$$\arcsec$) & (mJy) & (mJy/beam) &(M{$_\odot$}) \\ \hline \ SMA1 &18 10 51.3 &-17 55 46.3 &1.4$ \times $0.5 &190 &109 & 14.8 \\ SMA2 &18 10 51.4 &-17 55 48.1 &1.3$ \times $0.4 &221 &136 & 17.2 \\ SMA3 &18 10 50.8 &-17 55 52.8 &1.5$ \times $0.7 &57 &34 & 5.5 \\ \hline \ \end{tabular} \label{sma_params} \end{center} \end{table*} \begin{table*} \caption{Integrated flux densities of the dust core associated with {G12.42+0.50}.} \begin{center} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \ Wavelength ({$\rm \mu m$}) &3.6 &4.5 &5.8 &8.0 &12.13 &14.65 &70 &160 &250 &350 &500 &870 \\ \hline \ Flux density (Jy) &0.6 &0.9 &3.3 &8.4 &57.7 &129.8 &1942.6 &2575.5 &1140.7 &535.3 &182.5 &35.9 \\ \hline \ \end{tabular} \label{SED_flux} \end{center} \end{table*} \begin{table*} \tiny \caption{Derived physical parameters of identified clumps associated with {G12.42+0.50}.
The peak position, radius, mean temperature and column density, total column density, mass, and volume number density of the identified clumps are listed.} \begin{center} \centering \begin{tabular}{c c c c c c c c c} \hline \hline Clump &\multicolumn{2}{c}{Peak position} &Radius &Mean $T_d$ &Mean $N(H_2)$ &$\Sigma N(H_2)$ & Mass & Number density, $n(H_2)$ \\ & $\rm \alpha(J2000)~({^h}~{^m}~{^s})$ & $\rm \delta(J2000)~(\degree~\arcmin~\arcsec)$ &(pc) &(K) & ($10^{22}$ cm$^{-2}$) &($10^{23}$ cm$^{-2}$) & (M$_\odot$) &($10^{3}$ cm$^{-3}$) \\ \hline \ C1 &18 10 49.64 & -17 55 59.40 & 0.8 & 19.9$\pm$1.9 & 3.3$\pm$0.9 & 23.2 &1375 & 10.4 \\ C2 & 18 10 42.75 & -17 57 08.92 & 0.3 & 16.1$\pm$0.4 & 1.2$\pm$0.1 & 1.0 & 59 & 10.7 \\ C3 & 18 10 36.91 & -17 55 02.54 & 0.4 & 17.1$\pm$0.8 & 0.7$\pm$0.1 & 1.5 & 92 & 4.2 \\ C4 & 18 10 37.83 & -17 58 04.57 & 0.4 & 16.7$\pm$0.7 & 0.9$\pm$0.1 & 2.1 & 127 & 4.9 \\ C5 & 18 10 27.03 & -17 58 17.79 & 0.4 & 17.7$\pm$0.9 & 0.7$\pm$0.1 & 1.1 & 66 & 4.4 \\ C6 & 18 10 23.08 & -17 59 55.48 & 0.4 & 18.1$\pm$0.9 & 0.7$\pm$0.1 & 1.2 & 70 & 4.3 \\ C7 & 18 10 19.16 & -17 59 27.19 & 0.2 & 18.2$\pm$0.8 & 0.7$\pm$0.1 & 0.4 & 24 & 7.2 \\ C8 & 18 10 43.71 & -17 58 04.98 & 0.5 & 15.8$\pm$0.6 & 1.2$\pm$0.2 & 3.6 & 214 & 6.0 \\ C9 & 18 10 38.76 & -18 00 24.61 & 0.2 & 15.6$\pm$0.7 & 1.2$\pm$0.2 & 0.7 & 41 & 11.1 \\ C10 & 18 10 35.81 & -18 00 52.40 & 0.3 & 16.0$\pm$0.8 & 1.1$\pm$0.2 & 0.9 &55 & 9.5 \\ C11 & 18 10 43.67 & -18 00 24.95 & 0.7 & 16.3$\pm$0.8 & 1.0$\pm$0.2 & 6.0 & 359 & 3.5 \\ C12 & 18 10 38.73 & -18 02 02.59 & 0.2 & 16.3$\pm$1.1 & 0.8$\pm$0.2 & 0.5 & 29 & 9.7 \\ \hline \\ \end{tabular} \label{clump} \end{center} \end{table*} \begin{figure} \centering \includegraphics[scale=0.30]{fig9a.pdf} \quad\includegraphics[scale=0.30]{fig9b.pdf} \quad\includegraphics[scale=0.30]{fig9c.pdf} \caption {(a) Column density, (b) Temperature and (c) Reduced ${\chi^{2}}$ maps towards {G12.42+0.50} generated using the {\it Herschel} FIR data and the
ATLASGAL-Planck data. The {\it Clumpfind}-retrieved clump, C1, and the visually identified clumps from the column density map are marked on the maps. The `x's mark the positions of the peak column densities of each clump in the column density map. Skeletons of the filaments identified from the 8.0~{$\rm \mu m$} map are overlaid on the column density and temperature maps.} \label{temp_cd} \end{figure} \subsubsection{SED modelling of C1} \label{2_comp} \par In an attempt to understand the properties of the dust clump (C1) associated with {G12.42+0.50}, we model the infrared flux densities with a two-component modified blackbody using the following functional form \citep{1998ApJ...507..794L} \begin{equation} S_{\nu}=[\Omega_1\;{\it a}\;B_{\nu}(T_1) + \Omega_2\;(1-{\it a})\;B_{\nu}(T_2)] (1-e^{-\tau_\nu}) \end{equation} where \begin{equation} \tau_{\nu} = \mu_{H_2}m_{H}\kappa_{\nu}N(H_2) \label{tau} \end{equation} \noindent Here, $S_\nu$ is the integrated flux density of C1, and $\Omega_1$ and $\Omega_2$ are the solid angles subtended by the apertures used for estimating the flux densities at the FIR and MIR wavelengths, respectively. $a$ is the ratio of the optical depth in the warmer component to the total optical depth, $B_{\nu}(T_1)$ and $B_{\nu}(T_2)$ are the blackbody functions at dust temperatures $T_1$ and $T_2$, respectively, $\mu_{H_2}$ is the mean molecular weight \citep[taken as 2.8;][]{2008A&A...487..993K}, $m_{H}$ is the mass of the hydrogen atom, $\kappa_{\nu}$ is the dust opacity, and $N(H_2)$ is the hydrogen column density. For the opacity, we assume the function $\kappa_{\nu}=0.1({\nu}/\rm 1000~GHz)^{\beta}~cm^{2}~g^{-1}$, where $\beta$ is the dust emissivity spectral index, for which a value of 2.0 is adopted as in the previous section.
\par In addition to the {\it Spitzer}-IRAC, {\it Herschel} and ATLASGAL wavebands, we have also included flux densities from the MSX survey\footnote{\url{https://irsa.ipac.caltech.edu/applications/MSX/MSX/}} at 12.13 and 14.65~{$\rm \mu m$} to constrain the model at the MIR wavelengths. The integrated flux densities of the dust clump at the MIR wavelengths are measured within the area defined by the 4$\sigma$ contour level of the 8.0~{$\rm \mu m$} image (274 arcsec$^2$), and longward of 70~{$\rm \mu m$}, the integration is done over the area defined by the {\it Clumpfind} aperture for C1 (13720 arcsec$^2$). Background emission is estimated using the same apertures on a nearby sky region (visually scrutinized to be smooth) and subtracted. The estimated flux densities are listed in Table \ref{SED_flux}. \citet{{2009A&A...504..415S},{2013A&A...551A..98L}} use a conservative 15\% uncertainty in the flux densities of the {\it Herschel} bands. We adopt the same value here for all the bands. Model fitting is carried out using the non-linear least-squares Levenberg-Marquardt algorithm with $ T_1$, $T_2$, $N(H_2)$ and $a$ taken as free parameters. The best-fit temperatures are 25.0$\pm$1.0~K (cold) and 183.2$\pm$12.0~K (warm). The model fit also gives an estimate of the hydrogen column density, $ N(H_2) = \rm 2.1 \times 10^{22}~cm^{-2} $. This result shows that the dust clump in {G12.42+0.50} consists of an inner warm component surrounded by an extended, cold outer envelope traced mostly at the FIR wavelengths. It should be noted here that we have excluded the data points below 8.0~{$\rm \mu m$} while fitting the model. This is because the emission at 4.5 and 5.8~{$\rm \mu m$} may largely be dominated by shock excitation, and the 3.6~{$\rm \mu m$} emission may arise from even hotter components. The SED and the best-fit modified blackbody are shown in {Fig.~}\ref{Dust_SED}.
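For concreteness, the two-component model can be written out directly as below. The mixing parameter $a$ is not quoted in the text, so the value 0.97 used here is an assumed, illustrative one; with it, the quoted best-fit temperatures and column density broadly reproduce the tabulated FIR flux densities (to within $\sim$20\% over $160-500$~{$\rm \mu m$}):

```python
import numpy as np

H, KB, C = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants
MU_H2, M_H_G = 2.8, 1.6737e-24                         # mean mol. weight; m_H in g
SR_PER_ARCSEC2 = (np.pi / (180.0 * 3600.0))**2

def planck(nu, t):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t))

def two_component_jy(lam_um, t1, t2, n_h2, a,
                     omega1=13720 * SR_PER_ARCSEC2,   # FIR aperture (arcsec^2 -> sr)
                     omega2=274 * SR_PER_ARCSEC2):    # MIR aperture
    """S_nu = [Omega1*a*B(T1) + Omega2*(1-a)*B(T2)] * (1 - e^-tau), in Jy."""
    nu = C / (lam_um * 1e-6)
    kappa = 0.1 * (nu / 1e12)**2.0                    # cm^2 g^-1, beta = 2
    tau = MU_H2 * M_H_G * kappa * n_h2                # N(H2) in cm^-2
    s_si = (omega1 * a * planck(nu, t1)
            + omega2 * (1.0 - a) * planck(nu, t2)) * -np.expm1(-tau)
    return s_si / 1e-26                               # W m^-2 Hz^-1 -> Jy

# Quoted best-fit values; a = 0.97 is assumed for illustration only.
for lam, obs in [(160.0, 2575.5), (250.0, 1140.7), (350.0, 535.3), (500.0, 182.5)]:
    mod = two_component_jy(lam, 25.0, 183.2, 2.1e22, 0.97)
    print(f"{lam:5.0f} um: model {mod:7.0f} Jy (observed {obs:.1f} Jy)")
```

Here the first (cold, FIR-aperture) term carries the factor $a$, following the printed equation; the exact pairing of $a$ with the warm or cold term does not affect the illustrative point.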
The bolometric luminosity estimated from the two-component SED model over $8.0-870$~{$\rm \mu m$} is $ 2.8 \times 10^4 $~L{$_\odot$}. It is a factor of 1.6 higher than that obtained by \citet{2013ApJ...765..129V}, who use the IRAS band flux densities. However, our value is in fair agreement with the estimate of $\rm 3.2 \times 10^4$~L{$_\odot$} \citep{1997ApJS..110...71O}, where flux densities between $\rm 2.1-1.3~mm$ are included. \subsubsection{Nature and distribution of cold dust emission} \label{cold_dust} \par We probe the nature of the cold dust associated with {G12.42+0.50} using the {\it Herschel} FIR bands, which cover the wavelength range $ \rm 160-500 $~{$\rm \mu m$}, and the combined ATLASGAL-Planck data at 870~{$\rm \mu m$}. The dust temperature and the line-of-sight average molecular hydrogen column density maps are generated by a pixel-by-pixel modified single-temperature blackbody model fit. While fitting the model, we assume the emission at these wavelengths to be optically thin. Following the discussion in several papers \citep{{2010A&A...518L..98P},{2010A&A...518L..99A},{2011A&A...535A.128B},{2018A&A...612A..36D}}, we exclude the 70~{$\rm \mu m$} data point, as the optically thin assumption would not hold there. In addition, the emission at 70~{$\rm \mu m$} would have a significant contribution from the warm dust component; thus, modelling with a single-temperature blackbody would overestimate the derived temperatures. Given this, the model fitting is done with only five points, which lie on the Rayleigh-Jeans tail. \par The first step towards the generation of the temperature and column density maps is to have the maps from SPIRE, PACS and ATLASGAL-Planck in the same units. The SPIRE maps, which are in MJy sr$^{-1}$, are converted to Jy pixel$^{-1}$, the unit of the 160~{$\rm \mu m$} PACS map. Similarly, the ATLASGAL-Planck map, which has the unit of Jy beam$^{-1}$, is also converted to Jy pixel$^{-1}$.
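These unit conversions amount to multiplying by the pixel solid angle (MJy~sr$^{-1}$ to Jy~pixel$^{-1}$) or dividing by the Gaussian beam area expressed in pixels (Jy~beam$^{-1}$ to Jy~pixel$^{-1}$). A minimal sketch, with illustrative (assumed) pixel and beam sizes:

```python
import numpy as np

SR_PER_ARCSEC2 = (np.pi / (180.0 * 3600.0))**2  # steradians per arcsec^2

def mjy_sr_to_jy_pix(value_mjy_sr, pix_arcsec):
    """MJy sr^-1 -> Jy pixel^-1: multiply by the pixel solid angle."""
    return value_mjy_sr * 1e6 * pix_arcsec**2 * SR_PER_ARCSEC2

def jy_beam_to_jy_pix(value_jy_beam, pix_arcsec, bmaj_arcsec, bmin_arcsec):
    """Jy beam^-1 -> Jy pixel^-1: divide by the Gaussian beam area in pixels."""
    beam_area_pix = (np.pi * bmaj_arcsec * bmin_arcsec
                     / (4.0 * np.log(2.0)) / pix_arcsec**2)
    return value_jy_beam / beam_area_pix

# Illustrative numbers only (pixel and beam sizes assumed, not from the text):
print(mjy_sr_to_jy_pix(1.0, 6.0))             # 1 MJy/sr on a 6" pixel
print(jy_beam_to_jy_pix(1.0, 6.0, 19.2, 19.2))  # 1 Jy/beam, 19.2" circular beam
```

In practice the same scaling is applied to the full 2D arrays before convolution and regridding.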
The maps are at different resolutions and pixel sizes. The pixel-by-pixel routine makes it mandatory to convolve and regrid the maps to a common resolution and pixel size of 36{$\arcsec$} and 14{$\arcsec$}, respectively, which are the parameters of the 500~{$\rm \mu m$} map (as it has the lowest resolution). Convolution kernels are taken from \citet{2011PASP..123.1218A} for the {\it Herschel} maps. Since no pre-made convolution kernel is available for the ATLASGAL-Planck map, we use a Gaussian kernel. These preliminary steps are carried out using the software package HIPE\footnote{The {\it Herschel} Interactive Processing Environment (HIPE) is the application that allows users to work with the Herschel data, including finding the data products, interactive analysis, plotting of data, and data manipulation.}. \par The maps include sky/background emission, which is a result of the cosmic microwave background and the diffuse Galactic emission. In order to correct for the flux offsets due to this background contribution, we select a relatively uniform and dark region (free of bright, diffuse or filamentary emission) at a distance of $\sim$0.25{\degree} from {G12.42+0.50}. The same region is used for background subtraction in all five bands. Using the method described in several papers \citep{{2011A&A...535A.128B},{2013A&A...551A..98L},{2017MNRAS.465.4753R},{2017MNRAS.472.4750D},{2018A&A...612A..36D}}, the background values, ${I_{bg}}$, are estimated to be -2.31, 2.15, 1.03, 0.37 and 0.08 Jy pixel$^{-1}$ at 160, 250, 350, 500 and 870~{$\rm \mu m$}, respectively. The negative flux value at 160~{$\rm \mu m$} is due to the arbitrary scaling of the PACS images. \par To probe an extended area encompassing {G12.42+0.50} and the related filaments, we select a 12.8{\arcmin}$\times$12.8{\arcmin} region centred at $\rm \alpha_{J2000}=18^{h}10^{m}41.8^s, \delta_{J2000}=-17\degree 57\arcmin 23\arcsec$.
The model fitting is based on the following formulation \citep{{1990MNRAS.244..458W},{2011A&A...535A.128B},{2013A&A...551A..98L},{2015MNRAS.447.2307M}}: \begin{equation} S_{\nu}(\nu)-I_{bg}(\nu) = B_{\nu}(\nu,T_d)\; \Omega\; (1-e^{-\tau_\nu}) \end{equation} where $\tau_{\nu}$ is given by Eqn. \ref{tau}, $S_{\nu}$ is the observed flux density, $B_{\nu}(\nu,T_d)$ is the Planck function, $T_d$ is the dust temperature, $\Omega$ is the solid angle, in steradians, from which the flux is measured (the solid angle subtended by a $ \rm 14''\times 14'' $ pixel), and the rest of the parameters are the same as in the previous section. Following the procedure discussed in Section \ref{2_comp}, SED modelling for each pixel is carried out keeping the dust temperature $T_d$ and column density $N(H_2)$ as free parameters. The dust temperature and column density maps generated are displayed in {Fig.~}\ref{temp_cd}, along with the reduced ${\chi^{2}}$ map. The reduced ${\chi^{2}}$ map indicates that the fitting uncertainties are small, with a maximum value of 4 towards the bright central emission, where the 250~{$\rm \mu m$} image ({Fig.~}\ref{FIR}(e)) has a few bad pixels. The column density map reveals a dense, bright region towards clump C1 that envelopes {G12.42+0.50}. Also clear is the increased density along the filamentary structures identified in Section \ref{dust}. The aperture of clump C1, identified from the 870~{$\rm \mu m$} map, is overlaid on the maps. Using $3 \times 3$ pixel grids, local column density peaks are identified above a 3$ \sigma$ threshold ($\rm \sigma = 2.3\times 10^{21}~cm^{-2}$). Eleven additional clumps, located within the 3$ \sigma$ contour, are thus identified. Subsequent to this, a careful visual inspection is done and ellipses are marked to encompass most of the clump emission. \par Two high-temperature regions are seen in the dust temperature map, coinciding with {G12.42+0.50} and the two bubbles discussed earlier.
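The per-pixel fit can be sketched as follows. Since the actual map pixels are not reproduced here, this example generates synthetic band fluxes from a known $(T_d, N(H_2))$ with the same graybody model and then recovers them; the 14{$\arcsec$} pixel follows the text, while all other numbers are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

H, KB, C = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI constants
MU_H2, M_H_G = 2.8, 1.6737e-24                         # mean mol. weight; m_H in g
OMEGA_PIX = (14.0 / 206265.0)**2                       # 14" x 14" pixel, in sr

def graybody_jy(lam_um, t_d, n22):
    """S_nu - I_bg = B_nu(T_d) * Omega * (1 - e^-tau), in Jy per pixel.

    N(H2) is parametrized as n22 * 1e22 cm^-2 to keep the fit well scaled.
    """
    nu = C / (np.asarray(lam_um, dtype=float) * 1e-6)
    b_nu = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t_d))
    tau = MU_H2 * M_H_G * 0.1 * (nu / 1e12)**2 * n22 * 1e22  # beta = 2
    return b_nu * OMEGA_PIX * -np.expm1(-tau) / 1e-26

bands = np.array([160.0, 250.0, 350.0, 500.0, 870.0])  # the five fitted bands, um
truth = (20.0, 3.0)                                    # T_d = 20 K, N = 3e22 cm^-2
s_pix = graybody_jy(bands, *truth)                     # noiseless synthetic pixel
popt, _ = curve_fit(graybody_jy, bands, s_pix, p0=(15.0, 1.0))
print(f"recovered T_d = {popt[0]:.1f} K, N(H2) = {popt[1]:.1f}e22 cm^-2")
```

Looping the same fit over every background-subtracted pixel yields the $T_d$ and $N(H_2)$ maps of {Fig.~}\ref{temp_cd}.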
The warmest temperature in the map is found to be 28.6~K and is located a pixel to the north-east of SMA1, SMA2 and the peak position of R1. The mean dust temperature and column density of C1 are found to be 19.9~K and 3.3$\rm \times 10^{22}~cm^{-2}$, respectively. It should be noted that the mean temperature obtained here is lower than the temperature of the cold component estimated from the two-component model by $\rm \sim 5~K$. This is because, unlike in the two-component modelling, here we do not include the emission at 70~{$\rm \mu m$}. Similarly, the column density obtained here is greater than the column density estimated using the two-component fit by a factor of $\sim 1.6$. A striking feature noticed is the distinctly low dust temperatures along the filaments. \subsubsection{Properties of cold dust clumps} \label{clump_text} Several physical parameters of the identified clumps are derived. The enclosed area within the {\it Clumpfind}-retrieved aperture of C1 is used to determine the effective radius, $ \rm r=(A/\pi)^{0.5} $ \citep{2010ApJ...712.1137K}, where A is the area. For the visually identified clumps ($ \rm C2-C12 $), the effective radius is taken to be the geometric mean of the semi-major and semi-minor axes of the ellipses bounding the clumps. From the derived column density values, we estimate the masses of the dust clumps using the following expression \begin{equation} M_{\rm C} = \mu_{H_2} m_H A_{\rm pixel} \Sigma N (H_2) \end{equation} where $A_{\rm pixel}$ is the area of a pixel in $\rm cm^2$, $\mu_{H_2}$ is the mean molecular weight (2.8), and $m_{H}$ is the mass of the hydrogen atom. The volume number density of each clump is estimated using the expression \begin{equation} n_{(H_2)} = \frac{3~M_{\rm C}}{4\pi R^3 \mu m_H} \end{equation} where $R$ is the effective radius. The peak position, radius, mean temperature and column density, total column density, mass, and volume number density of the identified clumps are listed in {Table~}\ref{clump}.
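The two expressions above can be evaluated directly. In the sketch below, the pixel size (14{$\arcsec$}) follows the text, while the distance of 2.4~kpc is an assumed, illustrative value; with $\mu_{H_2}=2.8$ used in both expressions, the numbers come out close to, though not exactly matching, the tabulated ones (which may adopt a slightly different mean weight in the density):

```python
import numpy as np

MU_H2, M_H = 2.8, 1.6737e-24        # mean molecular weight; m_H in g
PC_CM, MSUN_G = 3.0857e18, 1.989e33  # parsec in cm, solar mass in g

def clump_mass(sum_n_h2, pix_arcsec, dist_pc):
    """M_C = mu_H2 * m_H * A_pixel * Sigma N(H2), in Msun."""
    a_pixel = (pix_arcsec / 206265.0 * dist_pc * PC_CM)**2  # pixel area, cm^2
    return MU_H2 * M_H * a_pixel * sum_n_h2 / MSUN_G

def number_density(mass_msun, radius_pc):
    """n(H2) = 3 M_C / (4 pi R^3 mu m_H), in cm^-3."""
    return (3.0 * mass_msun * MSUN_G
            / (4.0 * np.pi * (radius_pc * PC_CM)**3 * MU_H2 * M_H))

# Clump C1: Sigma N(H2) = 23.2e23 cm^-2, r = 0.8 pc (Table values);
# d = 2.4 kpc is an assumed, illustrative distance.
m_c1 = clump_mass(23.2e23, 14.0, 2400.0)   # ~1.4e3 Msun, cf. tabulated 1375
n_c1 = number_density(1375.0, 0.8)         # ~9e3 cm^-3, cf. tabulated 10.4e3
print(f"M(C1) ~ {m_c1:.0f} Msun, n(H2) ~ {n_c1:.0f} cm^-3")
```

The mass reproduces the tabulated value closely; the density differs at the $\sim$10\% level, consistent with a different mean molecular weight or radius definition in the table.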
The clump enclosing {G12.42+0.50}, C1, is the largest and most massive clump, having a radius of 0.8~pc, column density $\rm 3.3\times 10^{22}~cm^{-2}$ and mass 1375~M{$_\odot$}. \citet{2015MNRAS.450.1926H} derive the radius, column density and mass of the clump associated with {G12.42+0.50} to be 0.57~pc, $\rm 1.3\times 10^{23}~cm^{-2}$ and 724~M{$_\odot$}, respectively. Apart from the larger size estimated by us, the other factors contributing to this difference in the estimated values of mass and column density are the different opacity and dust temperature values adopted by \citet{2015MNRAS.450.1926H}. \subsection{Molecular line emission from G12.42+0.50} \label{molecular-line} \begin{table*} \caption{Details of the detected molecular line transitions towards the clump C1 enveloping {G12.42+0.50}. The details are extracted from Table 2 of \citet{2014A&A...562A...3M} and Table 2 of \citet{2011ApJS..197...25F}.} \begin{center} \begin{tabular}{l l} \hline \hline Transition & Comments \\ \hline \ H$^{13}$CO$^+~(1-0) $ & six hyperfine (hf) components; high-density and ionization tracer \\ $\rm C_2H~(1-0)~3/2-1/2$ & three hf components; photodissociation region tracer\\ HCN $ (1-0) $ & three hf components; high-density and infall tracer \\ $\rm HCO^+~(1-0)$ & high-density, infall, kinematics and ionization tracer\\ HNC $ (1-0) $ & three hf components; high-density and cold gas tracer \\ $\rm HC_3N~(10-9) $ & six hf components; high-density and hot-core tracer\\ $\rm N_2H^+~(1-0)$ & 15 hf components, seven of which have different frequencies; high-density and CO-depleted gas tracer\\ \hline \ \end{tabular} \label{molecule_details} \end{center} \end{table*} \begin{figure*} \vspace*{0.6cm} \centering \includegraphics[scale=0.45]{fig10.pdf} \caption{Spectra of the optically thin molecular lines ($\rm H^{13}CO^+$, $\rm HC_3N$, $\rm C_2H$ and $\rm N_2H^+$) associated with {G12.42+0.50} obtained from the MALT90 survey.
The spectra are extracted towards the peak of the 870~{$\rm \mu m$} ATLASGAL emission. The dashed blue line indicates the LSR velocity, $\rm 18.3~km~s^{-1} $, estimated from the optically thin $\rm H^{13}CO^+$ line. The magenta lines indicate the location of the hyperfine components for each transition.} \label{molecule_thin} \end{figure*} \begin{figure*} \vspace*{0.6cm} \includegraphics[scale=0.45]{fig11.pdf} \caption{Same as {Fig.~}\ref{molecule_thin} but for the optically thick transitions of $\rm HCO^+$, HCN, and HNC.} \label{molecule_thick} \end{figure*} \begin{table*} \caption{Parameters of the optically thin molecular transitions detected towards {G12.42+0.50}. The line width ($\Delta V $), main beam temperature ($ \rm T_{mb} $) and velocity integrated intensity ($\rm \int T_{mb}$) are obtained from the {\tt hfs} fitting method of CLASS90 for all the molecules except for $\rm H^{13}CO^+$, for which a single Gaussian profile is used to fit the spectrum. The column densities ($N$) of the molecules are estimated using RADEX, and their fractional abundances ($x$) are determined using the mean {H$_2$} column density of the clump, C1.} \begin{center} \begin{tabular}{l c c c c c} \hline \hline Transition & $\Delta V$ & $\rm T_{mb}$ & $\rm \int T_{mb}$ & $N$ &$x$\\ &($\rm km~s^{-1}$) &(K) &($\rm K~km~s^{-1}$) &($\rm 10^{14}~cm^{-2}$) & ($10^{-9}$)\\ \hline \ $\rm H^{13}CO^+$ &2.9 &1.2 &3.6 & 0.1 & 0.3\\ $\rm N_2H^+$ &3.2 &2.5 &8.5 & 4.1 & 12.4\\ $\rm HC_3N$ &3.0 &1.3 &4.1 &1.4 & 4.2\\ $\rm C_2H$ &2.7 &1.3 &3.8 & 5.5 & 16.7\\ \hline \ \end{tabular} \label{mol_table} \end{center} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.41]{fig12a.pdf} \quad\includegraphics[scale=0.41]{fig12b.pdf} \caption{(a) Rotational transition lines of isotopologues of the $\rm CO~(3-2)$ observed towards {G12.42+0.50} fitted with double Gaussians.
The spectra of $\rm ^{12}CO$, $\rm ^{13}CO$ and $\rm C^{18}O$ are boxcar-smoothed by three, eight and eleven channels, which correspond to velocity smoothings of 1.2, 3.4 and $\rm 4.6~km~s^{-1}$, respectively. The fit to the $\rm ^{12}CO$ spectrum is depicted in green, $\rm ^{13}CO$ in blue and $\rm C^{18}O$ in red. The dashed magenta line corresponds to the LSR velocity, $\rm 18.3~km~s^{-1}$. The positions of the red- and blueshifted components are indicated by green, blue and red lines for the $\rm ^{12}CO$, $\rm ^{13}CO$ and $\rm C^{18}O$ lines, respectively. (b) Spectrum of the $J=1-0$ transition of $\rm ^{13}CO$ obtained from TRAO. The Gaussian fit to the spectrum is sketched in red. The dashed blue line corresponds to the LSR velocity. The spectrum shows a blueshifted single peak, indicated by a red line. The red and blue lobes are not resolved, probably due to the larger beam size of TRAO compared to JCMT. } \label{CO_plot} \end{figure*} \begin{table*} \caption{The retrieved parameters, peak velocities, velocity widths and peak main beam temperatures of the molecular transitions $\rm ^{12}CO~(3-2)$, $\rm ^{13}CO~(3-2)$, $\rm C^{18}O~(3-2)$ and $\rm ^{13}CO~(1-0)$ towards {G12.42+0.50}.
R and B in parentheses denote the red and blueshifted components.} \begin{center} \begin{tabular}{l c c c c c c } \hline \hline Transition &\multicolumn{2}{c}{$V$} & \multicolumn{2}{c}{ $\Delta V$} & \multicolumn{2}{c}{$\rm T_{mb}$} \\ &\multicolumn{2}{c}{($\rm km~s^{-1}$)} &\multicolumn{2}{c}{($\rm km~s^{-1}$)} &\multicolumn{2}{c}{(K)}\\ \hline \ $\rm ^{12}CO~(3-2)$ &21.0 (R) & 15.8 (B) & 2.8 (R) &3.7 (B) &10.7 (R) &29.1 (B) \\ $\rm ^{13}CO~(3-2)$ &19.9 (R) & 16.6 (B) & 2.0 (R) &3.9 (B) &7.7 (R) &21.9 (B)\\ $\rm C^{18}O~(3-2)$ &19.3 (R) & 17.4 (B) & 1.2 (R) &2.4 (B) &6.6 (R) &11.2 (B) \\ $\rm ^{13}CO~(1-0)$ &\multicolumn{2}{c}{17.8} &\multicolumn{2}{c}{3.0} &\multicolumn{2}{c}{13.3} \\ \hline \ \end{tabular} \label{CO_table} \end{center} \end{table*} The molecular line emission provides information on the kinematics and chemical structure of a molecular cloud, in addition to shedding light on its evolutionary stage. Data from the MALT90 survey, JCMT archives and observations from TRAO are used to probe these aspects in the star forming region associated with {G12.42+0.50}. \par Of the 16 molecules covered by the MALT90 survey, 7 molecular species, namely $\rm HCO^+$, $\rm H^{13}CO^+$, HCN, HNC, $\rm C_2H$, $\rm N_2H^+$ and $\rm HC_3N$, are detected towards the clump C1 enveloping {G12.42+0.50}. The details of the detected transitions, taken from \citet{2014A&A...562A...3M} and \citet{2011ApJS..197...25F}, are listed in {Table~}\ref{molecule_details}. \citet{2014A&A...562A...3M} also gives an excellent review of the physical conditions and environment required for the formation of these species. The spectrum of each molecule is extracted towards the 870~{$\rm \mu m$} ATLASGAL emission peak. The spectra of the optically thin molecular species, $\rm H^{13}CO^+$, $\rm C_2H$, $\rm N_2H^+$ and $\rm HC_3N$, are shown in {Fig.~}\ref{molecule_thin} and the spectra of the optically thick molecular species, $\rm HCO^+$, HCN and HNC, are plotted in {Fig.~}\ref{molecule_thick}.
We use the hyperfine structure ({\tt hfs}) method of CLASS90 to fit the observed spectra for the optically thin transitions of $\rm C_2H$, $\rm N_2H^+$ and $\rm HC_3N$. Since the molecule $\rm H^{13}CO^+$ has no hyperfine components, a single Gaussian profile is used to fit its spectrum. The Gaussian fit yields an LSR velocity of $\rm 18.3~km~s^{-1}$, which is consistent with the value estimated using the $\rm N_2H^+$ line of the same survey ($\rm 18.3~km~s^{-1}$; \citealt{2015MNRAS.451.2507Y}). The fits to the spectra are indicated by solid red lines, and the LSR velocity and the location of the hyperfine components are indicated by the dashed blue and solid magenta lines, respectively, in {Fig.~}\ref{molecule_thin}. The retrieved line parameters, which include the peak velocity ($V_{\rm LSR}$), line width ($\Delta V$), main beam temperature ($\rm T_{mb}$) and the velocity integrated intensity ($\rm \int T_{mb}$), are tabulated in {Table~}\ref{mol_table}. Beam correction is applied to the antenna temperature to obtain the main beam temperature using the equation, $\rm T_{mb} = T_A/\eta_{mb}$ \citep{2014ApJ...786..140R}, where $\rm \eta_{mb}$ is assumed to be 0.49 \citep{2005PASA...22...62L} for the MALT90 data. \par To estimate the column densities of these transitions, we use RADEX, a one-dimensional non-local thermodynamic equilibrium radiative transfer code \citep{2007A&A...468..627V}. The input parameters to RADEX include the peak main beam temperature, the background temperature assumed to be 2.73~K \citep{{2006MNRAS.367..553P},{2015MNRAS.451.2507Y}}, the kinetic temperature, which is assumed to be the same as the dust temperature \citep{{2012ApJ...756...60S},{2016ApJ...833..248Y}, {2016ApJ...818...95L}}, the line width, and the {H$_2$} number density. The dust temperature and {H$_2$} number density towards the clump C1 of {G12.42+0.50} are taken from {Table~}\ref{clump} presented in Section \ref{clump_text}.
The column densities of the optically thin transitions are also tabulated in {Table~}\ref{mol_table}. From the mean hydrogen column density of the clump, we also calculate the fractional abundances of the detected molecules. These estimates are in good agreement with typical values obtained for IR dark clumps and IRDCs \citep{{2014A&A...562A...3M},{2011A&A...527A..88V}}. \par From {Fig.~}\ref{molecule_thick}, it is evident that the $ J=1-0 $ transitions of the molecules $\rm HCO^+$, HCN and its metastable geometrical isomer, HNC, display distinct double-peaked line profiles with self-absorption dips coincident with the LSR velocity. The blue-skewed profile seen in $\rm HCO^+$ is very prominent, with the blueshifted emission peak being much stronger than the redshifted one. In the case of the HCN transition, the central hyperfine component shows a blue-skewed double profile where the redshifted component is largely buried in the noise. Such blue asymmetry is usually indicative of infalling gas \citep{{2003ApJ...592L..79W}, {2016A&A...585A.149W}}. In Section \ref{infall}, we discuss the $\rm HCO^+$ line profile in detail. In comparison, in the HNC transition, the blueshifted and redshifted peaks have similar intensities. Similar line profiles are detected towards the star forming region AFGL 5142 \citep{2016ApJ...824...31L}. These authors have attributed them to low-velocity expanding material entrained by high-velocity jets. An alternative explanation could be a collapsing envelope. In the case of {G12.42+0.50}, however, no conclusive explanation can be proposed given the resolution of the data. Higher resolution observations are hence required to resolve the kinematics and explain the double-peaked profile of HNC.
\par The rotational transition line data of the isotopologues of the CO molecule, $\rm ^{12}CO~(3-2)$, $\rm ^{13}CO~(3-2)$ and $\rm C^{18}O~(3-2)$ taken from the JCMT archives and $\rm ^{13}CO~(1-0)$ observed with TRAO, are used to understand the large-scale outflows associated with {G12.42+0.50}. The rotational transitions of the CO molecule are excellent tracers of outflow activity in star forming regions \citep{{2001ApJ...552L.167Z},{2002A&A...383..892B}}. Different transitions trace different conditions of the ISM and probe different parts of the cloud. While the CO $J=3-2$ transition has an upper energy level temperature and critical density of 33.2~K and $ \rm 5 \times 10^4~cm^{-3} $, respectively \citep{1999ApJ...527..795K}, the lower $J$ CO transitions effectively trace the kinematics of the low density material of the cloud \citep{2013A&A...549A...5R}. Typically, the $\rm ^{12}CO$ line is optically thick, while the $\rm ^{13}CO$ and $\rm C^{18}O$ lines are optically thin and are high density tracers. While $\rm ^{12}CO$ can effectively map the spatial and kinematic extent of the outflows and $\rm ^{13}CO$ can map them to some extent, $\rm C^{18}O$ can trace the cloud cores under the optically thin assumption \citep{2015MNRAS.453.3245L}. The spectra of these molecular species are extracted towards the peak of the 870~{$\rm \mu m$} ATLASGAL emission and shown in {Fig.~}\ref{CO_plot}(a) and (b). The spectra of the isotopologues of the $\rm CO~(3-2)$ transition show red and blueshifted profiles. However, the $\rm ^{13}CO~(1-0)$ transition shows a single-component, blueshifted profile. This is due to the large beam size of TRAO, where the blue and the red components are unresolved. A double Gaussian is used to fit the spectra of $\rm ^{12}CO~(3-2)$, $\rm ^{13}CO~(3-2)$ and $\rm C^{18}O~(3-2)$, and a single Gaussian profile is fitted to the $\rm ^{13}CO~(1-0)$ line. The fitted profiles are also shown in the figures.
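The double-Gaussian decomposition described above can be reproduced with a simple least-squares fit; the initial guesses below are illustrative and would in practice come from visual inspection of the spectrum.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, T0, v0, fwhm):
    """Gaussian line profile with peak T0 [K], centre v0 and FWHM [km/s]."""
    return T0 * np.exp(-4.0 * np.log(2.0) * (v - v0)**2 / fwhm**2)

def two_gauss(v, Tb, vb, wb, Tr, vr, wr):
    """Blueshifted + redshifted components of a double-peaked profile."""
    return gauss(v, Tb, vb, wb) + gauss(v, Tr, vr, wr)

def fit_double(v, T_mb, p0):
    """Fit a double Gaussian; p0 = (Tb, vb, wb, Tr, vr, wr)."""
    popt, _ = curve_fit(two_gauss, v, T_mb, p0=p0)
    return popt
```

On a synthetic $\rm ^{12}CO~(3-2)$ profile built from the tabulated components (29.1~K at $\rm 15.8~km~s^{-1}$ and 10.7~K at $\rm 21.0~km~s^{-1}$), the fit recovers both peak velocities.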
The retrieved parameters, peak velocities, velocity widths and peak main beam temperatures, are listed in {Table~}\ref{CO_table}. Beam correction is applied to the antenna temperature, taking $\rm \eta_{mb}$ to be 0.64 for the JCMT \citep{2009MNRAS.399.1026B} and 0.54 for TRAO \citep{2018ApJS..234...28L}. A detailed discussion on the outflow feature will be presented in Section \ref{outflow}. \section{DISCUSSION} \label{discussion} \subsection{Nature of radio emission}\label{nat_rad} Based on the GMRT maps and the radio spectral index estimation, two scenarios unfold in understanding the nature of the radio emission. The thermal radio emission could be explained as due to individual ultracompact (UC) {\mbox{H\,{\sc ii}}} regions or, given the association with an EGO, one can explore the case of an ionized jet. We discuss the possibilities of these two scenarios in the following sections. \subsubsection{UC {\mbox{H\,{\sc ii}}} region} \label{UCHii} \begin{table*} \caption{Physical parameters of the radio continuum emission from the UC {\mbox{H\,{\sc ii}}} region associated with component R1 of {G12.42+0.50}.} \begin{center} \begin{tabular}{c c c c c c c c c c} \hline \hline & \ Source & $\theta_{src}$ &Radius & $T_e$ & $N_{Ly}$ & log ($N_{Ly}$) &$EM$ & $n_e$ & $t_{dyn}$ \\ & &(arcsec) & (pc) & (K ) & ($10^{45}$ s$^{-1}$) && (pc~cm$^{-6}$ ) &(cm$^{-3}$) & ($10^{3}$~yr) \\ \hline \ & R1 & 1.8 & 0.01 &7416$\pm$437 &4.1 &45.6 & 1.8 $\times$ 10$^6$ & 9.4$\times$ 10$^3$ & 0.4 \\ \hline \ \end{tabular} \label{radio_physical_param_tab} \end{center} \end{table*} We first investigate the emission under the UC {\mbox{H\,{\sc ii}}} region framework. Morphologically, R1 appears to be a compact, spherical radio source.
The association of R1 with a hot molecular core ($\sim $183 K; Section \ref{2_comp}) supports the interpretation of the emission as being due to photoionization, since hot cores are often associated with UC {\mbox{H\,{\sc ii}}} regions \citep{{2000prpl.conf..299K},{2002ARA&A..40...27C},{2016A&A...593A..49B}}. Assuming the continuum emission at 1390~MHz to be optically thin and arising from a homogeneous, isothermal medium, we derive the Lyman continuum photon flux ($N_{Ly}$), the emission measure (EM) and the electron number density ($n_e$). These physical parameters are estimated using the following formulation \citep{2016A&A...588A.143S} \begin{equation} \bigg[\frac{N_{Ly}}{\textrm s^{-1}}\bigg] = 4.771 \times 10^{42} \bigg[\frac{S_\nu}{\textrm {Jy}}\bigg] \bigg[\frac{T_e}{ \textrm {K}}\bigg]^{-0.45} \bigg[\frac{\nu}{\textrm {GHz}}\bigg]^{0.1} \bigg[\frac{d}{\textrm {pc}}\bigg]^2 \end{equation} \begin{multline} \bigg[\frac{EM}{\textrm{pc}\; \textrm{cm}^{-6}}\bigg] = 3.217 \times 10^{7} \bigg[\frac{S_\nu}{\textrm{Jy}}\bigg] \bigg[\frac{\nu}{\textrm{GHz}}\bigg]^{0.1} \bigg[\frac{T_e}{\textrm{K}}\bigg]^{0.35} \\ \bigg[\frac{\theta_{src}}{\textrm{arcsec}}\bigg]^{-2} \end{multline} \begin{multline} \bigg[\frac{n_e}{\textrm{cm}^{-3}}\bigg] = 2.576 \times 10^{6} \bigg[\frac{S_\nu}{\textrm{Jy}}\bigg]^{0.5} \bigg[\frac{\nu}{\textrm{GHz}}\bigg]^{0.05} \bigg[\frac{T_e}{\textrm K}\bigg]^{0.175} \\ \bigg[\frac{\theta_{src}}{\textrm {arcsec}}\bigg]^{-1.5} \bigg[\frac{d}{\textrm {pc}}\bigg]^{-0.5} \end{multline} \noindent where $S_\nu$ is the integrated flux density of the ionized region, $T_e$ is the electron temperature, $\nu$ is the frequency, $\theta_{src}$ is the deconvolved size of the ionized region, and $d$ is the distance to the source. We estimate $T_e$ from the derived electron temperature gradient in the Galactic disk by \citet{2006ApJ...653.1226Q}. 
We use their empirical relation, $ T_e = (5780 \pm 350) + (287 \pm 46) R_G$, where $R_G$ is the Galactocentric distance. $R_G$ is estimated to be 5.7~kpc following \citet{2008ApJ...684.1143X}. This yields an electron temperature of 7416$\pm$437~K. The derived physical parameters of the UC {\mbox{H\,{\sc ii}}} region are listed in Table \ref{radio_physical_param_tab}. \par If a single ZAMS star is responsible for the ionization of this UC {\mbox{H\,{\sc ii}}} region, then from \citet{1973AJ.....78..929P}, the estimated Lyman continuum photon flux corresponds to a spectral type of $\rm B1-B0.5$. Following \citet{2011MNRAS.416..972D}, the Lyman continuum flux from the UC {\mbox{H\,{\sc ii}}} region is suggestive of a massive star of mass $\sim 9-12$~M$_\odot$. As discussed earlier, the estimate is made under the assumption of optically thin emission. Hence, this result only provides a lower limit, since the emission at 1390~MHz could be partially optically thick, as is evident from our radio spectral index estimations. In addition to this, several studies show that there could be appreciable absorption of Lyman continuum photons by dust \citep{{2001ApJ...555..613I},{2004ApJ...608..282A},{2011A&A...525A.132P}}. It is further noticed that if the total infrared luminosity of {G12.42+0.50} ($2.8 \times 10^4$~L$_\odot$, Section \ref{2_comp}) were to be produced by a single ZAMS star, it would correspond to a star with spectral type between $\rm B0-O9.5$ \citep{1973AJ.....78..929P}. Taking a B0 star, the Lyman continuum photon flux is expected to be $ \rm 2.3 \times 10^{47}~s^{-1}$. At optically thin radio frequencies, such a star could generate an {\mbox{H\,{\sc ii}}} region with a flux density of $\sim 400$~mJy, which is much higher than the observed flux density of 7.9~mJy.
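The chain from the measured flux density to the tabulated quantities can be verified numerically. The flux density of 7.9~mJy at 1390~MHz, $\theta_{src} = 1.8''$ and $R_G = 5.7$~kpc are taken from the text; the heliocentric distance of $\approx$2.4~kpc is an assumption, chosen to be consistent with the tabulated $N_{Ly}$.

```python
def uchii_params(S_Jy, nu_GHz, theta_arcsec, d_pc, R_G_kpc):
    """Optically thin free-free diagnostics: T_e from the Galactic
    electron-temperature gradient, then N_Ly [s^-1], EM [pc cm^-6] and
    n_e [cm^-3] from the formulation quoted in the text."""
    T_e = 5780.0 + 287.0 * R_G_kpc
    N_Ly = 4.771e42 * S_Jy * T_e**-0.45 * nu_GHz**0.1 * d_pc**2
    EM = 3.217e7 * S_Jy * nu_GHz**0.1 * T_e**0.35 * theta_arcsec**-2
    n_e = (2.576e6 * S_Jy**0.5 * nu_GHz**0.05 * T_e**0.175
           * theta_arcsec**-1.5 * d_pc**-0.5)
    return T_e, N_Ly, EM, n_e
```

With these inputs the routine reproduces the values in Table~\ref{radio_physical_param_tab}: $T_e \approx 7416$~K, $N_{Ly} \approx 4.1 \times 10^{45}\rm~s^{-1}$, $EM \approx 1.8 \times 10^6\rm~pc~cm^{-6}$ and $n_e \approx 9.4 \times 10^3\rm~cm^{-3}$.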
Such a weak observed flux density could be suggestive of the central source going through a strong accretion phase, with it still being in a pre-UC {\mbox{H\,{\sc ii}}} or very early UC {\mbox{H\,{\sc ii}}} region phase \citep{2010ApJ...725..734G}. Intense accretion activity could stall the expansion of the {\mbox{H\,{\sc ii}}} region, resulting in weaker radio emission. The above picture is congruous with the infall scenario associated with R1 and the evidence of global collapse of the molecular cloud associated with {G12.42+0.50}. A detailed discussion on the molecular gas kinematics is presented in Section \ref{infall}. \begin{figure} \centering \includegraphics[scale=0.38]{fig13.pdf} \caption{The UKIRT-UIST spectrum extracted over a 6 pixel wide aperture centered on the radio component, R1 is shown here. The spectral range is chosen to be the same as the VLT-ISAAC spectrum towards the infrared source, IRAS~18079-1756, associated with {G12.42+0.50}, studied by \citet{2003A&A...408..313K}. The absorption lines identified by these authors are indicated in the plot.} \label{kendall} \end{figure} \par From the Lyman continuum photon flux and the electron density estimates, we compute the radius of the Str\"omgren sphere, which is defined as the radius at which the rate of ionization equals the rate of recombination, assuming that the {\mbox{H\,{\sc ii}}} region is expanding in a homogeneous and spherically symmetric medium. The radius of the Str\"omgren sphere, $R_s$, is given by the expression, \begin{equation} R_s = \bigg(\frac{3 N_{Ly}}{4 \pi n_{0}^2 \alpha_B}\bigg)^{1/3} \end{equation} \noindent where $\alpha_B$ is the radiative recombination coefficient, taken to be $\rm 2.6 \times 10^{-13}~cm^3~s^{-1}$ \citep{1997ApJ...489..284K}, and $n_0$ is the mean number density of atomic hydrogen, which is estimated to be $\rm 2.1 \times 10^4~cm^{-3}$ from the clump detected in the column density map (Section \ref{cold_dust}).
Thus, the radius of the Str\"omgren sphere, $R_s$, for the resolved component, R1, is calculated to be 0.007~pc. Using this, the dynamical age of the {\mbox{H\,{\sc ii}}} region is determined from the expression \begin{equation} t_{dyn} = \bigg[\frac{4\, R_s}{7\, c_i}\bigg] \bigg[\bigg(\frac{R_{\mbox{H\,{\sc ii}}}}{R_s}\bigg)^{7/4} - 1\bigg] \end{equation} \noindent where $R_{\mbox{H\,{\sc ii}}}$ is the radius of the {\mbox{H\,{\sc ii}}} region and $c_i$ is the isothermal sound speed in the ionized medium, which is typically assumed to be $\rm 10~km~s^{-1}$. $R_{\mbox{H\,{\sc ii}}}$ is estimated to be 0.01~pc by taking the geometric mean of the deconvolved size given in {Table~}\ref{peak_position}. The dynamical age of the UC {\mbox{H\,{\sc ii}}} region associated with component R1 is determined to be $ 0.4 \times 10^3$~yr. Since this estimation is made under the simplistic assumption that the medium in which the {\mbox{H\,{\sc ii}}} region expands is homogeneous, the results obtained may be considered representative at best. The derived physical parameters of the UC {\mbox{H\,{\sc ii}}} region are tabulated in {Table~} \ref{radio_physical_param_tab}. The estimated values of electron density and emission measure are in the range found for UC {\mbox{H\,{\sc ii}}} regions around stars of spectral type $\rm B1-B0.5 $ \citep{1994ApJS...91..659K}. Furthermore, the sizes of UC {\mbox{H\,{\sc ii}}} regions are proposed to be $\rm \lesssim 0.1~pc$ \citep{{1989ApJS...69..831W},{2002ASPC..267...81K}}, in agreement with that derived for the component R1. The dynamical timescale obtained indicates a very early phase of the UC {\mbox{H\,{\sc ii}}} region \citep{{1989ApJS...69..831W},{2002ARA&A..40...27C}}. \citet{1989ApJS...69..831W} estimate that it would take $\sim 10^4$~yr for an UC {\mbox{H\,{\sc ii}}} region to expand against the gravitational pressure of the confining dense molecular cloud.
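A minimal numerical check of the Str\"omgren radius and the dynamical age, using the values quoted above ($N_{Ly} = 4.1 \times 10^{45}\rm~s^{-1}$, $n_0 = 2.1 \times 10^4\rm~cm^{-3}$, $R_{\rm HII} = 0.01$~pc, $c_i = \rm 10~km~s^{-1}$):

```python
import numpy as np

pc, yr = 3.086e18, 3.156e7

def stromgren_age(N_Ly, n0, R_HII_pc, alpha_B=2.6e-13, c_i=1e6):
    """R_s = (3 N_Ly / (4 pi n0^2 alpha_B))^(1/3) and the expansion age
    t_dyn = (4 R_s / 7 c_i) [ (R_HII / R_s)^(7/4) - 1 ], cgs inputs."""
    R_s = (3.0 * N_Ly / (4.0 * np.pi * n0**2 * alpha_B))**(1.0 / 3.0)  # cm
    t_dyn = 4.0 * R_s / (7.0 * c_i) * ((R_HII_pc * pc / R_s)**1.75 - 1.0)
    return R_s / pc, t_dyn / yr   # [pc], [yr]
```

This returns $R_s \approx 0.007$~pc and $t_{dyn} \approx 0.4 \times 10^3$~yr, matching the quoted values.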
\par On a careful scrutiny of the point sources in the region, it is seen that a red 2MASS\footnote{This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and the NSF} source (J18105109-1755496; $J$ = 13.727, $H$ = 11.011, $K$ = 9.351) is located at the peak position of R1 (within $\sim 0.3''$). Investigating its location in $JHK$ colour-colour diagrams (e.g. {Fig.~}6(d) of \citealt{2017MNRAS.472.4750D}) suggests a highly embedded Class II YSO, which in all likelihood could be the ionizing source. Detailed spectroscopic observations of this source are presented in \citet{2003A&A...408..313K}. In the observed wavelength range of $1.67-1.75$~{$\rm \mu m$}, the VLT/ISAAC $H$-band spectra presented by these authors show the presence of broad absorption features of He I ($\sim$ 1.7~{$\rm \mu m$}) and hydrogen. We did a careful examination of our UKIRT spectroscopic observations. We extracted the spectrum over a 6 pixel wide aperture (estimated from other stellar sources along the slit) centred on R1, a zoom in of which is shown in {Fig.~}\ref{kendall}. The spectral range is chosen such that it matches the VLT spectrum of \citet{2003A&A...408..313K} (refer to {Fig.~}5 of their paper). In spite of the poor signal-to-noise ratio, the spectrum does show a hint of the Br 11 line, and possibly the Br 10 line as well, as detected by \citet{2003A&A...408..313K}. Based on these absorption lines and the absence of the 1.693~{$\rm \mu m$} He II absorption line, \citet{2003A&A...408..313K} suggest this source to be a main-sequence star of spectral type B3 ($\pm 3$ subclasses). This is consistent with the spectral type derived from our measured radio flux densities. However, the absence of emission lines in their observed spectra prompted the authors to speculate a late evolutionary stage.
This contradicts the results obtained from our $HK$ and $KL$ spectra, which show the presence of several emission lines, listed in Table \ref{spectral_lines}, indicating an early evolutionary phase. The results from the molecular line analysis discussed in Section \ref{kinematic_signature} are also in agreement with this picture. The compact component R2 can either be an independent UC {\mbox{H\,{\sc ii}}} region or an externally ionized density clump. If we consider it as an UC {\mbox{H\,{\sc ii}}} region, then the observed Lyman continuum flux translates to an ionizing source of spectral type $\rm B3-B2$ \citep{1973AJ.....78..929P} and a mass of $6-9$~M$_\odot$ \citep{2011MNRAS.416..972D}. \subsubsection{A possible thermal jet?} \label{jet} \par Even with the compelling possibility of R1 being an UC {\mbox{H\,{\sc ii}}} region, we explore an alternate scenario along the lines of a possible thermal jet. This is motivated by the very nature of {G12.42+0.50}, which is identified as an EGO and hence likely to be associated with jets/outflows. Further, several observational manifestations are consistent with the characteristics of thermal radio jets listed in \citet{1996ASPC...93....3A} and \citet{1997IAUS..182...83R}. {G12.42+0.50} is a weak radio source (integrated flux density $<$ 10~mJy) displaying a linear morphology, including components R1 and R2, in the north-east and south-west direction. It is also seen to be associated with a large scale molecular outflow ({Fig.~}\ref{outflow_moment}(a), Section \ref{outflow}), with the candidate jet located at its centroid position and the observed elongation aligned with the outflow axis. From the radio spectral index map shown in {Fig.~}\ref{specind}, we see that along the direction of the radio components R1 and R2, the spectral index varies between $\sim 0.3-0.7$.
These values of spectral index are consistent with the radio continuum emission originating from the thermal free-free emission of an ionized collimated stellar wind \citep{{1975A&A....39....1P},{1986ApJ...304..713R},{1998AJ....116.2953A}}. A similar range of spectral index values is also cited in the literature for systems harbouring thermal radio jets \citep{{1994ApJ...420L..91A}, {2010ApJ...725..734G},{2016A&A...596L...2S}}. Additional support for the thermal jet hypothesis comes from the angular size spectrum. \citet{2016ApJ...826..208G} and \citet{2017ApJ...843...99H} have discussed the trend of the angular size spectrum, where the jet features show a decrease in size with frequency, as expected from the variation of electron density with frequency \citep{{1975A&A....39....1P},{1986ApJ...304..713R}}. In the case of {G12.42+0.50}, the 1390~MHz and 5~GHz sizes show this trend, with the upper limit from 610~MHz being consistent. It should be noted here that in the 5~GHz map, all structures up to $\sim 20\arcsec$ would be well-imaged \citep{2009A&A...501..539U}. However, the size dependence is not conclusive given the resolution of the two maps. The presence of shock-excited emission lines in the NIR (Section \ref{NIR_spectroscopy}) further corroborates this ionized jet scenario. Additionally, a $\rm H_2O$ maser is seen to be associated with {G12.42+0.50} \citep{2013ApJ...764...61C}, located at an angular distance of $ \sim $12{$\arcsec$} from the radio peak. Its position is indicated in {Fig.~}\ref{irac_ukidss_rgb}(b) and (c) and {Fig.~}\ref{outflow_moment}(b). $\rm H_2O$ masers have often been found in the vicinity of thermal radio jets, and in some cases both the thermal jet and the $\rm H_2O$ masers are powered by the same star \citep{1995ApJ...453..268G}. \par The two competing scenarios deliberated above are both in good agreement with our observations, making it difficult to favour either.
However, recent studies speculate on the co-existence of UC/HC {\mbox{H\,{\sc ii}}} regions and ionized jets. From an investigation of the nature of the observed centimetre radio emission in G35.20-0.74N, \citet{2016A&A...593A..49B} discuss the possibility of it being a UC {\mbox{H\,{\sc ii}}} region as well as a radio jet powered by the same YSO, suggesting an interesting transitional phase in which the UC {\mbox{H\,{\sc ii}}} region has started to form while the infall and outflow processes of the main accretion phase are still ongoing. A similar scenario is also invoked for the MYSO, G345.4938+01.4677, by \citet{2016ApJ...826..208G}. Both these examples conform well with our results. \citet{2010ApJ...725..734G} discuss a string of radio sources which are likely to be ionized emission due to shocks from fast jets, wherein the separation of the inner lobe from the central object is $\sim 0.03$~pc. \citet{2003ApJ...587..739G} also examine a radio triple source in which the central source harbours a high-mass star in an early evolutionary phase and ejects a collimated stellar wind that ionizes the surrounding medium, giving rise to the observed radio emission. In this case, the separation between the central source and the radio lobe is $\sim$ 0.14~pc. For {G12.42+0.50}, component R2, at a distance of $\sim 0.07$~pc from R1, can also be conjectured to be a clumpy, enhanced density region (SMA3) ionized by the emanating jet. The star forming region {G12.42+0.50} has also been speculated to be harbouring a cluster \citep{{1984ApJ...281..225J},{2003A&A...408..313K}}. With the detection of R1, R2, SMA1 and SMA2, the region reveals itself as a potentially active star forming complex.
\subsection{Kinematic Signatures of gas motion} \label{kinematic_signature} \subsubsection{Infall activity} \label{infall} \begin{table} \caption{The infall velocity, $V_{\rm inf}$, and mass infall rate, $\dot{M}_{\rm inf}$, of the clump C1 associated with {G12.42+0.50}, estimated using the blue-skewed, optically thick $\rm HCO^+$ line.} \begin{center} \begin{tabular}{c c c c} \hline \hline $V_{\rm LSR}$ &$V_{\rm inf}$ & $ \delta V$ & $\dot{M}_{\rm inf}$ \\ ($\rm km~s^{-1}$) &($\rm km~s^{-1}$) & &($10^{-3}$ M$_\odot$ yr$^{-1}$) \\ \hline \ 18.3 & 1.8 &-0.6 & 9.9 \\ \hline \ \end{tabular} \label{infall-tab} \end{center} \end{table} \begin{figure} \includegraphics[scale=0.30]{fig14.pdf} \caption{(a) The grey scale shows the 4.5~{$\rm \mu m$} map. The green contours represent the ATLASGAL 870~{$\rm \mu m$} emission. The contour levels are 3, 9, 27, 72 and 108 times $\sigma$ ($ \sigma\sim 0.06$~Jy/beam). The $\rm HCO^+$ spectra shown in blue and the $\rm H^{13}CO^+$ spectra shown in red are overlaid. The spectra are boxcar-smoothed by five channels, which corresponds to a velocity smoothing of $\rm 0.6~km~s^{-1}$. The dashed vertical lines indicate the LSR velocity estimated by averaging the peak positions of the $\rm H^{13}CO^+$ line in all the regions where the line is detected. The peak position of the 870~{$\rm \mu m$} emission is marked by a magenta `x'. (b) The $\rm HCO^+$ spectrum extracted towards the ATLASGAL 870~{$\rm \mu m$} peak. The best fit obtained using the `two-layer' model is shown in red. The solid blue line represents the LSR velocity, $\rm 18.3~km~s^{-1} $, derived from the optically thin $\rm H^{13}CO^+$ line and the dashed blue line represents the LSR velocity obtained from the model fit. The red arrow points to a blue wing, which could indicate a possible molecular outflow.
} \label{infall_grid} \end{figure} \begin{table*} \caption{Best fit parameters retrieved from the model for the self-absorbed $\rm HCO^+$ line observed towards {G12.42+0.50}.} \begin{center} \begin{tabular}{c c c c c c c c} \hline \hline $\tau_0$ & $\Phi$ & $J_c$ & $J_f$ & $J_r$ & $ V_{\rm cont} $ & $\sigma$ & $V_{\rm rel}$ \\ &&(K) &(K) &(K) &($\rm km~s^{-1}$) &($\rm km~s^{-1}$) &($\rm km~s^{-1}$) \\ \hline \ 1.3 & 0.3 &13.9 &7.4 &10.6 & 17.9 &0.7 &1.3 \\ \hline \ \end{tabular} \label{two_layer} \end{center} \end{table*} The double-peaked, blue-asymmetric $\rm HCO^+$ line profile with a self-absorption dip shown in {Fig.~}\ref{molecule_thick} is a characteristic signature of infall activity \citep{{2005ApJ...620..800D},{2006A&A...445..979P},{2013A&A...555A.112P},{2018ApJ...852...12Y}}. In order to probe the gas motion in the entire clump associated with {G12.42+0.50}, we generate a grid map of the $\rm HCO^+$ line profile, which is presented in {Fig.~}\ref{infall_grid}(a). The $\rm HCO^+$ line profiles are displayed in blue. For comparison, we also plot the optically thin transition, $\rm H^{13}CO^+$, in red. The grey scale map shows the 4.5~{$\rm \mu m$} emission with the ATLASGAL contours (in green) overlaid. The spectra shown here are averaged over regions gridded to an area given by the square of the beam size (36{$\arcsec$}) of the Mopra radio telescope. The $\rm HCO^+$ spectrum displays blue-skewed line profiles in all the grids within the ATLASGAL contour, strongly indicating that the clump is in global collapse. For the molecular cloud to be collapsing, the gravitational energy of the cloud has to overcome the kinetic energy that supports it against collapse. The gravitational stability of the cloud can be inspected using the virial parameter, $ \alpha_{\rm vir} = 5\sigma^2R/(GM_{\rm C}) $, which needs to be lower than unity \citep{1992ApJ...395..140B} for a collapsing cloud.
$\sigma$ is the velocity dispersion which is taken from the FWHM of the optically thin $\rm H^{13}CO^+$ line and is estimated to be $\rm 1.2~km~s^{-1}$. Taking $R$ and $ M_{\rm C} $ as the radius and mass of the clump, C1, the virial parameter, $\alpha_{\rm vir}$ is calculated to be $\sim 0.9$. In comparison, \citet{2018ApJ...852...12Y} obtain a value of 0.58 for the EGO G022.04+0.22 and \citet{2011A&A...530A.118P} in their study of massive cores obtain values in the range $0.1-0.8$. Given the presence of infall and outflow activity, which could significantly increase the velocity dispersion, the derived estimate towards {G12.42+0.50} is likely to be an overestimate. \par To support the picture of protostellar infall, we estimate the infall velocity and mass infall rate. First, to quantify the blue-skewness of the $\rm HCO^+$ profile, we calculate the asymmetry parameter, $\delta V$, using the following expression \citep{2013RAA....13...28Y}, \begin{equation} \delta V = \frac{(V_{\rm thick}-V_{\rm thin})}{\Delta V_{\rm thin}} \end{equation} Here, $\delta V$ is defined as the ratio of the difference between the peak velocities of the optically thick line, $V_{\rm thick}$ and the optically thin line, $V_{\rm thin}$, and the FWHM of the optically thin line denoted by $\Delta V_{\rm thin}$. Using values of $ V_{\rm thin} = 18.3{\rm ~km~s^{-1}}$ and $\Delta V_{\rm thin}=2.9{\rm ~km~s^{-1}}$ from the Gaussian fit to the $\rm H^{13}CO^+$ line and $ V_{\rm thick} = 16.5{\rm ~km~s^{-1}}$, the peak of the blue component of the $\rm HCO^+$ line, $\delta V$ is estimated to be $-0.6$. According to \citet{1997ApJ...489..719M}, the criterion for a bona fide blue-skewed profile is $\delta V < -0.25$.
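As a quick numerical cross-check of the asymmetry parameter quoted above, a minimal Python sketch (the function name is ours, for illustration only):

```python
def asymmetry_parameter(v_thick, v_thin, dv_thin):
    """Line asymmetry delta V = (V_thick - V_thin) / Delta V_thin."""
    return (v_thick - v_thin) / dv_thin

# Values quoted in the text: HCO+ blue peak at 16.5 km/s, H13CO+ peak at
# 18.3 km/s, and H13CO+ FWHM of 2.9 km/s.
dV = asymmetry_parameter(v_thick=16.5, v_thin=18.3, dv_thin=2.9)
print(round(dV, 2))   # -0.62, quoted as -0.6 in the text
print(dV < -0.25)     # True: satisfies the blue-profile criterion
```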
Furthermore, we estimate the mass infall rate ($\dot{M}_{\rm inf}$) of the envelope using the equation, $\dot{M}_{\rm inf}=4\pi R^2V_{\rm inf}\rho$ \citep{2010A&A...517A..66L}, where $V_{\rm inf}$ = $V_{\rm thin} - V_{\rm thick}$ = $V_{\rm H^{13}{CO}^+} - V_{\rm HCO^+}$ is the infall velocity and $\rho$ is the average volume density of the clump given by $\rho = 3M_{\rm C}/(4\pi R^3)$. The clump mass, $M_ {\rm C}$ and radius, $R$ are taken from Section \ref{clump_text}. The infall velocity, $V_{\rm inf}$ and the mass infall rate are estimated to be $\rm 1.8~km~s^{-1}$ and $\rm 9.9 \times 10^{-3}$ M$_\odot$ yr$^{-1}$, respectively. The mass infall rate estimate is higher than the value of $6.4 \times 10^{-3}$ M$_\odot$ yr$^{-1}$ derived by \citet{2015MNRAS.450.1926H}. As discussed in Section \ref{clump_text}, our clump mass and radius estimates are higher. Nevertheless, both the estimates fall in the range seen in other high mass star forming regions \citep{{2010ApJ...710..150C},{2010A&A...517A..66L},{2013MNRAS.436.1335L}}. \par To further understand the properties of the infalling gas, we extend our analysis and fit the $\rm HCO^+$ line with a `two-layer' model following the discussion in \citet{2013MNRAS.436.1335L}. Here, we briefly repeat the salient features of the model with a description of the equations and the terms. In this model, a continuum source is located in between the two layers, with each layer having an optical depth, $ \tau_0 $, a velocity dispersion, $ \sigma $, and a speed, $ V_{\rm rel} $, relative to the continuum source; this corresponds to the infall velocity introduced earlier. $ V_{\rm rel} $ is negative if the gas is moving away and positive when there is inward motion.
The brightness temperature at velocity, $ V $ is given by \begin{multline} \Delta T_{B}=(J_{f}-J_{cr})[1-\exp(-\tau_{f})]\\ +(1-\Phi)(J_{r}-J_{b})\times[1-\exp(-\tau_{r}-\tau_{f})] \end{multline} where \begin{equation} J_{cr}=\Phi J_{c}+(1-\Phi)J_{r} \end{equation} and \begin{equation} \tau_{f}=\tau_{0}\exp\bigg[\frac{-(V-V_{\rm rel}-V_{\rm cont})^{2}}{2\sigma^{2}}\bigg] \end{equation} \begin{equation} \tau_{r}=\tau_{0}\exp\bigg[\frac{-(V+V_{\rm rel}-V_{\rm cont})^{2}}{2\sigma^{2}}\bigg] \end{equation} \noindent Here $J_{c}$, $J_{f}$, $J_{r}$, $J_{b}$ are the Planck temperatures of the continuum source, the ``front" layer, the ``rear" layer and the cosmic background radiation, respectively. $J$ is the blackbody function at temperature, $T$ and frequency, $\nu$ and is expressed as \begin{equation} J=\frac{h\nu}{k}\frac{1}{\exp (T_{0}/T)-1} \end{equation} \noindent where $T_0=h\nu/k$, $h$ is Planck's constant, and $k$ is Boltzmann's constant. $\Phi$ and $V_{\rm cont}$ are the filling factor and systemic velocity (or the LSR velocity) of the continuum source, respectively. The $\rm HCO^+$ profile and the fitted spectrum (in red) are displayed in {Fig.~}\ref{infall_grid}(b). The LSR velocities determined from the model fit (dashed blue) and the optically thin transition of $\rm H ^{13}CO^+$ (solid blue) are also shown in the figure. The blue component of the $\rm HCO^+$ line shows a clearly broadened wing, likely due to outflow. To avoid contamination from this outflow component, we restrict the fit to the velocity range $\rm 16.1-21.0~km~s^{-1}$. The model derived parameters are listed in {Table~}\ref{two_layer}. The model-fitted values are fairly consistent with (though slightly smaller than) our previous estimates.
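The `two-layer' model above is straightforward to evaluate numerically. The sketch below (ours) plugs in the best-fit parameters of Table~\ref{two_layer}; the background term $J_b$ assumes the HCO$^+$~(1--0) line frequency near 89~GHz, which is our assumption rather than a value stated in the text:

```python
import math

def planck_temp(t0, temp):
    """Planck temperature J = T0 / (exp(T0/T) - 1), with T0 = h*nu/k."""
    return t0 / (math.exp(t0 / temp) - 1.0)

def two_layer_dTb(v, tau0=1.3, phi=0.3, jc=13.9, jf=7.4, jr=10.6,
                  v_cont=17.9, sigma=0.7, v_rel=1.3):
    """Brightness temperature of the two-layer infall model.

    Defaults are the best-fit values of Table two_layer. The CMB term
    assumes nu ~ 89.2 GHz (HCO+ 1-0), i.e. T0 = h*nu/k ~ 4.28 K.
    """
    jb = planck_temp(4.28, 2.725)
    tau_f = tau0 * math.exp(-(v - v_rel - v_cont) ** 2 / (2.0 * sigma ** 2))
    tau_r = tau0 * math.exp(-(v + v_rel - v_cont) ** 2 / (2.0 * sigma ** 2))
    j_cr = phi * jc + (1.0 - phi) * jr
    return ((jf - j_cr) * (1.0 - math.exp(-tau_f))
            + (1.0 - phi) * (jr - jb) * (1.0 - math.exp(-tau_r - tau_f)))

# The front layer absorbs near v_cont + v_rel while the rear layer peaks near
# v_cont - v_rel, so the blue peak is brighter than the red one:
print(two_layer_dTb(16.6) > two_layer_dTb(19.2))   # True
```

With these parameters the model indeed reproduces a blue-asymmetric double peak, consistent with the observed profile.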
\subsubsection{Outflow feature} \label{outflow} \begin{figure*} \centering \includegraphics[scale=0.20]{fig15.pdf} \caption{(a) The {\it Spitzer} IRAC colour composite image of {G12.42+0.50}, overlaid with the SMA 1.1~mm emission contours in black, with the contour levels same as in {Fig.~}\ref{FIR}(k). The $\rm ^{12}CO~(3-2) $ emission integrated from the peak of the blueshifted profile to the blue wings ($\rm 9.3-15.8~km~s^{-1} $) is represented using blue contours and from the peak of the red profile to the red wings ($\rm 20.8-27.3~km~s^{-1} $) is represented using red contours. The contours start from the $5~\sigma$ level for both the red and blue lobes and increase in steps of $3~\sigma$ and $4~\sigma$, respectively ($\rm \sigma=2.7~K~km~s^{-1}$ for the red lobe and $\rm \sigma=2.3~K~km~s^{-1}$ for the blue lobe). The yellow line defines the cut along which the position-velocity (PV) diagram is made. The cut is selected in such a way that it passes through the red and blue lobes and also through the extended green emission. The red and blue lobes of the molecular outflow lie along a similar axis as the ionized jet. (b) The colour scale represents the 1.1~mm continuum emission from SMA observed towards {G12.42+0.50} with the contour levels same as in {Fig.~}\ref{Dust_SED}(k). Radio emission at 1390~MHz is represented by yellow contours with the contour levels same as that in {Fig.~}\ref{radio}(a). The restoring beams of the 1390~MHz map and 1.1~mm map are indicated at the bottom-right and bottom-left of the image, respectively. The `x's indicate the positions of R1 and R2. The white circle marks the position of the $\rm H_2O$ maser in the vicinity of {G12.42+0.50}.} \label{outflow_moment} \end{figure*} \begin{figure} \centering \includegraphics[scale=0.21]{fig16.pdf} \caption{ The PV diagram of the $\rm ^{12}CO~(3-2) $ transition along the cut shown in yellow in {Fig.~}\ref{outflow_moment}(a) at a position angle of 32$^\circ$.
The contour levels are 4, 9, 14 and 18 times $\sigma$ ($\sigma \sim 1.0~\rm K$). The zero offset in the PV diagram corresponds to the position of the central coordinate of {G12.42+0.50} ($\rm \alpha_{J2000}= 18^{h}10^{m}51.1^s, \delta_{J2000} = -17\degree 55\arcmin 50\arcsec$). The LSR velocity, $\rm 18.3~km~s^{-1} $, is represented by the dashed red line. } \label{pv} \end{figure} \begin{figure} \centering \includegraphics[scale=0.45]{fig17.pdf} \caption{Channel maps of the $\rm ^{12}CO~(3-2)$ line associated with the molecular cloud harbouring {G12.42+0.50}. Each box contains a pair of maps corresponding to the red- and blueshifted emission at the same offset from the LSR velocity. The channel widths are indicated at the top left of each map. The red contours correspond to the red wing and the blue contours correspond to the blue wing. The contours start from the 3$\sigma$ level of each map and increase in steps of 3$\sigma$.} \label{channel_fwhm} \end{figure} Massive molecular outflows are ubiquitous in star forming regions \citep{2002A&A...383..892B} and often co-exist with ionized jets \citep[e.g.][]{{1996ASPC...93....3A},{2016MNRAS.460.1039P}}. The jets are believed to entrain the gas and dust from the ambient molecular cloud, thus driving molecular outflows. According to several studies, broad wings of optically thick lines like $\rm HCO^+$ are well-accepted signatures of outflow activity \citep[e.g.][]{{2007ApJ...663.1092K},{2010A&A...520A..49S},{2013ApJ...771...24S}}. As mentioned in the previous section, broadening of the blue wing of the infall tracer line of $\rm HCO^+$ is seen in {G12.42+0.50}. Given the association with an EGO and the alignment with large-scale CO outflow features, the origin of the broad blue wing can be attributed to the outflow.
Alternate scenarios like unresolved velocity gradients \citep{2014A&A...565A.101T} or gravo-turbulent fragmentation \citep{2005ApJ...620..786K} have been invoked for broadened wings but are less likely to be the case here. In this section, we focus on the rotational transition lines of CO that are well known tracers of molecular outflow, and investigate the outflow kinematics of the molecular cloud associated with {G12.42+0.50} using the archival data of the isotopologues of the $\rm CO~(3-2)$ transition from JCMT and the $\rm ^{13}CO~(1-0)$ observation from TRAO. \par The red and blueshifted velocity profiles of the CO transitions shown in {Fig.~}\ref{CO_plot}(a) can be attributed to emission arising from distinct components of the CO gas that are moving in opposite directions away from the central core. We note that the peaks have different shifts with respect to the LSR velocity, with the $\rm ^{12}CO~(3-2)$ line showing the maximum shift and the $\rm C^{18}O~(3-2)$ line the minimum shift. The peaks of the red components of the $\rm ^{12}CO~(3-2)$, $\rm ^{13}CO~(3-2)$ and $\rm C^{18}O~(3-2)$ transitions are shifted by 2.5, 1.6 and $\rm 1.0~km~s^{-1}$ from the LSR velocity. For the blue components the shifts are 2.5, 1.7 and $\rm 0.9~km~s^{-1}$, respectively. The $\rm ^{12}CO$ molecule, having the lowest critical density among the three, effectively traces the outer envelope of the molecular cloud, hence showing the maximum shift, while $\rm C^{18}O$, which traces the densest gas among the three species, probes the dense core of the molecular cloud and thus shows the minimum shift. \par In order to map the outflow in the vicinity of {G12.42+0.50}, we construct the zeroth moment map of the two components using the task, {\tt IMMOMENTS} in CASA. The zeroth moment map is the integrated intensity map that gives the intensity distribution of a molecular species within the specified velocity range.
The $\rm ^{12}CO~(3-2)$ emission is integrated from the peak of the blueshifted profile to the blue wings that corresponds to the lower velocity channels ranging from $\rm 9.3-15.8~km~s^{-1} $ for the blue component and from the peak of the redshifted profile to the red wing that corresponds to the higher velocity channels ranging from $\rm 20.8-25.3~km~s^{-1} $ for the red component. The contours are shown overlaid on the {\it Spitzer} IRAC colour composite image in {Fig.~}\ref{outflow_moment}(a). The figure reveals the presence of two distinct, spatially separated red and blue lobes. High-velocity gas is also seen towards the tail of the blue component. The locations of the 1.1~mm dense cores, SMA1, SMA2 and SMA3, are also marked in this figure. The central part covering the brightest portion of the IRAC emission (location of the EGO) is shown in {Fig.~}\ref{outflow_moment}(b) with the spatial distribution of the ionized gas overlaid on the 1.1~mm dust emission. To corroborate the zeroth moment map showing the outflow lobes, in {Fig.~}\ref{pv}, we show the position-velocity (PV) diagram constructed along the outflow direction (position angle of $\rm \sim 32^\circ$; east of north) highlighted in {Fig.~}\ref{outflow_moment}(a). The direction along which the PV diagram is made is chosen such that both the red and blue lobes are sampled and it also covers the region of extended 4.5~{$\rm \mu m$} emission of {G12.42+0.50}. The zero offset in the PV diagram corresponds to the position of the central coordinate of the EGO, {G12.42+0.50} ($\rm \alpha_{J2000}= 18^{h}10^{m}51.1^s, \delta_{J2000} = -17\degree 55\arcmin 50\arcsec$). As expected, the PV diagram also clearly reveals distinct red and blue components of the $\rm ^{12}CO~(3-2)$ emission from the LSR velocity of the cloud represented by a red dashed line.
Towards the lower region of the PV diagram we can trace a weaker redshifted $\rm ^{12}CO~(3-2)$ component consistent with the high-velocity tail seen in the zeroth moment map. \par To further probe the velocity structure of the cloud associated with {G12.42+0.50}, we generate channel maps of the $\rm ^{12}CO~(3-2) $ emission following the method outlined in \citet{2014A&A...565A..34S}. To define suitable velocity ranges and identify the blue and redshifted outflow emission, we set the inner limits of the velocity at $\sim V_{\rm LSR} \pm {\rm FWHM}/2$, where FWHM is the $\rm H^{13}CO^+$ linewidth. Taking the offset to be $\rm \pm 1.5~km~s^{-1}$ from the LSR velocity, the inner velocity limits for the red and the blueshifted lobes are estimated to be $\rm 19.8~km~s^{-1}$ and $\rm 16.8~km~s^{-1}$, respectively. The channel maps constructed are shown in {Fig.~}\ref{channel_fwhm}. Each grid displays a pair of maps with a velocity width of $\rm 1~km~s^{-1}$. Prominent outflow features begin to appear at velocities of $\rm \sim 13.8$ and $\rm 21.8~km~s^{-1}$ in the blue and red components, respectively. Beyond these velocities there is no contribution from the central core. Closer to the LSR velocity, the emission is rather complex, making the outflow features difficult to discern. This is understandable since near the LSR velocity, the emission from the outflow components is likely to be contaminated by the infall motion and contribution from the diffuse gas. The channel maps are comparable to those presented by \citet{2009ApJ...696...66Q} for the region G240.31+0.07 with similar complex velocity structure near the LSR. The channel maps also show the presence of a redshifted component between velocities $\rm 20.8-22.8~km~s^{-1}$ overlapping with the blue lobe towards the south-west. As will be discussed later in Section \ref{hub_filament}, such a velocity distribution can be indicative of accretion through filaments.
\par Morphologically, the spatially separated red and blue lobes associated with {G12.42+0.50} resemble the wide-angle bipolar outflow seen in the star forming region G240.31+0.07 studied by \citet{2009ApJ...696...66Q}. These authors interpret the wide-angle bipolar outflow as ambient gas swept up by an underlying wide-angle wind, driven by one of the three mm peaks located close to the geometric centre of the bipolar outflow. Only a handful of studies have found the presence of wide-angle bipolar molecular outflows associated with high mass star formation \citep[e.g.][]{{1998ApJ...507..861S},{2009ApJ...696...66Q}}. \citet{1998ApJ...507..861S} investigate the likely driving source of the poorly collimated molecular outflow associated with G192.16. The coexistence of the wide-angle CO outflow with shock-excited {H$_2$} emission prompted them to conclude that the G192.16 outflow is powered by the combination of a disk-wind and a jet. Given the likely association of {G12.42+0.50} with an ionized jet supported by the presence of shock-excited NIR {H$_2$} lines, we propose a similar picture of coexistence of a disk-wind and a jet, where the wide-angle bipolar CO outflow is likely to be driven by the underlying wide-angle wind. \par Of crucial importance is the identification of the driving source for this outflow. \citet{2008A&A...488..579M} discuss the star forming region IRAS~18151-1208, where the two detected bipolar outflows are shown to be powered by two mm sources, MM1 and MM2. They have also detected a third mm core, MM3, that does not show any outflow activity. \citet{2002A&A...383..892B}, in their statistical study of massive molecular outflows, state that a large fraction of their sample show bipolar outflows and these are seen to be associated with massive mm sources and, in most cases, are centred on the mm peaks.
As seen in {Fig.~}\ref{outflow_moment}, two mm cores (SMA1 and SMA2) are located towards the centroid of the bipolar outflow associated with {G12.42+0.50}. These are shown to be potential high-mass star forming cores. Further, the absence of radio emission and IR sources implies a very early evolutionary phase. We further investigate whether SMA1 and/or SMA2 are the powering sources of the CO bipolar outflow in {G12.42+0.50}. \par Following \citet{2009ApJ...696...66Q}, the dynamical timescale of the outflow seen associated with {G12.42+0.50} is computed using the expression, $T_{dyn} = L_{flow}/v_{max} $, where $ L_{flow} \sim 1.2~\rm pc$ is the half length of the end-to-end extension of the flow and $v_{max} \sim 9.0~\rm km~s^{-1}$ is the maximum flow velocity from the LSR velocity of the cloud. This yields a dynamical timescale of the outflow of $\rm 1.3 \times 10^5$~yr. Comparing with the results obtained by \citet{2009ApJ...696...66Q}, \citet{1998ApJ...507..861S} and \citet{2003ApJ...584..882S}, our estimated value is in agreement with a massive molecular outflow from an UC {\mbox{H\,{\sc ii}}} region. This result supports our unfolding picture of radio component R1, where the coexistence of an UC {\mbox{H\,{\sc ii}}} region and an ionized thermal jet is seen, and the large-scale CO outflow can also be attributed to the UC {\mbox{H\,{\sc ii}}} region. However, it should also be kept in mind that unlike \citet{2009ApJ...696...66Q}, there is no SMA core coinciding with R1. Moreover, their results are based on interferometric observations, whereas the JCMT CO outflow data used here are from single-dish measurements. In these single-dish observations, the inner outflow jets are mostly unresolved and the measurements are less sensitive to high-velocity outflow emission, resulting in an overestimation of the dynamical ages.
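The dynamical timescale arithmetic above can be reproduced with a short sketch (ours; constants rounded):

```python
PC_IN_KM = 3.086e13   # km per parsec
YR_IN_S = 3.156e7     # seconds per year

def t_dyn_yr(l_flow_pc, v_max_kms):
    """Outflow dynamical timescale T_dyn = L_flow / v_max, in years."""
    return (l_flow_pc * PC_IN_KM / v_max_kms) / YR_IN_S

# L_flow ~ 1.2 pc and v_max ~ 9.0 km/s, as quoted in the text:
print(f"{t_dyn_yr(1.2, 9.0):.1e}")   # 1.3e+05 yr
```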
Hence, one cannot rule out the possibility of SMA1 and/or SMA2 being the outflow driving sources similar to the case of AFGL 5142 \citep{2016ApJ...824...31L}. If indeed the binary cores, SMA1 and SMA2, are the outflow driving cores, then the nature of the radio emission in R1 (and R2) can be thought of as ionized jet emission driven by the mm cores, which also drives the large scale CO outflows detected. The possibility of multiple outflows from the mm cores and the UC {\mbox{H\,{\sc ii}}} region also exists \citep{{2002A&A...387..931B},{2003A&A...408..601B}}. This calls for high-resolution observations for a better understanding. \subsubsection{Hub-filament system} \label{hub_filament} \begin{figure} \centering \includegraphics[scale=0.35]{fig18.pdf} \caption{Channel maps of the $\rm ^{12}CO$ emission, with each channel having a velocity width of $\rm 0.5~km~s^{-1} $. For each map, the black contours represent the $\rm ^{12}CO$ emission starting from the $3\sigma$ level and increasing in steps of $3\sigma$.} \label{filament} \end{figure} \begin{figure} \centering \includegraphics[scale=0.35]{fig19.pdf} \caption{The velocity peaks of $\rm ^{12}CO~(3-2) $ extracted along the filaments are overlaid on the column density map of the region associated with {G12.42+0.50} shown in grey scale. The positions of all the filaments are also labelled. } \label{velocity_filament} \end{figure} From the above discussion we hypothesize a picture of global collapse of the molecular cloud harbouring the EGO, {G12.42+0.50}. Interestingly, {Fig.~}\ref{FIR}(d) and (h) unfold the presence of large-scale filamentary structures towards the south-west of {G12.42+0.50}, all merging at the location of the clump, C1, enveloping {G12.42+0.50}. As mentioned earlier, the concurrence of a collapsing cloud with converging filaments suggests a hub-filament system.
In the literature, hub-filament systems are common in sites of high-mass star formation \citep[e.g.][]{{2013A&A...555A.112P},{2016ApJ...824...31L},{2018ApJ...852...12Y}}. In such systems, converging flows are detected where matter funnels in through the filaments into the hub, where accretion is most pronounced. Morphologically, the molecular cloud system associated with {G12.42+0.50} resembles the hub-filament system associated with the star forming region, G22 \citep{2018ApJ...852...12Y} and the IRDC, SDC335 ($\rm SDC335.579-0.292$) \citep{2013A&A...555A.112P}. \par To delve deeper into this picture, we investigate the velocity structure of the filaments. To proceed, we construct the channel maps of the $\rm ^{12}CO$ emission which are illustrated in {Fig.~}\ref{filament}. The velocity ranges are selected by examining the JCMT $\rm ^{12}CO$ data cube and choosing the range where the $\rm ^{12}CO$ emission is detected. The velocity width of each channel is chosen to be $\rm 0.5~km~s^{-1}$, similar to that used by \citet{2019A&A...621A.130L} to investigate the $\rm C^{18}O~(2-1)$ emission associated with the IRDC, G351.776-0.527. The spatial coincidence of the $\rm ^{12}CO$ emission with the filaments is remarkable. The gas associated with the filaments is consistently redshifted with respect to the LSR velocity of the clump, C1. The velocity of the gas along the filaments peaks in the range of $\rm \sim18-23~km~s^{-1}$. The variation in velocity suggests bulk gas motion along the filaments. From the channel maps we can see that the velocity of the molecular gas along the filament decreases as it approaches the central core, with the maximum velocity at the south-west end of the filament. It has to be noted here that the $\rm ^{12}CO~(3-2)$ transition is also a tracer of molecular outflow and, as shown in Section \ref{outflow}, {G12.42+0.50} is also an outflow source.
Hence, the decrease in velocity near the clump, C1 may be attributed to the interaction with the molecular outflow. \par To further elaborate on the velocity structure, we extract the spectra of $\rm ^{12}CO~(3-2)$ along the filaments with a step size of half the angular resolution ($ \sim 15''/2$) of the JCMT-HARP observation. The peak velocities estimated by fitting 1D Gaussian profiles to the spectra are shown in {Fig.~}\ref{velocity_filament} as colour-coded circles overlaid on the {H$_2$} column density map. Along the filaments F1 and F6, it can be seen that the velocity is within the range $ \rm 17-19~km~s^{-1} $, closer to the LSR velocity, and increasing towards the clump, C1. However, along the filaments F2, F3, and F4 the velocity is on the higher side of the LSR velocity, ranging from $ \rm 19-21~km~s^{-1} $ and decreasing towards the central clump, C1. In the case of filament F5, however, we do not notice any clear velocity gradient, with values varying in the range $ \rm 19-20~km~s^{-1} $. Similar velocity gradients along filaments are detected in star forming regions such as SDC335 \citep{2013A&A...555A.112P}, G22 \citep{2018ApJ...852...12Y} and AFGL 5142 \citep{2016ApJ...824...31L}. Following these authors, we also attribute the velocity gradients to gas inflow through filaments. A number of other mechanisms have also been proposed, which include filamentary collapse, filament collision, rotation, expansion and wind-acceleration, to explain the observed velocity distributions \citep{2014A&A...561A..83P}. If the velocity distribution were to be explained by the expansion scenario, we should have observed red-skewed velocity profiles of optically thick lines ($\rm HCO^+$, HCN and HNC) towards the clump, C1 \citep{2018ApJS..234...28L}. On the contrary, we observe blue-skewed velocity profiles, rendering the expansion picture unlikely.
Further, we do not observe any cloud collision signature of enhanced line-widths at the junction \citep{2018ApJ...852...12Y}. The molecular outflow likely to be driven by the thermal jet in the clump, C1, cannot explain the redshifted velocities along the filaments, $ \rm F2-F5 $, since these fall along the blue lobe of the detected outflow. Also, in general, outflows are located between filaments \citep{2016ApJ...824...31L}. Hence, on comparison with earlier studies and in the absence of evidence to prove otherwise, we infer that the observed velocity gradient is a result of gas inflow along the filaments, although there could be a contribution from the outflowing gas. Nonetheless, $\rm ^{12}CO~(3-2)$, being an optically thick molecular line transition, cannot effectively probe the velocity distribution within the filaments. Thus, high resolution observations of optically thin lines would give a better picture. As discussed in Section \ref{clump_text}, eleven clumps, $ \rm C2-C12$, lie along the identified filaments. It is also seen that along the filaments the dust temperature is lower than that of C1. The mass estimates show that these are less massive than the central core (clump, C1). \citet{2018ApJ...852...12Y}, in the study of the hub-filament system associated with G22, find two clumps more massive than the other clumps along the filament. These clumps, which dominate the emission at wavelengths longer than 24~{$\rm \mu m$}, are the most active star forming regions in G22. This concurs well with {G12.42+0.50}, where the most massive clump, C1, is an active star forming clump. \section{SUMMARY} \label{summary} We carried out a comprehensive multiwavelength study towards the EGO, {G12.42+0.50} and its associated environment.
Our main results are summarized as follows: \begin{enumerate} \item The radio continuum emission mapped at 1390~MHz reveals a linear structure extended in the north-east and south-west direction with the presence of two compact radio components, R1 and R2, that are unresolved at 610~MHz. The peak emission at 610~MHz is coincident with the component R1. \item We explore different scenarios to explain the nature of the ionized emission. Under the UC {\mbox{H\,{\sc ii}}} framework, assuming the emission at 1390~MHz to be optically thin, the observed Lyman continuum flux translates to an ionizing source of spectral type $\rm B1-B0.5$. An alternative picture of an ionized thermal jet is examined, given various observed characteristics including the spectral index values of $0.3-0.9$ in the region. We are prompted to consider the co-existence of the UC {\mbox{H\,{\sc ii}}} region with an ionized jet being powered by the same YSO. IRAS~18079-1756 (2MASS J18105109-1755496), a deeply embedded Class II YSO, is likely to be the driving source. \item The presence of shock-excited {H$_2$} and [FeII] line emission is confirmed from NIR narrow-band imaging and spectroscopy, in concurrence with the jets/outflows picture. \item A massive central clump, C1, is identified from the 870~{$\rm \mu m$} map which envelopes the detected radio emission and the enhanced and extended 4.5~{$\rm \mu m$} emission. The clump has a mass of 1375~M{$_\odot$} and a total luminosity of $\rm 2.8 \times 10^4~L_\odot$. Two-component modeling shows the presence of an inner warm component surrounded by an extended outer, cold envelope traced mostly at FIR wavelengths. \item Seven molecular species from the MALT90 survey are detected towards the EGO, {G12.42+0.50}. The optically thick lines, $\rm HCO^+$ and HCN, show signatures of protostellar infall. From the blue-skewed profile of the $\rm HCO^+$ line, the infall velocity and mass infall rate are estimated to be $\rm 1.8~km~s^{-1}$ and $\rm 9.9 \times 10^{-3}~M_\odot~yr^{-1}$, respectively.
\item From the line observations of the $J=3-2$ transition of the molecular species $\rm ^{12}CO$, $\rm ^{13}CO$ and $\rm C^{18}O$, we detect the presence of a wide-angle bipolar outflow. From the dynamical age, $\rm 1.3 \times 10^5~yr$, of the bipolar outflow, it seems likely that the UC {\mbox{H\,{\sc ii}}} region drives it, though the possibility of the SMA cores (SMA1 and SMA2) being the powering source(s) cannot be ruled out. \item The signature of a hub-filament system is seen in the 8.0~{$\rm \mu m$} and FIR images and is supported by the constructed column density and dust temperature maps. A detailed study of the gas kinematics agrees with bulk motion in the filaments and suggests a likely picture of gas inflow along the filaments to C1. \item A hypothesis for the EGO, {G12.42+0.50}, consistent with the multiwavelength observations, is that of an active star forming complex hosting cores at very early evolutionary stages (SMA1 and SMA2). In addition, an accreting (likely through filaments) MYSO in the initial phase of an UC {\mbox{H\,{\sc ii}}} region, driving a large-scale molecular outflow entrained by a likely ionized thermal jet, is detected. \end{enumerate} \section*{ACKNOWLEDGEMENTS} We thank the referee for critically going through the manuscript and giving valuable suggestions. We thank the staff of the GMRT, who made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. We also thank the staff of UKIRT for their assistance in the observations. UKIRT is owned by the University of Hawaii (UH) and operated by the UH Institute for Astronomy. When some of the data reported here were acquired, UKIRT was supported by NASA and operated under an agreement among the University of Hawaii, the University of Arizona, and Lockheed Martin Advanced Technology Center; operations were enabled through the cooperation of the East Asian Observatory.
This research made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, Caltech under contract with NASA. This publication also made use of data products from {\it Herschel} (ESA space observatory) and the Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey. We also made use of the ATLASGAL data products. The ATLASGAL project is a collaboration between the Max-Planck-Gesellschaft, the European Southern Observatory (ESO) and the Universidad de Chile.
\section{Introduction} A Lie-Vessiot system, as defined in \cite{BM2008}, is a system of non-autonomous differential equations, \begin{equation}\label{GeneralNA} \dot x_i = F_i(t,x_1,\ldots,x_n),\end{equation} such that there exist $r$ functions $f_1(t),\ldots,f_r(t)$ of the parameter $t$ verifying: $$F_i(t,x_1,\ldots,x_n) = \sum_{j=1}^rf_j(t)(A_jx_i),$$ where $A_1,\ldots,A_r$ are autonomous vector fields which infinitesimally span a \emph{pretransitive} Lie group action. Such systems were introduced by S. Lie at the end of the 19th century (see, for instance, \cite{Lie1893c}). The differential equation \eqref{GeneralNA}, interpreted as a non-autonomous vector field in a manifold $M$, is a linear combination of the infinitesimal generators of the action of $G$ in $M$: $$\vec X = \frac{\partial}{\partial t} + \sum f_j(t)A_j.$$ In \cite{BM2008}, it is proven that a differential equation admits a \emph{superposition law} if and only if it is a Lie-Vessiot system related to a pretransitive Lie group action (this is the global version of a classical result exposed in \cite{Lie1893c}). The orbits of a pretransitive group action are homogeneous $G$-spaces, so that we can decompose a Lie-Vessiot system into a family of systems on homogeneous spaces. Therefore, Lie-Vessiot systems on homogeneous spaces are the building blocks of differential equations admitting superposition laws. Here, we study Lie-Vessiot systems on algebraic homogeneous spaces $M$ with coefficients $f_i$ in a differential field $\mathcal K$ whose field of constants $\mathcal C$ is the field of definition of the phase space $M$. In this frame, a Lie-Vessiot system is seen as a derivation of the scheme $M_{\mathcal K}$, compatible with the canonical derivation of $\mathcal K$.
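A classical illustration, standard since Lie's work and not specific to the results developed here, is the Riccati equation, a Lie-Vessiot system on the projective line:

```latex
% Riccati equation as a Lie-Vessiot system on P^1:
\dot x = f_1(t) + f_2(t)\,x + f_3(t)\,x^2,
\qquad
A_1 = \frac{\partial}{\partial x},\quad
A_2 = x\frac{\partial}{\partial x},\quad
A_3 = x^2\frac{\partial}{\partial x},
% with commutation relations
[A_1,A_2] = A_1,\qquad [A_1,A_3] = 2A_2,\qquad [A_2,A_3] = A_3.
```

The fields $A_1$, $A_2$, $A_3$ span the infinitesimal action of $PGL(2,\mathcal C)$ on $\mathbb P^1$ by homographies, and the associated superposition law is the classical one: the general solution is expressed from three particular solutions and a constant through the constancy of the cross-ratio.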
\subsection*{Notation and Conventions} We denote differential and ordinary fields and rings by calligraphic letters $\mathcal C, \mathcal K, \ldots $ The canonical derivation of a differential ring $\mathcal K$ is denoted by $\partial_{\mathcal K}$, or just $\partial$ whenever it does not lead to confusion. Algebraic varieties are denoted by capital letters $M, G, \ldots$ The structure sheaf of $M$ is denoted by $\mathcal O_M$. If $M$ is a $\mathcal C$-algebraic variety and $\mathcal C\subset \mathcal K$, the space of $\mathcal K$-points of the algebraic variety $M$ is denoted by $M(\mathcal K)$. We write $M_{\mathcal K}$ for the $\mathcal K$-algebraic variety obtained after the base change $M\times_{\mathcal C}\ensuremath{\mbox{\rm Spec}}(\mathcal K)$. If $p$ is a point of $M$ we denote by $\kappa(p)$ its residue field and by $p^\natural$ the valuation morphism $p^\natural\colon \mathcal O_{M,p}\to \kappa(p)$. \setcounter{tocdepth}{1} \tableofcontents \section{Algebraic Groups and Homogeneous Spaces} \subsection{Algebraic Groups} Let us consider a field $\mathcal C$ and its algebraic closure $\bar{\mathcal C}$. By an \emph{algebraic variety} over $\mathcal C$ we mean a reduced and separated scheme of finite type over $\mathcal C$. Throughout this text an \emph{algebraic group} means an algebraic variety endowed with an algebraic group law and an inversion morphism. In particular, algebraic groups over fields of characteristic zero are smooth varieties (\cite{Mum} pp. 101--102). The functor of points of an algebraic group takes values in the category of groups. If $G$ is a $\mathcal C$-algebraic group and $\mathcal K$ is a $\mathcal C$-algebra, then the set $G(\mathcal K)$ of $\mathcal K$-points of $G$ is naturally endowed with a group structure. An algebraic group is an \emph{affine group} if it is an affine algebraic variety.
The main example of an affine algebraic group is the General Linear Group, $$GL(n,\mathcal C) = \ensuremath{\mbox{\rm Spec}}\left( \mathcal C[x_{ij},\Delta]\right), \quad \Delta = \frac{1}{|x_{ij}|} .$$ The Zariski closed subgroups of $GL(n,\mathcal C)$ are called \emph{linear algebraic groups}. It is well known that any affine algebraic group is isomorphic to a linear algebraic group. \subsection{Lie Algebra of an Algebraic Group} Let us consider $\mathfrak X(G)$, the space of regular vector fields on $G$, \emph{id est}, derivations of the sheaf $\mathcal O_G$ vanishing on $\mathcal C$. The Lie bracket of regular vector fields is a regular vector field, so $\mathfrak X(G)$ is a Lie algebra. \begin{definition} Let $A$ be a regular vector field in $G$, and $\psi\colon G\to G$ an automorphism of algebraic varieties. Then we define the transformed vector field $\psi(A) = (\psi^\sharp)^{-1}\circ A\circ \psi^\sharp$. $$\xymatrix{\mathcal O_G\ar[r]^-{\psi(A)}\ar[d]_-{\psi^{\sharp}} & \mathcal O_G \\ \mathcal O_G \ar[r]^-{A} & \mathcal O_G \ar[u]_-{(\psi^\sharp)^{-1}}}$$ \end{definition} Any $\mathcal C$-point $\sigma$ of $G$ induces right and left translations, $R_{\sigma}$ and $L_{\sigma}$, which are automorphisms of the algebraic variety $G$. A $\bar{\mathcal C}$-point $\bar\sigma$ of $G$ induces translations in $G_{\bar{\mathcal C}}$. \begin{definition} The Lie algebra $\mathcal R(G)$ of $G$ is the space of all regular vector fields $A\in\mathfrak X(G)$ such that for every $\bar{\mathcal C}$-point $\bar\sigma\in G(\bar{\mathcal C})$, $R_{\bar\sigma}(A \otimes 1) = A \otimes 1$. In the same way, we define the Lie algebra $\mathcal L(G)$ of left invariant vector fields. \end{definition} The Lie bracket of two right invariant vector fields is a right invariant vector field. The same is true for left invariant vector fields, so $\mathcal R(G)$ and $\mathcal L(G)$ are Lie sub-algebras of $\mathfrak X(G)$.
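For instance (a standard computation, spelled out here for the reader's convenience), on $G = GL(n,\mathcal C)$ with coordinates $x_{ij}$, each matrix $B = (b_{ij})$ with entries in $\mathcal C$ determines a right invariant and a left invariant vector field, $$\vec R_B = \sum_{i,j,k} b_{ik}\,x_{kj}\frac{\partial}{\partial x_{ij}}, \qquad \vec L_B = \sum_{i,j,k} x_{ik}\,b_{kj}\frac{\partial}{\partial x_{ij}},$$ corresponding respectively to the infinitesimal left translation $g\mapsto Bg$ and right translation $g\mapsto gB$. Both restrict at the identity to the tangent vector determined by $B$.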
For a point $\sigma\in G$, its tangent space $T_\sigma G$ is defined as the space of $\mathcal C$-derivations from the ring of germs of regular functions $\mathcal O_{G,\sigma}$ with values in its residue field $\kappa(\sigma)$. It is a $\kappa(\sigma)$-vector space of the same dimension as $G$. Any regular vector field $\vec X$ in $\mathfrak X(G)$ can be seen as a map $\sigma\mapsto \vec X_{\sigma}\in T_\sigma(G)$. Let us consider $e$, the identity element of $G$. If $\mathcal C$ is algebraically closed, for any vector $\vec v\in T_{e}G$ there are unique invariant vector fields $\vec R\in \mathcal R(G)$ and $\vec L\in\mathcal L(G)$ such that $\vec R_e = \vec L_e = \vec v$ (see \cite{Mum} pp. 98--99).
\subsection{Algebraic Homogeneous Spaces} \begin{definition} Let $G$ be a $\mathcal C$-\emph{algebraic} group. A $G$-space $M$ is an algebraic variety over $\mathcal C$ endowed with an algebraic action of $G$, $$G\times_{\mathcal C} M \xrightarrow{a} M,\quad (\sigma,x)\mapsto \sigma\cdot x.$$ \end{definition} Let $M$ be a $G$-space. Then, for each extension $\mathcal C\subset \mathcal K$, the group $G(\mathcal K)$ acts on the set $M(\mathcal K)$; therefore $M(\mathcal K)$ is a $G(\mathcal K)$-set in the set-theoretic sense. Given a point $x\in M$, its \index{isotropy subgroup} \emph{isotropy subgroup} is an \emph{algebraic} subgroup of $G$ that we denote by $H_x$. It is defined by the equation $H_x\cdot x = x$. Note that it is not necessary for $x$ to be a rational point. The intersection of the isotropy subgroups of all \emph{closed} points of $M$ is a normal \emph{algebraic} subgroup $H_M \triangleleft G$. The action of $G$ is called \emph{faithful} if $H_M$ is the trivial subgroup $\{e\}$, and it is called \emph{free} if $H_x=\{e\}$ for every rational point $x$. It is called \emph{transitive} if for each pair of rational points $x,y\in M$ there is a $\sigma\in G$ such that $\sigma\cdot x = y$; \emph{id est}, there is only one orbit.
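As a concrete illustration (our example, not used later in any essential way): $SL(2,\mathcal C)$ acts on the projective line $\mathbb P^1$ by homographies, $$\begin{pmatrix} a & b \\ c & d \end{pmatrix}\cdot x = \frac{ax+b}{cx+d}.$$ This action is transitive; the isotropy subgroup of the point at infinity is the subgroup of upper triangular matrices, and the kernel of the action is $H_M = \{\pm \mathrm{Id}\}$. Hence the action is transitive, but it is neither free nor faithful.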
\begin{definition} Let us consider the induced morphism, $$(a\times Id)\colon G\times_{\mathcal C} M\to M\times_{\mathcal C} M,\quad (\sigma,x)\mapsto (\sigma x, x)$$ then, \begin{enumerate} \item[(1)] $M$ is a homogeneous $G$-space if $(a\times Id)$ is surjective. \item[(2)] $M$ is a principal homogeneous $G$-space if $(a\times Id)$ is an isomorphism. \end{enumerate} \end{definition} If $\mathcal C$ is algebraically closed, a homogeneous $G$-space is simply a \emph{transitive} $G$-space and a principal homogeneous $G$-space is a \emph{free and transitive} $G$-space. In such a case, any principal homogeneous $G$-space is isomorphic to $G$. \subsection{Existence of quotients: Chevalley's theorem} Let $V$ be a $\mathcal C$-vector space, and $GL(V)$ the group of linear transformations of $V$. It is a $\mathcal C$-algebraic group, and it acts algebraically on any tensor space over $V$. Given a tensor $T$, the \emph{stabilizer subgroup} of $T$ is the group of linear transformations $\sigma\in GL(V)$ for which there exists a scalar $\lambda\in \mathcal C$ such that $\sigma(T) = \lambda T$. In other words, the stabilizer subgroup of $T$ is the isotropy subgroup of the line $\langle T \rangle$ spanned by $T$ in the projectivization of the tensor space. \begin{theorem}[Chevalley, see \cite{Humphreys} p. 80]\label{ThChevalley} Let $V$ be a $\mathcal C$-vector space of finite dimension, and let $H\subset GL(V)$ be an algebraic subgroup. There exists a tensor, $$T\in \bigoplus_i \left(V^{\otimes n_i}\otimes_{\mathcal C} \left(V^{\otimes m_i}\right)^*\right)$$ such that $H$ is the stabilizer of $T$, $$H = \{\sigma\in GL(V) \mid \langle\sigma(T)\rangle = \langle T \rangle \}$$ \end{theorem} From this result we obtain that for a linear algebraic group $G$ and an algebraic subgroup $H$, the quotient space $G/H$ is isomorphic to the orbit $O_{\langle T\rangle}$ in the projective space $\mathbb P\left(\bigoplus_i \left(V^{\otimes n_i}\otimes_{\mathcal C} \left(V^{\otimes m_i}\right)^*\right)\right)$.
It is a quasiprojective algebraic variety. The literature lacks an existence theorem for arbitrary quotients of a non-linear algebraic group over an arbitrary field. However, there is a result, due to M. Rosenlicht \cite{Rosenlicht1963}, saying that for any action of an algebraic group $G$ on an algebraic variety $V$ there exists a $G$-invariant open subset $U\subset V$ such that the geometric quotient $U/G$ in the sense of Mumford exists. In the case of a subgroup $G'$ acting on $G$, this open subset must be right-invariant, and then it coincides with $G$. \subsection{Galois Cohomology} In this section we assume that $\mathcal C$ is a \emph{perfect field}; note that this holds if $\mathcal C$ is of characteristic zero, which is the case we are interested in. In such a case, any algebraic extension can be embedded into a Galois extension. Therefore, the algebraic closure $\bar{\mathcal C}$ is the inductive limit of all Galois extensions of $\mathcal C$. The group of $\mathcal C$-automorphisms of $\bar{\mathcal C}$ is then identified with the projective limit of the Galois groups of the finite Galois extensions of $\mathcal C$. With the initial topology of the family of projections onto the finite Galois groups, this is a compact totally disconnected group, which we denote $\ensuremath{\mbox{\rm Gal}}(\bar{\mathcal C}/\mathcal C)$. Let $G$ be a $\mathcal C$-algebraic group. The group of automorphisms acts on $G(\bar{\mathcal C})$ by composition. Let us consider $\mathbf G^k$, the set of continuous maps from $\ensuremath{\mbox{\rm Gal}}(\bar{\mathcal C}/ \mathcal C)^k$ to $G(\bar{\mathcal C})$. In particular $\mathbf G^0 = G(\bar{\mathcal C})$.
We consider the sequence: \begin{equation}\label{GCsequence} 0 \to \mathbf G^0 \xrightarrow{\delta_0} \mathbf G^1 \xrightarrow{\delta_1} \mathbf G^2, \end{equation} where the codifferential of $x\in \mathbf G^0$ is $(\delta_0 x)(\sigma) = x^{-1}\cdot \sigma(x)$, and the codifferential of $\varphi\in \mathbf G^1$ is $(\delta_1\varphi)(\sigma,\tau) = \varphi(\sigma\cdot\tau)^{-1}\cdot\varphi(\sigma)\cdot\sigma(\varphi(\tau))$. An element in the image of $\delta_0$ is called a \emph{coboundary}; the set of coboundaries is denoted by $B^1(G,\mathcal C)$. An element $\varphi\in\mathbf G^1$ is called a \emph{$1$-cocycle} if $\delta_1\varphi$ vanishes. The set of $1$-cocycles is denoted $Z^1(G,\mathcal C)$. Two $1$-cocycles $\varphi,\psi$ are called \emph{cohomologous} if there is $x\in \mathbf G^0$ such that $\varphi(\sigma) = x^{-1}\cdot \psi(\sigma) \cdot \sigma(x)$. This is an \emph{equivalence relation} in $Z^1(G,\mathcal C)$. The quotient set $Z^1(G,\mathcal C)/\sim$ is a pointed set, with distinguished point the class of the \emph{coboundaries}. Note that when $G$ is an abelian group the sequence \eqref{GCsequence} is a \emph{differential complex} and this quotient is the first cohomology group. \begin{definition} The zero Galois cohomology set of $G$ with coefficients in $\mathcal C$, $H^0(G,\mathcal C)$, is the kernel of $\delta_0$. It is a pointed set with distinguished point the identity. The first Galois cohomology set of $G$ with coefficients in $\mathcal C$, $H^1(G,\mathcal C)$, is the pointed set $Z^1(G,\mathcal C)/\sim$. \end{definition} From the definition of $\delta_0$ it is clear that $x\in H^0(G,\mathcal C)$ if and only if it is invariant under the action of $\ensuremath{\mbox{\rm Gal}}(\bar{\mathcal C}/\mathcal C)$. The fixed field of $\bar{\mathcal C}$ is precisely $\mathcal C$; therefore the zero Galois cohomology set coincides with the set of $\mathcal C$-points $G(\mathcal C)$.
Therefore, we define the zero cohomology set $H^0(V,\mathcal C)$ of any $\mathcal C$-algebraic variety $V$ to be the set of $\mathcal C$-points $V(\mathcal C)$. Let $G'$ be an algebraic subgroup of $G$. In such a case $H^0(G/G',\mathcal C)$ is a pointed set, with distinguished point the class of the identity. An element $x\in H^0(G/G',\mathcal C)$ is a $\mathcal C$-point of the homogeneous space $G/G'$. This $x$ is the class of some $\bar{\mathcal C}$-point $\bar x$ of $G$. The \emph{coboundary} $\delta_0 \bar x$ is a cocycle in $G'$, and its cohomology class $[\bar x]\in H^1(G',\mathcal C)$ does not depend on the choice of $\bar x$. We have a morphism of pointed sets $H^0(G/G',\mathcal C)\to H^1(G',\mathcal C)$ called the \emph{connecting morphism}. We obtain an exact sequence of pointed sets: $$0 \to H^0(G',\mathcal C) \to H^0(G,\mathcal C) \to H^0(G/G', \mathcal C) \to H^1(G',\mathcal C) \to H^1(G,\mathcal C)$$ and when $G'$ is a normal subgroup of $G$, the sequence $$H^1(G',\mathcal C) \to H^1(G,\mathcal C) \to H^1(G/G', \mathcal C)$$ is also exact (see \cite{Ko1}, p. 277--288). Using the previous exact sequence it is relatively easy to compute the first Galois cohomology set of several algebraic groups. We say that the first cohomology set of $G$ with coefficients in $\mathcal C$ \emph{vanishes} if it consists of a single point. In particular, the following results are well known: \begin{itemize} \item The first cohomology set of the additive group, $H^1((\mathcal C,+),\mathcal C)$, vanishes. \item The first cohomology set of the multiplicative group, $H^1((\mathcal C^*,\cdot), \mathcal C)$, vanishes. \item $H^1(GL(n,\mathcal C),\mathcal C)$ vanishes. \item $H^1(SL(n,\mathcal C),\mathcal C)$ vanishes. \item If $G$ is a connected solvable linear group then $H^1(G,\mathcal C)$ vanishes. \item If $\mathcal C$ is algebraically closed then $H^1(G, {\mathcal C})$ vanishes for any algebraic group $G$.
\item If $S$ is a Riemann surface and $\mathcal M(S)$ is its field of meromorphic functions, then for any linear connected $\mathcal M(S)$-algebraic group $G$, $H^1(G, \mathcal M(S))$ vanishes (this is a particular case of fields of dimension at most one, treated in \cite{Serre}). \item If $S$ is an open Riemann surface, then for any connected $\mathcal M(S)$-algebraic group $G$, $H^1(G, \mathcal M(S))$ vanishes (Grauert's theorem, see \cite{Sibuya}). \end{itemize} The first Galois cohomology set classifies the \emph{principal homogeneous spaces over $G$.} This classification was first obtained by Ch\^atelet in some particular cases; here we follow Kolchin \cite{Ko1} (see p. 281--283). The main fact is that if the first Galois cohomology set vanishes then all principal homogeneous spaces have rational points. \begin{theorem} Let $G$ be a $\mathcal C$-algebraic group and $M$ a principal homogeneous $G$-space. Then $M$ defines a class $[M]$ in $H^1(G,\mathcal C)$. This cohomology class classifies $M$ up to $\mathcal C$-isomorphisms. $M$ is isomorphic to $G$ if and only if $[M]$ is the distinguished point of $H^1(G,\mathcal C)$. Conversely, any cohomology class of $H^1(G,\mathcal C)$ is the class of a certain principal homogeneous $G$-space. \end{theorem} \subsection{Fundamental Fields} Consider a right invariant vector field $\vec A\in \mathcal R(G)$. Then $\vec A\otimes 1$ is a regular vector field in $G\times_{\mathcal C} M$. This vector field is projectable under the action of $G$ on $M$, $$a\colon G\times_{\mathcal C} M \to M, \quad \vec A \otimes 1 \mapsto \vec A^M.$$ \begin{definition}\label{DFfundamentalfield} The algebra of fundamental fields $\mathcal R(G,M)$ is the Lie algebra of regular vector fields on $M$ spanned by the projections $\vec A^M$ of the vector fields $\vec A\otimes 1$, where $\vec A$ is a right invariant vector field on $G$.
\end{definition} There is a canonical surjective Lie algebra morphism, $$\mathcal R(G)\to \mathcal R(G,M), \quad \vec A \mapsto \vec A^M,$$ whose kernel is the Lie algebra $\mathcal R(H_M)\subset \mathcal R(G)$ of the kernel $H_M$ of the action. In particular, the Lie algebra of fundamental fields $\mathcal R(G,G)$ in $G$ coincides with $\mathcal R(G)$. \section{Differential Algebraic Geometry} Differential algebraic geometry is to differential algebra what classical algebraic geometry is to commutative algebra. In this sense, differential algebraic geometry is the study of geometric objects associated with differential rings. Here we present the theory of schemes with derivations, which has been developed by Buium \cite{Bu}, and the theory of differential schemes, which is due to Keigher \cite{Ke1, Ke2}, Carra' Ferro (see \cite{Ca0}), and Kovacic \cite{Kov1}. \subsection{Differential Algebra} We present here some preliminaries in differential algebra. The main references for this subject are \cite{Ritt1950}, \cite{Ka}, \cite{Ko1}. A differential ring is a commutative ring $\mathcal A$ endowed with a derivation $\partial_{\mathcal A}$. By a derivation we mean an additive map verifying the Leibniz rule, $\partial_{\mathcal A}(ab) = a\cdot\partial_{\mathcal A}(b) + b\cdot\partial_{\mathcal A}(a)$. An element $a\in \mathcal A$ is called a constant if it has vanishing derivative, $\partial a = 0$. Whenever it does not lead to confusion, we will write $\partial$ instead of $\partial_{\mathcal A}$. The subset $C_{\mathcal A}$ of constant elements is a subring of $\mathcal A$. When $\mathcal A$ is a field we call it a \emph{differential field}. In such a case, the constant ring $C_{\mathcal A}$ is a subfield of $\mathcal A$. An ideal $\mathfrak I\subset \mathcal A$ is a differential ideal if $\partial (\mathfrak I)\subset \mathfrak I$.
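For instance, the field of rational functions $\mathcal C(t)$ with the derivation $\partial = \frac{d}{dt}$ is a differential field whose field of constants is $\mathcal C$. As an illustration of differential ideals (our example), consider the polynomial ring $\mathcal A = \mathcal C(t)[y_0,y_1,y_2,\ldots]$ with $\partial y_i = y_{i+1}$: the ordinary ideal $(y_1)$ is not a differential ideal, since $\partial y_1 = y_2\notin(y_1)$, whereas $(y_1,y_2,y_3,\ldots)$ is the smallest differential ideal containing $y_1$.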
Note that if $\mathfrak I$ is a differential ideal, then the quotient $\mathcal A/\mathfrak I$ is also a differential ring. For a subset $S\subset\mathcal A$ we denote by $[S]$ the smallest differential ideal containing $S$, and by $\{S\}$ the smallest radical differential ideal containing $S$. For an ideal $\mathfrak I\subset \mathcal A$ we denote by $\mathfrak I'$ the smallest differential ideal containing $\mathfrak I$, namely: $\mathfrak I' = \sum_i \partial^i(\mathfrak I).$ Localization by arbitrary multiplicative systems also works in differential rings. A ring morphism is called \emph{differential} if it is compatible with the derivations. In the category of differential rings, the tensor product is also well defined. Consider $\mathcal K$ a differential field. A differential ring $\mathcal A$ endowed with a differential morphism $\mathcal K\hookrightarrow\mathcal A$ is called a \emph{differential $\mathcal K$-algebra}. If $\mathcal A$ is a differential field then we say that it is a \emph{differential extension} of $\mathcal K$. \index{differential!$\mathcal K$-algebra}\index{differential!extension} \subsection{Keigher Rings} If $\mathfrak I\subset \mathcal A$ is an ideal, we denote by $\sqrt{\mathfrak I}$ its radical, the intersection of all prime ideals containing $\mathfrak I$. In algebraic geometry, there is a one-to-one correspondence between the set of radical ideals of $\mathcal A$ and the set of Zariski closed subsets of $\ensuremath{\mbox{\rm Spec}}(\mathcal A)$, the prime spectrum of $\mathcal A$. In order to perform an analogous systematic study of the set of differential ideals (\emph{id est}, differential algebraic geometry) we should require radicals of differential ideals to be differential ideals as well. This property does not hold in general. We have to introduce a suitable class of differential rings. This class was introduced by Keigher (see \cite{Ke1}); we call such rings Keigher rings.
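The classical counterexample (recalled here for illustration) occurs in positive characteristic: take $\mathcal A = \mathbb F_p[x]$ with $\partial x = 1$. The ideal $(x^p)$ is a differential ideal, since $\partial(x^p) = p\,x^{p-1} = 0$, but its radical $\sqrt{(x^p)} = (x)$ is not, since $\partial x = 1\notin (x)$. In particular, $\mathbb F_p[x]$ is not a Keigher ring.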
\begin{definition} A Keigher ring is a differential ring verifying that for each differential ideal $\mathfrak I$, its radical $\sqrt{\mathfrak I}$ is also a differential ideal. \end{definition} \begin{definition} For any ideal $\mathfrak I\subset \mathcal A$ we define its differential core as $\mathfrak I_\sharp = \{a\in\mathfrak I\colon \forall n(\partial^na\in\mathfrak I)\}$. \end{definition} Keigher rings can be defined in several equivalent ways. The following characterization theorem collects the different possible definitions (see \cite{Kov1}, Proposition 2.2). \begin{theorem}\label{C2THE2.1.6} Let $\mathcal A$ be a differential ring. The following are equivalent: \begin{enumerate} \item[(a)] If $\mathfrak p\subset\mathcal A$ is a prime ideal, then $\mathfrak p_\sharp$ is a prime differential ideal. \item[(b)] If $\mathfrak I\subset\mathcal A$ is a differential ideal, and $S$ is a multiplicative system disjoint from $\mathfrak I$, then there is a prime differential ideal containing $\mathfrak I$ and disjoint from $S$, maximal among the differential ideals with these properties. \item[(c)] If $\mathfrak I\subset\mathcal A$ is a differential ideal, then so is $\sqrt{\mathfrak I}$. \item[(d)] If $S$ is any subset, then $\{S\} = \sqrt{[S]}$. \item[(e)] $\mathcal A$ is a Keigher ring. \end{enumerate} \end{theorem} By a \emph{Ritt algebra} we mean a differential ring containing the field $\mathbb Q$ of rational numbers. When studying differential equations in characteristic zero, the differential rings considered are mainly Ritt algebras. A main property of Ritt algebras is that the radical of a differential ideal is a differential ideal (see for instance \cite{Ka}); therefore \emph{Ritt algebras are Keigher rings}. \begin{proposition} If $\mathcal A$ is a Keigher ring then for any differential ideal $\mathfrak I$ the quotient $\mathcal A/\mathfrak I$ is Keigher, and for any multiplicative system $S$ the localization $S^{-1}\mathcal A$ is Keigher. \end{proposition} \begin{proof} Assume $\mathcal A$ is Keigher.
First, let us prove that $\mathcal A/\mathfrak I$ is Keigher. Consider the projection $\pi\colon\mathcal A\to\mathcal A/\mathfrak I$. Let $\mathfrak a$ be a differential ideal of $\mathcal A / \mathfrak I$. Then $\sqrt{\mathfrak a} = \pi(\sqrt{\pi^{-1}(\mathfrak a)})$ is a differential ideal. Second, consider a localization morphism $l\colon \mathcal A \to S^{-1}\mathcal A$. Let $\mathfrak a\subset S^{-1}\mathcal A$ be a differential ideal. Let us denote by $\mathfrak b$ the preimage $l^{-1}(\mathfrak a)$; it is a differential ideal and $l(\mathfrak b)\cdot S^{-1}\mathcal A = \mathfrak a$. Let us consider $\frac{a}{s}\in\sqrt{\mathfrak a}$. Then $\frac{a}{s}\cdot\frac{s}{1} = \frac{a}{1}\in\sqrt{\mathfrak a}$, so $\frac{a^n}{1}\in\mathfrak a$ for a certain $n$; hence $a^n\in\mathfrak b$ and $a\in\sqrt{\mathfrak b}$. Since $\mathcal A$ is Keigher, $\partial a\in\sqrt{\mathfrak b}$. Therefore $(\partial a)^m\in \mathfrak b$ for some $m$, so that $\left(\frac{\partial a}{1}\right)^m\in\mathfrak a$ and thus $\frac{\partial a}{1}\in\sqrt{\mathfrak a}$. Finally, $$\partial\left(\frac{a}{s}\right) = \frac{\partial a}{1}\frac{1}{s} - \frac{a}{1}\frac{\partial s}{s^2}\in\sqrt{\mathfrak a},$$ and by \emph{(c)} of Theorem \ref{C2THE2.1.6}, $S^{-1}\mathcal A$ is Keigher. \end{proof} \subsection{New Constants} From now on let $\mathcal K$ be a differential field, and let $\mathcal C$ be its field of constants. We assume that $\mathcal C$ is algebraically closed. A classical lemma of differential algebra (see \cite{Ko1} p. 87, Corollary 1) says that if $\mathcal A$ is a differential $\mathcal K$-algebra, then the ring of constants $C_{\mathcal A}$ is linearly disjoint from $\mathcal K$ over $\mathcal C$. Let us set this classical lemma in a more geometric framework. \begin{lemma}\label{LmDisjoint} Let $\mathcal A$ be an integral finitely generated differential $\mathcal K$-algebra.
Then there is an affine open subset $U\subset \ensuremath{\mbox{\rm Spec}}(\mathcal A)$ such that the ring of constants $C_{\mathcal A_U}$ is a finitely generated algebra over $\mathcal C$. \end{lemma} \begin{proof} Consider $Q(\mathcal A)$, the field of fractions of $\mathcal A$. The extension $\mathcal K \subset Q(\mathcal A)$ is of finite transcendence degree. Then $\mathcal K \subset \mathcal K \cdot C_{Q(\mathcal A)} \subset Q(\mathcal A)$ are extensions of finite transcendence degree, and there are $\lambda_1,\ldots,\lambda_s$ in $C_{Q(\mathcal A)}$ such that $\mathcal K(\lambda_1,\ldots,\lambda_s) = \mathcal K\cdot C_{Q(\mathcal A)}$. The constants $\lambda_1,\ldots,\lambda_s$ are fractions $\frac{f_i}{g_i}$ of elements of $\mathcal A$. Consider the affine open subset obtained by removing from $\ensuremath{\mbox{\rm Spec}}(\mathcal A)$ the zeroes of the denominators, $$U = \ensuremath{\mbox{\rm Spec}}(\mathcal A) \setminus \bigcup_{i=1}^s (g_i)_0.$$ Then $\lambda_i\in \mathcal A_U$ and $\mathcal K[C_{\mathcal A_U}] = \mathcal K[\lambda_1,\ldots,\lambda_s]$. We will prove that $C_{\mathcal A_U} = \mathcal C[\lambda_1,\ldots,\lambda_s]$. Let $\lambda\in C_{\mathcal A_U}$. It is a polynomial in the variables $\lambda_i$ with coefficients in $\mathcal K$: $$\lambda = \sum_{I\in\Lambda}a_I\lambda^I, \quad a_I\in \mathcal K;$$ where $\Lambda$ is a suitable finite set of multi-indices. We can take this set in such a way that the monomials $\{\lambda^{I}\}_{I\in\Lambda}$ are linearly independent over $\mathcal K$, and then they are also linearly independent over $\mathcal C$. Now, $\{\lambda\}\cup\{\lambda^I\}_{I\in\Lambda}$ is a set of elements of $C_{\mathcal A_U}$ which are linearly dependent over $\mathcal K$. By \cite{Ko1} (p. 87, Corollary 1) they are then linearly dependent over $\mathcal C$. Hence $\lambda$ is a $\mathcal C$-linear combination of $\{\lambda^I\}_{I\in\Lambda}$, so $\lambda\in \mathcal C[\lambda_1,\ldots,\lambda_s]$ and finally $C_{\mathcal A_U} = \mathcal C[\lambda_1,\ldots,\lambda_s]$.
\end{proof} \subsection{Differential Spectra} \index{differential!spectrum} \begin{definition} Let $\mathcal A$ be a differential ring. $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)$ is the set of all prime differential ideals $\mathfrak p\subset \mathcal A$. \end{definition} Let $S\subset\mathcal A$ be any subset. We define the differential locus of zeroes of $S$, $\{S\}_{0}\subset\ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)$, as the subset of prime differential ideals containing $S$. This family of subsets defines a topology (having these subsets as closed subsets), which we call the \emph{Kolchin topology} or \emph{differential Zariski topology}. Note that $\{S\}_0 = (S)_0\cap \ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)$. From this it follows: \begin{proposition} $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)$ with the Kolchin topology is a topological subspace of $\ensuremath{\mbox{\rm Spec}}(\mathcal A)$ with the Zariski topology. \end{proposition} From now on, let us use the following notation: $X = \ensuremath{\mbox{\rm Spec}}(\mathcal A)$ and $X' = \ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)$. Let us recall that a topological space is said to be \emph{reducible} if it is the non-trivial union of two closed subsets, and \emph{irreducible} if it is not reducible. A point of an irreducible topological space is said to be \emph{generic} if it is contained in every nonempty open subset. The following properties of the differential spectrum are proven in \cite{Ke2} (see Proposition 2.1). \begin{proposition} $X'$ verifies: \begin{enumerate} \item[(1)] $X'$ is quasicompact. \item[(2)] $X'$ is $T_0$ separated. \item[(3)] Every closed irreducible subspace of $X'$ admits a unique generic point. The map $X'\to 2^{X'}$ that maps each point $x$ to its Kolchin closure $\overline{\{x\}}$ is a bijection between points of $X'$ and irreducible closed subspaces of $X'$. \end{enumerate} \end{proposition} Here we review some of the topological properties of the differential spectrum of Keigher rings.
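The following computation (ours, for illustration) shows how much smaller $X'$ can be than $X$. Let $\mathcal C$ be a field of constants of characteristic zero and $\mathcal A = \mathcal C[x]$ with $\partial x = 1$. If $\mathfrak I$ is a nonzero differential ideal and $f\in\mathfrak I$ is a nonzero polynomial of minimal degree, then $\partial f\in\mathfrak I$ has strictly smaller degree unless $\partial f = 0$; hence $f$ is a nonzero constant and $\mathfrak I = \mathcal A$. Therefore $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)$ consists of the single point $(0)$, while $\ensuremath{\mbox{\rm Spec}}(\mathcal A)$ is the whole affine line over $\mathcal C$.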
\begin{lemma}\label{LM3.8} Assume that $\mathcal A$ is a Keigher ring. Then each minimal prime ideal is a differential ideal. \end{lemma} \begin{proof} Let $\mathfrak p$ be a minimal prime ideal. By Theorem \ref{C2THE2.1.6} (a), $\mathfrak p_\sharp$ is a prime differential ideal and $\mathfrak p_\sharp\subseteq \mathfrak p$; by minimality of $\mathfrak p$, $\mathfrak p_\sharp = \mathfrak p$, so $\mathfrak p$ is a differential ideal. \end{proof} \begin{proposition} Assume that $\mathcal A$ is Keigher. Then, $X$ is an irreducible topological space if and only if $X'$ is an irreducible topological space. \end{proposition} \begin{proof} Just note that, by Lemma \ref{LM3.8}, the minimal prime ideals of $\mathcal A$ are points of $X'$, and the irreducible components of $X'$ are the Kolchin closures of the minimal prime ideals. \end{proof} \begin{proposition} Assume $\mathcal A$ is Keigher. If $X'$ is connected, then $X$ is connected. \end{proposition} \begin{proof} Assume that $X = Y \sqcup Z$. Then we have an isomorphism of rings $$(p_1,p_2)\colon\mathcal A \to \mathcal O_X(Y) \times \mathcal O_X(Z), \quad a\mapsto (a|_{Y},a|_{Z}),$$ and the kernel of each restriction $p_i$ is an intersection of minimal prime ideals, so by Lemma \ref{LM3.8} these kernels are differential ideals. Hence the rings $\mathcal O_X(Y)$ and $\mathcal O_X(Z)$ are also differential rings. Then $$X' = Y' \sqcup Z',$$ where $Y' = \ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(Y))$ and $Z' = \ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(Z))$. We have proven that if $X$ disconnects, then $X'$ disconnects, which is the contrapositive of the statement. \end{proof} \subsection{Structure Sheaf} We define the structure sheaf $\mathcal O_{X'}$ as in \cite{Kov1}. Let us consider the projection $$\pi\colon\bigsqcup_{x\in X'} \mathcal A_x \to X',$$ where $\bigsqcup_{x\in X'} \mathcal A_x$ is the disjoint union of all the localized rings $\mathcal A_x$.
We say that a section $s$ of $\pi$ defined on an open subset $U\subset X'$ is \emph{a regular function} if it verifies the following: for all $x\in U$ there exist an open neighborhood $U_x\ni x$ and $a,b\in \mathcal A$ with $b(x)\neq 0$ $(b\not\in x)$, such that for all $y\in U_x$ with $b(y)\neq 0$, $s(y) = \frac{a}{b}\in \mathcal A_y$. Thus, a regular function is a section which is locally representable as a quotient. We write $\mathcal O_{X'}$ for the sheaf of regular functions on $X'$. By the above construction we can state: \begin{proposition} The stalk $\mathcal O_{X',x}$ is a ring isomorphic to $\mathcal A_x$. \end{proposition} \begin{theorem} Let us consider the natural inclusion $j\colon X'\hookrightarrow X$. The sheaf of regular functions $\mathcal O_{X'}$ is the restriction $\mathcal O_X|_{X'}$ of the sheaf of regular functions on $X$. \end{theorem} \begin{proof} First, let us define a natural morphism of presheaves of rings on $X'$ between the inverse image presheaf $j^{-1}\mathcal O_{X}$ and $\mathcal O_{X'}$. Let us consider an open subset $U\subset X'$ and a section $s$ of the presheaf $j^{-1}\mathcal O_X$ defined on $U$. By definition of the inverse image, there is an open subset $W$ of $X$ with $U\subseteq W\cap X'$ on which $s$ is represented by a section of $\mathcal O_X(W)$, locally written as a fraction $\frac{a}{b}$. Such a fraction also defines a section of $\mathcal O_{X'}(U)$, and we obtain the presheaf morphism $$j^{-1}\mathcal O_X \to \mathcal O_{X'}.$$ This presheaf morphism induces a morphism between the associated sheaves $\mathcal O_X|_{X'}$ and $\mathcal O_{X'}$. It is clear that this natural morphism induces the identity between stalks $(j^{-1}\mathcal O_X)_x = \mathcal A_x \to \mathcal O_{X',x} = \mathcal A_x$, and then it is an isomorphism. \end{proof} \subsection{Global Sections} One of the main peculiarities of differential algebraic geometry is that the ring of global regular sections of $X'$ does not, in general, coincide with the differential ring $\mathcal A$.
Of course, there is a canonical morphism from $\mathcal A$ to $\mathcal O_{X'}(X')$. However, there may be non-vanishing elements giving rise to the zero section and non-invertible elements giving rise to invertible sections. An element $a$ of $\mathcal A$ is called a \emph{differential zero} if its annihilator ideal is not contained in any proper differential ideal. The set of differential zeroes is denoted by $\mathfrak Z$. An element is called a differential unit if it is not contained in any proper differential ideal. The set of differential units is denoted by $\mathfrak U$. Then, there is a canonical \emph{injective} morphism, $\mathfrak U^{-1}\mathcal A/\mathfrak Z \hookrightarrow \mathcal O_{X'}(X').$ But in general this morphism is not surjective, \emph{id est}, there are regular functions that are not representable as fractions of elements of $\mathcal A$. Therefore, the differential spectrum of $\mathcal O_{X'}(X')$ is not always isomorphic to $X'$. This problem is extensively discussed in \cite{Be2008}. \subsection{Differential Schemes} The study of differential schemes started with the work of Keigher \cite{Ke1, Ke2} and was continued by Carra' Ferro \cite{Ca0}, Buium \cite{Bu} and Kovacic \cite{Kov1}. Definitions differ slightly from one author to another; here we follow Kovacic. Let us recall that a \emph{locally ringed space} is a topological space $X$ endowed with a structure sheaf of rings $\mathcal O_X$ such that for all $x\in X$ the stalk $\mathcal O_{X,x}$ is a local ring. Thus, a \emph{locally differential ringed space} is a locally ringed space whose structure sheaf $\mathcal O_X$ is a sheaf of differential rings. A morphism of locally differential ringed spaces $f\colon X\to Y$ consists of a continuous map together with a sheaf morphism $f^\natural\colon \mathcal O_Y \to f_*\mathcal O_X$.
For the differential ring $\mathcal A$ it is clear that its differential spectrum $X'$ endowed with the structure sheaf $\mathcal O_{X'}$ is a locally differential ringed space. \begin{definition} An affine differential scheme is a locally differential ringed space $X$ which is isomorphic to $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)$ for some differential ring $\mathcal A$. \end{definition} \begin{definition} A differential scheme is a locally differential ringed space $X$ in which every point has a neighborhood that is an affine differential scheme. \end{definition} \begin{remark} Schemes are differential schemes, endowed with the trivial derivation. The category of differential schemes is an extension of the category of schemes, in the same way that the category of differential rings is an extension of the category of rings. \end{remark} By a \emph{morphism of differential schemes} $f\colon X\to Y$ we mean a morphism of locally ringed spaces such that $f^{\sharp}\colon \mathcal O_Y \to f_*\mathcal O_X$ is a morphism of sheaves of differential rings. Let $\mathcal K$ be a differential field. A \emph{$\mathcal K$-differential scheme} is a differential scheme $X$ provided with a morphism $X\to \ensuremath{\mbox{\rm DiffSpec}}(\mathcal K)$; this means that $\mathcal O_X$ is a sheaf of differential $\mathcal K$-algebras. A morphism of differential schemes $f\colon X \to Y$ between two differential $\mathcal K$-schemes is a \emph{morphism of differential $\mathcal K$-schemes} if the sheaf morphism $f^\sharp \colon \mathcal O_Y \to f_*\mathcal O_X$ is a morphism of sheaves of differential $\mathcal K$-algebras. \subsection{Product of Differential Schemes} There is no direct product in the category of differential schemes relative to a given basic differential scheme. This problem is discussed in \cite{Kov1}.
However, in the case of differential schemes over a differential field $\mathcal K$ we can construct the direct product by patching tensor products, as is usually done in algebraic geometry. Therefore, $$\ensuremath{\mbox{\rm DiffSpec}}(\mathcal A)\times_{\mathcal K}\ensuremath{\mbox{\rm DiffSpec}}(\mathcal B) = \ensuremath{\mbox{\rm DiffSpec}}(\mathcal A\otimes_{\mathcal K}\mathcal B).$$ Moreover, if $X$ and $Y$ are reduced differential $\mathcal K$-schemes then $X\times_{\mathcal K} Y$ is also reduced (see \cite{Kov2}, Proposition 25.2). \subsection{Split of Differential Schemes} \begin{definition} Let $X$ be a differential scheme. Define the presheaf of rings $\mathcal C_X$ on $X$ by the formula, $$\mathcal C_X(U) = C_{\mathcal O_X(U)},$$ for any open subset $U\subseteq X$. \end{definition} From this definition it follows that $\mathcal C_X$ is a sheaf of rings and its stalk $\mathcal C_{X,x}$ is isomorphic to the ring of constants $\mathcal C_{\mathcal O_{X,x}}$. In particular, if $X$ is a $\mathcal K$-differential scheme, $\mathcal C_X$ is a sheaf of $\mathcal C_{\mathcal K}$-algebras. \begin{definition} We call the locally ringed space $(X,\mathcal C_X)$ the space of constants of $X$, and denote it $\ensuremath{\mbox{\rm Const}}(X)$. \end{definition} \begin{definition} We say that $X$ is an almost-constant differential scheme if its space of constants $\ensuremath{\mbox{\rm Const}}(X)$ is a scheme. \end{definition} Let $X$ be an almost-constant differential scheme. Then, each open subset $U\subset X$ is also almost-constant. If $Y$ is a reduced closed subscheme of $X$ then $Y$ is almost-constant. In this way, if $Y$ is a locally closed reduced subscheme of $X$, then $Y$ is almost-constant. Let $\mathcal K$ be a differential field, and $\mathcal C$ its field of constants.
\begin{definition}\label{C2DEFsplitDS} A differential $\mathcal K$-scheme $X$ splits if there is a $\mathcal C$-scheme $Y$ and an isomorphism of $\mathcal K$-differential schemes, $$\phi\colon X \xrightarrow{\sim} Y \times_{\mathcal C} \ensuremath{\mbox{\rm DiffSpec}}(\mathcal K).$$ The isomorphism $\phi$ is called a splitting isomorphism for $X$. \end{definition} \begin{proposition}\label{Kov28.2} If $X$ is reduced and splits, then it is almost-constant and $$X \xrightarrow{\sim} \ensuremath{\mbox{\rm Const}}(X)\times_{\mathcal C} \ensuremath{\mbox{\rm DiffSpec}}(\mathcal K).$$ \end{proposition} \begin{proof} \cite{Kov2}, Proposition 28.2. \end{proof} \subsection{Strongly Normal Extensions} Strongly normal extensions were introduced by Kolchin \cite{Ko0}. They are differential field extensions whose group of automorphisms admits a structure of algebraic group. This notion has been recently characterized in terms of differential schemes by Kovacic \cite{Kov3}. This characterization is more convenient for our presentation of differential Galois theory, so we will adopt it as our definition. \begin{definition} $\mathcal K\to\mathcal L$ is a strongly normal extension if and only if the differential scheme $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K} \mathcal L)$ splits. In such a case we denote by $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ the scheme $\ensuremath{\mbox{\rm Const}}(\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K}\mathcal L))$. \end{definition} Note that prime differential ideals of $\mathcal L\otimes_{\mathcal K} \mathcal L$ whose quotient field is $\mathcal L$ correspond to $\mathcal K$-automorphisms of $\mathcal L$. If $\sigma$ is a $\mathcal K$-automorphism of $\mathcal L$, the kernel of the differential $\mathcal K$-algebra morphism, $$\mathcal L \otimes_{\mathcal K}\mathcal L \to \mathcal L,\quad a\otimes b \mapsto a\sigma(b),$$ is a prime differential ideal $\mathfrak p_{\sigma}$.
Then, the set of rational points of $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K} \mathcal L)$ is naturally endowed with a group structure. This group structure descends to a structure of $\mathcal C$-algebraic group precisely when $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K} \mathcal L)$ splits. In such a case the space of constants $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ is endowed with a structure of algebraic group. This problem is exhaustively treated in \cite{Kov3}. This approach gives us a parallelism with Galois extensions in the classical theory of fields. Note that a field extension $k\to K$ is a Galois extension if and only if $\ensuremath{\mbox{\rm Spec}}(K\otimes_k K) = G \times_k \ensuremath{\mbox{\rm Spec}}(K)$ (see \cite{Sa}). We also obtain the scheme structure of the Galois group: it is the scheme of constants of $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K}\mathcal L)$. \subsection{Galois Correspondence for Strongly Normal Extensions} Let us consider as above $\mathcal K\subset\mathcal L$ a strongly normal extension of differential fields. To each subgroup $H\subset \ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ we assign the intermediate extension $\mathcal K\subset \mathcal L^H\subset \mathcal L$ of $H$-invariants. Conversely, to each intermediate extension $\mathcal K \subset \mathcal F \subset \mathcal L$ we assign the subgroup $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal F)\subset \ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ of automorphisms of $\mathcal L$ that are differential $\mathcal F$-algebra automorphisms. The Galois correspondence between closed subgroups and intermediate extensions was first established by Kolchin (see \cite{Ko0} and \cite{Ko1}).
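Before stating the correspondence, it may help to record a standard example (ours, not developed elsewhere in the text). Let $\mathcal K=\mathcal C(x)$ with $\partial = \frac{d}{dx}$, and let $\mathcal L = \mathcal K(u)$ where $u$ is transcendental over $\mathcal K$ with $\partial u = u$. The element $c = u^{-1}\otimes u\in\mathcal L\otimes_{\mathcal K}\mathcal L$ is a constant, since
$$\partial(u^{-1}\otimes u) = -u^{-1}\otimes u + u^{-1}\otimes u = 0,$$
and $\mathcal L\otimes_{\mathcal K}\mathcal L$ is generated over $\mathcal L$, up to a localization, by $c$ and $c^{-1}$. Hence $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K}\mathcal L)$ splits as $\mathbb G_{m,\mathcal C}\times_{\mathcal C}\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L)$, the extension $\mathcal K\subset\mathcal L$ is strongly normal, and $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)\simeq\mathbb G_m$. The automorphism $\sigma_\lambda\colon u\mapsto\lambda u$, $\lambda\in\mathcal C^*$, corresponds to the prime differential ideal $\mathfrak p_{\sigma_\lambda} = (c-\lambda)$, and the group subschemes $\mu_n\subset\mathbb G_m$ of $n$-th roots of unity correspond to the intermediate extensions $\mathcal K\subset\mathcal K(u^n)\subset\mathcal L$ of their invariants.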
\index{Galois!correspondence} \begin{theorem}\label{ThGaloisCorrespondence} The maps $$ H \mapsto \mathcal L^H \subset \mathcal L$$ from group subschemes of $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ to intermediate differential extensions and $$\mathcal F \mapsto \ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal F) \subset \ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$$ from intermediate differential extensions to group subschemes, are mutually inverse bijections. The extension $\mathcal K\subset \mathcal F$ is strongly normal if and only if $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal F)$ is a normal subgroup of $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$. In such a case $\ensuremath{\mbox{\rm Gal}}(\mathcal F/\mathcal K)$ is isomorphic to the quotient $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)/\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal F)$. \end{theorem} \subsection{Lie Extensions} The algebraic differential approach to Lie-Vessiot systems, in terms of differential fields, was initiated by K. Nishioka \cite{Ni3}. He relates the differential extensions generated by solutions of a Lie-Vessiot system with \emph{algebraic dependence on initial conditions}, a concept introduced by H. Umemura \cite{Umemura1985} in relation to the analysis of Painlev\'e differential equations. He also introduces the notion of \emph{Lie extension}, a differential field extension that carries the infinitesimal structure of a Lie-Vessiot system. Here we review some of his results, in order to relate them to the Galois theory of automorphic systems. Consider a differential field $\mathcal K$ of characteristic zero with algebraically closed constant field $\mathcal C$. Any differential extension of $\mathcal K$ considered below is a subfield of a certain fixed universal extension of $\mathcal K$.
\begin{definition}\label{DefRationalDependence} We say that a differential extension $\mathcal K \subset \mathcal R$ depends rationally on arbitrary constants if there exists a differential field extension $\mathcal K\subset \mathcal M$ such that $\mathcal R$ and $\mathcal M$ are free over $\mathcal K$ and $\mathcal R\cdot \mathcal M = \mathcal M \cdot C_{\mathcal R \cdot \mathcal M}$. \end{definition} For a differential extension $\mathcal K \subset \mathcal L$ denote by $\ensuremath{\mbox{\rm Der}}_{\mathcal K}(\mathcal L)$ the space of derivations of $\mathcal L$ that vanish over $\mathcal K$. This space is a $\mathcal K$-Lie algebra. \begin{definition}\label{DefLieExtension} We say that a differential extension $\mathcal K\subset \mathcal L$ is a Lie extension if $\mathcal C = C_{\mathcal L}$, there exists a $\mathcal C$-Lie subalgebra $\mathfrak g\subset \ensuremath{\mbox{\rm Der}}_{\mathcal K}(\mathcal L)$ such that $[\partial, \mathfrak g] \subset \mathcal K \mathfrak g$, and $\mathcal L \mathfrak g = \ensuremath{\mbox{\rm Der}}_{\mathcal K}(\mathcal L)$. \end{definition} \begin{theorem}[\cite{Ni3}]\label{ThNishioka2} Suppose that $\mathcal K$ is algebraically closed. Then every intermediate differential field of a strongly normal extension of $\mathcal K$ is a Lie extension. \end{theorem} \subsection{Schemes with Derivation} In this section we present some facts from the theory of schemes with derivations. This is mainly the point of view of \cite{Bu}. However, we consider only regular derivations, whereas A. Buium considers the more general case of meromorphic derivations. Our purpose is to relate schemes with derivations to differential schemes. Note that the regularity of the derivation is essential to Theorem \ref{C2THE2.3.2} below; hence that theorem does not hold under Buium's definition. Let $X$ be a scheme.
A derivation $\partial_X$ of the structure sheaf $\mathcal O_X$ is a law that assigns to each open subset $U\subset X$ a derivation $\partial_X(U)$ of the ring $\mathcal O_X(U)$. This law is assumed to be compatible with the restriction morphisms. \begin{definition}\index{scheme!with derivation} A scheme with derivation is a pair $(X,\partial_X)$ consisting of a scheme $X$ and a derivation $\partial_X$ of the structure sheaf $\mathcal O_X$. \end{definition} Thus, a scheme with derivation is a scheme whose structure sheaf is a sheaf of differential rings. A \emph{morphism of schemes with derivation} is a scheme morphism that induces a morphism of sheaves of differential rings. Let $\mathcal K$ be a differential field. A \emph{$\mathcal K$-scheme with derivation} is a scheme with derivation $(X,\partial)$ together with a morphism $(X,\partial)\to(\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$. Thus, the structure sheaf of $X$ is a sheaf of differential $\mathcal K$-algebras. Let $(X,\partial_X)$, $(Y, \partial_Y)$ be two $\mathcal K$-schemes with derivation. Then the direct product $X\times_{\mathcal K}Y$ admits the derivation $\partial_X\otimes 1 + 1\otimes \partial_Y$. Then, $$(X\times_{\mathcal K} Y, \partial_X\otimes 1 + 1\otimes \partial_Y)$$ is the direct product of $(X,\partial_X)$ and $(Y,\partial_Y)$ in the category of schemes with derivation. \subsection{Differential Schemes and Schemes with Derivation} \begin{theorem}\label{C2THE2.3.2} Given a scheme with derivation $(X,\partial)$ there exists a unique topological subspace $X' \subset X$ verifying: \begin{enumerate} \item[(1)] $X'$ endowed with the structure sheaf $\mathcal O_X|_{X'}$ and the derivation $\partial|_{X'}$ is a differential scheme. This differential scheme will be denoted $\ensuremath{\mbox{\rm Diff}}(X,\partial)$. \item[(2)] For each open affine subset $U\subset X$, $U\cap X' \simeq \ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(U),\partial)$.
\end{enumerate} Furthermore, each morphism of schemes with derivation $(X,\partial_X) \to (Y, \partial_Y)$ induces a morphism of differential schemes $\ensuremath{\mbox{\rm Diff}}(X,\partial_X) \to \ensuremath{\mbox{\rm Diff}}(Y,\partial_Y)$. The assignment $(X,\partial)\leadsto \ensuremath{\mbox{\rm Diff}}(X,\partial)$ is functorial. \end{theorem} \begin{proof} If $X$ is an affine scheme then the theorem holds, and $$X' = \ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(X)).$$ Let us consider the non-affine case. Let $(X,\partial_X)$ be a scheme with derivation, and let $\{U_i\}_{i\in\Lambda}$ be a covering of $X$ by affine subsets. The ring of sections $\mathcal O_X(U_i)$ is a differential ring for all $i\in\Lambda$, and its spectrum $\ensuremath{\mbox{\rm Spec}}(\mathcal O_X(U_i))$ is canonically isomorphic to $U_i$. For each $i\in \Lambda$ we take $U_i'$ to be the differential spectrum $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(U_i))$, which is a topological subspace of $U_i$. Then $U_i'\subset U_i \subset X$. Let us define $X' = \bigcup_{i\in\Lambda} U'_i$. Thus, $X'$ is a locally differential ringed space with the sheaf $\mathcal O_X|_{X'}$. Let us prove that $X'$ is a differential scheme. First, let us prove that $U_i\cap X' = U'_i$. By construction we have $U'_i \subset U_i\cap X'$. Let us consider $x\in U_i\cap X'$. This means that for some $j\in\Lambda$, $x \in U_i\cap U_j$ and $x\in U'_j\subset U_j$. Let us consider an affine neighborhood $U_x$ of $x$ contained in this intersection. Because of the inclusion $U_x\to U_j$, we have that $x\in U'_x = \ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(U_x))$. Then we have inclusions and restrictions as follows: $$\xymatrix{U_x \ar[r]\ar[rd] & U_i \\ & U_j}\quad\quad \xymatrix{\mathcal O_X(U_x) & \ar[l] \mathcal O_X(U_i) \\ & \mathcal O_X(U_j) \ar[ul] }\quad\quad\xymatrix{U_x' \ar[r]\ar[rd] & U'_i \\ & U'_j}$$ We conclude that $x\in U'_i$.
Secondly, let us prove that for any affine subset $U$, the intersection $U\cap X'$ is the affine differential scheme $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(U))$. Let $U$ be an affine subset, and let us denote by $U'$ the differential spectrum $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal O_X(U))$, which we consider as a subset of $U$. Let us consider $x\in U'$. Then, for some $i\in\Lambda$, $x\in U\cap U_i$. Let $U_x$ be an affine neighborhood of $x$ such that $U_x\subset U\cap U_i$. Denote by $U'_x$ the differential spectrum of $\mathcal O_X(U_x)$. We have that $U'_x\subset U'_i$, and then $x\in U\cap X'$. Conversely, let us consider $x\in U\cap X'$. Then for some $i\in\Lambda$ we have $x\in U'_i$. By the same argument, $x$ corresponds to a prime differential ideal of $\mathcal O_X(U)$, and then $x\in U'$. The derivation $\partial$ induces derivations on the structure sheaf of $U\cap X'$ for each affine open subset $U\subset X$. Then, it induces a derivation $\partial\colon \mathcal O_{X'}\to\mathcal O_{X'}$ and $\ensuremath{\mbox{\rm Diff}}(X,\partial) = (X', \mathcal O_{X}|_{X'},\partial|_{X'})$ is a differential scheme. Finally, let us consider a morphism of schemes with derivation $f\colon(X,\partial_X)\to(Y,\partial_Y)$. If we assume that both are affine schemes, then the theorem holds. In the general case, we cover $Y$ by affine subsets $\{U_i\}_{i\in\Lambda}$, and each preimage $f^{-1}(U_i)$ by affine subsets $\{V_{ij}\}_{i\in\Lambda, j\in\Pi}$. Then $f$ is induced by the family of differential ring morphisms $$f^\sharp_{ij} \colon\mathcal O_Y(U_i) \to \mathcal O_X(V_{ij}).$$ These morphisms induce morphisms, $$f'_{ij}\colon V_{ij}'\to U'_{i},$$ of locally differential ringed spaces which coincide on the intersections, and therefore they induce a unique morphism, $$f'\colon X'\to Y'.$$ \end{proof} \index{differential!point} \begin{definition} Let $(X,\partial)$ be a scheme with derivation.
We will say that $x\in X$ is a differential point if $x\in \ensuremath{\mbox{\rm Diff}}(X,\partial)$. \end{definition} \begin{corollary} Let us consider a scheme with derivation $(X,\partial)$ and a point $x$ of $X$. Then the following are equivalent: \begin{enumerate} \item[(a)] $x\in X$ is a differential point. \item[(b)] For each affine neighborhood $U$, $x$ corresponds to a differential ideal of $\mathcal O_X(U)$. \item[(c)] The maximal ideal $\mathfrak m_x$ of the local ring $\mathcal O_{X,x}$ is a differential ideal. \item[(d)] The derivation $\partial$ induces a structure of differential field on the quotient field $\kappa(x)$. \item[(e)] The derivation $\partial$ restricts to the Zariski closure of $x$. \end{enumerate} \end{corollary} \subsection{Split of Schemes with Derivation} Let $Z$ be a scheme provided with the zero derivation. Then we will write $Z$ instead of the pair $(Z,0)$. Consider a differential field $\mathcal K$ and let $\mathcal C$ be its field of constants. \index{split!of schemes with derivation} \begin{definition}\label{C2DEFsplitSD} We say that a $\mathcal K$-scheme with derivation $(X,\partial)$ splits if there is a $\mathcal C$-scheme $Y$ and an isomorphism $$\phi\colon(X,\partial) \xrightarrow{\sim} Y\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K), \partial).$$ The isomorphism $\phi$ is called a splitting isomorphism for $(X,\partial)$. \end{definition} \begin{definition} The space of constants $\ensuremath{\mbox{\rm Const}}(X,\partial)$ is the locally ringed space defined as follows: it is the topological subspace of differential points of $X$, endowed with the restriction of the sheaf of constant regular functions. \end{definition} \begin{proposition} Suppose that $(X,\partial)$ is Keigher. Then $$\ensuremath{\mbox{\rm Const}}(X,\partial) = \ensuremath{\mbox{\rm Const}}(\ensuremath{\mbox{\rm Diff}}(X,\partial)).$$ \end{proposition} \begin{proof} As topological subspaces of $X$ they coincide by construction.
Let $X' = \ensuremath{\mbox{\rm Diff}}(X,\partial)$. If $X$ is Keigher then $\mathcal O_{X'}(U) = \lim_{\substack{\to \\ U\subseteq V}}\mathcal O_X(V)$ (see \cite{Ca0}). Because of that we have $$C\left(\lim_{\substack{\to \\ U\subseteq V}}\mathcal O_X(V)\right) = \lim_{\substack{\to \\ U\subseteq V}} C_{\mathcal O_X(V)},$$ which finishes the proof. \end{proof} \index{scheme!with derivation!almost-constant} \begin{definition} $(X,\partial)$ is almost-constant if $\ensuremath{\mbox{\rm Const}}(X,\partial)$ is a scheme. \end{definition} \begin{proposition} If $(X,\partial)$ splits, then $\ensuremath{\mbox{\rm Diff}}(X,\partial)$ splits. If $(X,\partial)$ is reduced and splits, then it is almost-constant and $$(X,\partial) \xrightarrow{\sim} \ensuremath{\mbox{\rm Const}}(X,\partial) \times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial).$$ \end{proposition} \begin{proof} Let us consider the splitting isomorphism $(X,\partial) \to Y\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K), \partial)$. It is clear that $\ensuremath{\mbox{\rm Diff}}(Y\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K), \partial)) = Y \times_{\mathcal C}\ensuremath{\mbox{\rm DiffSpec}}(\mathcal K)$. Then the above splitting isomorphism induces a splitting isomorphism of the differential scheme $\ensuremath{\mbox{\rm Diff}}(X,\partial)$. If $X$ is reduced, then $\ensuremath{\mbox{\rm Diff}}(X,\partial)$ is also reduced, and then we apply Proposition \ref{Kov28.2}. \end{proof} \section{Galois theory of Algebraic Lie-Vessiot Systems}\label{C3} In this chapter we discuss the Galois theory of Lie-Vessiot systems on algebraic homogeneous spaces. The field of functions of the independent variable is here a differential field $\mathcal K$ of characteristic zero and with a field of constants $\mathcal C$ that we assume to be algebraically closed. We model algebraic Lie-Vessiot systems with coefficients in $\mathcal K$ as certain $\mathcal K$-schemes with derivation.
We study the general solution of algebraic Lie-Vessiot systems. That is, we study the differential extensions of $\mathcal K$ that allow us to \emph{split} the Lie-Vessiot system and the associated automorphic system. We find that they are strongly normal extensions in the sense of Kolchin \cite{Ko0}, and therefore we can apply Kovacic's approach to Kolchin's differential Galois theory. In fact, the Galois theory presented here should be seen as a generalization of the classical Picard-Vessiot theory, obtained by replacing the general linear group by an arbitrary algebraic group. However, the particular case of Picard-Vessiot theory contains all obstructions to solvability, because the non-linear part of an algebraic group over $\mathcal C$ is an abelian variety, and abelian varieties present no obstruction to integration by quadratures. \subsection{Differential Algebraic Dynamical Systems} Here we establish a parallelism between dynamical systems and differential algebraic terminology. \emph{ From now on let us consider a differential field $\mathcal K$, and $\mathcal C$ its field of constants. We assume that $\mathcal C$ is algebraically closed and of characteristic zero.} We model non-autonomous dynamical systems as schemes with derivation. The phase space is an algebraic variety $M$ over the constant field $\mathcal C$, and the extended phase space is $M_\mathcal K = M \times_{\mathcal C}\ensuremath{\mbox{\rm Spec}}(\mathcal K)$. Therefore, a non-autonomous dynamical system on $M$ with coefficients in $\mathcal K$ is a derivation on $M_{\mathcal K}$. \begin{definition} A differential algebraic dynamical system is a $\mathcal K$-scheme with derivation $(M,\partial_M)$ such that $M$ is an algebraic variety over $\mathcal K$. We say that $(M,\partial_M)$ is non-autonomous if $\mathcal K$ is a non-constant differential field.
\end{definition} There is a huge class of dynamical systems that can be seen as differential algebraic dynamical systems, such as polynomial or meromorphic vector fields. It includes Lie-Vessiot systems in algebraic homogeneous spaces, hence it also includes systems of linear differential equations. Furthermore, a differential algebraic study of a dynamical system is possible in great generality, but the results depend on the choice of an adequate differential field $\mathcal K$. For a differential algebraic dynamical system $(M,\partial_M)$ we have the associated differential scheme $\ensuremath{\mbox{\rm Diff}}(M,\partial_M)$. As a topological space this differential scheme is the set of \emph{all irreducible algebraic invariant subsets of the dynamical system}. By algebraic, we mean that they are objects defined by algebraic equations with coefficients in $\mathcal K$. Let us recall that for a $\mathcal K$-algebra $\mathcal L$ we denote by $M(\mathcal L)$ the set of $\mathcal L$-points of $M$. This set consists of all the morphisms of $\mathcal K$-schemes from $\ensuremath{\mbox{\rm Spec}}(\mathcal L)$ to $M$, or equivalently, of all the rational points of the extended scheme $$M_{\mathcal L} = M \times_{\mathcal K} \ensuremath{\mbox{\rm Spec}}(\mathcal L).$$ \begin{definition} Let $(M,\partial_M)$ be a $\mathcal K$-scheme with derivation. We call a rational solution of $(M,\partial_M)$ any rational differential point $x\in\ensuremath{\mbox{\rm Diff}}(M,\partial_M)$. Let us consider a differential extension $\mathcal K\subset \mathcal L$. A solution with coefficients in $\mathcal L$ is an $\mathcal L$-point $x\in M(\mathcal L)$ such that the morphism $$x\colon (\ensuremath{\mbox{\rm Spec}}(\mathcal L),\partial)\to (M,\partial_M),$$ is a morphism of schemes with derivation.
In such a case the image $x(0)=\mathfrak x$ of the ideal $(0)\subset\mathcal L$ by $x$ is a differential point $\mathfrak x\in \ensuremath{\mbox{\rm Diff}}(M,\partial_M)$ and its quotient field $\kappa(\mathfrak x)$ is an intermediate extension, $$\mathcal K \subset \kappa(\mathfrak x) \subset \mathcal L;$$ we say that $\kappa(\mathfrak x)$ is the differential field generated by $x\in M(\mathcal L)$. \end{definition} As in classical algebraic geometry, there is a one-to-one correspondence between solutions with coefficients in $\mathcal L$ of $(M,\partial_M)$ and rational solutions of the differential algebraic dynamical system after a base change, $(M,\partial_M)\times_{\mathcal K}(\ensuremath{\mbox{\rm Spec}}(\mathcal L),\partial)$. \begin{definition} Let us consider two differential algebraic dynamical systems over $\mathcal K$, $(M,\partial)$ and $(N,\partial)$. We say that $(M,\partial)$ reduces to $(N,\partial)$ if there is an algebraic variety $Z$ over $\mathcal C$ such that $$(M,\partial) = (N,\partial) \times_{\mathcal C} Z.$$ \end{definition} The notion of reduction is a generalization of the notion of split. In particular, to split means to reduce to $(\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$. Given a differential algebraic dynamical system, what does it mean to \emph{integrate} the dynamical system? As algebraists, we shall use this term for writing down the general solution of the dynamical system in terms of known operations, mainly algebraic operations and quadratures. However, in the general context of dynamical systems there is no general definition of \emph{integrability}. We are tempted to say that integrability is equivalent to splitting. Notwithstanding, there are several situations in which the general solution can be given, but the system does not split; for example, algebraically completely integrable Hamiltonian systems \cite{AMV}.
In such cases the flow is tangent to a global Lagrangian bundle, and the generic fibers of this bundle are affine subsets of abelian varieties. This allows us to write down the global solution in terms of Riemann theta functions and Jacobi's inversion problem. However, this general solution cannot be expressed in terms of the splitting of a scheme with derivation. Split is the differential algebraic equivalent of \emph{Lie's canonical form of a vector field}. The scheme with derivation $Z\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$ should be seen as an extended phase space, and $\partial$ as the derivative with respect to the time parameter. The splitting morphism, $$(M,\partial) \to Z\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial),$$ can be seen as Lie's canonical form, usually referred to, in dynamical systems argot, as the \emph{flow box reduction}. Then $Z$ is simultaneously the algebraic variety of initial conditions and the \emph{space of global solutions} of the dynamical system. Our conclusion is that split differential algebraic dynamical systems are characterized by the following property: \emph{their space of solutions is parameterized by a scheme over the constants}. In the context of algebraic Lie-Vessiot systems we will see that algebraic solvability of the problem is equivalent to the notion of \emph{split} (Theorem \ref{C3THE3.1.17}), so this notion plays a fundamental role in our theory. We will see that generically a Lie-Vessiot equation does not split. If we want to solve it, then we need to admit some new functions by means of a differential extension $\mathcal K\subset \mathcal L$. Thus, the dynamical system splits after a base change to $\mathcal L$. The Galois theory will provide us with the techniques for obtaining such extensions and studying their algebraic properties (Proposition \ref{C3PRO3.2.5}).
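A minimal example of this mechanism, with notation of our own choosing, is the linear equation $x' = ax$, $a\in\mathcal K$, viewed as the differential algebraic dynamical system $(\mathbb A^1_{\mathcal K},\partial_{\vec X})$ with $\partial_{\vec X} = \partial + ax\frac{\partial}{\partial x}$. Generically there is no splitting over $\mathcal K$; but if $\mathcal L$ is a differential extension of $\mathcal K$ containing a nonzero solution $u$ with $u' = au$, then the new coordinate $z = xu^{-1}$ satisfies
$$\partial_{\vec X}(xu^{-1}) = \partial(u^{-1})\,x + u^{-1}(ax) = -au^{-1}x + au^{-1}x = 0,$$
so $z$ is a constant coordinate and $(\mathbb A^1_{\mathcal L},\partial_{\vec X})$ splits as $\mathbb A^1_{\mathcal C}\times_{\mathcal C}(\ensuremath{\mbox{\rm Spec}}(\mathcal L),\partial)$. The variety of constants $\mathbb A^1_{\mathcal C}$, with coordinate $z$, parameterizes the solutions $x = zu$.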
\subsection{Algebraic Lie-Vessiot Systems} From now on we will consider a fixed characteristic zero differential field $\mathcal K$ whose field of constants $\mathcal C$ is algebraically closed. Let $G$ be a $\mathcal C$-algebraic group, and $M$ a faithful homogeneous $G$-space. \begin{definition} A non-autonomous algebraic vector field $\vec X$ in $M$ with coefficients in $\mathcal K$ is an element of the vector space $\mathfrak X(M)\otimes_{\mathcal C}\mathcal K$. \end{definition} A non-autonomous algebraic vector field $\vec X$ in $M$ is written in the form, $$\vec X = \sum_{i=1}^s f_i\vec X_i,$$ for certain elements $f_i\in \mathcal K$ and $\vec X_i\in\mathfrak X(M)$. We define the \emph{derivation $\partial_{\vec X}$ associated to $\vec X$} as the following derivation of the extended scheme $M_{\mathcal K}$: $$\partial_{\vec X}\colon \mathcal K \otimes_{\mathcal C} \mathcal O_M \to \mathcal K \otimes_{\mathcal C} \mathcal O_M,\quad a \otimes f \mapsto \partial a \otimes f + \sum_{i=1}^s(af_i\otimes \vec X_if).$$ \index{Lie-Vessiot system!algebraic} \begin{definition} A non-autonomous algebraic vector field $\vec X$ in $M$ with coefficients in $\mathcal K$ is called a Lie-Vessiot vector field if it belongs to $\mathcal R(G,M)\otimes_{\mathcal C} \mathcal K$. The differential algebraic dynamical system $(M_{\mathcal K},\partial_{\vec X})$ is called a Lie-Vessiot system in $M$ with coefficients in $\mathcal K$. \end{definition} The group $G$ is, in particular, a faithful homogeneous $G$-space. Let us recall that the Lie algebra of fundamental fields on the group $G$ coincides with the Lie algebra of right invariant vector fields $\mathcal R(G)$. Then, a Lie-Vessiot vector field in $G$ with coefficients in $\mathcal K$ is an element of $\mathcal R(G)\otimes_{\mathcal C} \mathcal K$. \begin{definition} We call the Lie-Vessiot vector fields in $G$ automorphic vector fields.
An automorphic vector field $\vec A$ in $G$ with coefficients in $\mathcal K$ is an element of $\mathcal R(G)\otimes_{\mathcal C} \mathcal K$. \end{definition} The canonical isomorphism between $\mathcal R(G)$ and $\mathcal R(G,M)$ allows us to translate Lie-Vessiot vector fields in $M$ to automorphic vector fields in $G$. \index{automorphic!system!algebraic} \begin{definition} We call the Lie-Vessiot system $(G_{\mathcal K},\partial_{\vec A})$ the automorphic system associated to $(M,\partial_{\vec X})$, where $\vec A$ is the automorphic vector field whose corresponding Lie-Vessiot vector field in $M$ is $\vec X$. \end{definition} \emph{ From now on let $\vec X$ be a Lie-Vessiot vector field in $M$, with coefficients in $\mathcal K$, and let $\vec A$ be the associated automorphic vector field in $G$.} \subsection{Logarithmic Derivative} A $\mathcal K$-point of the algebraic group $G$ has coefficients in a differential field, so it can be differentiated. The derivative of a $\mathcal K$-point of $G$ gives a tangent vector at a $\mathcal K$-point of $G_{\mathcal K}$. If we translate this tangent vector to a right invariant vector field, we obtain the logarithmic derivative. In order to do so we systematically identify the Lie algebra $\mathcal R(G)$ with the tangent space $T_eG = \ensuremath{\mbox{\rm Der}}_{\mathcal C}(\mathcal O_{G,e}, \mathcal C)$. It is also important to remark that the tangent space is compatible with extensions of the base field in the following way: $$\mathcal R(G)\otimes_{\mathcal C}\mathcal K \xrightarrow{\sim} T_e(G_{\mathcal K}) = \ensuremath{\mbox{\rm Der}}_{\mathcal K}(\mathcal O_{G_{\mathcal K},e}, \mathcal K).$$ In classical algebraic geometry it is assumed that derivations of $T_e(G_{\mathcal K})$ vanish on $\mathcal K$. However, automorphic systems are by definition compatible with the derivation $\partial$ of $\mathcal K$.
Thus, the restriction of an automorphic vector field $\partial_{\vec A}$ to $e\in G_{\mathcal K}$ is not a tangent vector in $T_e(G_{\mathcal K})$: it is shifted by $\partial$. We have identifications of $\mathcal K$-vector spaces: $$\xymatrix{\mathcal R(G)\otimes_{\mathcal C}\mathcal K \ar[r]^-{\sim} & \mathcal R(G)\otimes_{\mathcal C} \mathcal K + \partial \ar[r]^-{-\partial} & T_e(G_{\mathcal K}) \\ \vec A \ar[r] & \partial_{\vec A} = \partial + \vec A \ar[r] & \vec A_{e}}$$ Let us consider $\sigma\in G(\mathcal K)$ and the canonical morphism $\sigma^\sharp$ of \emph{taking values at $\sigma$:} $$\sigma^\sharp\colon \mathcal O_{G_{\mathcal K},{\sigma}} \to \mathcal K, \quad f\mapsto f(\sigma).$$ Let us recall that there is a canonical way of extending the derivation $\partial$ of $\mathcal K$ to a derivation on $G_{\mathcal K}$: we consider the direct product $G\times_{\mathcal C}(\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$ in the category of schemes with derivation. By abuse of notation \emph{we denote by $\partial$ this canonical derivation on $G_{\mathcal K}$}. By construction, $(G_{\mathcal K},\partial)$ splits -- the identity is the splitting morphism -- and $\ensuremath{\mbox{\rm Const}}(G_{\mathcal K},\partial) = G$. Let us consider the following \emph{non-commutative} diagram, \begin{equation}\label{EnonComm} \xymatrix{ \mathcal O_{G_{\mathcal K},{\sigma}} \ar[rr]^-{\sigma^\sharp}\ar[d]_-{\partial} & & \mathcal K \ar[d]^-{\partial} \\ \mathcal O_{G_{\mathcal K},{\sigma}} \ar[rr]^-{\sigma^\sharp} & & \mathcal K}. \end{equation} \begin{lemma} The commutator $\sigma' = [\partial,\sigma^\sharp]$ of the diagram \eqref{EnonComm} is a derivation vanishing on $\mathcal K$, and therefore $\sigma'$ belongs to the tangent space $T_\sigma (G_{\mathcal K})$ (that is, the space of derivations $\ensuremath{\mbox{\rm Der}}_{\mathcal K}(\mathcal O_{G_{\mathcal K},\sigma},\mathcal K)$).
\end{lemma} \begin{proof} $[\partial, \sigma^\sharp]$ is the difference of two derivations, hence a derivation. Let us consider $f\in\mathcal K\subset \mathcal O_{G_{\mathcal K},\sigma}$; then $\sigma'(f) = \partial f - \partial f = 0$. \end{proof} If $\sigma$ is a geometric point of $G_{\mathcal K}$, then $R_{\sigma^{-1}}$ is an automorphism of $G_{\mathcal K}$ sending $\sigma$ to $e$. It induces an isomorphism between the rings of germs $\mathcal O_{G_{\mathcal K},\sigma}$ and $\mathcal O_{G_{\mathcal K},e}$, and hence an isomorphism between the corresponding spaces of derivations: $$\xymatrix{T_\sigma(G_{\mathcal K}) \ar[rr]^-{R_{\sigma^{-1}}'} & & T_e(G_{\mathcal K}) \simeq \mathcal R(G) \otimes_{\mathcal C} \mathcal K}$$ \index{logarithmic derivative!algebraic} \begin{definition}\label{DEFLogDerAlg} Let $\sigma$ be a geometric point of $G_{\mathcal K}$; we call the automorphic vector field $R_{\sigma^{-1}}'([\partial,\sigma^\sharp])$ the logarithmic derivative of $\sigma$, and denote it by $l\partial(\sigma)$.
The logarithmic derivative is then a map: $$l\partial \colon G(\mathcal K) \to \mathcal R(G) \otimes_{\mathcal C} \mathcal K.$$ \end{definition} \begin{proposition} The logarithmic derivative satisfies the following properties: \begin{enumerate} \item[(1)] It is functorial in $\mathcal K$; for each differential extension $\mathcal K\subset\mathcal L$ we have a commutative diagram: $$\xymatrix{G(\mathcal K) \ar[r]\ar[d]& \mathcal R(G) \otimes_{\mathcal C} \mathcal K \ar[d] \\ G(\mathcal L) \ar[r]& \mathcal R(G) \otimes_{\mathcal C} \mathcal L}$$ \item[(2)] For $\sigma$ and $\tau$ in $G(\mathcal K)$: $$l\partial(\sigma\tau) = l\partial(\sigma) + \ensuremath{\mbox{\rm Adj}}_\sigma(l\partial(\tau)).$$ \item[(3)] For $\sigma\in G(\mathcal K)$: $$l\partial(\sigma^{-1}) = -\ensuremath{\mbox{\rm Adj}}_{\sigma^{-1}}(l\partial(\sigma)).$$ \end{enumerate} \end{proposition} \begin{proof} (1) comes directly from the differential field extension, (2) comes from right invariance, and (3) is a corollary of (2) applied to $\tau = \sigma^{-1}$. \end{proof} \subsection{Automorphic Equation} \index{automorphic!equation} \begin{theorem} Let $\mathcal K\subset\mathcal L$ be a differential extension. Then $\sigma\in G(\mathcal L)$ is a solution of the differential algebraic dynamical system $(G_{\mathcal K},\partial_{\vec A})$ if and only if $l\partial(\sigma) = \vec A$. \end{theorem} \begin{proof} Let us consider $\sigma\in G(\mathcal L)$, and let $\vec B$ be its logarithmic derivative. The space $\mathcal R(G)\otimes_{\mathcal C} \mathcal L$ is canonically identified with the Lie algebra of right invariant vector fields on the \emph{base extended} $\mathcal L$-algebraic group $G_{\mathcal L}$: $$\mathcal R(G)\otimes_{\mathcal C}\mathcal L = \mathcal R(G_{\mathcal L}).$$ Under this identification, the automorphic vector field $\vec B$ is seen as a derivation $\vec B$ of the structure sheaf $\mathcal O_{G_{\mathcal L}}$.
The germ $\vec B_{(\sigma)}$ of $\vec B$ at $\sigma$ is a derivation of the ring $\mathcal O_{G_{\mathcal L},\sigma}$. Composition with $\sigma^\sharp$ gives us the tangent vector $\vec B_\sigma\in T_{\sigma}(G_{\mathcal L})$: $$\xymatrix{\mathcal O_{G_{\mathcal L},\sigma} \ar[r]^-{\vec B_{(\sigma)}} \ar[rrd]_-{\vec B_{\sigma}} & \mathcal O_{G_{\mathcal L},\sigma} \ar[rd]^-{\sigma^\sharp} & \\ & & \mathcal L}$$ The value of $\vec B$ at the identity point is, by definition, $l\partial(\sigma)$. Since $\vec B$ is a right invariant vector field we have $l\partial(\sigma) = R_{\sigma^{-1}}'(\vec B_{\sigma}) = \sigma^\sharp\circ \vec B_{(\sigma)} \circ R_{\sigma^{-1}}^\sharp$, hence $\vec B_{\sigma}$ is equal to the commutator $[\partial, \sigma^\sharp]$ of Definition \ref{DEFLogDerAlg}. Then $\vec B_{(\sigma)}$ is the defect of the diagram \eqref{EnonComm}; therefore the following diagram commutes: $$\xymatrix{ \mathcal O_{G_{\mathcal L},{\sigma}} \ar[rr]^-{\sigma^\sharp}\ar[d]_-{\partial + \vec B_{(\sigma)}} & & \mathcal L \ar[d]^-{\partial} \\ \mathcal O_{G_{\mathcal L},{\sigma}} \ar[rr]^-{\sigma^\sharp} & & \mathcal L}.$$ Furthermore, $\vec B$ is determined by the commutator $\vec B_\sigma = [\partial,\sigma^{\sharp}]$, and hence it is the unique right invariant vector field in $G_{\mathcal L}$ that makes the diagram commute. Let us note that the commutativity of the above diagram holds if and only if the kernel $\mathfrak m_{\sigma}$ of $\sigma^\sharp$ is a differential ideal. Then $\vec B$ is the unique right invariant vector field in $G_{\mathcal L}$ such that the maximal ideal $\mathfrak m_{\sigma}$ is a differential ideal.
Note also that this derivation $\partial + \vec B_{(\sigma)}$ is the germ at $\sigma$ of the automorphic derivation $$\partial_{\vec B} = \partial + \vec B.$$ We conclude that $\vec B$, the logarithmic derivative of $\sigma$, is the unique element of $\mathcal R(G)\otimes_{\mathcal C} \mathcal L$ such that $\sigma$ is a differential point of $(G_{\mathcal L}, \partial_{\vec B})$. \end{proof} Because of this we can replace the automorphic system $\vec A$ by the so-called \index{equation!automorphic} \emph{automorphic equation}: \begin{equation}\label{EqAutomorphicAlgebraic} l\partial(x) = \vec A. \end{equation} \subsection{Solving Lie-Vessiot Systems} \index{gauge!transformation} \begin{definition} Let us consider $\sigma\in G(\mathcal K)$. We call the left translation $L_\sigma\colon G_{\mathcal K}\to G_{\mathcal K}$ the gauge transformation induced by $\sigma$. \end{definition} \begin{lemma}\label{lmLV1} $(G_{\mathcal K},\partial_{\vec A})$ splits if and only if the automorphic equation \eqref{EqAutomorphicAlgebraic} has at least one solution in $G(\mathcal K)$. \end{lemma} \begin{proof} Assume $(G_{\mathcal K},\partial_{\vec A})$ splits. Let us consider the splitting isomorphism $$\psi\colon(G_{\mathcal K},\partial_{\vec A})\to Z \times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial).$$ Let $x$ be a $\mathcal C$-rational point of $Z$, and let us denote by $x_{\mathcal K}$ the corresponding $\mathcal K$-point obtained after the extension of the base field. Then $\psi^{-1}(x_{\mathcal K})$ is a solution of \eqref{EqAutomorphicAlgebraic}. Conversely, let us assume that there exists a solution $\sigma$ of \eqref{EqAutomorphicAlgebraic} in $G(\mathcal K)$. Let us consider the gauge transformation: $$L_{\sigma^{-1}}\colon G_{\mathcal K} \to G_{\mathcal K}.$$ It maps $\sigma$ to the identity element $e\in G_{\mathcal K}$.
But the logarithmic derivative $l\partial(e)$ vanishes, so $L_{\sigma^{-1}}$ transforms $\partial_{\vec A}$ into the canonical derivation $\partial$. We conclude that $L_{\sigma^{-1}}$ is a splitting isomorphism. \end{proof} \begin{lemma}\label{lmLV2} Assume that $(G_{\mathcal K},\partial_{\vec A})$ splits. In such a case the splitting isomorphism can be chosen among the gauge transformations of $G_{\mathcal K}$. This gauge transformation induces the splitting of any associated Lie-Vessiot system $(M_{\mathcal K},\partial_{\vec X})$. \end{lemma} \begin{proof} We use the same argument as above. If it splits, $$s\colon (G_{\mathcal K},\partial_{\vec A})\to G \times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial) = (G,\partial),$$ then the preimage of the identity element $s^{-1}(e) = \sigma$ is a solution of the automorphic system. Hence the gauge transformation $L_{\sigma^{-1}}\colon \sigma\mapsto e$ maps solutions of $(G_{\mathcal K},\partial_{\vec A})$ to solutions of $(G_{\mathcal K},\partial)$, and it is a splitting isomorphism. For any associated Lie-Vessiot system $(M_{\mathcal K}, \partial_{\vec X})$ and any point $x_0\in M(\mathcal C)$, we have that $L_{\sigma}(x_0)$ is a solution of $(M_{\mathcal K},\partial_{\vec X})$. Hence $L_{\sigma}$ sends solutions of the canonical derivation $\partial$ to solutions of $\partial_{\vec X}$, and its inverse $L_{\sigma^{-1}}$ is a splitting isomorphism for $(M_{\mathcal K},\partial_{\vec X})$. \end{proof} \begin{lemma}\label{LmAlmostConstantSplit} Let $Z$ be a $\mathcal C$-algebraic variety and $(Z_{\mathcal K},\vec D)$ a non-autonomous differential algebraic dynamical system over $\mathcal K$. If $(Z_{\mathcal K},\vec D)$ splits then $(Z_{\mathcal K},\vec D)$ is almost-constant and $\ensuremath{\mbox{\rm Const}}(Z_{\mathcal K},\vec D) \simeq Z$. \end{lemma} \begin{proof} Assume that $(Z_{\mathcal K},\vec D)$ splits.
This implies that there exists a $\mathcal C$-scheme $Y$ such that $(Z_{\mathcal K},\vec D) \simeq Y\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$. We have that $Z_{\mathcal K} \simeq Y_{\mathcal K}$, and then $Z\simeq Y$; hence $(Z_{\mathcal K},\vec D)$ is almost-constant and $\ensuremath{\mbox{\rm Const}}(Z_{\mathcal K},\vec D)\simeq Y\simeq Z$. \end{proof} \begin{lemma}\label{lmLV3.5} Let $Z$ be a reduced $\mathcal C$-scheme. There is a one-to-one correspondence between closed subschemes of $Z$ and closed subschemes with derivation of $(Z_{\mathcal K},\partial) = Z \times_{\mathcal C}(\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$. \end{lemma} \begin{proof} First, let us consider the affine case. Assume $Z = \ensuremath{\mbox{\rm Spec}} \mathcal R$ for a $\mathcal C$-algebra $\mathcal R$. The ring of constants $C_{\mathcal R\otimes_{\mathcal C} \mathcal K}$ is $\mathcal R$ itself. It follows that $\ensuremath{\mbox{\rm Const}}(Z_{\mathcal K},\partial) = Z$. It is clear that $\mathcal R\otimes_{\mathcal C}\mathcal K$ is an almost-constant ring: each radical differential ideal is generated by constants. Because of that there is a one-to-one correspondence between radical ideals of $\mathcal R$ and radical differential ideals of $\mathcal R\otimes_{\mathcal C}\mathcal K$. In the non-affine case, let $Y$ be a closed sub-$\mathcal C$-scheme of $Z$. The canonical immersion $(Y_\mathcal K,\partial)\subset (Z_{\mathcal K},\partial)$ identifies $Y$ with a closed sub-$\mathcal K$-scheme with derivation of $(Z_{\mathcal K},\partial)$. Conversely, let $(\tilde Y,\partial|_{\tilde Y})$ be a closed sub-$\mathcal K$-scheme with derivation of $(Z_{\mathcal K},\partial)$. Let $\{U_i\}_{i\in\Lambda}$ be an affine covering of $Z$. The collection $\{V_i\}_{i\in\Lambda}$ with $V_i = U_i\times_{\mathcal C} \ensuremath{\mbox{\rm Spec}}(\mathcal K)$ is then an affine covering of $Z_{\mathcal K}$. Each intersection $\tilde Y_i = \tilde Y|_{V_i}$ is an affine closed sub-$\mathcal K$-scheme of $V_i$.
We are in the affine case: by the above argument there are closed sub-$\mathcal C$-schemes $Y_i\subset U_i$ such that $(\tilde Y_i,\partial|_{\tilde Y_i}) = Y_i \times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$. This family defines a covering of a closed sub-$\mathcal C$-scheme $Y = \bigcup_{i\in\Lambda} Y_i$ of $Z$. \end{proof} \begin{lemma}\label{lmLV4} Let $Z$ be a $\mathcal C$-algebraic variety and $(Z_{\mathcal K},\vec D)$ a non-autonomous algebraic dynamical system over $\mathcal K$. Let $Y\subset Z$ be a locally closed subvariety, and assume that $\vec D$ is tangent to $Y$, so that $(Y_{\mathcal K},\vec D|_Y)$ is a sub-$\mathcal K$-scheme with derivation. If $(Z_{\mathcal K},\vec D)$ splits then $(Y_{\mathcal K},\vec D|_Y)$ splits. \end{lemma} \begin{proof} By replacing $Z$ with a suitable open subset we may assume that $Y$ is closed. Let us consider the splitting isomorphism, $$\psi\colon(Z_{\mathcal K},\vec D) \to Z\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial).$$ The image $\psi(Y_{\mathcal K},\vec D|_Y)$ is a closed subscheme with derivation of $Z\times_{\mathcal C}(\ensuremath{\mbox{\rm Spec}}(\mathcal K),\partial)$. By Lemma \ref{lmLV3.5} it splits. \end{proof} \begin{lemma}\label{lmLV5} Assume that the action of $G$ on $M$ is faithful. Then $(G_{\mathcal K},\partial_{\vec A})$ splits if and only if $(M_{\mathcal K}, \partial_{\vec X})$ splits. \end{lemma} \begin{proof} Lemma \ref{lmLV2} says that if $(G_{\mathcal K}, \partial_{\vec A})$ splits, then $(M_{\mathcal K},\partial_{\vec X})$ splits. Conversely, let us assume that $(M_{\mathcal K},\partial_{\vec X})$ splits. For each positive integer $r$ we consider the natural lifting to the cartesian power $(M^r_{\mathcal K},\partial_{\vec X}^r)$. The splitting of $(M_{\mathcal K},\partial_{\vec X})$ induces the splitting of these cartesian power differential algebraic dynamical systems $(M^r_{\mathcal K},\partial_{\vec X}^r)$.
For $r$ big enough there is a point $x\in M^r$ such that its orbit $O_x$ is a principal homogeneous space \emph{isomorphic} to $G$. Then $(O_{x,\mathcal K},\partial_{\vec X})$ is a locally closed sub-$\mathcal K$-scheme with derivation of $(M^r_{\mathcal K},\partial_{\vec X}^r)$. By Lemma \ref{lmLV4} it splits. We also know that $(O_{x,\mathcal K},\partial_{\vec X})$ is isomorphic to $(G_{\mathcal K},\partial_{\vec A})$. Hence $(G_{\mathcal K},\partial_{\vec A})$ splits. \end{proof} \begin{theorem}\label{C3THE3.1.17} Assume that the action of $G$ on $M$ is faithful. Then the following are equivalent. \begin{enumerate} \item[(1)] The automorphic equation \eqref{EqAutomorphicAlgebraic} has a solution in $G(\mathcal K)$. \item[(2)] $(G_{\mathcal K},\partial_{\vec A})$ splits. \item[(3)] There is a gauge transformation of $G_{\mathcal K}$ sending $\vec A$ to $0$. \item[(4)] $(M_{\mathcal K},\partial_{\vec X})$ splits. \item[(5)] $(G_{\mathcal K},\partial_{\vec A})$ splits, is almost-constant, and $\ensuremath{\mbox{\rm Const}}(G_{\mathcal K},\partial_{\vec A}) \simeq G$. \item[(6)] $(M_{\mathcal K},\partial_{\vec X})$ splits, is almost-constant, and $\ensuremath{\mbox{\rm Const}}(M_{\mathcal K},\partial_{\vec X}) \simeq M$. \end{enumerate} \end{theorem} \begin{proof} Equivalence between (1) and (2) comes from Lemma \ref{lmLV1}. Equivalence between (2) and (3) comes from Lemma \ref{lmLV2}. (2) and (4) are equivalent by Lemma \ref{lmLV5}. By Lemma \ref{LmAlmostConstantSplit} they all imply (5) and (6), and (5) and (6) trivially imply (2) and (4) respectively. \end{proof} \subsection{Splitting Field of an Automorphic System} Note that a differential extension $\mathcal K \subset \mathcal L$ induces a canonical inclusion, $$\mathcal R(G,M)\otimes_{\mathcal C}\mathcal K \subset \mathcal R(G,M)\otimes_{\mathcal C}\mathcal L;$$ so that a Lie-Vessiot vector field with coefficients in $\mathcal K$ is a particular case of a Lie-Vessiot vector field with coefficients in $\mathcal L$.
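A minimal classical illustration of this base change (here $G = \mathbb G_m$ is the multiplicative group, $\mathcal K = \mathcal C(t)$ and $\partial = d/dt$; these choices are made only for the sake of the example and are not fixed by the preceding notation): under the identification $\mathcal R(\mathbb G_m)\otimes_{\mathcal C}\mathcal K \simeq \mathcal K$, the automorphic equation $$l\partial(x) = \frac{\partial x}{x} = 1, \qquad \mbox{that is,}\quad \partial x = x,$$ has no solution in $\mathbb G_m(\mathcal K) = \mathcal K^*$, since no non-zero rational function in $t$ equals its own derivative; but the same automorphic vector field, read with coefficients in the differential extension $\mathcal L = \mathcal C(t,e^t)$, admits the solution $e^t\in \mathbb G_m(\mathcal L)$, and the corresponding system over $\mathcal L$ splits.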
Hence, if $(M_{\mathcal K},\partial_{\vec X})$ is a Lie-Vessiot system, then $(M_{\mathcal L},\partial_{\vec X})$ makes sense. \index{splitting extension} \begin{definition} We say that a differential extension $\mathcal K\subset\mathcal L$ is a splitting extension for $(M_{\mathcal K},\partial_{\vec X})$ if $(M_{\mathcal L},\partial_{\vec X})$ splits. \end{definition} From Theorem \ref{C3THE3.1.17}, we know that $\mathcal K\subset\mathcal L$ is a splitting extension of {\nolinebreak$(M_{\mathcal K},\partial_{\vec X})$} if and only if it is a splitting extension of $(G_{\mathcal K},\partial_{\vec A})$. We will therefore focus our attention on the automorphic vector field $\vec A$. \subsection{Action of $G(\mathcal C)$ on $G_{\mathcal K}$}\label{C3SS3.3.1} For each $\sigma\in G(\mathcal C)$, $R_{\sigma}$ is an automorphism of $G_{\mathcal K}$. The composition law is a right action of $G$ on $G_{\mathcal K}$, $$G_{\mathcal K} \times_{\mathcal C} G \to G_{\mathcal K}.$$ The vector field $\vec A$ is right invariant, so we expect the differential points of $(G_{\mathcal K},\partial_{\vec A})$ to be invariant under right translations. In fact, the above morphism is a morphism of schemes with derivation, $$(G_{\mathcal K},\partial_{\vec A}) \times_{\mathcal C} G \to (G_{\mathcal K},\partial_{\vec A}).$$ We apply the functor $\ensuremath{\mbox{\rm Diff}}$, and then we obtain an action of the $\mathcal C$-algebraic group $G$ on the differential scheme $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$, $$\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}) \times_{\mathcal C} G \to \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}).$$ Assume that $(G_{\mathcal K},\partial_{\vec A})$ splits.
In that case, when we apply the functor $\ensuremath{\mbox{\rm Const}}$ to the previous morphism, we obtain a morphism of schemes, $$\ensuremath{\mbox{\rm Const}}(G_{\mathcal K},\partial_{\vec A})\times_{\mathcal C} G \to \ensuremath{\mbox{\rm Const}}(G_{\mathcal K},\partial_{\vec A}).$$ Because of the splitting we already know that $\ensuremath{\mbox{\rm Const}}(G_{\mathcal K},\partial_{\vec A})$ is a $\mathcal C$-scheme isomorphic to $G$. Furthermore, the above morphism says that the right action of $G$ on this scheme is canonical. We have proven the following: \begin{lemma} Assume that $(G_{\mathcal K},\partial_{\vec A})$ splits. Then $\ensuremath{\mbox{\rm Const}}(G_{\mathcal K},\partial_{\vec A})$ is a principal homogeneous $G$-space for the right action. \end{lemma} \subsection{Existence and Uniqueness of the Splitting Field}\label{C3SSexistenceuniqueness} \begin{lemma}\label{C3LEMcloseddiffpoint} There is a differential point $\mathfrak x\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ which is closed in the Kolchin topology. \end{lemma} \begin{proof} Let us consider the generic point $p_0\in G_{\mathcal K}$. In particular it is a differential point $p_0\in\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$. If $p_0$ is Kolchin closed, we are done. If not, the Kolchin closure of $p_0$ contains a differential point $p_1$ onto which $p_0$ specializes, $p_0\to p_1$. We continue this process with $p_1$. As $G_{\mathcal K}$ is an algebraic variety, and hence a noetherian scheme, this process finishes in a finite number of steps and leads us to a Kolchin closed point. \end{proof} \begin{lemma} Let $\mathfrak x \in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ be a closed differential point. Then its field of quotients $\kappa(\mathfrak x)$ is a differential extension of $\mathcal K$ with the same field of constants: $C_{\kappa(\mathfrak x)} = \mathcal C$.
\end{lemma} \begin{proof} Reasoning by \emph{reductio ad absurdum}, let us assume that there exists $c\in C_{\kappa(\mathfrak x)}$ not in $\mathcal C$. Let us consider an affine open neighborhood $U$ of $\mathfrak x$ and denote by $A$ its ring of regular functions. We identify $\mathfrak x$ with a maximal differential ideal $\mathfrak x\subset A$. Denote by $B$ the quotient ring $A/\mathfrak x$; $B$ is a differential subring of the differential field $\kappa(\mathfrak x)$. By Lemma \ref{LmDisjoint} there exists $b\in B$ such that the ring of constants $C_{B_b}$ of the localized ring $B_b$ is a finitely generated $\mathcal C$-algebra. By shrinking our original neighborhood $U$ -- removing the zeros of $b$ -- we can assume that $b$ is invertible, so that the localized ring $B_b$ is just $B$. $C_B$ is a finitely generated $\mathcal C$-algebra strictly containing $\mathcal C$, because it contains the element $c$. Since $\mathcal C$ is algebraically closed, a finitely generated $\mathcal C$-algebra in which every non-zero element is invertible is a field, and hence equal to $\mathcal C$; therefore there is a non-zero non-invertible element $c_2 \in C_B$. The principal ideal $(c_2)$ is a non-trivial differential ideal in $B$. Let us consider a regular function $a_2$ such that $a_2(\mathfrak x) = c_2$. Then $\partial_{\vec A} a_2 \in \mathfrak x$ and $(a_2,\mathfrak x)$ is a non-trivial differential ideal of $A$ strictly containing $\mathfrak x$. We arrive at a contradiction with the maximality of $\mathfrak x$. \end{proof} \begin{proposition}\label{C3PRO3.2.5} Let $\mathfrak x \in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ be a closed point. Then $\mathcal K\subset \kappa(\mathfrak x)$ is a splitting extension of $(G_{\mathcal K},\partial_{\vec A})$. \end{proposition} \begin{proof} Let $\mathfrak x$ be a closed point. Then the canonical morphism $\mathfrak x^\sharp$ of \emph{taking values at $\mathfrak x$,} $\mathfrak x^\sharp\colon \mathcal O_{G_{\mathcal K},\mathfrak x}\to \kappa(\mathfrak x)$, is a morphism of differential rings.
Let $U$ be an affine neighborhood of the image $\pi(\mathfrak x)$ under the canonical projection $\pi\colon G_{\mathcal K}\to G$. By composition we construct a morphism $\ensuremath{\mbox{\rm Spec}}(\kappa(\mathfrak x))\to U$, $$\xymatrix{ \mathcal O_G(U)\ar[rr]^-{\sigma^\sharp} \ar[d]_-{\pi^\sharp} & & \kappa(\mathfrak x) \\ \mathcal O_{G_{\mathcal K},\mathfrak x} \ar[rru] ^-{\mathfrak x^\sharp}}.$$ The morphism $\sigma^\sharp$ is the dual of a morphism $\sigma$ from $\ensuremath{\mbox{\rm Spec}}(\kappa(\mathfrak x))$ to $U$. In other words, $\sigma$ is a point of $G(\kappa(\mathfrak x))$. We consider $\sigma$ as a rational differential point of $(G_{\kappa(\mathfrak x)},\partial_{\vec A})$, and thus it is a solution of the automorphic equation. By Lemma \ref{lmLV1}, $(G_{\kappa(\mathfrak x)},\partial_{\vec A})$ splits. \end{proof} \index{fundamental solution} \begin{definition} We say that $\sigma$, as defined in the above proof, is the fundamental solution of $\vec A$ associated with the closed differential point $\mathfrak x$. \end{definition} Let us consider the action of $G$ on $G_{\mathcal K}$ by right translations. The derivation $\partial_{\vec A}$ is invariant under right translations, so the action is a morphism of schemes with derivation: $$(G_{\mathcal K},\partial_{\vec A})\times_{\mathcal C} G \to (G_{\mathcal K},\partial_{\vec A})$$ Applying the functor $\ensuremath{\mbox{\rm Diff}}$, we obtain a morphism of differential schemes which is an algebraic action of $G$ on the set of differential points: $$\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})\times_{\mathcal C} G \to \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$$ \begin{proposition}\label{C3PROtransitivity} The action of $G(\mathcal C)$ on the set of closed points of $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ is transitive.
\end{proposition} \begin{proof} Let us consider a Kolchin closed point $\mathfrak x\in\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$. Let $\mathcal L$ be the rational field of $\mathfrak x$. It is a splitting field for $(G_{\mathcal K},\partial_{\vec A})$. We have that $(G_{\mathcal L},\partial_{\vec A})$ splits, hence $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A})$ is an almost-constant differential scheme. Thus $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A})$ is homeomorphic to the principal homogeneous $G$-space $\ensuremath{\mbox{\rm Const}}(G_{\mathcal L},\partial_{\vec A})$. The differential extension $\mathcal K\subset\mathcal L$ induces a commutative diagram of schemes with derivation, $$\xymatrix{(G_{\mathcal L}, \partial_{\vec A}) \times_{\mathcal C} G \ar[rr] \ar[d] & & (G_{\mathcal L},\partial_{\vec A}) \ar[d]^-{\pi_1}\\ (G_{\mathcal K}, \partial_{\vec A}) \times_{\mathcal C} G \ar[rr] & & (G_{\mathcal K},\partial_{\vec A})}$$ and thus a commutative diagram of differential schemes, $$\xymatrix{\ensuremath{\mbox{\rm Diff}}(G_{\mathcal L}, \partial_{\vec A}) \times_{\mathcal C} G \ar[rr] \ar[d] & & \ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A}) \ar[d]^-{\pi_2}\\ \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K}, \partial_{\vec A}) \times_{\mathcal C} G \ar[rr] & & \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})}.$$ Let $\mathfrak s$ be a Kolchin closed point of $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K}, \partial_{\vec A})$. The projection $\pi_2$ of the above diagram is surjective. Consider any $\mathfrak p \in \pi_2^{-1}(\mathfrak s)$, and let $x$ be a Kolchin closed point in the closure $\overline{\{\mathfrak p\}}$. Then $\pi_2(x)$ is in the closure $\overline{\{\mathfrak s\}}$. As $\mathfrak s$ is a Kolchin closed point, we know that $\pi_2(x) = \mathfrak s$.
Hence, there is a Kolchin closed point $x\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A})$ such that $\pi_2(x)=\mathfrak s$. Consider two Kolchin closed points $\mathfrak s, \mathfrak y\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$. By the above argument there are two Kolchin closed points $x,y\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A})$ such that $\pi_2(x) = \mathfrak s$ and $\pi_2(y)=\mathfrak y$. The set of Kolchin closed points of $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A})$ is a $G(\mathcal C)$-homogeneous space in the set-theoretical sense. Then there is $\sigma\in G(\mathcal C)$ such that $x\cdot\sigma = y$, and by the commutativity of the diagram we have $\mathfrak s \cdot \sigma = \mathfrak y$. \end{proof} \begin{corollary}\label{C3CORuniqueness} Let $\mathfrak x$ and $\mathfrak y$ be two closed points of $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$. Then there exists a $\mathcal K$-isomorphism of differential fields $\kappa(\mathfrak x)\simeq \kappa(\mathfrak y)$. \end{corollary} \begin{proof} There is $\sigma\in G(\mathcal C)$ such that $\mathfrak x\cdot \sigma = \mathfrak y$. Then $$R_\sigma\colon (G_{\mathcal K},\partial_{\vec A})\to (G_{\mathcal K},\partial_{\vec A})$$ is an automorphism that maps $\mathfrak x$ to $\mathfrak y$, and it induces a $\mathcal K$-isomorphism $$R_{\sigma}^\sharp\colon\kappa(\mathfrak y) \to \kappa(\mathfrak x).$$ \end{proof} \index{Galois!extension} \begin{definition}\label{C3DEFGaloisExt} For each closed point $\mathfrak x\in\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ we say that the differential extension $\mathcal K\subset \kappa(\mathfrak x)$ is a Galois extension associated to the non-autonomous differential algebraic dynamical system $(G_{\mathcal K},\partial_{\vec A})$. \end{definition} {\bf Notation.
}\emph{ As we have proven, all Galois extensions associated to $(G_{\mathcal K},\partial_{\vec A})$ are isomorphic. From now on let us choose a closed point $\mathfrak x$ and denote by $\mathcal K\subset \mathcal L$ its corresponding Galois extension.} \begin{proposition}\label{PrpSplitL} A Galois extension is a minimal splitting extension for $(G_{\mathcal K},\partial_{\vec A})$ in the following sense: if $\mathcal K\subset \mathcal S$ is any splitting extension for $(G_{\mathcal K},\partial_{\vec A})$ then there is a morphism of $\mathcal K$-differential fields $\mathcal L\hookrightarrow\mathcal S$. \end{proposition} \begin{proof} If $\mathcal K\subset \mathcal S$ is a splitting extension, then $(G_{\mathcal S},\partial_{\vec A})$ splits. Hence, for each Kolchin closed differential point $x\in\ensuremath{\mbox{\rm Diff}}(G_{\mathcal S},\partial_{\vec A})$ the rational field of $x$ is $\mathcal S$. Let us consider the natural projection $\pi\colon(G_{\mathcal S},\partial_{\vec A})\to (G_{\mathcal K},\partial_{\vec A})$. We can choose a Kolchin closed point $x\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal S},\partial_{\vec A})$ such that $\pi(x) = \mathfrak x$. We then have a morphism of $\mathcal K$-differential algebras between the corresponding rational fields, $\pi^{\sharp}\colon \mathcal L \to \mathcal S$. \end{proof} \index{Picard-Vessiot extension} \begin{example}[Picard-Vessiot extensions] Let us consider a system of $n$ linear differential equations $$\partial x = Ax,\quad A\in gl(n,\mathcal K),$$ and let us denote by $a_{ij}$ the matrix elements of $A$. The algebraic construction of the Picard-Vessiot extension is done as follows (cf. \cite{Ko1} and \cite{Vanderput}): let us consider the algebra $\mathcal K[u_{ij}, \Delta]$, where $\Delta=|u_{ij}|^{-1}$ is the inverse of the determinant. Note that it is the algebra of regular functions on the affine group $GL(n,\mathcal K)$.
It is an affine group, and hence isomorphic to the spectrum $$GL(n,\mathcal K) = \ensuremath{\mbox{\rm Spec}}(\mathcal K[u_{ij}, \Delta]).$$ We define the following derivation, $$\partial_{\vec A} u_{ij} = \sum_{k=1}^n a_{ik}u_{kj},$$ which gives $\mathcal K[u_{ij},\Delta]$ the structure of a differential $\mathcal K$-algebra, and $(GL(n,\mathcal K),\partial_{\vec A})$ the structure of an automorphic system. The set of Kolchin closed differential points of $\ensuremath{\mbox{\rm Diff}}(GL(n,\mathcal K),\partial_{\vec A})$ is the set of maximal differential ideals of $\mathcal K[u_{ij},\Delta]$. A Picard-Vessiot algebra is a quotient algebra $\mathcal K[u_{ij}, \Delta]/\mathfrak m$ by such a maximal differential ideal $\mathfrak m$, and a Picard-Vessiot extension is a rational differential field extension $\mathcal K \subset \kappa(\mathfrak m)$. It is self-evident that the Picard-Vessiot extension is the particular case of a Galois extension in which the group considered is the general linear group. \end{example} \begin{lemma}\label{lmClosedPoint} Let $\mathcal K\subset \mathcal S$ be a splitting extension. The canonical projection $$\pi\colon \ensuremath{\mbox{\rm Diff}}(G_{\mathcal S},\partial_{\vec A}) \to \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K}, \partial_{\vec A})$$ is a closed map. \end{lemma} \begin{proof} It is enough to prove that the projection $\mathfrak y =\pi(y)$ of a closed point $y\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal S},\partial_{\vec A})$ is a closed point. Let us take a closed point $\mathfrak z\in \overline{\{\mathfrak y\}}$. Then $\pi^{-1}(\mathfrak z)$ is closed and there is a closed point $z\in \pi^{-1}(\mathfrak z)$. Since $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal S}, \partial_{\vec A})$ is a principal homogeneous $G$-space, there is a $\sigma\in G(\mathcal C)$ such that $z\cdot \sigma = y$, and then $\mathfrak z \cdot \sigma = \mathfrak y$. Since $\mathfrak z$ is closed and right translation by $\sigma$ is a homeomorphism, $\mathfrak y$ is closed.
In fact, $\mathfrak y$ and $\mathfrak z$ are the same differential point. \end{proof} \begin{proposition}\label{PrpClosedProj} Let us consider any intermediate differential extension $\mathcal K \subset \mathcal F \subset \mathcal S$, with $\mathcal K \subset \mathcal S$ a splitting extension. The projection, $$\pi\colon\ensuremath{\mbox{\rm Diff}}(G_{\mathcal F},\partial_{\vec A})\to \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}),$$ is a closed map. \end{proposition} \begin{proof} Let us consider the following diagram of projections: $$\xymatrix{ \ensuremath{\mbox{\rm Diff}}(G_{\mathcal S},\partial_{\vec A}) \ar[rr]^-{\pi_1} \ar[rd]^-{\pi_2} & & \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}) \\ & \ensuremath{\mbox{\rm Diff}}(G_{\mathcal F},\partial_{\vec A}) \ar[ru]^-{\pi} }$$ By Lemma \ref{lmClosedPoint}, $\pi_1$ and $\pi_2$ are closed and surjective. For any closed subset $C\subset \ensuremath{\mbox{\rm Diff}}(G_{\mathcal F},\partial_{\vec A})$ we have $\pi(C) = \pi_1(\pi_2^{-1}(C))$, so $\pi$ is closed. \end{proof} \begin{lemma}\label{lmFSBaseChange} Let $\mathcal K \subset \mathcal F \subset \mathcal L$ be an intermediate differential extension of the Galois extension of $(G_{\mathcal K},\partial_{\vec A})$, and let $\sigma$ be the fundamental solution associated to $\mathfrak x$. Let us consider the sequence of base changes, $$\xymatrix{ \ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A}) \ar[r]^{\pi_1} & \ensuremath{\mbox{\rm Diff}}(G_{\mathcal F},\partial_{\vec A}) \ar[r]^{\pi_2} & \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}) \\ \quad \sigma \quad \ar[r] & \quad \mathfrak y \quad \ar[r] & \quad \mathfrak x \quad,}$$ then $\mathfrak y$ is closed in the Kolchin topology, $\kappa(\mathfrak y)$ is the Galois extension $\mathcal L$, and $\sigma$ is the fundamental solution associated with $\mathfrak y$. \end{lemma} \begin{proof} By Lemma \ref{lmClosedPoint}, $\pi_1$ is a closed map, so that $\mathfrak y$ is a closed point.
The chain of projections induces a chain of differential extensions $\kappa(\mathfrak x) \subseteq \kappa(\mathfrak y) \subseteq \kappa(\sigma)$; but $\kappa(\mathfrak x) = \kappa(\sigma)$, and then the inclusions are equalities. \end{proof} \subsection{Galois Group} Here we give a purely geometrical definition of the Galois group associated to a Kolchin closed differential point. We prove strong normality of the Galois extensions, and identify our geometrically defined Galois group with the group of automorphisms of the Galois extension. Let us consider the action of $G$ on $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ described in Subsection \ref{C3SS3.3.1}: $$\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}) \times_{\mathcal C} G \to \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}).$$ \index{Galois!group} \begin{definition}\label{C3DEFGalois} Let $\mathfrak x\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ be a Kolchin closed differential point. We call the isotropy subgroup of $\mathfrak x$ in $G$ under the above action the Galois group of the system $(G_{\mathcal K},\partial_{\vec A})$ at $\mathfrak x$, and denote it by $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. \end{definition} \begin{proposition}\label{C3PROalgebraicgroup} $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ is an algebraic subgroup of $G$. \end{proposition} \begin{proof} Denote by $H_{\mathfrak x}$ the Galois group at $\mathfrak x$. Let us consider the projection $\pi_1$ from $G_{\mathcal K}$ to $G$ induced by the extension $\mathcal C\subset \mathcal K$. Denote by $x$ the point $\pi_1(\mathfrak x)$, and let $U$ be an affine neighborhood of $x$. Then $U = G \setminus Y$ with $Y$ closed in $G$. $U_{\mathcal K}$ is an affine neighborhood of $\mathfrak x$ in $G_{\mathcal K}$.
We have that the ring of regular functions on $U_{\mathcal K}$ is the tensor product $\mathcal O_{G}(U)\otimes_{\mathcal C} \mathcal K$. We identify $\mathfrak x$ with a maximal prime differential ideal $\mathfrak x\subset \mathcal O_G(U)\otimes_{\mathcal C}\mathcal K$. Let us consider a $\mathcal C$-point $\sigma$ of $G$. Then, for each $f\in \mathcal O_G(U)\otimes_{\mathcal C}\mathcal K$, the right translate $R_{\sigma}^\sharp(f)$ is in $\mathcal O_G(U\cdot \sigma^{-1})\otimes_{\mathcal C}\mathcal K$. The morphism $$\pi_2\colon G \to G, \quad \sigma \mapsto R_{\sigma}(x),$$ is algebraic. Let $W$ be the complement in $G$ of $\pi_2^{-1}(Y)$, $$W = G \setminus \pi_2^{-1}(Y);$$ then $W$ is an open subset of $G$ satisfying: \begin{enumerate} \item[(a)] for all $\sigma\in W(\mathcal C)$, $x\in U\cap U\cdot \sigma^{-1}$, \item[(b)] $H_{\mathfrak x}\subset W$. \end{enumerate} We will prove that the equations of $H_{\mathfrak x}$ in $W$ are algebraic. Let $W_1$ be an affine open subset of $W$. Let $\{\xi_1,\ldots, \xi_r\}$ be a system of generators of $\mathcal O_G(W_1)$ as a $\mathcal C$-algebra. The composition $$\pi_3\colon U \times_{\mathcal C} W_1 \to G, \quad (y, \sigma)\mapsto y\cdot\sigma,$$ is algebraic, and it induces a morphism, $$\pi_3^\sharp \colon \mathcal O_{G,x} \to (\mathcal O_{G}(U)\otimes_{\mathcal C} \mathcal O(W_1))_{\pi_3^{-1}(x)},$$ so that for each $f\in \mathcal O_{G,x}$, $\pi_3^{\sharp}(f) = F(\xi)$ is a rational function in the $\xi_i$ with coefficients in $\mathcal O_{G,x}$. We identify $\mathfrak x$ with a prime ideal of $\mathcal O_{G}(U)\otimes_{\mathcal C}\mathcal K$.
We consider a system of generators, $$\mathfrak x = (\eta_1,\ldots,\eta_r), \quad \eta_i\in\mathcal O_{G}(U)\otimes_{\mathcal C}\mathcal K.$$ Property (b) says that, through the natural inclusion, $$j\colon \mathcal O_{G}(U) \otimes_{\mathcal C} \mathcal K \to (\mathcal O_{G}(U)\otimes_{\mathcal C} \mathcal O(W_1))_{\pi_3^{-1}(x)}\otimes_{\mathcal C} \mathcal K,$$ $j(\mathfrak x)$ generates a nontrivial ideal of $(\mathcal O_{G}(U)\otimes_{\mathcal C} \mathcal O(W_1))_{\pi_3^{-1}(x)}\otimes_{\mathcal C} \mathcal K$, and then we have a commutative diagram: $$\xymatrix{\mathcal O_{G}(U) \otimes_{\mathcal C} \mathcal K \ar[rr]\ar[d]& & (\mathcal O_{G}(U)\otimes_{\mathcal C} \mathcal O(W_1))_{\pi_3^{-1}(x)}\otimes_{\mathcal C} \mathcal K \ar[d]^-{\pi_4} \\ \kappa(\mathfrak x) \ar[rr] & & (\kappa(\mathfrak x) \otimes_{\mathcal C} \mathcal O(W_1))_{\pi_3^{-1}(x)}}.$$ An element $\sigma\in W_1$ stabilizes $\mathfrak x$ if and only if $R_{\sigma}^\sharp(\eta_i) \in \mathfrak x$ for every $i$, and this is so if and only if $\pi_4(j(\eta_i)) = 0$ for $i=1,\ldots, r$. Let us consider a basis $\{e_\lambda\}_{\lambda\in\Lambda}$ of $\kappa(\mathfrak x)$ over $\mathcal C$. For each $i$, we have a finite sum: $$\pi_4(j(\eta_i)) = \frac{\sum_{\alpha} G_{i\alpha}(\xi)e_\alpha}{\sum_{\beta} H_{i\beta}(\xi)e_\beta},$$ and then the $G_{i\alpha}(\xi)\in \mathcal O(W_1)$ are the algebraic equations of $H_{\mathfrak x}$ in $W_1$. \end{proof} \begin{remark}\label{RemarkGaloisK} Let $\mathfrak x$ be a Kolchin closed differential point as above, and $H\subset G$ the Galois group of $(G_{\mathcal K},\partial_{\vec A})$ at $\mathfrak x$.
Then $H_{\mathcal K} = H \times_{\mathcal C}\ensuremath{\mbox{\rm Spec}}(\mathcal K)$ is the stabilizer subgroup of $\overline{\{\mathfrak x\}}$, the Zariski closure of $\mathfrak x$, under the right translation action: $$G_{\mathcal K}\times_{\mathcal K} G_{\mathcal K} \to G_{\mathcal K}.$$ However, the morphisms $R_{\sigma}$ for $\sigma\in H_{\mathcal K}$ are not in general morphisms of schemes with derivation. Similarly, for any field extension $\mathcal K\subset \mathcal L$, $H_{\mathcal L}\subset G_{\mathcal L}$ is the stabilizer group of $\overline{\pi^{-1}(\mathfrak x)}$, the Zariski closure of the preimage of $\mathfrak x$, where $\pi$ is the natural projection from $G_{\mathcal L}$ to $G_{\mathcal K}$. This means that $H_{\mathcal L}$ stabilizes the fiber, in the following sense: for each $\mathcal L$-point $\sigma\in H_{\mathcal L}$, $R_{\sigma}\colon G_{\mathcal L}\to G_{\mathcal L}$ induces, $$R_{\sigma}|_{\overline{\pi^{-1}(\mathfrak x)}}\colon \overline{\pi^{-1}(\mathfrak x)}\to \overline{\pi^{-1}(\mathfrak x)}.$$ \end{remark} \begin{proposition}\label{C3PROcongugated} Consider two Kolchin closed differential points $\mathfrak x, \mathfrak y$ in $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K}, \partial_{\vec A})$. The groups $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ and $\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(G_{\mathcal K},\partial_{\vec A})$ are conjugate, hence isomorphic, algebraic subgroups of $G$. \end{proposition} \begin{proof} The group of $\mathcal C$-points of $G$ acts transitively on the set of closed differential points. Hence, there exists $\sigma\in G(\mathcal C)$ with $\mathfrak x \cdot \sigma = \mathfrak y$, and then $H_{\mathfrak x} \cdot \sigma = \sigma \cdot H_{\mathfrak y}$. \end{proof} \begin{theorem}\label{C3THEsne} The Galois extensions associated to $(G_{\mathcal K},\partial_{\vec A})$ are strongly normal extensions.
\end{theorem} \begin{proof} Let us consider a Galois extension $\mathcal K \subset \mathcal L$. Thus, $\mathcal L$ is the rational field of a certain Kolchin closed differential point that we denote by $\mathfrak x$. Let $\sigma\in G_{\mathcal L}$ be the fundamental solution associated to $\mathfrak x$. We have that $\sigma$ projects onto $\mathfrak x$ and the gauge transformation $L_{\sigma^{-1}}$ is a splitting morphism. We define the morphism $\psi$ of schemes with derivation through the following commutative diagram: $$\xymatrix{(G_{\mathcal L}, \partial_{\vec A})\ar[rr]^-{\pi} & & (G_{\mathcal K},\partial_{\vec A}) \\ (G_{\mathcal L}, \partial) = G \times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal L),\partial) \ar[u]^-{L_{\sigma}}\ar[urr]_-{\psi}}$$ Denote by $H$ the Galois group at $\mathfrak x$. We have that $(H_{\mathcal L},\partial)\subset (G_{\mathcal L},\partial)$ is a closed subscheme with derivation. The group $H_\mathcal L$ is the preimage of $H$ by the projection from $G_{\mathcal L}$ to $G$. By Remark \ref{RemarkGaloisK}, $H_{\mathcal L}$ is the stabilizer of $\overline{\pi^{-1}(\mathfrak x)}$ in $G_{\mathcal L}$. It means that for any point $z$ of $G_{\mathcal L}$ whose projection is adherent to $\mathfrak x$ and any $\mathcal L$-point $\tau$ of $H_{\mathcal L}$, the right translate $z\cdot\tau$ is also adherent to $\mathfrak x$. In particular we have that $ \psi(\tau) = \mathfrak x$, and then $$(H_{\mathcal L},\partial) \subset \overline{\psi^{-1}(\mathfrak x)}.$$ Conversely, let us consider an $\mathcal L$-point $\tau \in \psi^{-1}(\mathfrak x)$. Therefore $\pi(\sigma\cdot \tau)$ is adherent to $\mathfrak x$.
The following diagram is commutative: $$\xymatrix{G_{\mathcal L} \times_{\mathcal L} G_{\mathcal L} \ar[rr] \ar[d] & & G_{\mathcal L}\ar[d]\\ G_{\mathcal K} \times_{\mathcal K} G_{\mathcal K} \ar[rr] & & G_{\mathcal K}}$$ We deduce that, for any other preimage $\bar\sigma$ of $\mathfrak x$ by $\pi$, the right translate $\bar\sigma\cdot\tau$ also projects onto $\overline{\{\mathfrak x\}}$. Thus, $\tau$ stabilizes $\overline{\pi^{-1}(\mathfrak x)}$, so that $\tau\in (H_{\mathcal L},\partial)$. Finally we have the identity: $$\psi^{-1}(\mathfrak x) = (H_{\mathcal L},\partial) = H\times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal L),\partial).$$ On the other hand we apply the affine stalk formula (Proposition \ref{CBformula}, which comes from the classical stalk formula, Theorem \ref{StalkFormula}, in Appendix \ref{ApA}) to $\mathfrak x$. We obtain the isomorphism: $$\pi^{-1}(\mathfrak x) \simeq (\ensuremath{\mbox{\rm Spec}}(\mathcal L\otimes_{\mathcal K} \mathcal L), \partial).$$ From the definition of $\psi$ we know that $L_{\sigma}$ gives us an isomorphism between the fibers $\pi^{-1}(\mathfrak x)$ and $\psi^{-1}(\mathfrak x)$. This restricted morphism $L_\sigma|_{(H_{\mathcal L},\partial)}$ is a splitting morphism $$\xymatrix{\overline{(\ensuremath{\mbox{\rm Spec}}(\mathcal L\otimes_{\mathcal K} \mathcal L), \partial)}\ar[rr]^-{\pi} & & \{\mathfrak x\} \\ H \times_{\mathcal C}(\ensuremath{\mbox{\rm Spec}}(\mathcal L), \partial) \ar[u]^-{L_\sigma|_{(H_{\mathcal L},\partial)}} \ar[urr]_-{\psi}}$$ of the tensor product $\mathcal L\otimes_{\mathcal K}\mathcal L$. Every differential point $\tau\in \overline{(\ensuremath{\mbox{\rm Spec}}(\mathcal L\otimes_{\mathcal K} \mathcal L), \partial)}$ must be in the preimage of $\mathfrak x$, because of the maximality of $\mathfrak x$ as a differential point of $G_{\mathcal L}$.
It follows that $\ensuremath{\mbox{\rm Diff}}(\ensuremath{\mbox{\rm Spec}}(\mathcal L\otimes_{\mathcal K} \mathcal L), \partial) = \ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K} \mathcal L)$. We thus obtain an isomorphism $$\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L\otimes_{\mathcal K} \mathcal L) \to H\times_{\mathcal C}\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L),$$ and it follows that $\mathcal K \subset \mathcal L$ is strongly normal. \end{proof} \begin{remark}\label{C3REM3.2.20} Following \cite{Kov2}, $\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L \otimes_{\mathcal K}\mathcal L)$ is the set of admissible $\mathcal K$-isomorphisms of $\mathcal L$, modulo generic specialization. In the case of a strongly normal extension $\mathcal K\subset \mathcal L$ the space of constants $\ensuremath{\mbox{\rm Const}}(\ensuremath{\mbox{\rm DiffSpec}}(\mathcal L \otimes_{\mathcal K}\mathcal L))$ is an algebraic group and its closed points correspond to differential $\mathcal K$-algebra automorphisms of $\mathcal L$. Let us consider the previous splitting morphism, $$H \times_{\mathcal C} (\ensuremath{\mbox{\rm Spec}}(\mathcal L), \partial) \to (\ensuremath{\mbox{\rm Spec}}(\mathcal L \otimes_{\mathcal K}\mathcal L), \partial);$$ if we apply the constant functor $\ensuremath{\mbox{\rm Const}}$, we obtain an isomorphism of $\mathcal C$-algebraic varieties, $$H \xrightarrow{s} \ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K),$$ where $H$ and $\ensuremath{\mbox{\rm Gal}}(\mathcal L /\mathcal K)$ are algebraic groups. For each $\tau\in H$ we have $\mathfrak x\cdot \tau = \mathfrak x$, and hence an automorphism $R_{\tau}^\sharp \colon \mathcal L \to \mathcal L$. We have $R_{\tau}^\sharp \circ R_{\bar\tau}^\sharp = R_{\tau\bar\tau}^\sharp$, and this realizes $H$ as a group of differential $\mathcal K$-algebra automorphisms of $\mathcal L$.
\end{remark} \begin{theorem}\label{C3THEauto} The Galois group $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ is the group of differential $\mathcal K$-algebra automorphisms of the Galois extension $\mathcal K \subset \kappa(\mathfrak x)$. \end{theorem} \begin{proof} Denote, as above, by $H\subset G$ the Galois group and by $\mathcal L$ the Galois extension $\kappa(\mathfrak x)$. We consider the isomorphism $s$ stated in Remark \ref{C3REM3.2.20}. Let us prove that $s$ is an isomorphism of algebraic groups over $\mathcal C$, and that for $\tau\in H(\mathcal C)$, $s(\tau)$ is the automorphism $R^{\sharp}_{\tau}$ of $\mathcal L$ induced by the right translation $R_\tau$. We already know that $s$ is a scheme isomorphism. We have to prove that it is a group morphism. For $\tau\in H$, let us compute $s(\tau)$. First, let us denote by $\bar\tau$ the point of $H_{\mathcal L}$ obtained from $\tau$ after the base extension from $\mathcal C$ to $\mathcal L$. It is a differential point of $(H_{\mathcal L},\partial)$. Then $L_{\sigma}(\bar\tau) = R_{\tau}(\sigma)\in\pi^{-1}(\mathfrak x)$. We identify $R_{\tau}(\sigma)$ with a differential point of $\pi^{-1}(\mathfrak x)$. By the stalk formula we have that $\pi^{-1}(\mathfrak x) = (\ensuremath{\mbox{\rm Spec}}(\mathcal O_{G_{\mathcal K},\mathfrak x}\otimes_{\mathcal K}\mathcal L),\partial)$. We identify $R_\tau(\sigma)$ with a prime differential ideal of $\mathcal O_{G_{\mathcal K},\mathfrak x}\otimes_{\mathcal K}\mathcal L$.
Because $\pi(R_{\tau}(\sigma)) = \mathfrak x$, the morphism $R_{\tau}(\sigma)^\sharp$ factorizes, $$\xymatrix{\mathcal O_{G_{\mathcal K},\mathfrak x}\otimes_{\mathcal K}\mathcal L \ar[d]^-{\mathfrak x^{\sharp}\otimes Id} \ar[rrd]^-{R_\tau(\sigma)^{\sharp}} \\ \kappa(\mathfrak x) \otimes_{\mathcal K} \mathcal L \ar[rr]_-{\psi} & & \mathcal L}$$ and then the kernel of $\psi$ is the prime differential ideal defining the automorphism $s(\tau)$, $$\psi(a\otimes b) = s(\tau)(a)\cdot b.$$ Let us consider the right translation $R_{\tau}$, $$\xymatrix{G_{\mathcal L} \ar[r]^-{R_\tau}\ar[d] & G_{\mathcal L} \ar[d] \\ G_{\mathcal K} \ar[r] & G_{\mathcal K}}\quad\quad\xymatrix{\sigma \ar[r]^-{R_\tau}\ar[d]& L_{\sigma}(\bar\tau) \ar[d] \\ \mathfrak x \ar[r] & \mathfrak x}$$ We have a commutative diagram between the local rings, $$\xymatrix{\mathcal L & \mathcal L \ar[l]_-{Id} \\ \mathcal O_{G_{\mathcal L},\pi^{-1}(\mathfrak x)} \ar[u]^-{\sigma^\sharp} & \mathcal O_{G_{\mathcal L},\pi^{-1}(\mathfrak x)} \ar[l]_-{R_{\tau}^{\sharp}} \ar[u]_-{R_\tau(\sigma)^\sharp} \\ \mathcal O_{G_{\mathcal K},\mathfrak x} \ar[u] & \mathcal O_{G_{\mathcal K},\mathfrak x} \ar[l]^-{R_{\tau}^\sharp} \ar[u] },$$ where $\mathcal O_{G_{\mathcal L},\pi^{-1}(\mathfrak x)} = \mathcal O_{G_{\mathcal K},\mathfrak x}\otimes_{\mathcal K}\mathcal L$, and the morphism $R_{\tau}^\sharp$ on these rings is defined as follows: $$\mathcal O_{G_{\mathcal K},\mathfrak x}\otimes_{\mathcal K}\mathcal L \to \mathcal O_{G_{\mathcal K},\mathfrak x}\otimes_{\mathcal K}\mathcal L, \quad a\otimes b \mapsto R_{\tau}^\sharp(a)\cdot b.$$ It is then clear that the morphism $\psi$ defined above sends $$\psi\colon (a\otimes b) \mapsto R_{\tau}^\sharp(a)\cdot b,$$ so that its kernel defines the automorphism $R_{\tau}^\sharp$, and we finally conclude that $R_{\tau}^\sharp = s(\tau)$.
\end{proof} \index{Galois!correspondence} \subsection{Galois Correspondence} There is a Galois correspondence for strongly normal extensions (Theorem \ref{ThGaloisCorrespondence}). It is naturally transported to the context of algebraic automorphic systems. Let $\mathcal L$ be a Galois extension, which is the rational field $\kappa(\mathfrak x)$ of a Kolchin closed point $\mathfrak x$ as above. Let $\mathcal F$ be an intermediate differential extension, $$\mathcal K \subset \mathcal F \subset \mathcal L.$$ We make base extensions sequentially so that we obtain a sequence of schemes with derivations, $$(G_{\mathcal L},\partial_{\vec A}) \to (G_{\mathcal F},\partial_{\vec A}) \to (G_{\mathcal K},\partial_{\vec A}),$$ and the associated sequence of differential schemes, $$\ensuremath{\mbox{\rm Diff}}(G_{\mathcal L},\partial_{\vec A}) \to \ensuremath{\mbox{\rm Diff}}(G_{\mathcal F},\partial_{\vec A}) \to \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}).$$ Let $\sigma\in G(\mathcal L)$ be the fundamental solution induced by $\mathfrak x$. We obtain a sequence of differential points: $$\sigma \mapsto \mathfrak y \mapsto \mathfrak x.$$ They are Kolchin closed and $\sigma$ is the fundamental solution associated to $\mathfrak x$ and $\mathfrak y$ (Lemma \ref{lmFSBaseChange}). The stabilizer subgroup of $\mathfrak y$ is a subgroup of the stabilizer subgroup of $\mathfrak x$.
We have inclusions of algebraic groups, $$\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(G_{\mathcal F},\partial_{\vec A}) \subset \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}) \subset G.$$ In particular we have that $\mathcal K \subset \mathcal F$ is a strongly normal extension if and only if $$\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(G_{\mathcal F},\partial_{\vec A}) \lhd \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}).$$ \begin{proposition} Assume that $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ is the whole group $G$, and $\mathcal K\subset \mathcal F$ is a strongly normal extension. Then the quotient group $$\bar G = G/\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(G_{\mathcal F},\partial_{\vec A})$$ exists. Let $\vec B$ be the projection of $\vec A$ in $\mathcal R(\bar G)\otimes_{\mathcal C}\mathcal K$. Then, there is a unique closed differential point $\mathfrak z\in \ensuremath{\mbox{\rm Diff}}(\bar G_{\mathcal K},\partial_{\vec B})$, and, $$\ensuremath{\mbox{\rm Gal}}_{\mathfrak z}(\bar G_{\mathcal K},\partial_{\vec B}) = \bar G.$$ \end{proposition} \begin{proof} The quotient is realized as the group of automorphisms of the differential $\mathcal K$-algebra $\mathcal F$. The extension $\mathcal K \subset\mathcal F$ is strongly normal, and then this group is algebraic by the Galois correspondence (Theorem \ref{ThGaloisCorrespondence}). The induced morphism $$\pi\colon \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A}) \to \ensuremath{\mbox{\rm Diff}}(\bar G_{\mathcal K},\partial_{\vec B})$$ restricts to the differential points, and it is surjective. The hypothesis $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})=G$ implies that $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ consists of the single point $\{\mathfrak x\}$, and then $\ensuremath{\mbox{\rm Diff}}(\bar G_{\mathcal K},\partial_{\vec B}) = \{\mathfrak z\}$.
Hence, $\mathfrak z$ is the generic point of $\bar G_{\mathcal K}$ and the Galois group is the whole group. \end{proof} Conversely, let us consider an algebraic subgroup $H\subset \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. Then $H$ is a subgroup of differential $\mathcal K$-algebra automorphisms of $\mathcal L$. Let $\mathcal F =\mathcal L^H$ be its field of invariants. We have again a sequence of non-autonomous algebraic dynamical systems $$(G_{\mathcal L},\partial_{\vec A}) \to (G_{\mathcal F}, \partial_{\vec A}) \to (G_{\mathcal K}, \partial_{\vec A}).$$ Let again $\sigma$ be the fundamental solution induced by $\mathfrak x$; we have the sequence of closed differential points, $$\sigma\mapsto \mathfrak y \mapsto \mathfrak x.$$ \begin{proposition}\label{Prop318} Let us consider an intermediate differential field, $$\mathcal K\subset \mathcal F \subset \mathcal L,$$ as above, and $H = \ensuremath{\mbox{\rm Aut}}(\mathcal L/\mathcal F)$. Then \begin{enumerate} \item[(a)] $H$ is the Galois group $\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(G_{\mathcal F},\partial_{\vec A}) \subset \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. \item[(b)] $\mathcal K\subset\mathcal F$ is strongly normal if and only if $H\lhd \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. In such case $\ensuremath{\mbox{\rm Aut}}(\mathcal F/\mathcal K) = \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})/H$. \end{enumerate} \end{proposition} \begin{proof} By considering the identification of the Galois group with the group of automorphisms, the result is a direct translation of the Galois correspondence for strongly normal extensions (see \cite{Kov2} Theorem 20.5, Theorem \ref{ThGaloisCorrespondence} in this text). \end{proof} In particular, each algebraic group admits a canonical normal subgroup of finite index: the connected component of the identity.
Let $\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ be the connected component of the identity of $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ and, $$\ensuremath{\mbox{\rm Gal}}^1_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}) = \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})/\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}),$$ which is a finite group. In this case we have: \begin{enumerate} \item[(a)] The invariant field $\mathcal L^{\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})}$ is the relative algebraic closure ${\mathcal K^\circ}$ of $\mathcal K$ in $\mathcal L$. \item[(b)] $\mathcal K \subset {\mathcal K^\circ}$ is an algebraic Galois extension of Galois group $\ensuremath{\mbox{\rm Gal}}^1_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. \item[(c)] $\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(G_{{\mathcal K^\circ}},\partial_{\vec A}) = \ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. \end{enumerate} Thus we can state: \begin{proposition} $\mathcal K$ is relatively algebraically closed in $\mathcal L$ if and only if its Galois group is connected. \end{proposition} \subsection{Galois Correspondence and Group Morphisms} Here, we relate the Galois correspondence and the projection of automorphic vector fields through algebraic group morphisms. It is self-evident that a group morphism $\pi\colon G\to \bar G$ sends an automorphic system $\vec A$ in $G$ with coefficients in $\mathcal K$ to an automorphic system $\pi(\vec A)$ in $\bar G$ with coefficients in $\mathcal K$. Furthermore, we know that $\pi(\vec A)$ is an automorphic system in the image of $\pi$, which is a subgroup of $\bar G$. By restricting our analysis to this image, we can assume that $\pi$ is a surjective morphism.
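A classical instance may help to fix ideas; we only sketch it, assuming that $\vec A$ is attached to a linear differential system as in the example of the general linear group above, and using Liouville's formula for the Wronskian. \begin{example} Take $G = GL(n,\mathcal C)$, with $\partial_{\vec A}$ induced by a linear system $\partial Y = AY$, $A$ a matrix with entries in $\mathcal K$, and let $\pi = \det\colon GL(n,\mathcal C) \to GL(1,\mathcal C)$ be the determinant, a surjective morphism of algebraic groups with $\ker(\pi) = SL(n,\mathcal C)$. The projected automorphic system $\pi(\vec A)$ corresponds to the scalar equation $$\partial w = \mbox{\rm tr}(A)\, w,$$ satisfied, by Liouville's formula, by the Wronskian $w = \det(Y)$. The theorem below then states that the extension of $\mathcal K$ generated by $w$ is a strongly normal intermediate extension of $\mathcal K \subset \mathcal L$, whose Galois group is the image under $\det$ of the Galois group of the original system. \end{example}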
\begin{theorem}\label{ThGM} Let $\pi \colon G\to \bar G$ be a surjective morphism of algebraic groups, and $\vec B$ the projected automorphic system $\pi(\vec A)$. Then: \begin{itemize} \item[(1)] $\mathfrak y = \pi(\mathfrak x)$ is a closed differential point of $\ensuremath{\mbox{\rm Diff}}(\bar G_{\mathcal K},\partial_{\vec B})$. \item[(2)] $\kappa(\mathfrak y)$ is a strongly normal intermediate extension of $\mathcal K \subset \kappa(\mathfrak y) \subset \mathcal L$. \item[(3)] $\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(\bar G_{\mathcal K},\partial_{\vec B}) = \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})/(\ker (\pi)\cap \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}))$. \item[(4)] Let $\mathfrak z$ be a Kolchin closed point of $(G_{\kappa(\mathfrak y)},\partial_{\vec A})$ in the fiber of $\mathfrak x$. Then $\ensuremath{\mbox{\rm Gal}}_{\mathfrak z}(G_{\kappa(\mathfrak y)},\partial_{\vec A}) = \ker(\pi) \cap \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. \end{itemize} \end{theorem} \begin{proof} (1) Let $\mathfrak s$ be a closed point of $\ensuremath{\mbox{\rm Diff}}(\bar G_{\mathcal K},\partial_{\vec B})$ adherent to $\mathfrak y$. Then $\pi^{-1}(\mathfrak s)$ is a closed subset of $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ and it contains a closed point $\mathfrak z$. $G(\mathcal C)$ acts transitively on the set of closed points, and then there is $\tau\in G(\mathcal C)$ such that $\mathfrak x = \mathfrak z \cdot \tau$. Thus, $\mathfrak y = \mathfrak s\cdot \pi(\tau)$, so that $\mathfrak y$ is closed, $\mathfrak s = \mathfrak y$, and furthermore $\pi(\tau) \in \ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(\bar G_{\mathcal K},\partial_{\vec B})$.
(2) $\pi^\sharp\colon \kappa(\mathfrak y)\to \mathcal L$ is a differential $\mathcal K$-algebra morphism, and $\kappa(\mathfrak y)$ is realized as an intermediate extension $\mathcal K \subset \kappa(\mathfrak y) \subset \mathcal L$. It is strongly normal if and only if the subgroup of $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ fixing $\kappa(\mathfrak y)$ is a normal subgroup. We identify $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ with $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K}, \partial_{\vec A})$. Then $\tau$ fixes $\kappa(\mathfrak y)$ if and only if $\pi(\tau) = e$. Hence the subgroup fixing $\kappa(\mathfrak y)$ is $\ker(\pi)\cap \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K}, \partial_{\vec A})$. By hypothesis, $\ker(\pi)$ is a normal subgroup of $G$, and then its intersection with $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K}, \partial_{\vec A})$ is a normal subgroup. Finally, we obtain (3) and (4) by the Galois correspondence. \end{proof} \subsection{Lie Extension Structure on Intermediate Fields} The differential field approach to Lie-Vessiot systems was initiated by K. Nishioka, in terms of the notions of rational dependence on arbitrary constants and Lie extensions (see Definitions \ref{DefRationalDependence} and \ref{DefLieExtension}). Here we relate our results with these notions. \begin{theorem}\label{Nihioka} Assume one of the following: \begin{enumerate} \item[(a)] $\mathcal K$ is algebraically closed. \item[(b)] The Galois group of $(G_{\mathcal K},\partial_{\vec A})$ is $G$. \end{enumerate} Let $y$ be a particular solution of $(M_{\mathcal K}, \partial_{\vec X})$ with coefficients in a differential field extension $\mathcal K\subset \mathcal R$. Assume that $\mathcal R$ is generated by $y$. Then: \begin{enumerate} \item[(i)] $\mathcal K \subset \mathcal R$ depends rationally on arbitrary constants. \item[(ii)] $\mathcal K \subset \mathcal R$ is a Lie extension.
\end{enumerate} \end{theorem} \begin{proof} (i) $\mathcal R$ is an intermediate extension of the splitting field of the automorphic system, which is a strongly normal extension. This is a stronger condition than that of Definition \ref{DefRationalDependence}; thus $\mathcal R$ depends rationally on arbitrary constants. (ii) If $\mathcal K$ is algebraically closed, then the result comes directly from Theorem \ref{ThNishioka2}. For case $(b)$, some analysis of the infinitesimal structure of $\mathcal R$ is needed. If the Galois group is $G$, then there are no non-trivial differential points in $G_{\mathcal K}$, nor in $M_{\mathcal K}$. Then $\mathcal R$ coincides with $\mathcal M(M_{\mathcal K})$, the field of meromorphic functions on $M_{\mathcal K}$. Fundamental vector fields of the action of $G$ on $M$ induce derivations of the corresponding fields of meromorphic functions, so that we have a Lie algebra morphism, $$\mathcal R(G) \to \ensuremath{\mbox{\rm Der}}_{\mathcal K}(\mathcal R), \quad \vec A_i \mapsto \vec X_{i},$$ and the derivation $\partial$ of $\mathcal R$ is seen in $\mathcal M(M_{\mathcal K})$ as the Lie-Vessiot system $$\bar \partial = \partial + \sum_{i=1}^r f_i \vec X_i.$$ From this we have that $$[\bar \partial, \mathcal R(G)] \subset \mathcal R(G) \otimes_{\mathcal C} \mathcal K,$$ and because the vector fields $\vec X_{i}$ span the tangent vector space to $M$, the morphism, $$\mathcal R(G) \otimes_{\mathcal C} \mathcal R \to \ensuremath{\mbox{\rm Der}}_{\mathcal K}(\mathcal R)$$ is surjective. According to Definition \ref{DefLieExtension} we conclude that $\mathcal R$ is a Lie extension. \end{proof} \section{Algebraic Reduction and Integration}\label{C4} Here we present the algebraic theory of reduction and integration of algebraic automorphic and Lie-Vessiot systems. Our main tool is an algebraic version of Lie's reduction method, which we call \emph{Lie-Kolchin} reduction.
Once we have developed this tool we explore different applications. \subsection{Lie-Kolchin Reduction Method} In \cite{BMCharris}, when discussing the general topic of analytic Lie-Vessiot systems, we presented Lie's method for reducing an automorphic equation to certain subgroups, once a solution of an associated Lie-Vessiot system is known. This method is local, because it is assumed that we can choose a suitable curve in the group for the application of the algorithm. A germ of such a curve exists, but it is not true that a suitable global curve exists in the general case. In the algebraic realm we will find obstructions to the applicability of this method, closely related to the structure of principal homogeneous spaces over a non-algebraically closed field, and hence to Galois cohomology. We will show that the application of Lie's method in the algebraic case leads us directly to Kolchin's theorem on the reduction of a linear differential system to the Lie algebra of its Galois group. Because of this, we adopt the name \emph{Lie-Kolchin reduction method}. \subsection{Lie-Kolchin Reduction} \emph{From now on, let us consider a differential field $\mathcal K$ of characteristic zero. Its field of constants is $\mathcal C$, which we assume to be algebraically closed. Let $G$ be an algebraic group over $\mathcal C$, and let $\vec A$ be an algebraic automorphic vector field in $G$ with coefficients in $\mathcal K$. We also fix a Kolchin closed point $\mathfrak x$ of $\ensuremath{\mbox{\rm Diff}}(G_{\mathcal K}, \partial_{\vec A})$ and denote by $\mathcal L$ its associated Galois extension.} \begin{lemma}\label{LmPrincipalStalk} Let $G'\subset G$ be an algebraic subgroup, and let $M$ be the quotient homogeneous space $G/G'$. Then: \begin{enumerate} \item[(a)] $M_{\mathcal K} = G_{\mathcal K}/G'_{\mathcal K}$. \item[(b)] Let us consider the natural projection morphism $\pi_{\mathcal K}\colon G_{\mathcal K} \to M_{\mathcal K}$.
For each rational point $x\in M_{\mathcal K}$, $\pi_{\mathcal K}^{-1}(x) \subset G_{\mathcal K}$ is a homogeneous space of group $G'_{\mathcal K}$. \end{enumerate} \end{lemma} \begin{proof} (a) Since $\mathcal C$ is algebraically closed, the geometric quotient is universal; (a) is then the fundamental property of universal geometric quotients (see \cite{Sa}). (b) The isotropy subgroup $H_x$ of $x$ is an algebraic subgroup conjugate, hence isomorphic, to $G'_\mathcal K$. The action of $(H_x)_{\mathcal K}$ on $G_{\mathcal K}$ preserves the stalk $\pi_{\mathcal K}^{-1}(x)$, $$\psi\colon (H_x)_{\mathcal K} \times_{\mathcal K} \pi_{\mathcal K}^{-1}(x) \to \pi_{\mathcal K}^{-1}(x);$$ the induced morphism $$(\psi\times Id) \colon (H_x)_{\mathcal K} \times_{\mathcal K} \pi_{\mathcal K}^{-1}(x) \to \pi_{\mathcal K}^{-1}(x)\times_{\mathcal K} \pi_{\mathcal K}^{-1}(x)$$ is the restriction of the isomorphism $$G_{\mathcal K} \times_{\mathcal K} G_{\mathcal K} \to G_{\mathcal K} \times_{\mathcal K} G_{\mathcal K}, \quad (\tau,\sigma) \mapsto (\tau\cdot\sigma, \sigma),$$ and then it is an isomorphism. \end{proof} Let $M$ be a homogeneous space over $G$, and $\vec X$ the Lie-Vessiot vector field induced in $M$ by the automorphic vector field $\vec A$. Let us fix a rational point $x_0$ of $M$ and denote by $H_{x_0}$ the isotropy subgroup at $x_0$. \begin{lemma}\label{lmSolutionx0} Assume that $x_0\in M$ is a constant solution of $(M_{\mathcal K}, \partial _{\vec X})$. Then: $$\vec A \in \mathcal R(H_{x_0})\otimes_{\mathcal C} \mathcal K.$$ \end{lemma} \begin{proof} There is a solution $\tau$ of $\vec A$ with coefficients in $\mathcal L$ such that $x_0 = \tau\cdot x_0$.
Therefore $\tau \in (H_{x_0})_{\mathcal L}$ and its logarithmic derivative is an automorphic vector field in $H_{x_0}$, $$l\partial(\tau) \in \mathcal R(H_{x_0}) \otimes_{\mathcal C} \mathcal L.$$ Taking into account that $l\partial(\tau) = \vec A$, we obtain $\vec A \in \mathcal R(H_{x_0})\otimes_{\mathcal C}\mathcal K.$ \end{proof} \begin{theorem}[Main Result]\label{C4THE4.1.5} Let us assume that $(M_{\mathcal K}, \partial_{\vec X})$ has a solution $x$ with coefficients in $\mathcal K$. If $H^1(H_{x_0}, \mathcal K)$ is trivial, then there exists a gauge transformation $L_{\tau}$ of $G_{\mathcal K}$ that sends the automorphic vector field $\vec A$ to $$\vec B = \ensuremath{\mbox{\rm Adj}}_{\tau}(\vec A) + l\partial(\tau),$$ with $\vec B \in \mathcal R(H_{x_0}) \otimes_{\mathcal C} \mathcal K$ an automorphic vector field in $H_{x_0}$. \end{theorem} \begin{proof} Let us consider the canonical isomorphism $G/H_{x_0}\to M$ that sends the class $[\sigma]$ to $\sigma\cdot x_0$. Now, let us consider the base extended morphism, $$\pi\colon G_{\mathcal K} \to M_{\mathcal K}, \qquad \tau \mapsto \tau\cdot x_0.$$ We are under the hypothesis of Lemma \ref{LmPrincipalStalk} (b). Therefore the stalk $\pi^{-1}(x)$ is a principal homogeneous space of group $(H_x)_{\mathcal K}$, a subgroup of $G_{\mathcal K}$ conjugate to $(H_{x_0})_{\mathcal K}$. Because of the vanishing of the Galois cohomology, there exists a rational point $\tau_1\in \pi^{-1}(x)$, so that $\tau_1\cdot x_0 = x$. Define $\tau = \tau_1^{-1}$. Let us consider the gauge transformations, $$L_{\tau}\colon (G_{\mathcal K},\partial_{\vec A}) \to (G_{\mathcal K}, \partial_{\vec B}) \quad\quad L_\tau\colon (M_{\mathcal K},\partial_{\vec X}) \to (M_\mathcal K, \partial_{\vec Y}),$$ where $\vec Y$ is the Lie-Vessiot vector field in $M$ induced by $\vec B$. We have that $\tau\cdot x = x_0$ is a constant solution of $(M_{\mathcal K}, \partial_{\vec Y})$.
By Lemma \ref{lmSolutionx0}, $\vec B$ is an automorphic field in $H_{x_0}$. \end{proof} \begin{proposition}\label{PrpRationalX} Assume that there is a rational point $x_0\in M$ such that $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})\subset H_{x_0}$. Then there exists a rational solution $x\in M(\mathcal K)$ of $\vec X$. \end{proposition} \begin{proof} Let us consider the fundamental solution $\sigma$ associated to $\mathfrak x$. We consider it as an $\mathcal L$-point of $G$, $$\sigma\colon \ensuremath{\mbox{\rm Spec}}(\mathcal L)\to G_{\mathcal K}.$$ It is determined by the canonical morphism of \emph{taking values in $\sigma$}, $$\sigma^\sharp \colon \mathcal O_{G_{\mathcal K},\mathfrak x} \to \mathcal L = \kappa(\mathfrak x).$$ Now, let us consider the projection $\pi\colon G\to M$, $\tau\mapsto \tau\cdot x_0$. It induces a map $\pi\colon G_\mathcal K(\mathcal L) \to M_{\mathcal K}(\mathcal L)$. Let us consider $x = \pi(\sigma)$. This point $x$ is an $\mathcal L$-point of $M$, and thus it is a morphism $$x\colon \ensuremath{\mbox{\rm Spec}}(\mathcal L) \to M_{\mathcal K}.$$ Let $\bar x\in M_{\mathcal K}$ be the image of $x$; then $x$ is determined by the morphism $x^\sharp$ defined by the following composition: $$\xymatrix{\mathcal O_{M_{\mathcal K},\bar x} \ar[r]^-{\pi^\sharp}\ar[rrd]_-{x^\sharp} & \mathcal O_{G_{\mathcal K},\mathfrak x}\ar[rd]^-{\sigma^\sharp} \\ & & \mathcal L}$$ We are going to prove that $x$ is a rational point of $M_\mathcal K$. Let us consider $\tau \in \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$.
Therefore we have $R_{\tau}(x) = x$, and the following diagram is commutative: $$\xymatrix{\mathcal O_{M_{\mathcal K},\bar x} \ar[rrrd]^-{x^\sharp}\ar[rd]\ar[rddd]_-{x^\sharp}\\ & \mathcal O_{G_{\mathcal K},\mathfrak x}\ar[rr]_-{\sigma^{\sharp}}\ar[dd]^-{(\sigma\tau)^\sharp} & & \mathcal L \ar[lldd]^-{R_\tau^\sharp} \\ & &\\ & \mathcal L }$$ For each $f\in \mathcal O_{M_{\mathcal K},\bar x}$, we have $x^\sharp(f) = R_\tau^\sharp(x^\sharp(f))$. This equality holds for all $\tau\in \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. Hence, $x^\sharp(f)$ is an element of $\mathcal L$ that is invariant under any differential $\mathcal K$-algebra automorphism of $\mathcal L$. By virtue of the Galois correspondence, the fixed field of $\mathcal L$ under the action of ${\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})}$ is $\mathcal K$. Thus, $x^\sharp(f) \in \mathcal K$. \end{proof} \begin{theorem}\label{ThKolchinH} Let us consider an algebraic subgroup $G'$ of $G$ verifying: \begin{itemize} \item[(1)] $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K}, \partial_{\vec A} ) \subset G'$, \item[(2)] $H^1(G',\mathcal K)$ is trivial. \end{itemize} Then there exists a gauge isomorphism $L_{\tau}$ of $G$ with coefficients in $\mathcal K$ reducing the automorphic system $\vec A$ to an automorphic system in $G'$; that is, $$\vec B = \ensuremath{\mbox{\rm Adj}}_{\tau}(\vec A) + l\partial(\tau)$$ belongs to $\mathcal R(G')\otimes_{\mathcal C} \mathcal K$. \end{theorem} \begin{proof} By Proposition \ref{PrpRationalX} there exists a rational solution of the Lie-Vessiot system in $M = G/G'$ associated to $\vec A$. Theorem \ref{C4THE4.1.5} says that such a reduction exists. \end{proof} Denote by $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}^0(G_{\mathcal K},\partial_{\vec A})$ the connected component of the identity of the Galois group $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$.
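To fix ideas, in the classical linear case the reduction of Theorem \ref{ThKolchinH} takes the familiar form of a gauge change of a linear differential system. The following is a standard illustration, a sketch added for orientation and not part of the original argument.

```latex
% Standard linear instance (illustrative sketch, assumed example).
% For $G = GL_n(\mathcal C)$, an automorphic vector field with coefficients
% in $\mathcal K$ is a linear system $\partial Y = A\,Y$ with
% $A \in \mathfrak{gl}_n(\mathcal K)$, and a gauge transformation $L_\tau$,
% $\tau \in GL_n(\mathcal K)$, acts by
\vec B \;=\; \ensuremath{\mbox{\rm Adj}}_{\tau}(\vec A) + l\partial(\tau)
\qquad\longleftrightarrow\qquad
B \;=\; \tau A \tau^{-1} + (\partial\tau)\,\tau^{-1}.
% If the Galois group lies in the diagonal torus $G' \simeq (\mathcal C^*)^n$,
% then $H^1(G',\mathcal K)$ vanishes (Hilbert's Theorem 90), and the theorem
% yields a $\tau$ for which $B$ is diagonal: the system decouples into $n$
% scalar equations $\partial y_i = b_i\,y_i$, with $b_i \in \mathcal K$.
```

Here the identity $B = \tau A \tau^{-1} + (\partial\tau)\,\tau^{-1}$ is just the usual change of frame: if $\partial Y = AY$ and $Z = \tau Y$, then $\partial Z = BZ$.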
\begin{corollary} Let ${\mathcal K^\circ}$ be the relative algebraic closure of $\mathcal K$ in $\mathcal L$. Assume that $H^1(\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}),{\mathcal K^\circ})$ is trivial. Then there is a gauge transformation $L_\tau$, where $\tau$ has coefficients in ${\mathcal K^\circ}$, such that $$\vec B = \ensuremath{\mbox{\rm Adj}}_{\tau}(\vec A) + l\partial(\tau)$$ belongs to $\mathcal R(\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}))\otimes_{\mathcal C}{\mathcal K^\circ}$. \end{corollary} \begin{proof} We know that the Galois group of the automorphic system with coefficients in ${\mathcal K^\circ}$ is precisely $\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ (see, for instance, remark (c) in \cite{BM}, below Proposition 18). We then apply Theorem \ref{ThKolchinH}. \end{proof} \begin{corollary}\label{LmConnectedH} If $H^1(\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}),\mathcal K)$ is trivial then $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ is connected. \end{corollary} \begin{proof} If $H^1(\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}),\mathcal K)$ is trivial, then we can reduce the automorphic system to an automorphic system in $\mathcal R(\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})) \otimes_{\mathcal C}\mathcal K$. Note that $\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ and $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ have the same Lie algebra. Therefore the Galois group of the reduced equation, which is isomorphic to $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$, is contained in $\ensuremath{\mbox{\rm Gal}}^0_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$; hence the Galois group coincides with its identity component, and it is connected.
\end{proof} The following is an extension of the classical result of Kolchin on the reduction of a system of linear differential equations to the Lie algebra of its Galois group \cite{Ko1}. \index{theorem!Kolchin of reduction} \begin{theorem}[Kolchin]\label{C4THE4.1.10} Let us consider the relative algebraic closure ${\mathcal K^\circ}$ of $\mathcal K$ in $\mathcal L$. There is a gauge transformation $L_\tau$, where $\tau$ has coefficients in ${\mathcal K^\circ}$, such that $$\vec B = \ensuremath{\mbox{\rm Adj}}_{\tau}(\vec A) + l\partial(\tau)$$ belongs to $\mathcal R(\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A}))\otimes_{\mathcal C}{\mathcal K^\circ}$. \end{theorem} \begin{proof} Denote by $H$ the Galois group $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$. Let us consider $M = G/H$, and let us denote by $x_0\in M$ the origin, which is the class of $H$ in $M$. Let $\vec Y$ be the Lie-Vessiot vector field in $M$ associated to $\vec A$. By virtue of Proposition \ref{PrpRationalX}, the canonical projection $G(\mathcal L)\to M(\mathcal L)$ sends the fundamental solution $\sigma$ to a solution $x$ of $(M,\partial_{\vec Y})$ with coefficients in $\mathcal K$. Let us consider the projection: $$\pi\colon G_{\mathcal K}\to M_{\mathcal K}.$$ Lemma \ref{LmPrincipalStalk} says that the stalk $\pi^{-1}(x)$ is a principal homogeneous space modeled over the group $H_{\mathcal K}$. Let us denote by $P\subset G_{\mathcal K}$ such homogeneous space. Note that $P$ is $\overline{\{\mathfrak x\}}$, the closure of $\mathfrak x$ in the Zariski topology. We have the isomorphism, $$\psi\colon P\times_{\mathcal K} H_{\mathcal K} \to P \times_{\mathcal K} P, \quad (\tau, g) \to (\tau, \tau g).$$ Let $\tau$ be a closed point of $P$. Its residue field $\kappa(\tau)$ is an algebraic extension of $\mathcal K$. We have that $x = \tau\cdot x_0$. Thus, we can apply the Lie-Kolchin reduction method.
$L_{\tau^{-1}}$ is a gauge transformation with coefficients in $\kappa(\tau)$: $$L_{\tau^{-1}}\colon G_{\kappa(\tau)}\to G_{\kappa(\tau)},$$ that sends the automorphic vector field $\vec A$ to an automorphic vector field $\vec B$ in $H$ with coefficients in $\kappa(\tau)$. In order to finish the proof we have to see that $\kappa(\tau)$ is a subfield of the relative algebraic closure $\mathcal K^\circ$ of $\mathcal K$ in $\mathcal L$. It is enough to see that $\mathcal K \subset \kappa(\tau)$ is an intermediate differential extension of $\mathcal K\subset\mathcal L$; in that case, by the Galois correspondence, $\kappa(\tau)$ coincides with $\mathcal K^\circ$. Let us consider then the following base extension and natural projection, $$P_{\kappa(\tau)} = P \times_{\mathcal K} \ensuremath{\mbox{\rm Spec}}(\kappa(\tau)), \quad \quad \pi_1\colon P_{\kappa(\tau)}\to P.$$ The product $P_{\kappa(\tau)}$ is a principal homogeneous space modeled over $H_{\kappa(\tau)}$. Moreover, $\tau$ induces a rational point of $P_{\kappa(\tau)}$. Hence, the Galois cohomology class of $P_{\kappa(\tau)}$ is trivial, so that it is isomorphic to $H_{\kappa(\tau)}$ as a homogeneous space. In particular, $P_{\kappa(\tau)}$ has as many connected components as $H_{\kappa(\tau)}$. We write it as the disjoint union of its connected components, $$P_{\kappa(\tau)} = \bigsqcup_{i\in\Lambda} P_i.$$ For each $i\in \Lambda$, the restriction $P_i\to P$ is an isomorphism of $\mathcal K$-schemes, and $\pi_1$ is a trivial covering. But each $P_i$ is a $\kappa(\tau)$-scheme, and then each component induces in $P$ a structure of $\kappa(\tau)$-scheme. Hence we have a realization of $\kappa(\tau)$ as an intermediate extension $$\mathcal K \subset \kappa(\tau) \subset \mathcal L.$$ Thus, $\kappa(\tau) = {\mathcal K^\circ}$.
\end{proof} \subsection{Integrability by Quadratures} To integrate an automorphic system by quadratures means to write down a fundamental solution in terms of a formula. This formula should involve the solutions of certain simpler equations. We assume that we have a geometrical meccano to express these solutions, and we refer to the elements of such a meccano as \emph{quadratures}. Those simpler equations are the building blocks of our integrability theory. Depending on which simpler equations we consider \emph{integrable}, we obtain different theories of integrability. In the theory of Lie-Vessiot systems the elements of our formulas are the \emph{exponential maps of Lie groups} and \emph{indefinite integrals}. From a geometric point of view, it is reasonable to consider automorphic systems in \emph{abelian groups} as \emph{integrable}. Let us consider an abelian Lie group $G$. Then the exponential map, $$\exp\colon \mathcal R(G) \to G,$$ is a group morphism, and moreover, $\mathcal R(G)$ is the universal covering of $G$. An automorphic equation, $$\frac{d\log}{dt}(x) = \sum_{i=1}^n f_i(t)\vec A_i, \quad \vec A_i\in \mathcal R(G),$$ is integrated by the formula, $$\sigma(t) = \exp\left(\sum_{i=1}^n \left(\int_{t_0}^t f_i(\xi) d\xi\right) \vec A_i\right).$$ This formula involves the integrals of $t$-dependent functions and the exponential map of the Lie group. Assuming that we are able to perform these operations, \emph{a reasonable point of view is to consider all automorphic equations in abelian groups integrable}. This assumption is made in \cite{Ve2}, and followed in \cite{Bryant}. On the other hand, the algebraic case has a new kind of richness. An abelian Lie group splits as a direct product of circles and lines, but an abelian algebraic group can carry a higher complexity, for example in the case of abelian varieties. In that case the exponential map is the solution of the Abel-Jacobi inversion problem.
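As a sanity check (a standard computation, added here for the reader and not part of the original argument), commutativity is exactly what makes this formula work:

```latex
% Why the formula solves the equation (standard; uses commutativity of $G$).
% For abelian $G$ the exponential $\exp\colon \mathcal R(G)\to G$ is a group
% morphism, so for any smooth curve $v(t)\in\mathcal R(G)$ one has
% $l\partial\bigl(\exp v(t)\bigr) = \partial v(t)$. Taking
v(t) = \sum_{i=1}^n \Bigl(\int_{t_0}^t f_i(\xi)\,d\xi\Bigr)\vec A_i
\qquad\Longrightarrow\qquad
l\partial(\sigma) = \partial v = \sum_{i=1}^n f_i(t)\,\vec A_i .
% Simplest instance: $G = \mathcal C^*$ with $\mathcal R(G)$ spanned by
% $\vec A = x\,\partial_x$; the equation $\partial x / x = f(t)$ is solved by
% $x(t) = \exp\bigl(\int_{t_0}^t f(\xi)\,d\xi\bigr)$.
```

For a non-abelian group the first identity fails, because $\exp$ is no longer a morphism; this is where the reduction techniques of the previous subsection become necessary.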
In \cite{Ko0} Kolchin develops a theory of integrability generalizing Liouville integrability, in which only quadratures in one-dimensional abelian groups are allowed. This reduces the problem to quadratures in the additive group, the multiplicative group and elliptic curves. \subsection{Quadratures in the Additive Group} Let us consider an automorphic equation in the additive group $\mathcal C$. The additive group is its own Lie algebra, and the logarithmic derivative is the usual derivative. Thus, the automorphic equations are written in the following form: \begin{equation}\label{EqAutomorphicAdditive} \partial x = a, \quad a \in \mathcal K. \end{equation} \begin{definition} An extension of differential fields $\mathcal K \subset \mathcal L$ is an integral extension if $\mathcal L = \mathcal K(b)$, with $\partial b \in \mathcal K$. We say that $b$ is an integral element over $\mathcal K$. \end{definition} It is obvious that the Galois extension of equation \eqref{EqAutomorphicAdditive} is an integral extension of $\mathcal K$, with $b = \int a$. The additive group (of a field of characteristic zero) has no proper non-trivial algebraic subgroups. Therefore, if $b$ is algebraic over $\mathcal K$, then $b\in\mathcal K$. Hence we have two different possibilities for integral extensions: \begin{itemize} \item $b\in \mathcal K$, $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K) = \{e\}$, \item $b\not\in\mathcal K$, $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K) = \mathcal C$. \end{itemize} \subsection{Quadratures in the Multiplicative Group} Let us consider now an automorphic equation in the multiplicative group. For the complex numbers $\mathbb C^*$ the exponential map is the usual exponential. In the general case of an algebraically closed field of characteristic zero, we can build the exponential map for $\mathcal C^*$. However, it does not take values in $\mathcal C^*$ but in a bigger group.
We avoid such a construction, and consider the exponential just as an algebraic symbol. The logarithmic derivative in $\mathcal C^*$ coincides with the classical notion of logarithmic derivative, $$\mathcal K^* \to \mathcal K, \quad x \mapsto \frac{\partial x}{x}.$$ The general automorphic equation in the multiplicative group is written as follows: \begin{equation} \frac{\partial x}{x} = a,\quad a \in\mathcal K. \end{equation} \begin{definition} An extension of differential fields $\mathcal K \subset \mathcal L$ is an exponential extension if $\mathcal L = \mathcal K(b)$, with $\frac{\partial b}{b} \in \mathcal K$. We say that $b$ is an exponential element over $\mathcal K$. \end{definition} $\mathcal C^*$ has finite cyclic subgroups; hence we can obtain exponential extensions that are algebraic. The following cases appear: \begin{itemize} \item $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ is the multiplicative group $\mathcal C^*$ if $b$ is transcendental over $\mathcal K$. \item $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ is the cyclic group $\mu_n$ of $n$-th roots of unity if $b^n\in\mathcal K$ for some $n$. This means that there is $c\in\mathcal K$ such that $\frac{\partial c}{n c} = a$; in that case, $b^n = c$. \end{itemize} Conversely, any algebraic Galois extension of $\mathcal K$ with a cyclic Galois group is an exponential extension. Here it is an essential point that $\mathcal C$ is algebraically closed. \subsection{Quadratures in Abelian Varieties} Abelian varieties provide us with examples of non-linearizable automorphic systems. For the following discussion, let us assume that the constant field of $\mathcal K$ is the field of complex numbers $\mathbb C$. Let $G$ be a complex abelian variety of complex dimension $g$. Let us consider a basis of holomorphic differentials $\omega_1,\ldots,\omega_g$, and a basis $A_1,\ldots,A_g$, $B_1,\ldots,B_g$ of the homology of $G$; we can assume that $\int_{A_i}\omega_j = \delta_{ij}$.
Define the Abel-Jacobi map, $$G \xrightarrow{\sim} \mathbb C^g/\Lambda, \quad p \mapsto \left(\int_e^p\omega_1,\ldots, \int_e^p \omega_g\right).$$ The exponential map is given by the universal covering of the torus and the inversion of the Abel-Jacobi map. $$\xymatrix{\mathbb C^{g} \ar[d]_-{\exp} \ar[dr] \\ G \ar[r]^-{j} & \mathbb C^g/\Lambda }$$ A projective immersion of $G$ in $\mathbb P(\mathbb C,d)$, for $d$ big enough, is given in terms of \index{theta functions} theta functions, $z \mapsto \left(\theta_0(z)\colon \ldots \colon\theta_d(z)\right)$. Hence there are some homogeneous polynomial constraints $\{P(\theta_0,\ldots,\theta_d)=0\}$. The quotient $\frac{\theta_i}{\theta_j}$ defines a meromorphic abelian function in $G$ (see \cite{Mum} Chapter 1, Section 3, p. 30). Let us consider affine coordinates in $G$, $x_i = \frac{\theta_i}{\theta_0}$. We can project the vector fields of $\mathcal R(\mathbb C^g)$ to $G$, $$\frac{\partial}{\partial z_i} \mapsto \sum_j F_{ij}(x_1,\ldots,x_d)\frac{\partial}{\partial x_j}, \quad F_{ij}(x_1,\ldots,x_d) = \frac{\frac{\partial \theta_j}{\partial z_i}\theta_0 - \frac{\partial \theta_0}{\partial z_i}\theta_j}{\theta_0^2},$$ where the $F_{ij}$ are abelian functions, and then rational functions in the $x_j$. The automorphic system in $\mathbb C^g$, $$\sum_i a_i \frac{\partial}{\partial z_i},\quad a_i\in \mathcal K,$$ is seen in $G$ as a non-linear system on $G$, \begin{equation}\label{EqAA} \dot x_j = \sum_i a_iF_{ij}(x_1,\ldots,x_d), \quad \{P(1,x_1,\ldots x_d)=0\}. \end{equation} If $b_1,\ldots, b_g$ are integral elements over $\mathcal K$ such that $\partial b_i = a_i$, then, writing $b = (b_1,\ldots,b_g)$, the solution of the automorphic system \eqref{EqAA} is: $$x_j = \frac{\theta_j(b)}{\theta_0(b)}, \quad\quad \left(\theta_0\left(b\right) \colon\ldots \colon \theta_d\left(b\right) \right).$$ \begin{definition} A strongly normal extension $\mathcal K \subset \mathcal L$ whose Galois group is an abelian variety is called an abelian extension.
\end{definition} For an automorphic system in an abelian variety $A$ we have that the Galois group is an algebraic subgroup of $A$. Then its identity component is an abelian variety. The Galois extension then factors as $$\mathcal K \subset {\mathcal K^\circ} \subset \mathcal L,$$ where $\mathcal K^\circ\subset \mathcal L$ is an abelian extension. \begin{example} Let us consider an algebraically completely integrable Hamiltonian system in the sense of Adler, van Moerbeke and Vanhaecke (see \cite{AMV}), $\{H,H_2,\ldots, H_n\}$ in $\mathbb C^{2n}$. Assume that $\{H_i(x,y) = h_i\}$ are the equations of the affine part of an abelian variety $G$. The Hamilton equations, \begin{equation}\label{EqHamilton} \dot x_i = \frac{\partial H}{\partial y_i}, \quad \dot y_i = -\frac{\partial H}{\partial x_i}, \quad H_i(x,y) = h_i, \end{equation} are an automorphic system $\vec H$ in $G$ with constant coefficients, $\mathcal K = \mathbb C$. In the generic case, $G$ is a non-resonant torus, and then it is densely filled by a solution curve of the equations \eqref{EqHamilton}. We conclude that $(G,\partial_{\vec H})$ has no proper differential points: its differential spectrum consists only of the generic point. In that case, the Galois extension of the system is $\mathbb C \subset \mathcal M(G)$, the field of meromorphic functions in $G$. \end{example} \begin{example} \emph{Automorphic systems in elliptic curves}: Let us examine the case of an elliptic curve $\mathcal E$ over $\mathcal C$. Assume that $\mathcal E$ is given as a projective subvariety of $\mathbb P(2, \mathcal C)$ in Weierstrass normal form, $$t_0t_2^2 = 4t_1^3 - g_2t_0^2t_1 - g_3t_0^3.$$ We take affine coordinates $x = \frac{t_1}{t_0}$ and $y = \frac{t_2}{t_0}$.
The Lie algebra $\mathcal R(\mathcal E)$ is then generated by the vector field, $$\vec v = y\frac{\partial}{\partial x} + \left(6x^2 - \frac{g_2}{2}\right)\frac{\partial}{\partial y},$$ which is tangent to $\mathcal E$ since $2y\left(6x^2 - \frac{g_2}{2}\right) = y(12x^2 - g_2)$. Every automorphic vector field in $\mathcal E$ with coefficients in $\mathcal K$ is written in the form $a\vec v$ with $a\in\mathcal K$. A solution of the automorphic equation is a point of $\mathcal E$ with values in the Galois extension $\mathcal L$. Such a solution has homogeneous coordinates $(1\colon \xi \colon \eta)$ such that $\eta = a^{-1}\partial \xi$, and $\xi$ is a solution of the single differential equation, \begin{equation}\label{EqAE} (\partial \xi)^2 = a^2(4\xi^3 - g_2\xi - g_3). \end{equation} If we know a particular solution $b$ of \eqref{EqAE} then we can write down the general solution $(1\colon \xi \colon \eta)$ of the automorphic equation by means of the addition law in $\mathcal E$ (see \cite{Ko0} p. 804 eq. 9), depending on an arbitrary point $(1\colon x_0\colon y_0)\in \mathcal E(\mathcal C)$: $$Sol\eqref{EqAE} \times \mathcal E(\mathcal C) \to \mathcal E(\mathcal L),\quad (b, (1:x_0:y_0)) \mapsto (1:\xi:\eta)$$ \begin{equation}\label{Addition1} \xi(x_0,y_0) = -b - x_0 +\frac{1}{4}\left(\frac{\partial b - ay_0}{a(b - x_0)}\right)^2 \end{equation} \begin{equation}\label{Addition2} \eta(x_0,y_0) = -\frac{\partial b+ ay_0}{2a}+\frac{3}{2}(b + x_0)\frac{\partial b - ay_0}{a(b - x_0)}-\frac{1}{4}\left(\frac{\partial b - ay_0}{a(b - x_0)}\right)^3. \end{equation} \end{example} \begin{definition} Let $\mathcal K\subset \mathcal L$ be a differential field extension. We say that $b\in \mathcal L$ is a Weierstrassian element if there exist $a\in \mathcal K$ and $g_1, g_2\in \mathcal C$, with the polynomial $4x^3 - g_1 x - g_2$ having simple roots, such that $(\partial b)^2 = a^2(4b^3 - g_1 b - g_2)$. The differential extension $\mathcal K \subset \mathcal K(b,\partial b)$ is called an elliptic extension.
\end{definition} The Galois extension of the automorphic equation \eqref{EqAE} is an elliptic extension of $\mathcal K$. It can be transcendental or algebraic. If it is transcendental, then its Galois group is the elliptic curve $\mathcal E$; if it is algebraic, then its Galois group is a finite subgroup of $\mathcal E$. \index{equation!Weierstrass} \begin{remark} Let us examine the case of complex numbers: assume that the field of constants of $\mathcal K$ is $\mathbb C$. The solution of the Weierstrass equation is the elliptic function $\wp$, and it gives rise to the universal covering of $\mathcal E$, $$\pi\colon \mathbb C \to \mathcal E, \quad z\mapsto (1\colon\wp(z)\colon \wp'(z)).$$ The automorphic vector field $a\vec v$ in $\mathcal E$ is the projection of the automorphic vector field $a\frac{\partial}{\partial z}$ in $\mathbb C$. The solution of the equation in the additive group is given by an integral element $\int a$. Then a solution of the projected system in $\mathcal E$ is $(1\colon\wp(\int a)\colon\wp'(\int a))$, and $b = \wp(\int a)$ is the Weierstrassian element of the Galois extension. Formulas \eqref{Addition1} and \eqref{Addition2} are the addition formulas for the Weierstrass functions $\wp$ and $\wp'$. \end{remark} \begin{example} We obtain the previous situation in the case of algebraically completely integrable Hamiltonian systems with one degree of freedom.
Let us consider the pendulum equation: \begin{equation} \left.\begin{matrix} \dot x &=& y \\ \dot y &=& -\sin(x)\end{matrix}\quad \quad \right\} \quad \frac{y^2}{2} - \cos(x) = h \end{equation} It is written as a single ordinary differential equation depending on the energy parameter $h$, $$\left(\frac{dx}{dt}\right)^2 = 2h + 2\cos(x).$$ Setting $z = e^{ix}$, so that $\dot z = iz\dot x$ and $2\cos(x) = z + z^{-1}$, we obtain the algebraic form of this equation, which is an automorphic equation in an elliptic curve for all values of $h$ except for $h = \pm 1$: $$\left(\frac{dz}{dt}\right)^2 = - z^3 -2hz^2 - z.$$ The Weierstrass normal form is attained by setting $u = \frac{-z}{4} - \frac{1}{6}h$: $$\left(\frac{du}{dt}\right)^2 = 4u^3 - \left(\frac{h^2}{3}-\frac{1}{4}\right)u - \left(\frac{h^3}{27}-\frac{h}{24}\right).$$ Hence, the general solution is written in terms of the $\wp$ function of invariants $g_2 = \frac{h^2}{3}-\frac{1}{4}$ and $g_3 = \frac{h^3}{27}-\frac{h}{24}$, for $h\neq \pm 1$: $$z(t) = -4\wp(t + t_0) - \frac{2}{3}h\quad ;\quad x(t) = -i\log\left( -4\wp(t+t_0) - \frac{2}{3}h \right).$$ \end{example} \subsection{Liouville and Kolchin Integrability} \index{liouvillian extension} \index{liouvillian extension!strict} \index{Kolchin extension} \begin{definition}\label{C4DEF4.2.8} Let $\mathcal K\subset \mathcal F$ be a differential field extension. Let us break it up into a tower of differential fields: $$\mathcal K = \mathcal F_0 \subset \mathcal F_1 \subset \ldots \subset \mathcal F_d = \mathcal F.$$ We say that $\mathcal K \subset \mathcal F$ is $\ldots$ \begin{enumerate} \item[(1)] $\ldots$ a Liouvillian extension if the differential fields $\mathcal F_i$ can be chosen in such a way that $\mathcal F_i\subset \mathcal F_{i+1}$ is an algebraic, exponential or integral extension. \item[(2)] $\ldots$ a strict-Liouvillian extension if the differential fields $\mathcal F_i$ can be chosen in such a way that $\mathcal F_i\subset \mathcal F_{i+1}$ is an exponential or integral extension.
\item[(3)] $\ldots$ a Kolchin extension if the differential fields $\mathcal F_i$ can be chosen in such a way that $\mathcal F_i\subset \mathcal F_{i+1}$ is an algebraic, elliptic, exponential or integral extension. \end{enumerate} \end{definition} Liouvillian and strict-Liouvillian extensions are Picard-Vessiot extensions. An elliptic curve can not be a subquotient of an affine group. Hence, if $\mathcal K\subset \mathcal F$ is a Kolchin extension and $\ensuremath{\mbox{\rm Gal}}(\mathcal F/\mathcal K)$ is an affine group, then it is a Liouvillian extension. From this perspective, the following classical result is almost self-evident: \index{theorem!Drach-Kolchin} \begin{theorem}[Drach-Kolchin]\label{C4THE4.2.9} Let $\mathcal K$ be a field of meromorphic functions of the complex plane $\mathbb C$. Assume that the Weierstrass $\wp$ function is not algebraic over $\mathcal K$. Then $\wp$ is not the solution of any linear differential equation with coefficients in $\mathcal K$. \end{theorem} \begin{proof} Let us assume that such an equation exists, and let $\mathcal K \subset \mathcal F$ be its associated Galois extension. Its Galois group $\ensuremath{\mbox{\rm Gal}}(\mathcal F/\mathcal K)$ is an affine group. We have an intermediate extension: $$\mathcal K \subset \mathcal K(\wp, \wp') \subset \mathcal F.$$ This intermediate extension $\mathcal K \subset \mathcal K(\wp, \wp')$ is strongly normal and its Galois group is an elliptic curve $\mathcal E$. Thus, there is a normal subgroup $H\lhd \ensuremath{\mbox{\rm Gal}}(\mathcal F/\mathcal K)$ and an exact sequence, $$ 0 \to H \to \ensuremath{\mbox{\rm Gal}}(\mathcal F/\mathcal K) \to \mathcal E \to 0,$$ but the quotient of an affine group is an affine group, so $\mathcal E$ would be affine; this is a contradiction, since an elliptic curve is a complete variety of positive dimension. \end{proof} From the Galois correspondence and some elementary properties of algebraic groups we also obtain immediately the characterization of Liouvillian and Kolchin extensions in terms of their Galois groups.
\begin{proposition}\label{C4PRO4.2.10} Let $\mathcal K \subset \mathcal L$ be a strongly normal extension. \begin{enumerate} \item[(1)] $\mathcal K \subset \mathcal L$ is a Kolchin extension if and only if there is a sequence of normal subgroups in $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$, $$H_0 \lhd H_1 \lhd \ldots \lhd H_n = \ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K),$$ such that $\dim_{\mathcal C} H_{i+1}/H_{i} \leq 1$. \item[(2)] $\mathcal K \subset \mathcal L$ is a strict-Liouvillian extension if and only if $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal K)$ is an affine solvable group. \item[(3)] $\mathcal K \subset \mathcal L$ is a Liouvillian extension if and only if the identity component $\ensuremath{\mbox{\rm Gal}}^0(\mathcal L/\mathcal K)$ is a linear solvable group. \end{enumerate} \end{proposition} \begin{proof} For (1) and (3) see \cite{Ko0}. Let us prove that a linear solvable Galois group implies a strict-Liouvillian extension. Let us consider a resolution of the Galois group $H_0\lhd \ldots \lhd H_n$ such that each quotient $H_{i+1}/H_i$ is a cyclic group, a multiplicative group or an additive group. This resolution exists by the Lie-Kolchin theorem. This resolution splits the extension $\mathcal K\subset \mathcal L$ into a tower of differential fields $$\mathcal K = \mathcal K_n \subset \mathcal K_{n-1} \subset \ldots \subset \mathcal K_0 = \mathcal L.$$ Each differential extension of the tower is an exponential, integral or algebraic extension with cyclic Galois group. But an algebraic extension with cyclic Galois group is a radical extension. The field $\mathcal C$ is algebraically closed, hence such a radical extension is generated by the radical $\sqrt[n]{a}$ of a non-constant element $a$, and then it is the Picard-Vessiot extension of the equation, $$\partial x = \frac{\partial a}{n a}x,$$ which is an exponential extension.
\end{proof} \subsection{Integration by Quadratures in Solvable Groups} Recall that throughout this chapter we are considering an automorphic vector field $\vec A$ with coefficients in $\mathcal K$ in an algebraic group $G$ defined over $\mathcal C$. We also consider a Kolchin closed differential point $\mathfrak x\in \ensuremath{\mbox{\rm Diff}}(G_{\mathcal K},\partial_{\vec A})$ and the associated Galois extension $\mathcal K \subset \mathcal L$. We are going to explain the classical integration by quadratures in terms of the Lie-Kolchin reduction method and the Galois correspondence. Let us consider a normal subgroup $H\lhd G$, and the quotient group $\bar G = G/H$. Let $\mathfrak y$ be the projection in $\bar G_{\mathcal K}$ of $\mathfrak x$. By virtue of Theorem \ref{ThGM} we know that, $$\mathcal K \subset \kappa(\mathfrak y) \subset \mathcal L,$$ is an intermediate strongly normal extension. Furthermore, the Galois group at $\mathfrak y$ of the automorphic system with coefficients in $\kappa(\mathfrak y)$ is the intersection of the Galois group $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ with $H$. \begin{theorem}\label{C4THE4.2.11} Assume that there is a resolution of $G$, $$H_0 \lhd H_1 \lhd \ldots \lhd H_n = G,$$ such that $\dim_{\mathcal C} H_{i+1}/H_{i} = 1$. Then $\mathcal K \subset \mathcal L$ is a Kolchin extension. \end{theorem} \begin{proof} Let us consider the quotients $\bar G_i = H_{n-i+1}/H_{n-i}$. They are algebraic groups of dimension one. Each $\bar G_i$ is isomorphic to one of the following: the additive group, the multiplicative group, or an elliptic curve. Each one corresponds to an integral, exponential, or Weierstrassian quadrature. We prove the theorem by induction on the length of the resolution. Let us consider the projection $\pi\colon G \to G/H_{n-1}$. Define $\mathfrak y = \pi(\mathfrak x)$ and let $\mathcal K_1$ be the relative algebraic closure of $\kappa(\mathfrak y)$ in $\mathcal L$.
Then $\mathcal K\subset \kappa(\mathfrak y)$ is an integral, exponential or elliptic extension, and $\kappa(\mathfrak y)\subset \mathcal K_1$ is an algebraic extension. Hence, $\mathcal K \subset \mathcal K_1$ is a Kolchin extension. Let $\mathfrak z$ be a closed differential point of $(G_{\mathcal K_1},\partial_{\vec A})$ in the fiber of $\mathfrak x$. By Theorem \ref{ThGM}, $\ensuremath{\mbox{\rm Gal}}_{\mathfrak z}(G_{\mathcal K_1},\partial_{\vec A}) \subset H_{n-1}$, and then by Theorem \ref{C4THE4.1.10} there is a gauge transformation $L_{\tau}$ with coefficients in $\mathcal K_1$ reducing the automorphic field to an automorphic field in $H_{n-1}$. Any Galois extension associated to this last equation is $\mathcal K_1$-isomorphic to $\mathcal L$. By the induction hypothesis the extension $\mathcal K_1 \subset \mathcal L$ is a Kolchin extension, hence $\mathcal K \subset \mathcal L$ is a Kolchin extension. \end{proof} \begin{theorem}\label{C4THE4.2.12} Assume that $G$ is affine and solvable. Then $\mathcal K \subset \mathcal L$ is a strict-Liouvillian extension. \end{theorem} \begin{proof} The Galois group is a subgroup of $G$, and then it is a solvable group. The result comes from Proposition \ref{C4PRO4.2.10} (2) together with Theorem \ref{C4THE4.2.11}. \end{proof} \begin{proposition}\label{C4PRO4.2.13} If there is a connected affine solvable group $H\subset G$ such that $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})\subset H$, then $\mathcal K \subset \mathcal L$ is a strict-Liouvillian extension. \end{proposition} \begin{proof} $H$ is connected, affine and solvable, and hence it has trivial Galois cohomology. We can reduce to the group $H$ by means of Theorem \ref{ThKolchinH}. Hence, we are in the hypothesis of Theorem \ref{C4THE4.2.12}. \end{proof} \subsection{Linearization}\label{C4SS4.2.3} There exist algebraic groups that are not linearizable. An algebraic group that does not admit any non-trivial linear representation is called quasi-abelian.
In other words, a quasi-abelian variety is an algebraic group $G$ such that $\mathcal O_G(G) = \mathcal C$. Algebraic groups over an algebraically closed base field $\mathcal C$, which are complete and connected, are called abelian varieties. Since they are complete varieties, they do not admit non-constant global regular functions, and then they are quasi-abelian. The following results give us the structure of algebraic groups in terms of linear and quasi-abelian algebraic groups. See, for instance, \cite{Sa}. \begin{theorem}[Rosenlicht decomposition] Let $G$ be an algebraic group. There is a unique subgroup $X\subset G$ such that $X$ is quasi-abelian and $G/X$ is an affine group. \end{theorem} \begin{theorem}[Chevalley-Barsotti-Sancho]\label{ThChevalleyBS} Let $G$ be a connected algebraic group over $\mathcal C$, with $\mathcal C$ an algebraically closed field of characteristic zero. Then there is a unique normal affine subgroup $N\subset G$ such that the quotient $G/N$ is an abelian variety. \end{theorem} \subsection{Reduction by means of the Chevalley-Barsotti-Sancho Theorem} By virtue of the Chevalley-Barsotti-Sancho theorem (Theorem \ref{ThChevalleyBS}), there is a unique linear normal connected algebraic group $N \lhd G$ such that the quotient $G/N$ is an abelian variety $V$. Let us consider the projection $\pi\colon G \to V$. Let $\vec B$ be the projected automorphic system $\pi(\vec A)$ in $V$, and denote by $\mathfrak y$ the image of $\mathfrak x$ by $\pi$. We state the following: \begin{theorem}\label{C4THE4.2.14} Let $\mathcal M$ be the field of meromorphic functions in $V_{\mathcal K}$. Assume that $\ensuremath{\mbox{\rm Gal}}_{\mathfrak y}(V_{\mathcal K}, \partial_{\vec B}) = V$, and one of the following hypotheses: \begin{enumerate} \item[(1)] $H^1(N,\mathcal M)$ is trivial. \item[(2)] $\mathcal K$ is relatively algebraically closed in $\mathcal L$.
\end{enumerate} Then, there is a gauge transformation of $G$ with coefficients in $\mathcal M$ reducing the automorphic system $\vec A$ to $N$. \end{theorem} \begin{proof} Let us consider $\vec A$ as an automorphic vector field in $G$ with coefficients in $\mathcal M$. By the Galois correspondence we have: $$\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal M) \simeq \ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})\cap N.$$ If hypothesis (1) holds, then the statement is a particular case of Theorem \ref{ThKolchinH}. Let us prove the result in the case of hypothesis (2). By Theorem \ref{C4THE4.1.10} there exists a gauge transformation whose coefficients are algebraic over $\mathcal M$. By hypothesis $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ is connected. This group $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ realizes itself as a principal bundle over $V$ whose structural group is $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal M)$. It implies that $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal M)$ is also connected, so that $\mathcal M$ is relatively algebraically closed in $\mathcal L$. The coefficients of the considered gauge transformation are in $\mathcal M$, as we wanted to prove. \end{proof} \subsection{Linearization by means of Adjoint Representation} We consider $GL(\mathcal R(G))$, the group of $\mathcal C$-linear automorphisms of the Lie algebra $\mathcal R(G)$. It is an algebraic group over $\mathcal C$. The adjoint representation $$\ensuremath{\mbox{\rm Adj}}\colon G\to GL(\mathcal R(G))$$ is a morphism of algebraic groups. It gives us a linearization of the equations. Let us consider the center $\mathfrak Z(G)$ and the exact sequence: $$0 \to \mathfrak Z(G) \to G \to \ensuremath{\mbox{\rm Adj}}(G) \to 0.$$ Denote by $\vec B$ the projection of the automorphic vector field $\vec A$ by the morphism $\ensuremath{\mbox{\rm Adj}}$.
It is a linear system and then its Galois extension $\mathcal K \subset \mathcal P$ is a Picard-Vessiot intermediate extension of $\mathcal K\subset \mathcal L$. \begin{proposition}\label{C4PRO4.2.15} $\mathcal P \subset \mathcal L$ is a strongly normal extension and $\ensuremath{\mbox{\rm Gal}}(\mathcal L/\mathcal P)$ is an abelian group. \end{proposition} \begin{proof} The extension $\mathcal P\subset \mathcal L$ is a Galois extension of $\vec A$ with coefficients in $\mathcal P$, so that it is strongly normal. Its Galois group is, by the Galois correspondence, the intersection of the Galois group $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ with the center $\mathfrak Z(G)$; it is an abelian group. \end{proof} \subsection{Linearization by means of Global Regular Functions} The ring of global regular functions $\Gamma(\mathcal O_{G},G)$ is a Hopf algebra, and then its spectrum is a linear algebraic group $L = \ensuremath{\mbox{\rm Spec}}(\Gamma(\mathcal O_G,G))$. The kernel $C$ of the canonical morphism $\pi\colon G\to L$ is, by definition, a quasi-abelian variety (see \cite{Sa}). Let us consider the exact sequence: $$ 0 \to C \to G \to L \to 0.$$ We proceed as we did in Proposition \ref{C4PRO4.2.15}, and then we obtain the following result. \begin{proposition}\label{C4PRO4.2.16} Let $\mathcal K\subset \mathcal P$ be the Picard-Vessiot extension of the automorphic system $\pi(\vec A)$ in $L$. Then $\mathcal P \subset \mathcal L$ is a strongly normal extension, and the connected component of the identity of its Galois group is a quasi-abelian variety. \end{proposition} \section{Integrability of Linear Equations} This section is devoted to the Liouville integrability of linear differential equations. Since the development of Picard-Vessiot theory it has been a rich field of research; let us cite some important specialized literature \cite{Kov0}, \cite{SingerUlmer1}, \cite{SingerUlmer2}, \cite{UlmerWeil}, \cite{HoeijWeil}, \cite{HRUW}.
Here, we adopt a slightly different point of view on linear differential equations. We see them as automorphic systems. It gives us some insight into the geometric mechanisms that allow quadratures. In this way we are able to measure the solvability of the Galois groups, in terms of equations in flag varieties and grassmanians (Theorem \ref{C4THE4.3.2}). They are the natural geometrical generalization of Riccati equations. From now on let $G$ be a \emph{linear} connected algebraic group over $\mathcal C$. We consider $\vec A$ an automorphic vector field in $G$ with coefficients in $\mathcal K$. \index{flag variety} \subsection{Flag Variety} We call a \emph{Borel subgroup} \index{Borel subgroup} of $G$ any maximal connected solvable subgroup of $G$. All Borel subgroups are conjugate, and hence isomorphic. The quotient space $G/B$ is a complete variety (see \cite{Sa} p. 163, th. 10.2). \begin{definition} We call flag variety of $G$ the homogeneous space $G/B$, where $B$ is a Borel subgroup of $G$. \end{definition} The flag variety of $G$ is defined up to isomorphism of $G$-homogeneous spaces. Let us consider $Flag(G)$ a flag variety of $G$, and let $(Flag(G),\partial_{\vec F})$ be the induced Lie-Vessiot system. Let us see a natural generalization of the well-known theorem of J. Liouville that relates the integrability by Liouvillian functions of the second order linear homogeneous differential equation with the existence of an algebraic solution of an associated Riccati equation. This classical result is the particular case of $GL(2,\mathbb C)$ in the following \emph{general Liouville's theorem}. \begin{theorem}\label{C4THE4.3.2} The Galois extension $\mathcal K \subset \mathcal L$ is Liouvillian if and only if the flag Lie-Vessiot system $(Flag(G),\partial_{\vec F})$ has an algebraic solution with coefficients in ${\mathcal K^\circ}$, the relative algebraic closure of $\mathcal K$ in $\mathcal L$.
\end{theorem} \begin{proof} By the Galois correspondence we have that the Galois group of $(G_{{\mathcal K^\circ}}, \partial_{\vec A})$ is the connected identity component of the Galois group of $(G_{\mathcal K},\partial_{\vec A})$. Assume that $(Flag(G),\partial_{\vec F})$ has an algebraic solution $x \in Flag(G)({\mathcal K^\circ})$. We are under the hypothesis of Theorem \ref{C4THE4.1.10}. There is a gauge transformation of $G_{\mathcal K^\circ}$ that sends $\vec A$ to an automorphic vector field $\vec B$ in the Borel subgroup $B$. Then the Galois group of $\vec B$ with coefficients in $\mathcal K^\circ$ is contained in a Borel subgroup. Then the connected component of $\ensuremath{\mbox{\rm Gal}}_{\mathfrak x}(G_{\mathcal K},\partial_{\vec A})$ is solvable. Conversely, let us assume that $\mathcal K \subset \mathcal L$ is a Liouvillian extension. In such a case the identity connected component of the Galois group is contained in a Borel subgroup $B$. By Proposition \ref{PrpRationalX} there is a solution with coefficients in $\mathcal K^\circ$ of $\vec F$. \end{proof} \subsection{Automorphic Equations in the General Linear Group} \subsection{Grassmanians} Let $E$ be an $n$-dimensional vector space. Throughout this text \emph{$m$-plane} will mean \emph{$m$-dimensional linear subspace}. For all $m\leq n$ the linear group $GL(E)$ acts transitively on the set of $m$-planes. For an $m$-plane $E_m$, the stabilizer subgroup is an algebraic group, and then the set of $m$-planes defines an algebraic homogeneous space. \index{grassmanian} \begin{definition} We call grassmanian of $m$-planes of $E$, $\ensuremath{\mbox{\rm Gr}}(E,m)$, the homogeneous space whose closed points are the $m$-planes of $E$. Denote by $\ensuremath{\mbox{\rm Gr}}(\mathcal C,n,m)$ the grassmanian of $m$-planes of $\mathcal C^n$.
\end{definition} \begin{example} $\ensuremath{\mbox{\rm Gr}}(\mathcal C,n,1)$ is the space of lines in $\mathcal C^n$, and then it is the projective space of dimension $n-1$, $\mathbb P(n-1,\mathcal C)$. The grassmanian $\ensuremath{\mbox{\rm Gr}}(\mathcal C,n,n-1)$ is the space of hyperplanes and then it is the dual projective space $\mathbb P(n-1,\mathcal C)^*$. \end{example} In general, $m$-planes of $E$ are in one-to-one correspondence with $(n-m)$-planes of the dual space $E^*$, and then we have the projective duality $$\ensuremath{\mbox{\rm Gr}}(E,m) \simeq \ensuremath{\mbox{\rm Gr}}(E^*, n-m).$$ The action of $GL(E)$ on $\ensuremath{\mbox{\rm Gr}}(E,m)$ is not faithful. Each scalar matrix of the center of $GL(\mathcal C, n)$ fixes all $m$-planes. Thus, the non-faithful action of $GL(E)$ is reduced to a faithful action of the projective group $PGL(E)$. All grassmanians are projective varieties. There is a canonical embedding of $\ensuremath{\mbox{\rm Gr}}(E,m)$ into the projective space of dimension $\left(\substack{n\\ m}\right)-1$, called the \emph{Pl\"ucker embedding}: $$\ensuremath{\mbox{\rm Gr}}(E,m) \to \mathbb P(E^{\wedge m}), \quad\langle e_1,\ldots,e_m\rangle \mapsto \langle e_1\wedge e_2\wedge\ldots\wedge e_m\rangle.$$ \index{Pl\"ucker!embedding}\index{Pl\"ucker!coordinates} For computations in the grassmanian spaces we will use \emph{pl\"uckerian coordinates}. This system of coordinates is subordinated to a basis in $E$. Thus, let us consider a basis $\{e_1,\ldots,e_n\}$. Let $E_1 = \langle e_1,\ldots, e_m\rangle$ be the $m$-plane spanned by the first $m$ elements of the basis, and define $E_2 = \langle e_{m+1}, \ldots, e_{n}\rangle$ its complement. Let us consider the projection $\pi \colon E \to E_2$ of kernel $E_1$. We define the open subset $U \subset \ensuremath{\mbox{\rm Gr}}(E,m)$, $$U = \{F\colon F\oplus E_2 = E\}.$$ For $F\in U$ the splitting of the space induces an isomorphism $i_F \colon E_1 \to F$.
We have an isomorphism $$U \xrightarrow{\sim} \ensuremath{\mbox{\rm Hom}}_{\mathcal C}(E_1,E_2),\quad F \mapsto \pi\circ i_F.$$ We define the pl\"uckerian coordinates of $F$ as the matrix elements of $\pi\circ i_F$ in the above mentioned basis. By permuting the elements of the basis we construct a covering of $\ensuremath{\mbox{\rm Gr}}(E,m)$ by $\left(\substack{n \\ m}\right)$ affine open subsets isomorphic to $\mathcal C^{m(n-m)}$. Let us compute pl\"uckerian coordinates in $\ensuremath{\mbox{\rm Gr}}(\mathcal C, n, m)$ related to the canonical basis. Let us consider $F\in \ensuremath{\mbox{\rm Gr}}(\mathcal C, n, m)$, and a basis of $F$, $\{\vec x_1,\ldots, \vec x_m\}$, $\vec x_i = (x_{1i},\ldots,x_{ni})$. The matrix, $$\left(\begin{matrix}x_{11} & \ldots & x_{1m} \\ x_{21} & \ldots & x_{2m} \\ \vdots & \ddots & \vdots \\ x_{n1} & \ldots & x_{nm} \end{matrix}\right)$$ is of maximal rank. Thus, there is a non-vanishing minor of rank $m$. In particular, $F$ is in the open subset $U$ if and only if the minor corresponding to the first $m$ rows does not vanish. In such case we define the numbers $\lambda_{ij}^{(m)}$ $$\left(\begin{matrix}x_{11} & \ldots & x_{1m} \\ x_{21} & \ldots & x_{2m} \\ \vdots & \ddots & \vdots \\ x_{n1} & \ldots & x_{nm} \end{matrix}\right)\left(\begin{matrix}x_{11} & \ldots & x_{1m} \\ \vdots & \ddots & \vdots \\ x_{m1} & \ldots & x_{mm} \end{matrix} \right)^{-1} = \left(\begin{matrix} 1 & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & 1 \\ \lambda^{(m)}_{11} & \ldots & \lambda^{(m)}_{1m} \\ \vdots & \ddots & \vdots \\ \lambda^{(m)}_{n-m,1} & \ldots & \lambda^{(m)}_{n-m,m} \end{matrix}\right)$$ that are the pl\"uckerian coordinates of $F\in \ensuremath{\mbox{\rm Gr}}(\mathcal C,n,m)$ in the open affine subset $U$ related to the splitting of $\mathcal C^n$ as $E_1\oplus E_2$.
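As a quick sanity check of this affine chart, the following sketch (plain numpy, with an arbitrarily chosen $2$-plane in $\mathcal C^4$; none of the data comes from the text) computes $\Lambda = Y U^{-1}$ from the matrix of a basis of $F$ and verifies that it depends only on the plane, not on the chosen basis.

```python
import numpy as np

# Basis of a 2-plane F in C^4, written as columns (arbitrary example data).
n, m = 4, 2
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 3.0],
              [4.0, 5.0]])
U, Y = X[:m, :], X[m:, :]
Lam = Y @ np.linalg.inv(U)   # pluckerian coordinates of F in the chart U

# The coordinates depend only on the plane: replacing X by X C for any
# invertible C (a change of basis in F) leaves Lambda unchanged.
C = np.array([[1.0, 2.0], [1.0, 1.0]])
XC = X @ C
Lam2 = XC[m:, :] @ np.linalg.inv(XC[:m, :])
assert np.allclose(Lam, Lam2)
print(Lam)
```

Here the check simply confirms that the two computations produce the same $\Lambda$, as the construction of the chart guarantees.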
\subsection{Flag Variety of the General Linear Group} A flag of subspaces of $\mathcal C^n$ is a sequence, $$E_1 \subset E_2 \subset \ldots \subset E_{n-1}, \quad \dim_{\mathcal C}E_i = i$$ of linear subspaces of $\mathcal C^n$. The space $Flag(\mathcal C,n)$ of flags of $\mathcal C^n$ is a homogeneous space of $GL(\mathcal C,n)$, and the action of $PGL(\mathcal C, n)$ on it is faithful. There is a canonical morphism, $$Flag(\mathcal C, n) \to \prod_{m=1}^{n-1} \ensuremath{\mbox{\rm Gr}}(\mathcal C,n,m),\quad E_1 \subset E_2 \subset \ldots \subset E_{n-1} \mapsto (E_1,\ldots, E_{n-1}).$$ By the Lie-Kolchin theorem the isotropy subgroup of a flag is a Borel subgroup. Then, we can state that $Flag(\mathcal C,n)$ is the flag variety of the general linear group. Let us introduce a system of coordinates in $Flag(\mathcal C,n)$. Let us consider $\{e_1,\ldots, e_n\}$ the canonical basis of $\mathcal C^n$. Each $\sigma\in GL(\mathcal C,n)$ defines a flag $F(\sigma)$ as follows: $$\langle\sigma(e_1)\rangle \subset \langle\sigma(e_1),\sigma(e_2)\rangle \subset \ldots \subset \langle\sigma(e_1),\ldots,\sigma(e_{n-1})\rangle.$$ There is a canonical flag corresponding to the identity element. Its isotropy group is precisely the group $T(\mathcal C,n)$ of upper triangular matrices. Then two matrices $A, B \in GL(\mathcal C,n)$ define the same flag if and only if $A = BU$ for certain $U\in T(\mathcal C,n)$. Then let us consider the affine subset of $GL(\mathcal C,n)$ of matrices with non-vanishing principal minors. For such a matrix $A$ there exists a unique $LU$ decomposition such that $U\in T(\mathcal C,n)$ and the left factor is a lower triangular matrix with unit diagonal, as follows, $$A = \left(\begin{matrix} 1 & 0 & \ldots & 0 \\ \lambda_{21} & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_{n1} & \lambda_{n2} & \ldots & 1 \end{matrix}\right)U$$ Hence the matrix elements $\lambda_{ij}$ define a system of affine coordinates in $Flag(\mathcal C,n)$, in a certain affine open subset.
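In practice the affine flag coordinates $\lambda_{ij}$ can be read off from this $LU$ factorization. The following sketch (a hypothetical $3\times 3$ example matrix, plain numpy) performs Gaussian elimination without pivoting, which is valid exactly when all principal minors of the matrix are nonzero.

```python
import numpy as np

def flag_coordinates(A):
    """Return the unit lower triangular factor L of A = L U.

    The strictly lower entries L[i, j], i > j, are the affine flag
    coordinates lambda_ij of the flag defined by the columns of A.
    Assumes all principal minors of A are nonzero (no pivoting).
    """
    A = A.astype(float)          # work on a copy
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, :] -= L[i, k] * A[k, :]
    return L

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 1.0],
              [6.0, 5.0, 4.0]])
L = flag_coordinates(A)
# A and L define the same flag: A = L U with U upper triangular.
U = np.linalg.solve(L, A)
assert np.allclose(np.triu(U), U)
print(L)
```

For this example the coordinates are $\lambda_{21}=2$, $\lambda_{31}=3$, $\lambda_{32}=2$.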
We construct an open covering of the flag space by permuting the vectors of the canonical basis. The canonical morphism $$Flag(\mathcal C, n) \to \prod_m \ensuremath{\mbox{\rm Gr}}(\mathcal C,n,m)$$ is easily written in pl\"uckerian coordinates: $$\lambda_{ij}^{(m)} = \lambda_{i+m,j} - \sum_{k=1}^{m}\lambda_{i+m,k}\lambda_{kj}.$$ \subsection{Matrix Riccati Equations} Let us consider a homogeneous linear differential equation $$\dot x = Ax, \quad A\in gl(\mathcal K, n).$$ It is seen as an automorphic system that induces Lie-Vessiot systems in each homogeneous space. Let us compute the induced Lie-Vessiot systems in the grassmanian spaces. First, the linear system induces a linear system in $(\mathcal C^n)^m$, \begin{equation}\label{EqMatL} \dot X = AX, \end{equation} where $X$ is an $n\times m$ matrix. We write $X = \left(\substack{ U \\ Y }\right)$, being $U$ an $m\times m$ matrix and $Y$ an $(n-m)\times m$ matrix. $\Lambda_m = YU^{-1}$ is the matrix of pl\"uckerian coordinates of the space generated by the $m$ column vectors of the matrix $X$. Then, $\dot \Lambda_m = \dot Y U^{-1} - \Lambda_m \dot U U^{-1}$. We decompose the matrix $A$ in four submatrices $$A = \left(\begin{matrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{matrix}\right)$$ being $A_{11}$ of type $m\times m$, $A_{12}$ of type $m\times (n-m)$, $A_{21}$ of type $(n-m)\times m$, and $A_{22}$ of type $(n-m)\times (n-m)$. Then the matrix linear equation \eqref{EqMatL} splits as a system of matrix linear differential equations, $$\dot U = A_{11}U + A_{12}Y, \quad \dot Y = A_{21}U + A_{22}Y,$$ from which we obtain the differential equation for affine coordinates in the grassmanian, \begin{equation}\label{EqRiccMat} \dot\Lambda_m = A_{21} + A_{22}\Lambda_m - \Lambda_m A_{11} - \Lambda_m A_{12} \Lambda_m \end{equation} which is a quadratic system. We call such a system a \emph{matrix Riccati equation} \index{equation!matrix Riccati} associated to the linear system.
$$\Lambda_m = \left(\begin{matrix}\lambda^{(m)}_{11} &\ldots & \lambda^{(m)}_{1,m} \\ \vdots & \ddots & \vdots \\ \lambda^{(m)}_{n-m,1} & \ldots & \lambda^{(m)}_{n-m,m} \end{matrix}\right)$$ $$\dot \lambda^{(m)}_{ij} = a_{m+i,j} + \sum_{k=1}^{n-m}a_{m+i,m+k}\lambda^{(m)}_{kj} - \sum_{k=1}^m \lambda^{(m)}_{ik}a_{kj} - \sum_{\substack{k = 1 \ldots m \\ r = 1 \ldots n-m}}\lambda^{(m)}_{ik}a_{k,r+m}\lambda^{(m)}_{rj}$$ \begin{example} Let us compute the matrix Riccati equations associated to the general linear systems of rank $2$ and $3$. First, let us consider a general linear system of rank $2$, $$\dot x_1 = a_{11} x_1 + a_{12} x_2, \quad \dot x_2 = a_{21} x_1 + a_{22} x_2.$$ There is only one grassmanian, $\ensuremath{\mbox{\rm Gr}}(\mathcal C,2,1)$, which is precisely the projective line. The associated matrix Riccati equation is an ordinary Riccati equation $$\dot x = a_{21} + (a_{22} - a_{11})x - a_{12}x^2.$$ In the case of a general system of rank $3$, $$\left(\begin{matrix} \dot x_1 \\ \dot x_2 \\ \dot x_3\end{matrix} \right) = \left(\begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right) \left(\begin{matrix} x_1 \\ x_2 \\ x_3 \end{matrix} \right)$$ there are two grassmanian spaces, $\ensuremath{\mbox{\rm Gr}}(\mathcal C, 3, 1)$ and $\ensuremath{\mbox{\rm Gr}}(\mathcal C, 3, 2)$, being the projective plane $\mathbb P^2(\mathcal C)$ and the projective dual plane $\mathbb P^2(\mathcal C)^*$ respectively. \end{example}
Then we obtain two quadratic systems, $$\mathbb P(2,\mathcal C) \quad \left\{\begin{matrix} \dot x &=& a_{21} + (a_{22}-a_{11})x + a_{23}y - a_{12}x^2 - a_{13}xy \\ \dot y &=& a_{31} + (a_{33}-a_{11})y + a_{32}x - a_{13}y^2 - a_{12}xy \end{matrix}\right.$$ $$\mathbb P(2,\mathcal C)^*\left\{\begin{matrix} \dot \xi &=& a_{31} + (a_{33}-a_{11})\xi - a_{21}\eta - a_{23}\xi\eta - a_{13}\xi^2 \\ \dot \eta &=& a_{32} + (a_{33}-a_{22})\eta - a_{12}\xi - a_{13}\xi\eta - a_{23}\eta^2 \end{matrix}\right.$$ called the associated \emph{projective Riccati equations}. \index{equation!projective Riccati} \subsection{Flag Equation} \index{equation!flag} From the relation between pl\"uckerian coordinates and affine coordinates in the flag variety we can deduce the equations of the induced Lie-Vessiot system in $Flag(\mathcal C,n)$ from the matrix Riccati equations. We will obtain a Riccati quadratic equation for $n = 2$, and a cubic system for $n\geq 3$. $$\dot \lambda_{ij} = a_{ij} + \sum_{k=j+1}^{n}a_{ik}\lambda_{kj} - \sum_{k=1}^{j}\lambda_{ik}a_{kj} + \sum_{k=1}^{j}\sum_{r=k+1}^{j}\lambda_{ir}\lambda_{rk}a_{kj} $$ $$- \sum_{k=1}^j\sum_{r=j+1}^{n}\lambda_{ik}a_{kr}\lambda_{rj}+ \sum_{k=1}^{j}\sum_{r=j+1}^{n}\sum_{s=k+1}^{j}\lambda_{is}\lambda_{sk}a_{kr}\lambda_{rj}.$$ Setting $\lambda_{ii} = 1$ for all $i$, we can simplify these equations: \begin{equation}\label{EqFlagEq} \dot{\lambda_{ij}} = \sum_{k=j}^{n}a_{ik}\lambda_{kj} - \sum_{k=1}^j\sum_{r=j}^n \lambda_{ik}a_{kr}\lambda_{rj} + \sum_{k=1}^j\sum_{r=k+1}^j\sum_{s=j}^n \lambda_{ir}\lambda_{rk}a_{ks}\lambda_{sj} \end{equation} Such a cubic system can be seen as a hierarchy of projective Riccati equations. The equation corresponding to the first column $\lambda_{i1}$, $i=2,\ldots,n$, is a projective Riccati equation in $\mathbb P(n-1,\mathcal C)$. The equation corresponding to the second column is a projective Riccati equation in $\mathbb P(n-2,\mathcal C(\lambda_{i1}))$, and so on.
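As a numerical sanity check of the matrix Riccati equation \eqref{EqRiccMat}, the following sketch integrates the linear system and the Riccati system side by side and compares $\Lambda_m$ with $YU^{-1}$. It is only an illustration: the constant matrix $A$, the sizes $n=4$, $m=2$, and the initial data are chosen arbitrarily and are not taken from the text.

```python
import numpy as np

n, m = 4, 2
# Arbitrarily chosen constant coefficient matrix (illustrative only).
A = np.array([[0.1, 0.2, 0.3, 0.0],
              [0.0, 0.1, 0.2, 0.3],
              [0.3, 0.0, 0.1, 0.2],
              [0.2, 0.3, 0.0, 0.1]])
A11, A12 = A[:m, :m], A[:m, m:]
A21, A22 = A[m:, :m], A[m:, m:]

def rk4(f, x, t_end, h=1e-3):
    # classical fourth-order Runge-Kutta integrator
    t = 0.0
    while t < t_end - 1e-12:
        k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

X0 = np.vstack([np.eye(m), np.zeros((n - m, m))]) + 0.1
X1 = rk4(lambda X: A @ X, X0, 0.5)                         # linear flow
riccati = lambda L: A21 + A22 @ L - L @ A11 - L @ A12 @ L  # RHS of (EqRiccMat)
L0 = X0[m:, :] @ np.linalg.inv(X0[:m, :])
L1 = rk4(riccati, L0, 0.5)                                 # Riccati flow
# Lambda_m built from the linear solution agrees with the Riccati solution.
assert np.allclose(L1, X1[m:, :] @ np.linalg.inv(X1[:m, :]), atol=1e-8)
```

The agreement of the two integrations (to integrator accuracy) reflects exactly the derivation above: $\Lambda_m = YU^{-1}$ is carried by the linear flow onto a solution of the quadratic system.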
\begin{example} Let us compute the flag equation for the general differential linear system of rank $3$. Denote $x = \lambda_{21}$, $y=\lambda_{31}$, $z =\lambda_{32}$. \begin{equation}\label{C4EQ4.7} \left\{\begin{array}{ccl} \dot x &=& a_{21} + (a_{22}- a_{11})x + a_{23}y - a_{12}x^2 - a_{13}xy \\ \dot y &=& a_{31} + a_{32}x + (a_{33} - a_{11})y - a_{12}xy - a_{13}y^2\end{array}\right. \end{equation} $$\dot z = a_{32} - a_{12}y + (a_{33} -a_{22} +a_{12}x - a_{13}y )z + (a_{13}x - a_{23})z^2.$$ \end{example} \subsection{Equations in the Special Orthogonal Group} Automorphic equations in the special orthogonal group have been deeply studied since the 19th century \cite{Ve3}, \cite{Darboux}. In particular, Darboux related these equations with the Riccati equation. He stated that the integration of \eqref{EqSO3} is reduced to the integration of \eqref{RicSO3}. Here we show that the flag equation of an automorphic equation in $SO(3,\mathcal C)$ is precisely the Riccati equation, and then the solutions of \eqref{EqSO3} are Liouvillian if and only if there are algebraic solutions for \eqref{RicSO3}. The Lie algebra $so(3,\mathcal C)$ is the algebra of skew-symmetric matrices of $gl(\mathcal C, 3)$. Then an automorphic system in $SO(3, \mathcal C)$ is written in the following form, \begin{equation}\label{EqSO3} \left(\begin{matrix} \dot x_0\\ \dot x_1 \\ \dot x_2\end{matrix}\right) = \left(\begin{matrix} & a & b \\ -a & & c \\ -b & -c & \end{matrix} \right) \left(\begin{matrix} x_0 \\ x_1 \\ x_2 \end{matrix}\right)\quad \quad a,b,c\in\mathcal K, \end{equation} \emph{where the void spaces represent the vanishing elements in the matrix}. \subsection{On the Structure of the Special Orthogonal Group} The special orthogonal group is the group of linear transformations preserving the quadratic form $x_0^2 + x_1^2 + x_2^2$. Let us consider the non-degenerate quadric in the projective space $S_2\subset\mathbb P(3,\mathcal C)$, defined by the homogeneous equation $\{t_0^2+t_1^2+t_2^2-t_3^2=0\}$.
In affine coordinates $x_i = \frac{t_i}{t_3}$, its affine part is a sphere of radius $1$. Thus $SO(3, \mathcal C)$ is a group of algebraic automorphisms of the quadric; $SO(3)\subset Aut(S_2)$. Each non-degenerate quadric in the projective space over an algebraically closed field is a hyperbolic ruled surface. It has two systems of generatrices, each system being parameterized by a projective line. Denote by $P_1$, $P_2$ these projective lines. Points $p\in P_1$ and $q\in P_2$ are lines on $S_2$, and they intersect in a unique point $s(p,q)\in p\cap q$. We have a decomposition of $S_2$ which is a particular case of the \emph{Segre isomorphism}, $$P_1 \times_{\mathcal C} P_2 \xrightarrow{\sim} S_2 \subset \mathbb P(3,\mathcal C)$$ $$((u_0:u_1),(v_0:v_1))\mapsto (t_0:t_1:t_2:t_3) \left\{\begin{matrix} t_0 &=& u_0v_1+u_1v_0 \\ t_1 &=& u_1v_1-u_0v_0 \\ t_2 &=& i(u_1v_1 + u_0v_0) \\ t_3&=& u_0v_1 - u_1v_0\end{matrix}\right.$$ Let us consider any algebraic automorphism $\tau\colon S_2\to S_2$. In particular, it must carry a system of generatrices to a system of generatrices. Hence, $\tau$ is induced by a pair of projective transformations $(\tau_1,\tau_2)$, where $$\tau_1\colon P_1 \to P_1, \quad \tau_2\colon P_2 \to P_2$$ or $$\tau_1\colon P_1\to P_2, \quad \tau_2\colon P_2\to P_1.$$ We conclude that the group of automorphisms of $S_2$ is isomorphic to the following algebraic group, $$\ensuremath{\mbox{\rm Aut}}(S_2) = \left(PGL(1,\mathcal C) \times_{\mathcal C} PGL(1,\mathcal C)\right) \rtimes \mathbb Z/2\mathbb Z.$$ Let us compute the image of the canonical monomorphism \linebreak $SO(3,\mathcal C)\subset \ensuremath{\mbox{\rm Aut}}(S_2)$. We take affine coordinates in the pair of projective lines, $x = \frac{u_0}{u_1}$, $y = \frac{v_0}{v_1}$. This is the system of \emph{symmetric coordinates} of the sphere introduced by Darboux \cite{Darboux}.
\begin{equation}\label{SymCoor1} x_0 = \frac{1-xy}{x-y} \quad x_1 = i\frac{1+xy}{x-y} \quad x_2 = \frac{x+y}{x-y} \end{equation} \begin{equation}\label{SymCoor2} x = \frac{x_0+ix_1}{1-x_2} \quad y = \frac{x_2-1}{x_0 - ix_1}. \end{equation} Let us write a general element of $SO(3,\mathcal C)$ in affine coordinates, $$R_{\lambda,\mu,\nu} = \left(\begin{matrix} 1 & & \\ & \frac{\lambda+\lambda^{-1}}{2} & \frac{\lambda^{-1}-\lambda}{2i}\\ & \frac{\lambda-\lambda^{-1}}{2i} & \frac{\lambda+\lambda^{-1}}{2} \end{matrix}\right) \left(\begin{matrix} \frac{\mu+\mu^{-1}}{2} & \frac{\mu^{-1}-\mu}{2i} & \\ \frac{\mu-\mu^{-1}}{2i} & \frac{\mu+\mu^{-1}}{2} & \\ & & 1 \end{matrix}\right) \left(\begin{matrix} 1 & & \\ & \frac{\nu+\nu^{-1}}{2} & \frac{\nu^{-1}-\nu}{2i} \\ & \frac{\nu-\nu^{-1}}{2i} & \frac{\nu+\nu^{-1}}{2} \end{matrix}\right)$$ where, in the case of real rotations, $\lambda = e^{i\alpha}$, $\mu = e^{i\beta}$, $\nu = e^{i\gamma}$ are the exponentials of the Euler angles. Direct computation gives us, $$R_{\lambda, \mu, \nu}\left\{\begin{matrix} x&\mapsto & \frac{(\lambda\mu\nu + \lambda\nu + \mu\nu - \nu + \lambda\mu - \lambda + \mu +1 )x + \lambda\mu\nu + \lambda\nu + \mu\nu - \nu - \lambda\mu + \lambda - \mu -1}{(\lambda\mu\nu + \lambda\nu - \mu\nu + \nu + \lambda\mu - \lambda - \mu -1)x + \lambda\mu\nu + \lambda\nu - \mu\nu + \nu - \lambda\mu + \lambda + \mu +1} = r_{\lambda,\mu,\nu}(x)\\ y&\mapsto & \frac{(\lambda\mu\nu + \lambda\nu + \mu\nu - \nu + \lambda\mu - \lambda + \mu +1 )y + \lambda\mu\nu + \lambda\nu + \mu\nu - \nu - \lambda\mu + \lambda - \mu -1}{(\lambda\mu\nu + \lambda\nu - \mu\nu + \nu + \lambda\mu - \lambda - \mu -1)y + \lambda\mu\nu + \lambda\nu - \mu\nu + \nu - \lambda\mu + \lambda + \mu +1} = r_{\lambda,\mu,\nu}(y)\end{matrix}\right. $$ and then $R_{\lambda,\mu,\nu}$ induces the same projective transformation $r_{\lambda,\mu,\nu}$ for $x$ and $y$.
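The coordinate formulas above lend themselves to a symbolic check. The following sketch (sympy) verifies that the Segre parametrization lands on the quadric $t_0^2+t_1^2+t_2^2-t_3^2=0$, and that the symmetric coordinates \eqref{SymCoor1} lie on the unit sphere and are recovered by $x = (x_0+ix_1)/(1-x_2)$.

```python
import sympy as sp

u0, u1, v0, v1, x, y = sp.symbols('u0 u1 v0 v1 x y')

# Segre parametrization of the quadric S_2.
t0 = u0*v1 + u1*v0
t1 = u1*v1 - u0*v0
t2 = sp.I*(u1*v1 + u0*v0)
t3 = u0*v1 - u1*v0
assert sp.expand(t0**2 + t1**2 + t2**2 - t3**2) == 0

# Darboux symmetric coordinates (SymCoor1).
x0 = (1 - x*y)/(x - y)
x1 = sp.I*(1 + x*y)/(x - y)
x2 = (x + y)/(x - y)
# They lie on the unit sphere ...
assert sp.simplify(x0**2 + x1**2 + x2**2 - 1) == 0
# ... and x is recovered from them.
assert sp.simplify((x0 + sp.I*x1)/(1 - x2) - x) == 0
```

These identities are exactly the computations used implicitly in the text when passing between the sphere and the symmetric coordinates.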
Hence, $$SO(3) \subseteq PGL(1,\mathcal C) \subset Aut(S_2).$$ In particular, we have the following formulae for rotations around the Euclidean axes: \begin{align}\label{:EqRot1} \left(\begin{matrix} 1 & & \\ & \frac{\lambda+\lambda^{-1}}{2} & \frac{\lambda^{-1}-\lambda}{2i}\\ & \frac{\lambda-\lambda^{-1}}{2i} & \frac{\lambda+\lambda^{-1}}{2} \end{matrix}\right) &\colon x \mapsto \frac{(\lambda+1)x + (\lambda -1)}{(\lambda-1) x + (\lambda + 1)} \\\label{:EqRot2} \left(\begin{matrix} \frac{\lambda+\lambda^{-1}}{2} & \frac{\lambda^{-1}-\lambda}{2i} & \\ \frac{\lambda-\lambda^{-1}}{2i} & \frac{\lambda+\lambda^{-1}}{2} & \\ & & 1 \end{matrix}\right) &\colon x \mapsto \lambda x \\\label{:EqRot3} \left(\begin{matrix} \frac{\lambda+\lambda^{-1}}{2} & & \frac{\lambda^{-1}-\lambda}{2i} \\ & 1 & \\ \frac{\lambda-\lambda^{-1}}{2i} & & \frac{\lambda+\lambda^{-1}}{2} \end{matrix}\right) &\colon x \mapsto \frac{i(\lambda+1)x+(\lambda-1)}{(1-\lambda)x+i(\lambda+1)} \end{align} And the following formulae for the induced Lie algebra morphism -- they are computed by differentiation of the previous formulae at $\lambda = 1 + i\varepsilon$ --. Here the Lie algebra $pgl(1,\mathcal C)$ is identified with $sl(2,\mathcal C)$: \begin{align*} \left(\begin{matrix} & 1 & \\ -1 & & \\ & & 0 \end{matrix}\right)&\mapsto\left(\begin{matrix} \frac{i}{2} & \\ & \frac{-i}{2}\end{matrix}\right) \\ \left(\begin{matrix} & & 1 \\ & 0 & \\ -1 & & \end{matrix}\right)&\mapsto\left(\begin{matrix} & \frac{1}{2} \\ -\frac{1}{2} & \end{matrix}\right) \\ \left(\begin{matrix} 0 & & \\ & & 1 \\ & -1 & \end{matrix}\right)&\mapsto\left(\begin{matrix} & -\frac{i}{2} \\ - \frac{i}{2} \end{matrix}\right) \end{align*} Conversely, a projective transformation $$x \mapsto \frac{u_{11} x + u_{12}}{u_{21}x+u_{22}}; \quad y \mapsto \frac{u_{11}y + u_{12}}{u_{21}y+u_{22}},$$ induces a linear transformation in the affine coordinates $x_0,x_1,x_2$ (see \cite{Darboux} p. 34).
$SO(3,\mathcal C)$ is precisely the group of automorphisms of $S_2$ that are linear in those coordinates. We have proven the following proposition, which is due to Darboux. \begin{proposition} The special orthogonal group $SO(3,\mathcal C)$ over an algebraically closed field is isomorphic to the projective linear group $PGL(1,\mathcal C)$. The isomorphism is given by formulae \eqref{:EqRot1}, \eqref{:EqRot2}, \eqref{:EqRot3}. \end{proposition} \subsection{Flag Equation} The flag variety of $SO(3,\mathcal C)$ is a projective line. Any of the Darboux symmetric coordinates, $$x \colon S_2 \to P_1$$ gives us a realization of the action of $SO(3)$ on $P_1$. By substituting the equation \eqref{EqSO3} in the identities \eqref{SymCoor1}, \eqref{SymCoor2} we deduce the Riccati differential equation satisfied by this symmetric coordinate, which is the flag equation of equation \eqref{EqSO3}: \begin{equation}\label{RicSO3} \dot x = \frac{-b-ic}{2}-iax+\frac{-b +ic}{2}x^2. \end{equation} In \cite{Darboux}, Darboux reduces the integration of the equation \eqref{EqSO3} to finding two different particular solutions of the Riccati equation \eqref{RicSO3}. By application of our generalization of Liouville's theorem we obtain a stronger result. \index{theorem!Darboux on rigid movements} \begin{theorem}[Darboux]\label{C4THE4.3.8} The Galois extension of the equation \eqref{EqSO3} is a Liouvillian extension of $\mathcal K$ if and only if the Riccati equation \eqref{RicSO3} has an algebraic solution. \end{theorem} \begin{proof} It is a particular case of Theorem \ref{C4THE4.3.2}. \end{proof}
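Darboux's reduction can also be illustrated numerically. The following sketch uses arbitrarily chosen constant coefficients $a$, $b$, $c$ (the text allows arbitrary $a,b,c\in\mathcal K$): it integrates \eqref{EqSO3} and \eqref{RicSO3} side by side and checks that the symmetric coordinate of the orthogonal solution agrees with the Riccati solution.

```python
import numpy as np

# Arbitrarily chosen constant coefficients (illustrative only).
a, b, c = 0.3, 0.5, 0.2
M = np.array([[0.0,   a,   b],
              [ -a, 0.0,   c],
              [ -b,  -c, 0.0]])

def rk4(f, x, t_end, h=1e-3):
    # classical fourth-order Runge-Kutta integrator
    t = 0.0
    while t < t_end - 1e-12:
        k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
        x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

# integrate the orthogonal system (EqSO3) from a point on the unit sphere
X = rk4(lambda X: M @ X, np.array([1.0, 0.0, 0.0]), 1.0)
# integrate the Riccati equation (RicSO3) from the matching value x(0) = 1
ric = lambda x: (-b - 1j*c)/2 - 1j*a*x + (-b + 1j*c)/2 * x**2
x_ric = rk4(ric, 1.0 + 0.0j, 1.0)
# symmetric coordinate of the orthogonal solution
x_lin = (X[0] + 1j*X[1]) / (1 - X[2])
assert abs(x_ric - x_lin) < 1e-6
```

The agreement of the two integrations is precisely the content of the substitution that produces \eqref{RicSO3} from \eqref{EqSO3}.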
\section{Introduction} The design and fabrication of textured superhydrophobic surfaces have received much attention in recent years. If the recessed regions of the texture are filled with gas (the Cassie state), roughness can produce remarkable liquid mobility, dramatically lowering the ability of drops to stick~\cite{quere.d:2008}. These surfaces are known to be self-cleaning and show low adhesive forces. In addition to the self-cleaning effect, they also exhibit drag reduction for fluid flow. Thus, they are of importance in the context of transport phenomena and fluid dynamics as well~\cite{bocquet2007,vinogradova.oi:2011,rothstein.jp:2010}. Many sea animals, e.g. sharks and other fish, are known to possess superhydrophobic skin~\cite{bhushan.b:2011}. Also, many artificial textures have been designed to increase drag reduction efficiency~\cite{vinogradova.oi:2012}. This drag reduction is associated with liquid slippage past solid surfaces. This slippage occurs at smooth hydrophobic surfaces and can be described by the boundary condition~\cite{vinogradova.oi:1999,bocquet2007,lauga2005}, $u_{\rm slip} = b \,\partial u / \partial z$, where $u_{\rm slip}$ is the (tangential) slip velocity at the wall, $\partial u / \partial z$ the local shear rate, and $b$ the slip length. A mechanism for hydrophobic slippage involves a lubricating gas layer of thickness $\delta$ with viscosity $\mu_g$ much smaller than that of the liquid $\mu$~\cite{vinogradova.oi:1995a}, so that $b \simeq \delta (\mu/\mu_g - 1) \simeq 50 \delta$~\cite{vinogradova.oi:1995a,andrienko.d:2003}. However, at smooth flat hydrophobic surfaces $\delta$ is small, so that $b$ cannot exceed a few tens of nm~\cite{vinogradova.oi:2003,vinogradova.oi:2009,charlaix.e:2005,joly.l:2006}. In the case of superhydrophobic surfaces, the situation can change dramatically, and slip lengths up to tens of $\mu$m may be obtained over a thick gas layer stabilized with a rough texture~\cite{choi.ch:2006,joseph.p:2006}.
To quantify the flow past heterogeneous surfaces it is convenient to apply the concept of an effective slip boundary condition at an imaginary smooth homogeneous, but generally anisotropic, surface \cite{vinogradova.oi:2011,Kamrin_etal:2010}. Such an effective condition mimics the actual one along the true heterogeneous surface, and fully characterizes the real flow. The quantitative understanding of the effective slip length of a superhydrophobic surface, $\mathbf{b}_{\mathrm{eff}}$, is still challenging, since the composite nature of the texture implies that, in addition to the liquid-gas areas, regions of lower local slip (or no slip) are in direct contact with the liquid. For an anisotropic texture, the effective slip generally depends on the direction of the flow and is a tensor, $\mathbf{b}_{\mathrm{eff}}\equiv \{b_{ij}^{\mathrm{eff}}\}$, represented by a symmetric, positive definite $2\times 2$ matrix~\cite{Bazant08} \begin{equation} \mathbf{b}_{\mathrm{eff}}=\mathbf{S}_{\theta }\left( \begin{array}{cc} b_{\mathrm{eff}}^{\parallel } & 0 \\ 0 & b_{\mathrm{eff}}^{\perp } \end{array} \right) \mathbf{S}_{-\theta }, \label{beff_def1} \end{equation} diagonalized by a rotation \begin{equation*} \mathbf{S}_{\theta }=\left( \begin{array}{cc} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{array} \right) . \end{equation*} Eq.~(\ref{beff_def1}) allows us to calculate an effective slip in any direction given by an angle $\theta $, provided the two eigenvalues of the slip-length tensor, $b_{\mathrm{eff}}^{\parallel }$ ($\theta =0$) and $b_{\mathrm{eff}}^{\perp }$ ($\theta =\pi /2$), are known. The concept of an effective slip-length tensor is general and can be applied for an arbitrary channel thickness~\cite{harting.j:2012}, being a global characteristic of a channel~\cite{vinogradova.oi:2011}, so that the eigenvalues normally depend not only on the parameters of the heterogeneous surfaces, but also on the channel thickness. 
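The rotation in Eq.~(\ref{beff_def1}) is straightforward to evaluate numerically; the short sketch below (our own illustration, not part of the paper's code) builds $\mathbf{b}_{\mathrm{eff}}$ for arbitrary eigenvalues and flow angle:

```python
import numpy as np

def beff_tensor(b_par, b_perp, theta):
    """Effective slip-length tensor S_theta . diag(b_par, b_perp) . S_{-theta}."""
    c, s = np.cos(theta), np.sin(theta)
    S = np.array([[c, s], [-s, c]])             # rotation S_theta
    return S @ np.diag([b_par, b_perp]) @ S.T   # S.T equals S_{-theta}

# The diagonal entry b[0, 0] is the effective slip for flow at angle theta;
# the off-diagonal entries encode the transverse flow component.
b = beff_tensor(2.0, 1.0, np.pi / 4)
```

At $\theta=\pi/4$ the forward slip is the mean of the two eigenvalues, while at $\theta=0$ and $\theta=\pi/2$ the tensor is diagonal with eigenvalues $b_{\mathrm{eff}}^{\parallel}$ and $b_{\mathrm{eff}}^{\perp}$.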
However, for a thick (compared to the texture period, $L$) channel, which we are interested in here, they become characteristics of the heterogeneous interface alone. In the case of an isolated anisotropic surface (or in the thick channel limit) with a scalar local slip $b(y)$, varying in only one direction, the transverse component of the slip-length tensor was proven to be equal to half of the longitudinal one computed with twice larger local slip, $2b(y)$~\cite{asmolov:2012} \begin{equation} b_{\mathrm{eff}}^{\bot }\left[ b\left( y\right) /L\right] =\frac{b_{\mathrm{eff}}^{\parallel }\left[ 2b\left( y\right) /L\right] }{2}. \label{aff} \end{equation} A remarkable corollary of this relation is that the flow along any direction of the one-dimensional surface can be easily determined, once the longitudinal component of the effective slip tensor is found from the known spatially nonuniform scalar slip. One-dimensional superhydrophobic surfaces are very important for a class of phenomena which involves ``transverse'' hydrodynamic couplings, where an applied pressure gradient or shear rate in one direction generates flow in a different direction, with a nonzero perpendicular component. This can be used to mix adjacent streams, control the dispersion of plugs, and position streams within the cross section of the channel~\cite{stroock2002b}. Such grooved surfaces can be easily prepared by modern lithographic methods~\cite{vinogradova.oi:2012}. \begin{figure} \begin{center} \includegraphics [width=6.5 cm]{asmolov_fig1.eps} \end{center} \caption{(Color online) Sketch of the SH surface with a cosine relief, and its equivalent representation in terms of flow boundary conditions. } \label{fig:geometry} \end{figure} Most of the prior work focussed on a flat, periodic, striped superhydrophobic surface, which corresponds to patterns of rectangular grooves. 
The flow past such stripes was tackled theoretically~\cite{lauga.e:2003,belyaev.av:2010a,feuillebois.f:2009,vinogradova.oi:2011,feuillebois.f:2010b,ng:2009}, and several numerical approaches have also been used, either at the molecular scale, using molecular dynamics~\cite{priezjev.n:2011}, or at larger mesoscopic scales, using finite element methods~\cite{priezjev.nv:2005,cottin.c:2004}, lattice Boltzmann~\cite{harting.j:2012} and dissipative particle dynamics~\cite{zhou.j:2012} simulations. For a pattern composed of no-slip ($b=0$) and perfect-slip ($b=\infty$) stripes, the expression for the eigenvalues of the effective slip-length tensor takes its maximum possible value and reads~\cite{philip.jr:1972,lauga.e:2003} \begin{equation} b_{\mathrm{ideal}}^{\parallel }=2b_{\mathrm{ideal}}^{\perp }=\frac{L}{\pi }\ln \left[ \sec \left( \frac{\pi \left( 1-\phi \right) }{2}\right) \right] , \label{Phil72} \end{equation} where $\phi $ is the fraction of the no-slip interface. In the limit of vanishing solid fraction, it therefore predicts $b_{\mathrm{ideal}}^{\parallel}$ and $b_{\mathrm{ideal}}^{\perp }$ to depend only logarithmically on $\phi $ and to scale as $-L\ln \phi $. At a qualitative level, this result means that the effective slip lengths essentially saturate at a value fixed by the period of the roughness. In the case of stripes, the perturbation of the piecewise-constant local slip has a step-like jump at the heterogeneity boundary, which leads to a singularity in both the pressure and the velocity gradient~\cite{asmolov:2012}, introducing an additional mechanism for dissipation. It is natural to assume that an anisotropic one-dimensional texture with a continuous local slip could potentially lead to a larger effective tensorial slip. 
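Philip's result, Eq.~(\ref{Phil72}), is easy to evaluate; the sketch below (illustrative only, written with the sign convention that gives a positive slip length) also checks the logarithmic $-L\ln\phi$ scaling at small solid fraction:

```python
import numpy as np

def b_ideal_parallel(L, phi):
    """Longitudinal effective slip for alternating no-slip/perfect-slip
    stripes (Philip 1972); phi is the no-slip (solid) fraction."""
    # sec(x) = 1/cos(x), so ln(sec(x)) = -ln(cos(x))
    return (L / np.pi) * np.log(1.0 / np.cos(np.pi * (1.0 - phi) / 2.0))

# Small-phi asymptote: b ~ -(L/pi) * ln(pi*phi/2), i.e. logarithmic in phi.
exact = b_ideal_parallel(1.0, 0.01)
approx = -(1.0 / np.pi) * np.log(np.pi * 0.01 / 2.0)
```

A fully solid wall ($\phi=1$) gives zero slip, and the slip grows only logarithmically as the solid fraction shrinks, in line with the saturation argument above.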
In this paper we address the issue of the effective slip of flat surfaces with cosine variation in the local slip length, which corresponds to modulated hydrophobic grooved surfaces with a trapped gas layer (the Cassie state) as shown in Fig.~\ref{fig:geometry}. Flows over hydrophilic surfaces (the Wenzel state) with cosine surface relief of small amplitude have been studied by a number of authors~\cite{stroock2002b,hocking.lm:1976,wang2004,priezjev.nv:2006,niavarani.a:2010}. Previous studies of similar grooves in the Cassie state have investigated only small variations in local slip length~\cite{hocking.lm:1976,hendy2005effect}. We are unaware of any previous work that has studied the most interesting case of finite and large variations in the amplitude of a local cosine slip. \section{Theory} Consider a shear flow over a textured flat slipping plate, characterized by a slip length $b(y)$, spatially varying in one direction, and the texture varying over a period $L$ (as shown in Fig.~\ref{fig:geometry}). We use a rectangular coordinate system $(x,y,z)$ with origin at the wall. The $z-$ axis is perpendicular to the plate. Our analysis is based on the limit of a thick channel or a single interface, so that the velocity profile sufficiently far above the surface, at a height large compared to $L$, may be considered as a linear shear flow. All variables are non-dimensionalized using the texture period $L$ as the characteristic length, the shear rate of the undisturbed flow $G$ and the fluid viscosity $\mu .$ The dimensionless fluid velocity is sought in the form% \begin{equation*} \mathbf{v}=\mathbf{U}+\mathbf{u}_{\rm slip}+\mathbf{u}_{1}\left( x,y,z\right) , \end{equation*}% where $\mathbf{U}=z\mathbf{e}_{l},\ l=x,y$ is the undisturbed linear shear flow, and $\mathbf{e}_{l}\ $are the unit vectors parallel to the plate. 
The perturbation of the flow, which is caused by the presence of the texture, involves a constant slip velocity $\mathbf{u}_{\rm slip}=\left( u_{\rm slip},v_{\rm slip},0\right) $ and a varying part $\mathbf{u}_{1}=\left( u,v,w\right) $ of the velocity field. A periodic velocity $\mathbf{u}_{1}$ should decay at infinity and has zero average:% \begin{equation} \int_{0}^{1}\mathbf{u}_{1}dy=0. \label{av_u} \end{equation}% At a small Reynolds number $Re=GL^{2}/\nu ,$ $\mathbf{u}_{1}$ satisfies the dimensionless Stokes equations,% \begin{gather} \mathbf{\nabla }\cdot \mathbf{u}_{1}=0, \label{Se} \\ \mathbf{\nabla }p-\Delta \mathbf{u}_{1}=\mathbf{0}. \notag \end{gather} The boundary conditions at the wall and at infinity are defined in the usual way as% \begin{gather} z=0:\quad \mathbf{u}_{\rm slip}+\mathbf{u}_{1\tau }-\beta \left( y\right) \frac{% \partial \mathbf{u}_{1\tau }}{\partial z}=\beta \left( y\right) \mathbf{e}% _{l},\ \label{bcu} \\ w=0, \label{bcw0} \end{gather}% \begin{equation} z\rightarrow \infty :\quad \mathbf{u}_{1}=\mathbf{0,} \label{bci} \end{equation}% where $\mathbf{u}_{1\tau }=\left( u,v,0\right) $ is the velocity along the wall and $\beta =b/L$ is the normalized local slip length. The eigenvalues of the effective slip-length tensor can be obtained as the components of $% \mathbf{u}_{\rm slip}:$ \begin{equation} b_{\mathrm{eff}}^{\parallel }=Lu_{\rm slip},\quad b_{\mathrm{eff}}^{\perp }=Lv_{\rm slip}. \label{co_b} \end{equation} \section{Cosine slip length} In this Section we consider a 1D periodic texture with the local slip length% \begin{equation} \label{eq:beta} \beta =\beta _{0}+2\beta _{1}\cos \left( 2\pi y\right) . 
\end{equation} The coefficients should satisfy $\beta _{0}\geq 2\beta _{1}\geq 0$, in order to obey $\beta \left( y\right) \geq 0$ for any $y$. The disturbance velocity field is presented in terms of a Fourier series as \begin{equation} \mathbf{u}_{1}=\sum_{n=-\infty ,n\neq 0}^{\infty }\mathbf{u}^{\ast }\left( n,z\right) \exp \left( i2\pi ny\right) ,\quad \mathbf{u}^{\ast }=\left( u^{\ast },v^{\ast },w^{\ast }\right) . \label{fu} \end{equation} A general solution of the Stokes equations for the longitudinal flow, $\mathbf{U}=z\mathbf{e}_{x}$, decaying at infinity, reads \cite{asmolov:2012} \begin{equation} u^{\ast }=X_{n}\exp \left( -2\pi \left\vert n\right\vert z\right) , \label{1x} \end{equation} \begin{equation} v^{\ast }=w^{\ast }=0, \notag \end{equation} and that for the transverse flow, $\mathbf{U}=z\mathbf{e}_{y}$, is given by \begin{equation*} u^{\ast }=0, \end{equation*} \begin{equation} v^{\ast }=Y_{n}\exp \left( -2\pi \left\vert n\right\vert z\right) \left( 1-2\pi nz\right) , \label{1y} \end{equation} \begin{equation} w^{\ast }=-i2\pi nY_{n}z\exp \left( -2\pi \left\vert n\right\vert z\right) . \label{1z} \end{equation} The Fourier coefficients $X_{n}$ and $Y_{n}$ are determined from the Navier slip boundary condition (\ref{bcu}). \subsection{Longitudinal configuration} Since the local slip length is an even function of $y,$ the solution (\ref{fu}) is also an even function. This requires $X_{n}=X_{-n},$ so it is sufficient to evaluate $X_{n}$ for $n\geq 0.$ The Navier slip boundary condition (\ref{bcu}) can be written, following \cite{asmolov:2012}, in terms of Fourier coefficients as a linear system \begin{equation} u_{\rm slip}=\beta _{0}-2e_{1}X_{1}, \label{ap1} \end{equation} \begin{equation} d_{1}X_{1}+e_{2}X_{2}=\beta _{1}, \label{ap2} \end{equation} \begin{equation} n>1:\quad e_{n-1}X_{n-1}+d_{n}X_{n}+e_{n+1}X_{n+1}=0, \label{ap3} \end{equation} \begin{equation} d_{n}=1+2\pi n\beta _{0},\quad e_{n}=2\pi n\beta _{1}. 
\label{a4} \end{equation} The tridiagonal infinite linear system (\ref{ap1})-(\ref{ap3}) is solved numerically for the unknown $X_{n}$ by truncating the system. In the limit of large slip, $\beta _{0}>2\beta _{1}\gg 1,$ an asymptotic solution to (\ref{ap1})-(\ref{ap3}) can also be constructed. To leading order in $\beta _{0}^{-1}$, the first term in (\ref{a4}) can be neglected compared to the second one, and the system (\ref{ap2})-(\ref{ap3}) is rewritten for the new variables $t_{n}=2\pi nX_{n}$ as \begin{equation} t_{1}+\lambda t_{2}=\lambda , \label{ap4} \end{equation} \begin{equation} n>1:\quad \lambda t_{n-1}+t_{n}+\lambda t_{n+1}=0, \label{ap5} \end{equation} where $\lambda =\beta _{1}/\beta _{0}<1/2.$ The solution of the latter system is a geometric progression, $t_{n+1}=q^{n}t_{1},$ with \begin{equation*} q=\frac{-\lambda ^{-1}+\sqrt{\lambda ^{-2}-4}}{2}, \end{equation*} \begin{equation*} t_{1}=\frac{\lambda }{1+q\lambda }=\frac{\beta _{1}}{\beta _{0}+q\beta _{1}}. \end{equation*} Therefore, the final expression for the slip length, $b_{\mathrm{eff}}^{\parallel }=Lu_{\rm slip}$, in view of (\ref{ap1}), takes the following form at $\beta _{0}>2\beta _{1}\gg 1:$ \begin{equation} b_{\mathrm{eff}}^{\parallel }=\sqrt{b_{0}^{2}-4b_{1}^{2}}. \label{be_par} \end{equation} The asymptotic solution for the velocity field can also be derived from Eqs.~(\ref{fu}) and (\ref{1x}). The normal component of the velocity gradient can be written as \begin{equation} \frac{\partial u}{\partial z}=-2t_{1}\sum_{n=1}^{\infty }q^{n-1}\mathrm{Re}\left\{ \exp \left[ 2\pi n\left( iy-z\right) \right] \right\} . 
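The truncated system (\ref{ap1})-(\ref{ap3}) is small enough to solve directly; the sketch below (our own illustration, using a dense linear solver) computes the dimensionless longitudinal slip, which can be checked against the asymptotic result $\sqrt{b_0^2-4b_1^2}$:

```python
import numpy as np

def beff_longitudinal(beta0, beta1, N=400):
    """Dimensionless longitudinal effective slip b_eff/L from the truncated
    system (ap1)-(ap3), with d_n = 1 + 2*pi*n*beta0 and e_n = 2*pi*n*beta1."""
    n = np.arange(1, N + 1)
    d = 1.0 + 2.0 * np.pi * n * beta0
    e = 2.0 * np.pi * n * beta1
    A = np.diag(d)
    for k in range(N - 1):
        A[k, k + 1] = e[k + 1]   # coefficient e_{n+1} of X_{n+1}
        A[k + 1, k] = e[k]       # coefficient e_{n-1} of X_{n-1}
    rhs = np.zeros(N)
    rhs[0] = beta1               # first row: d_1 X_1 + e_2 X_2 = beta_1
    X = np.linalg.solve(A, rhs)
    return beta0 - 2.0 * e[0] * X[0]   # u_slip from Eq. (ap1)
```

For $\beta_0 \gg 1$ the result rapidly approaches the asymptotic form (\ref{be_par}), while for $\beta_1 = 0$ it reduces exactly to the uniform slip $\beta_0$.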
\label{dudz} \end{equation} The factor $2$ is due to the contribution of $n<0.$ The sum in (\ref{dudz}) is also a geometric progression, so that \begin{equation} \frac{\partial u}{\partial z}=-\frac{2t_{1}\exp \left( -2\pi z\right) \left[ \cos \left( 2\pi y\right) -q\exp \left( -2\pi z\right) \right] }{s}, \label{dul} \end{equation} \begin{equation*} s=1-2q\cos \left( 2\pi y\right) \exp \left( -2\pi z\right) +q^{2}\exp \left( -4\pi z\right) . \end{equation*} The last equation can be integrated over $z$ to give \begin{equation} u=-\frac{t_{1}}{2\pi q}\ln s. \label{ul} \end{equation} The asymptotic solutions (\ref{be_par}), (\ref{dul}) and (\ref{ul}) predict the numerical data well even at finite $b_{0}$ (see the next Section). \subsection{Transverse configuration} It was shown in \cite{asmolov:2012} that the velocity components for the transverse flow can be expressed in terms of the longitudinal one calculated for twice larger local slip, $u_{2}=u\left[ 2\beta \left( y\right) \right] :$ \begin{equation} v_{\rm slip}=\frac{u_{slip,2}}{2}, \label{v_slip} \end{equation} \begin{eqnarray} v &=&\frac{1}{2}\left( u_{2}+z\frac{\partial u_{2}}{\partial z}\right) , \label{double} \\ w &=&-\frac{z}{2}\frac{\partial u_{2}}{\partial y}. \end{eqnarray} Using (\ref{co_b}) and (\ref{v_slip}) we derive \begin{equation} b_{\mathrm{eff}}^{\perp }=b_{\mathrm{eff}}^{\parallel }=\sqrt{b_{0}^{2}-4b_{1}^{2}}. \label{be_per} \end{equation} Thus, the texture is isotropic at large $b_{0}$. One can demonstrate that this conclusion about the isotropy of the slip-length tensor is general and valid for any texture (see Appendix~\ref{A1}). The values $\lambda ,q,t_{1}$ remain the same for the doubled slip length since they depend only on the ratio $\beta _{1}/\beta _{0}$. 
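The closed-form perturbation velocity of Eq.~(\ref{ul}) is simple to evaluate; the sketch below (an illustration, valid under the large-slip assumption $\beta_0 > 2\beta_1 \gg 1$) exhibits both the periodicity in $y$ and the exponential decay away from the wall:

```python
import numpy as np

def asymptotic_u(y, z, beta0, beta1):
    """Leading-order longitudinal perturbation velocity u(y, z) of Eq. (ul),
    valid for beta0 > 2*beta1 >> 1; lengths are scaled by the period L."""
    lam = beta1 / beta0                              # lambda = beta_1/beta_0
    q = (-1.0 / lam + np.sqrt(lam**-2 - 4.0)) / 2.0  # common ratio of t_n
    t1 = lam / (1.0 + q * lam)
    s = (1.0 - 2.0 * q * np.cos(2.0 * np.pi * y) * np.exp(-2.0 * np.pi * z)
         + q**2 * np.exp(-4.0 * np.pi * z))
    return -t1 / (2.0 * np.pi * q) * np.log(s)
```

Since $|q|<1$ for $\lambda<1/2$, the quantity $s$ stays strictly positive, so the logarithm is always well defined.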
As a result, we obtain% \begin{equation*} u_{2}=u=-\frac{t_{1}}{2\pi }\ln s, \end{equation*}% \begin{eqnarray} v &=&-\frac{t_{1}}{4\pi q}\ln s \\ &-&\frac{zt_{1}\exp \left( -2\pi z\right) \left[ \cos \left( 2\pi y\right) -q\exp \left( -2\pi z\right) \right] }{s}, \notag \\ w &=&\frac{zt_{1}\exp \left( -2\pi z\right) \sin \left( 2\pi y\right) }{s}. \label{v} \end{eqnarray} \section{Simulation method} \label{sec:lbm} For the modelling of fluid flow in a system of two parallel plates, we employ the lattice Boltzmann (LB) method ~\cite{bib:succi-01}. Lattice Boltzmann methods are derived by a phase space discretization of the kinetic Boltzmann equation \begin{equation} \left[ \frac{\partial }{\partial t}+\mathbf{v}\cdot \nabla _{\mathbf{r}}% \right] f(\mathbf{r,v},t)=\mathbf{\Omega }, \label{eq:boltzmann} \end{equation}% which expresses the dynamics of the single particle probability density $f(% \mathbf{r},\mathbf{v},t)$. Therein, $\mathbf{r}$ is the position, $\mathbf{v} $ the velocity, and $t$ the time. The left-hand side models the propagation of particles in phase space, the collision operator $\mathbf{\Omega }$ on the right hand side accounts for particle interactions. Constructing the lattice Boltzmann equation, the time $t$, the position $% \mathbf{r}$, and the velocity $\mathbf{v}$ are discretized. This discrete variant of Eq.~(\ref{eq:boltzmann}) \begin{equation} \begin{array}{cc} f_{k}(\mathbf{r}+\mathbf{c}_{k},t+1)-f_{k}(\mathbf{r},t)=\Omega _{k}, & k=0,1,\dots ,B,% \end{array}% \end{equation}% describes the kinetics in discrete time- ($\Delta t$) and space-units ($% \Delta x$). We employ a widely used three-dimensional lattice with $B=18$ discrete non-zero velocities (D3Q19) which is chosen to carry sufficient symmetry to allow for a second order accurate solution of the Navier-Stokes equations. 
Here, for $\Omega $, we choose the Bhatnagar-Gross-Krook (BGK) collision operator~\cite{bib:bgk} \begin{equation} \Omega _{k}=-\frac{1}{\tau }\left( f_{k}(\mathbf{r},t)-f_{k}^{eq}(\mathbf{v}(\mathbf{r},t),\rho (\mathbf{r},t))\right) \mbox{ ,} \label{Omega} \end{equation} which assumes relaxation on a linear timescale $\tau $ towards a discretized local Maxwell-Boltzmann distribution $f_{k}^{eq}$. The kinematic viscosity $\nu =\frac{2\tau -1}{6}$ of the fluid is related to the relaxation time scale. In this study it is kept constant at $\tau =1.0$. Stochastic moments of $f$ can be related to physical properties of the modelled fluid. Here, conserved quantities, like the fluid density $\rho (\mathbf{r},t)=\rho _{0}\sum_{k}f_{k}(\mathbf{r},t)$ and momentum $\rho (\mathbf{r},t)\mathbf{u}(\mathbf{r},t)=\rho _{0}\sum_{k}c_{k}f_{k}(\mathbf{r},t)$, with $\rho _{0}$ being a reference density, are of special interest. Slip over hydrophobic surfaces is commonly modelled by the introduction of a phenomenological repulsive force~\cite{bib:jens-kunert-herrmann:2005,bib:jens-jari:2008,bib:zhu-tretheway-petzold-meinhart-2005,bib:benzi-etal-06,bib:zhang-kwok-04}. The magnitude of the interactions between different components and surfaces, as determined by the simulation parameters, allows one to specify arbitrary contact angles~\cite{benzi-etal-06b,bib:huang-thorne-schaap-sukop-2007,bib:jens-schmieschek:2010,bib:jens-kunert-herrmann:2005}. Other approaches include boundary conditions taking into account specular reflections~\cite{succi02,bib:tang-tao-he-2005,bib:sbragaglia-succi-2005} or diffuse scattering~\cite{bib:ansumaili-karlin-2002,bib:sofonea-sekerka-2005,bib:niu-shu-chew-2004}. The strategy applied in this work employs a second-order accurate on-site fixed velocity boundary condition to simulate wall slippage. 
Here, the velocity at the boundary is set proportional to the local stress imposed by the flow field as well as to a slip-length parameter adjusting the stress response. For the details of the implementation we refer the reader to~\cite{HechtHarting2010,ahmed.nk:2009}. Local slip lengths are calculated according to Eq.~(\ref{eq:beta}). Varying slip patterns are applied to the $x$-$y$-plane at $z=0$. Periodic boundary conditions are employed in the $x$- and $y$-directions, thus reducing the simulation domain to a pseudo-2D system. By exploiting the periodic boundaries, only a single period needs to be resolved. The Couette flow is driven by applying a constant velocity of $v=0.1$ (in lattice units) in the $x$-$y$-plane at $z=z_{\rm max}$. The resolution of the simulated system is given by the lattice constant \begin{equation} \Delta x = \frac{H}{\mathcal{N}}, \end{equation} where $\mathcal{N}$ is the number of discretization points used to resolve the height of the channel. As we consider thick channels, we choose a height-to-periodicity ratio of $H/L=10$. The number of timesteps required to reach a steady state depends on the channel height, the velocity of the flow as determined by the driving acceleration, as well as the fraction of slip and no-slip area at the surface. We find that in order to reduce the deviation from theoretically predicted values below 5 percent, a domain size of $96\times 1\times 960\Delta x^{3}$ is required. For this geometrical setup, with a shear rate of the order of $\dot{\gamma} = 1\cdot 10^{-4}$ (in lattice units), a simulation time of the order of 10 million timesteps is needed to reach the steady state. We compare the theoretical predictions with measurements of $b_{\mathrm{eff}}$ obtained from velocity profiles. The linear shear velocity profile is fitted with a linear function in the region far from the surface pattern. From this fit the effective slip lengths and the shear rate are determined. 
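The extraction step at the end of this procedure amounts to a linear fit of the far-field profile $u(z)=G\,(z+b_{\mathrm{eff}})$; a minimal sketch, with synthetic data standing in for the simulated profile, is:

```python
import numpy as np

def effective_slip_from_profile(z, u):
    """Fit the far-field Couette profile u(z) = G*(z + b_eff) and return
    the shear rate G and the effective slip length b_eff."""
    G, intercept = np.polyfit(z, u, 1)   # slope = G, intercept = G*b_eff
    return G, intercept / G

# Synthetic far-field profile with G = 1e-4 and b_eff = 2.5 (lattice units):
z = np.linspace(50.0, 100.0, 51)
u = 1e-4 * (z + 2.5)
G, beff = effective_slip_from_profile(z, u)
```

In practice only the region far above the texture is included in the fit, where the disturbance flow has decayed and the profile is genuinely linear.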
\section{Results and discussion} In this section, we present the LB simulation results and compare them with predictions of the continuous theory. \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.45]{asmolov_fig2.eps} \end{center} \caption{Eigenvalues of the effective slip-length tensor as a function of $b_1$ simulated at fixed $b_{0}=1$ (symbols). The longitudinal effective slip length, $b_{\mathrm{eff}% }^{\parallel }$, is shown by circles, and the transverse effective slip, $b_{\mathrm{eff}}^{\perp }$ is presented by diamonds. Solid and dashed curves denote the corresponding theoretical values obtained by numerical Fourier-series solutions. The asymptotic (isotropic) solution, Eq. (\protect\ref{be_per}), expected in the limit $b_{0}\gg L$ is shown by the dash-dotted line.} \label{be} \end{figure} We start with varying the amplitude of cosine perturbations of the slip length, $b_{1}$, at fixed $b_{0}=1$. Fig. \ref{be} shows simulation data for $b_{\mathrm{eff}% }^{\parallel }\ $ and $b_{\mathrm{eff}}^{\perp }$ as a function of $b_{1}/b_0$. These results show that the largest possible value of $b_{\mathrm{eff}}/b_0$ is attained when $b_1=0$, i.e. for a smooth hydrophobic surface with $b(y)=b_0$. In this situation the effective slip is (obviously) isotropic and equal to the area-averaged slip $b_0$. When increasing the amplitude $b_1$, there is a small anisotropy of the flow, and the eigenvalues of the slip-length tensor decrease. Therefore, in the presence of a cosine variation in slip length the effective slip always becomes smaller than average. This conclusion is consistent with earlier observations made for different textures~\cite{alexeyev:96,vinogradova.oi:2011}. To obtain theoretical values, the linear system, Eqs.~(\ref{ap1})-(\ref{ap3}) has been solved numerically by using the IMSL-DLSLTR routine. 
We see that the agreement is excellent for all $b_1/b_0$, indicating that our asymptotic theory is extremely accurate, and confirming the relation (\ref{v_slip}) between the longitudinal and transverse slip lengths. Also included in Fig.~\ref{be} is the asymptotic formula (\ref{be_per}) obtained in the limit of large $b_0$. Note that this formula is surprisingly accurate even in the case of finite $b_{0},$ except for the texture with $b_{1}/b_{0}=1/2$ (no-slip point at $y=1/2$). \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.45]{asmolov_fig3a.eps} \includegraphics[scale=0.45]{asmolov_fig3b.eps} \end{center} \caption{$\left( a\right) $ Effective slip lengths computed for $b_{1}/b_{0}=0.5$, which correspond to a texture with no-slip lines, vs. average slip. The notations are the same as in Fig. \protect\ref{be}. Dash-dotted and dotted lines show the effective lengths for longitudinal and transverse stripes as a function of $1/(3 \phi)$ calculated with Eq.(\protect\ref{Phil72}). $\left( b\right) $ The cosine profile of the local slip length with $b_{0}/L=5, \ b_{1}/b_{0}=0.5$ (solid curve) and the stripe profile with $\protect\phi =0.06 $ (dashed line) with the same longitudinal effective slip lengths.} \label{noslip} \end{figure} Fig.~\ref{noslip} $\left( a\right) $ shows the simulation data for effective slip lengths as a function of average slip, $b_0/L$, for a texture with the no-slip point $\left( b_{1}/b_{0}=1/2\right)$. Also included are theoretical (Fourier series) curves. The fits are quite good for $b_{0}/L$ up to 10, but at larger average slip there is some discrepancy. The simulation results for $b_{\mathrm{eff}}^{\parallel }$ and $b_{\mathrm{eff}}^{\perp }$ give smaller values than predicted by the theory. A possible explanation for this discrepancy is that the major contribution to the shear stress at large $% b_{0}/L$ and $b_{1}/b_{0}=1/2$ comes from a very small region near the no-slip point (as we discuss below). 
The discretization error of the LB simulation becomes maximal in this region, and is particularly pronounced for the velocity gradients of systems with large effective slip. While we observe deviations around the no-slip extremal value, the curves converge fast when stepping away from it, and the excellent agreement of the measured effective slip suggests that the influence of discretization errors on the mean flow is negligible at the resolution used. The asymptotic formula, Eq.~(\ref{be_per}), predicts $b_{\mathrm{eff}}=0$. This likely indicates that in this situation it is necessary to construct the second-order term of the expansions for the eigenvalues. At relatively large $b_{0}/L$, the effective slip lengths can be well fitted as \begin{eqnarray} b_{\mathrm{eff}}^{\parallel }/L &\simeq&0.1871+0.3175\ln \left( b_{0}/L+1.166\right) , \label{fit} \\ b_{\mathrm{eff}}^{\perp }/L &\simeq&0.2036+0.1588\ln \left( b_{0}/L+0.583\right) . \notag \end{eqnarray} In other words, they scale as $\ln (b_{0}/L)$ at large $b_{0}/L.$ Since the effective slip lengths for a texture decorated with perfect-slip stripes, Eq.~(\ref{Phil72}), also show a logarithmic growth (with $\phi $), the theoretical curve for stripes is included in Fig.~\ref{noslip}$\left(a\right)$ in order to compare these two one-dimensional anisotropic textures. It can be seen that in the limit of large average slip the asymptotic curves for the longitudinal effective slip of stripes and of the cosine texture nearly coincide. This means that both textures generate the same forward flow in the longitudinal direction. 
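The fits of Eq.~(\ref{fit}) can be used directly to gauge the anisotropy of the cosine texture at large average slip; the sketch below simply re-evaluates the fitted coefficients from the text and checks that the ratio $b_{\mathrm{eff}}^{\parallel}/b_{\mathrm{eff}}^{\perp}$ stays below the value of $2$ reached by ideal stripes:

```python
import numpy as np

def beff_par_fit(b0_over_L):
    """Fitted longitudinal effective slip, first line of Eq. (fit)."""
    return 0.1871 + 0.3175 * np.log(b0_over_L + 1.166)

def beff_perp_fit(b0_over_L):
    """Fitted transverse effective slip, second line of Eq. (fit)."""
    return 0.2036 + 0.1588 * np.log(b0_over_L + 0.583)

# Anisotropy of the cosine texture with a no-slip point at large b0/L:
ratio = beff_par_fit(20.0) / beff_perp_fit(20.0)
```

The ratio lies between $1$ (isotropic) and $2$ (ideal stripes), which anticipates the conclusion below that cosine textures are less anisotropic than striped ones.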
Simple estimates suggest $b_{\mathrm{eff}}^{\parallel }\left( \beta _{0}\right) \simeq b_{\mathrm{ideal}}^{\parallel }\left[ 1/\left( 3\beta _{0}\right) \right] .$ Perhaps the most interesting and important aspect of this observation is that, from the point of view of the longitudinal effective slip, the ``wide'' cosine texture with $\beta _{0}=5$ taken for our numerical example is equivalent to a pattern of stripes with the extremely low fraction of no-slip regions, $\phi =0.06$ (see Fig.~\ref{noslip}$\left( b\right) $). These results may guide the design of superhydrophobic surfaces for large forward flows in microfluidic devices. Note, however, that in the situation when the longitudinal slip for both textures is similar, the cosine texture shows a larger transverse effective slip, as seen in Fig.~\ref{noslip}$\left( a\right) $. This means that textures with a cosine variation in the local slip length are less anisotropic than stripes, and $b_{\mathrm{eff}}^{\parallel }/b_{\mathrm{eff}}^{\perp }<b_{\mathrm{ideal}}^{\parallel }/b_{\mathrm{ideal}}^{\perp }=2$. Therefore, cosine textures are less suitable for the generation of robust transverse flows than a sharp-edged stripe geometry. \begin{figure}[tbp] \includegraphics[scale=0.45]{asmolov_fig4a.eps} \includegraphics[scale=0.45]{asmolov_fig4b.eps} \caption{$\left( a\right) $ The velocities and $\left( b\right) \ $the normal velocity gradients along the wall for the textures with $b_{1}/b_{0}=1/3,\ b_{0}/L=0.2$ (solid curve, diamonds)$,\ b_{0}/L=1$ (dashed curve, circles)$,\ b_{0}/L=5$ (dash-dotted line, crosses). 
Dotted curves show predictions of the asymptotic formulae, Eqs.~(\protect\ref{ul}) and (\protect\ref{dul}).} \label{u3} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.45]{asmolov_fig5a.eps} \end{center} \caption{$\left( a\right) $ The velocities and $\left( b\right) \ $the normal velocity gradients along the wall for the no-slip textures ($b_{1}/b_{0}=1/2$). Other notations are the same as in Fig.~\protect\ref{u3}.} \label{u2} \end{figure} \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.45]{asmolov_fig6.eps} \end{center} \caption{The normal velocity gradients for the no-slip textures with large amplitudes of slip-length variation, $b_{0}/L=1$ (dashed curve)$,\ b_{0}/L=5$ (dash-dotted curve),$\ b_{0}/L=20$ (dotted curve), as functions of the stretched coordinates.} \label{duc} \end{figure} The flow direction is associated with hydrodynamic pressures in the film, which are related to the heterogeneous slippage at the wall. Fig.~\ref{u3} shows the profiles of the velocity and of the normal velocity gradient along the wall for different $\beta _{0}$ and $\beta _{1}/\beta _{0}<1/2$. The velocity dependence $u\left( x,y,0\right) $ is smooth, and $\frac{\partial u}{\partial z}\left( x,y,0\right) $ is finite for any $\beta _{0}$ and $\beta _{1}$, unlike for the striped textures with piecewise-constant $\beta $~\cite{asmolov:2012}. The asymptotic predictions (\ref{ul}) and (\ref{dul}) are in good agreement with the numerical results and simulation data. Similar theoretical and simulation results, but obtained for a texture with a no-slip point, $\beta _{1}/\beta _{0}=1/2$, are shown in Fig.~\ref{u2}. In this situation we find that $\partial u/\partial z\left( 1/2\right) =2\pi \beta_{0}$ for all $\beta_{0}$. 
Finally, we would like to stress that a very small region near the no-slip point gives the main contribution to the shear stress at large $b_{0}/L.$ For the major portion of the texture far from this region, we have $\partial u/\partial z\simeq -1,$ so that the total shear stress is zero, and this part of the texture is shear-free. Since the maximum values of the normal velocity gradients grow like $b_{0}/L$, one can expect that the length scale of this small region is $L^{2}/b_{0}\ll L,$ or, equivalently, that the curvature radius at the no-slip point is $r=\left( d^{2}b/dy^{2}\right) ^{-1}=L^{2}/\left( 4\pi ^{2}b_{0}\right) \ll L.$ The validity of this assumption is confirmed in Fig.~\ref{duc}, where the gradients for several values of $b_{0}/L$ are presented versus the stretched coordinates, $b_{0}\left( y-0.5\right) /L.$ The curves are very close for $b_{0}/L\geq 5$. Therefore, the normal gradient distribution and the dimensionless effective slip length in this case are controlled by the ratio $r/L$ only. These conclusions can be extended to any $b\left( y\right) $ characterized by a small radius $r=\left(d^{2}b/dy^{2}\right) ^{-1}\ll L$ near the no-slip point and by a large slope, of the order of $L/r\gg 1$ or larger, far from it. \section{Conclusion} We have investigated shear flow past a superhydrophobic surface with a cosine variation of the local slip length, and have evaluated the resulting effective slippage and the flow velocity. We have found that the cosine texture can provide a very large effective (forward) slip, but generates a smaller transverse velocity relative to the main (forward) flow than the discrete stripes considered earlier. Our approximate formulae for the longitudinal and transverse effective slip lengths are validated by means of lattice Boltzmann simulations. Excellent quantitative agreement is found for the effective slippage as well as for the flow field. Slight deviations of the observed velocity gradient close to the no-slip extremal value can be explained by discretization errors. \begin{acknowledgments} This research was partly supported by the Russian Academy of Science (RAS) through its priority program `Assembly and Investigation of Macromolecular Structures of New Generations', by the Netherlands Organization for Scientific Research (NWO/STW VIDI), and by the German Science Foundation (DFG) through its priority program `Micro- and nanofluidics'. We acknowledge computing resources from the J\"ulich Supercomputing Center and the Scientific Supercomputing Center Karlsruhe. \end{acknowledgments}
\section{INTRODUCTION} Discovered on 1996 August 7 \citep{els96}, Comet 133P/Elst-Pizarro (also designated 7968 Elst-Pizarro; hereafter 133P) orbits in the main asteroid belt ($a=3.156$~AU, $e=0.165$, $i=1.39^{\circ}$). It has a Tisserand parameter (with respect to Jupiter) of $T_J=3.184$, while classical comets have $T_J<3$ \citep{vag73,kre80}. In 2005, two more objects displaying cometary activity that are likewise dynamically indistinguishable from main-belt asteroids were identified: P/2005 U1 (Read) \citep{rea05} and 176P/LINEAR (also known as asteroid 118401 (1999 RE$_{70}$)) \citep{hsi06c}. Their discoveries led to the designation of a new cometary class --- the main-belt comets (MBCs) --- among which 133P is also classified \citep{hsi06b}. A fourth MBC, P/2008 R1 (Garradd), has also since been discovered \citep{gar08,jew09}. Despite the initial excitement over the discovery of the cometary nature of 133P in 1996, no physical studies or monitoring reports were published in the refereed literature until the comet's activity was re-observed in 2002 \citep{hsi04,low05}. Consequently, little is known about 133P's active behaviour in that intervening period. Since knowledge of the timing of active episodes can constrain hypotheses concerning the source of the activity, we report the results of our own monitoring campaign, which began following 133P's active outburst in 2002 and which culminated in observations of renewed activity in 133P in 2007. \section{OBSERVATIONS\label{observations}} Since 133P's 2002 active episode, we have monitored the comet for evidence of recurrent dust emission using the University of Hawaii (UH) 2.2-m telescope and the 10-m Keck I telescope, both on Mauna Kea, the 1.3-m telescope operated by the Small and Moderate Aperture Research Telescope System (SMARTS) Consortium at Cerro Tololo, and the 3.58-m New Technology Telescope (NTT) operated by the European Southern Observatory (ESO) at La Silla. 
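The dynamical classification quoted above can be verified directly from the orbital elements. The following is an illustrative sketch (not code from the paper), assuming Jupiter's semimajor axis $a_J\approx5.204$~AU and the standard Tisserand expression $T_J = a_J/a + 2\sqrt{(a/a_J)(1-e^2)}\cos i$:

```python
import math

def tisserand_jupiter(a, e, i_deg, a_jup=5.204):
    """Tisserand parameter with respect to Jupiter:
    T_J = a_J/a + 2*sqrt((a/a_J)*(1 - e^2))*cos(i)."""
    i = math.radians(i_deg)
    return a_jup / a + 2.0 * math.sqrt((a / a_jup) * (1.0 - e**2)) * math.cos(i)

# 133P/Elst-Pizarro: a = 3.156 AU, e = 0.165, i = 1.39 deg
tj = tisserand_jupiter(3.156, 0.165, 1.39)
print(f"T_J = {tj:.2f}")   # ~3.18, above the classical-comet boundary T_J < 3
```

The result agrees with the quoted $T_J=3.184$ to the precision of the assumed value of $a_J$.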
All observations reported here were obtained under photometric conditions. Details of these monitoring observations are listed in Table~\ref{obs_elstpiz}. Observations with the UH 2.2-m telescope were made using a Tektronix 2048$\times$2048 pixel CCD with an image scale of $0\farcs219$ pixel$^{-1}$ behind Kron-Cousins BVRI filters. Observations with Keck were made using the Low Resolution Imaging Spectrometer (LRIS) imager \citep{oke95} which employs a Tektronix 2048$\times$2048 CCD with an image scale of $0\farcs210$~pixel$^{-1}$ and Kron-Cousins BVRI filters. Observations with the SMARTS 1.3-m were made using the optical channel of A Novel Double-Imaging Camera (ANDICAM) which employs a Fairchild 447 2048$\times$2048 CCD with an image scale of $0\farcs369$~pixel$^{-1}$ (using 2$\times$2 binning) and Johnson-Kron-Cousins BVRI filters. Observations with the NTT in 2007 were made using the ESO Multi-Mode Instrument (EMMI) \citep{dek86} which employs two adjacent 2048$\times$4096 MIT/LL CCDs with image scales of $0\farcs332$~pixel$^{-1}$ (using 2$\times$2 binning) and Bessel BVRI filters, while observations in 2008 were made using the ESO Faint Object Spectrograph and Camera (EFOSC2) \citep{buz84} which employs a Loral/Lesser 2048$\times$2048 CCD with an image scale of $0\farcs24$~pixel$^{-1}$ (using 2$\times$2 binning) and Bessel BVR and Gunn i filters. Except for those conducted with the SMARTS 1.3-m telescope, all observations were made while tracking our target non-sidereally to prevent trailing of the object. For SMARTS 1.3-m observations, non-sidereal tracking was not available, and as such, exposure times were selected such that the trailing of the object during the course of a single exposure would be less than $0\farcs5$, well below the typical full width at half-maximum (FWHM) seeing at the 1.3-m site. Standard image preparation (bias subtraction and flat-field reduction) was performed for all images. 
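The exposure-time constraint used for the SMARTS observations can be sketched as follows. The non-sidereal rate here is an assumed, illustrative value (apparent rates of main-belt objects are typically a few tens of arcsec per hour); it is not a number given in the text:

```python
def max_untrailed_exposure(rate_arcsec_per_hr, max_trail_arcsec=0.5):
    """Longest exposure (in seconds) that keeps trailing of a target moving
    at the given non-sidereal rate below max_trail_arcsec."""
    rate_per_sec = rate_arcsec_per_hr / 3600.0
    return max_trail_arcsec / rate_per_sec

# Assumed illustrative rate of 30 arcsec/hr (not from the paper):
t_max = max_untrailed_exposure(30.0)
print(f"max untrailed exposure ~ {t_max:.0f} s")   # 0.5 / (30/3600) = 60 s
```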
Flat fields were constructed from dithered images of the twilight sky. Photometry of \citet{lan92} standard stars and field stars was obtained by measuring net fluxes (over sky background) within circular apertures, with background sampled from surrounding circular annuli. Comet photometry was performed using circular apertures of different radii (ranging from $2\farcs0$ to $5\farcs0$), but to avoid the contaminating effects of the coma, background sky statistics were measured manually in regions of blank sky near, but not adjacent, to the object. Several (5--10) field stars in the comet images were also measured to correct for minor extinction variations during each night. \section{RESULTS \& DISCUSSION\label{results}} \subsection{Monitoring Campaign\label{monitoring}} For all monitoring observations, individual $R$-band images (aligned on the object's photocenter using linear interpolation) from each night were combined into single composite images (Fig.~\ref{images_133p}). For reference, we also show composite images from 133P's 2002 active phase \citep[Figs.~\ref{images_133p}a--\ref{images_133p}d;][]{hsi04}. Activity is marginally visible in images from 2007 May 19, 2007 August 18, and 2007 September 12 (Figs.~\ref{images_133p}p, \ref{images_133p}r, \ref{images_133p}s), while the comet's characteristic dust trail is clearly visible in the image from 2007 July 17 (Fig.~\ref{images_133p}q). We find no evidence of activity in images from 2003 September 22 through 2007 March 21 (Figs.~\ref{images_133p}e--\ref{images_133p}o) and from 2008 July 1 (Fig.~\ref{images_133p}t). In all images, even those obtained while 133P was active, the FWHM of the object's surface brightness profile is consistent with the typical FWHM seeing at the time of night when those images were obtained, implying that little or no coma is present. 
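The aperture-photometry procedure described above (net flux within a circular aperture, with the background estimated from a nearby region of blank sky rather than an adjacent annulus) can be sketched on a synthetic image; all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 frame: flat sky of 100 counts plus an idealised
# unresolved source of 5000 counts, with unit Gaussian noise.
sky_level, src_flux = 100.0, 5000.0
img = np.full((64, 64), sky_level)
img[32, 32] += src_flux
img += rng.normal(0.0, 1.0, img.shape)

yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(xx - 32, yy - 32)

aperture = r <= 5.0                     # circular aperture, 5 px radius
blank_sky = (r >= 15.0) & (r <= 25.0)   # nearby blank-sky region

# Net flux = counts in the aperture minus the per-pixel sky estimate
# measured away from the object (median of the blank-sky region).
net_flux = img[aperture].sum() - np.median(img[blank_sky]) * aperture.sum()
print(f"net flux = {net_flux:.0f} counts (input {src_flux:.0f})")
```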
In Figure~\ref{actv133p}, we mark the positions where we observed 133P to be active or where others reported it to be active, as well as positions where we observed it to be inactive, on a plan view of its orbit. The figure shows that reports of activity in 133P are approximately confined to the quadrant following perihelion, with the earliest detection of activity occurring shortly before perihelion at a true anomaly of $\nu\approx350^{\circ}$ and the latest detection occurring at $\nu\approx90^{\circ}$. This activity profile is consistent with the hypothesis of seasonal activity modulation described in \citet{hsi04} and \citet{hsi06a}, whereby 133P's activity is driven by the sublimation of a localised patch of exposed volatile material confined to either the ``northern'' or ``southern'' hemisphere of the body. Assuming non-zero obliquity, activity then only occurs during the portion of the orbit when that active site receives enough solar heating to drive sublimation, i.e., during that hemisphere's ``summer''. We note that our observations of 133P on 2008 July 1 at the NTT showed it to be inactive despite the object being observed to be active at nearly the same orbital position in 2002. We attribute this discrepancy to a combination of the low signal-to-noise of this observation and the expected extremely weak activity of 133P at that point in its orbit \citep{hsi04}. \subsection{Photometric Activity Detection and Measurement\label{activitydetection}} When no coma is clearly visible for an object, an alternate method for detecting activity is examination of its photometric behaviour: i.e., determining whether it is consistent with an inactive object of a fixed size, or whether it shows anomalous brightening over a certain portion of its orbit. This type of analysis led to the discovery of activity in 95P/(2060) Chiron \citep{tho88,bus88,mee89,har90}. 
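The reported activity arc, from $\nu\approx350^{\circ}$ shortly before perihelion through $\nu\approx90^{\circ}$, wraps through $\nu=0^{\circ}$; a small helper (illustrative, using the boundary values quoted above) makes the wraparound test explicit:

```python
def in_active_arc(nu_deg, start=350.0, end=90.0):
    """True if the true anomaly nu_deg lies in the arc from `start` to `end`
    measured in the direction of increasing nu, wrapping through 0 deg."""
    nu = nu_deg % 360.0
    if start <= end:
        return start <= nu <= end
    return nu >= start or nu <= end

assert in_active_arc(355.0)      # shortly before perihelion: active
assert in_active_arc(45.0)       # post-perihelion quadrant: active
assert not in_active_arc(180.0)  # near aphelion: inactive
```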
In applying this approach to 133P, we recall that \citet{hsi04} originally derived linear and IAU $H$,$G$ phase function solutions for 133P using data taken in 2002 when the object was visibly emitting dust. In the case of that data set, 133P's activity was judged to contribute negligibly to nucleus photometry (as no significant coma was detected) and was thus assumed to affect phase function derivations similarly negligibly. Having since accumulated a substantial set of observations while 133P was entirely inactive, though, we can now assess the validity of this neglect by deriving new phase function solutions and comparing the results to those of \citet{hsi04}. We caution that, unlike the data used by \citet{hsi04}, the photometric data used in this follow-up analysis (2003 Sep 22 to 2007 Mar 21) all consist of ``snapshot observations'', which are short sequences of exposures at unknown rotational phases, instead of full lightcurves. This caveat is significant because rotation of the body is expected to cause deviations in measured brightness by as much as 0.2~mag from the comet's true mean brightness at a given time \citep{hsi04}. Given a sufficiently large data set, however, we expect that the average of these fluctuations will approach zero, allowing us to derive reasonably accurate phase function solutions without necessarily knowing the rotational phase at which each individual photometry point was obtained. Nonetheless, the lack of rotational phase information for our snapshot observations remains a source of uncertainty. 
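The averaging argument above can be illustrated numerically: sampling a double-peaked lightcurve of full range 0.4~mag at random, unknown rotational phases yields a mean that converges to the true mid-lightcurve magnitude. The magnitudes below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

mean_mag, amplitude = 15.8, 0.2   # mid-lightcurve magnitude, half-range (mag)

def snapshot(phase):
    # Double-peaked lightcurve: two maxima and minima per rotation.
    return mean_mag + amplitude * np.sin(4.0 * np.pi * phase)

phases = rng.uniform(0.0, 1.0, 200)    # 200 snapshots at unknown phases
sample_mean = snapshot(phases).mean()
print(f"mean of snapshots = {sample_mean:.3f} (true mid = {mean_mag})")
```

Individual snapshots deviate by up to $\pm0.2$~mag, but the mean of many is accurate to $\sim$0.01~mag.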
We compute the reduced magnitude, $m_R(1,1,\alpha)$, of 133P at the time of each observation using \begin{equation} m_R(1,1,\alpha) = m_{mid}(R,\Delta,\alpha) - 5\log(R\Delta) \end{equation} where $R$ is the heliocentric distance of the object in AU, $\Delta$ is the object's geocentric distance in AU, and $m_{mid}(R,\Delta,\alpha)$ is the estimated $R$-band magnitude at the midpoint of the full photometric range of the rotational lightcurve (Table~\ref{obs_elstpiz}). For observations of full lightcurves, $m_{mid}$ is determined by simply plotting the data and locating the midpoint between the maximum and minimum values of the lightcurve. For snapshot observations, $m_{mid}$ is generally taken to be the mean of the available photometry data with large error bars applied to reflect rotational phase uncertainties, assuming a full possible photometric range of 0.40~mag. We fit reduced magnitude values to both a linear phase function and an IAU phase function, finding best-fit values of $m_R(1,1,0)=15.80\pm0.07$~mag and $\beta=0.041\pm0.005$~mag~deg$^{-1}$ for the linear phase function, where \begin{equation} m_R(1,1,\alpha) = m_R(1,1,0) + \beta\alpha . \end{equation} We also find best-fit values of $H_R=15.49\pm0.05$~mag and $G_R=0.04\pm0.05$ for the IAU phase function as defined in \citet{bow89}. Photometry obtained at phase angles of $\alpha<5^{\circ}$, where an opposition surge effect is expected, is included in the derivation of the IAU phase function but is omitted from the derivation of the linear phase function. We plot our best-fit solutions in Figure~\ref{phaselaws}. A modest amount of scatter around our solutions is present, as expected, but in all cases the deviations from the best-fit phase functions are consistent with expected brightness fluctuations due to 133P's rotation.
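As a sketch of the fitting procedure (not the paper's actual pipeline), the reduced-magnitude and linear phase-law relations above can be implemented and the quoted best-fit parameters recovered from noise-free synthetic data:

```python
import numpy as np

def reduced_mag(m_obs, R_au, delta_au):
    """m_R(1,1,alpha) = m_obs - 5 log10(R * Delta), distances in AU."""
    return m_obs - 5.0 * np.log10(R_au * delta_au)

# Linear phase law with the best-fit values quoted in the text.
m0, beta = 15.80, 0.041                  # mag, mag/deg

alpha = np.linspace(5.0, 25.0, 30)       # phase angles outside the surge
m_red = m0 + beta * alpha                # noise-free synthetic reduced mags

# Recover both parameters by linear least squares.
beta_fit, m0_fit = np.polyfit(alpha, m_red, 1)
print(f"m_R(1,1,0) = {m0_fit:.2f} mag, beta = {beta_fit:.3f} mag/deg")
```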
Due to the uncertainty of the active status of 133P on 2008 July 1 (\S\ref{monitoring}), the photometry from that night is plotted but was not included in the computation of the best-fit phase functions. While the slope parameters of both newly-derived functions are consistent with the parameters computed in \citet{hsi04} ($\beta=0.044\pm0.007$~mag~deg$^{-1}$; $G_R=0.026\pm0.1$), both newly-derived absolute magnitudes are $\sim$0.2~mag fainter than their previously derived values ($m_R(1,1,0)=15.61\pm0.01$~mag; $H_R=15.3\pm0.1$~mag), strongly suggesting that the previously derived parameters were affected by contamination from 133P's dust emission. This contamination is assumed to consist of a combination of coma and the portion of the dust trail (as projected in the plane of the sky) contained within the seeing disc. This suggestion of dust contamination is reinforced by Figure~\ref{phaselaws}, where we note that photometry from both 133P's 2002 and 2007 active phases is consistently brighter than expected from our new phase function solutions. Because most of the data points from 2002 and 2007 are mean magnitudes derived from fully sampled lightcurves, brightness fluctuations due to rotation cannot account for the discrepancies. Assuming that the discrepancy between an observed magnitude, $m_{mid}$, and expected magnitude, $m_{exp}$, is due to dust contamination, the scattering surface area of the dust, $A_d$, is given by \begin{equation} A_d = A_n\left({A_d\over A_n}\right) = A_n\left(10^{0.4(m_{exp} - m_{mid})} - 1\right) \label{eqadust} \end{equation} where $A_n=\pi r_e^2=1.13\times10^7$~m$^2$ is the scattering cross-section of the nucleus \citep{hsi09}, and albedos of the nucleus and dust are assumed to be equal.
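The dust-area relation above is easy to evaluate. As an illustrative check (the 0.2~mag excess is a round example value), a brightness enhancement of 0.2~mag over the expected bare-nucleus magnitude implies a dust scattering area of about 0.20$A_n$:

```python
A_n = 1.13e7   # nucleus scattering cross-section, m^2 (Hsieh et al. 2009)

def dust_area(m_exp, m_mid, A_nucleus=A_n):
    """Scattering surface area of dust implied by a brightness excess
    m_exp - m_mid, assuming equal nucleus and dust albedos."""
    return A_nucleus * (10.0 ** (0.4 * (m_exp - m_mid)) - 1.0)

# Example: observed 0.2 mag brighter than the expected nucleus magnitude.
A_d = dust_area(m_exp=15.80, m_mid=15.60)
print(f"A_d = {A_d:.2e} m^2 = {A_d / A_n:.2f} A_n")   # ~0.20 A_n
```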
Assuming optically thin dust, the total dust mass, $M_d$, can then be estimated from \begin{equation} M_d \sim {4\over 3}\pi\rho_d a_d^3 \left({A_d\over\pi a_d^2}\right) \label{eqmdust} \end{equation} where we adopt typical dust grain radii of $a_d=10$~$\mu$m and a bulk grain density of $\rho_d=1300$~kg~m$^{-3}$ \citep[{\it cf}.][]{hsi04}. For reference, we also compute $Af\rho$ \citep[{\it cf}.][]{ahe84} for each set of observations, where the parameter is given by \begin{equation} Af\rho = {(2R\Delta)^2\over \rho} 10^{0.4[m_{\odot}-m_R(R,\Delta,0)]} \label{eqafrho} \end{equation} where $R$ is in AU, $\Delta$ is in cm, $\rho$ is the physical radius in cm of a $4\farcs0$-radius photometry aperture at the distance of the comet, and $m_R(R,\Delta,0)$ is the phase-angle-corrected $R$-band magnitude of the comet measured using a $4\farcs0$-radius aperture, which we calculate using \begin{equation} m_R(R,\Delta,0) = m_{mid}(R,\Delta,\alpha) + 2.5\log\left[(1-G)\Phi_1(\alpha)+G\Phi_2(\alpha)\right] \end{equation} where $\Phi_1$ and $\Phi_2$ are given by \begin{equation} \Phi_1=\exp\left[-3.33\left(\tan{\alpha\over 2}\right)^{0.63}\right] \end{equation} \begin{equation} \Phi_2=\exp\left[-1.87\left(\tan{\alpha\over 2}\right)^{1.22}\right] \end{equation} \citep{bow89}. Using Equations~\ref{eqadust}, \ref{eqmdust}, and \ref{eqafrho}, we compute $A_d$, $M_d$, and $Af\rho$ for each set of observations from 2002 and 2007 during which 133P was observed to be active, and tabulate the results in Table~\ref{dustcontrib}. We find that for data from 2002, dust contamination is approximately constant with a scattering surface area of $\sim$0.20$A_n$ and a dust mass of $M_d\sim4\times10^4$~kg contained within $\sim$3~arcsec ($\sim$4500~km in August-November; $\sim$6500~km in December) photometry apertures.
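The dust-mass estimate can be checked numerically with the stated grain properties; for the 2002 excess of $A_d\sim0.20A_n$ this reproduces the quoted $M_d\sim4\times10^4$~kg. A minimal sketch:

```python
import math

A_n = 1.13e7     # nucleus scattering cross-section, m^2
a_d = 10e-6      # assumed typical grain radius, m
rho_d = 1300.0   # assumed bulk grain density, kg m^-3

def dust_mass(A_d, a=a_d, rho=rho_d):
    """M_d ~ (4/3) pi rho a^3 * (A_d / (pi a^2)): total mass of N grains
    of radius a whose combined geometric cross-section is A_d."""
    return (4.0 / 3.0) * math.pi * rho * a**3 * (A_d / (math.pi * a**2))

M_d = dust_mass(0.20 * A_n)    # 2002 excess: A_d ~ 0.20 A_n
print(f"M_d ~ {M_d:.1e} kg")   # ~4e4 kg, as quoted in the text
```

Note that the expression simplifies to $M_d\sim(4/3)\rho_d a_d A_d$, i.e. it scales linearly with the assumed grain radius.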
The relatively constant amount of dust over this time period explains why we were able to derive reasonably accurate slope parameters for 133P from our 2002 data despite arriving at incorrect results for the comet's absolute magnitude due to the dust contamination. In data from July 2007, we find that 133P's inferred dust coma has a strength comparable to that observed in 2002, having a scattering surface area equivalent to $\sim$0.25$A_n$ and a dust mass of $M_d\sim5\times10^4$~kg contained within $\sim$4~arcsec ($\sim$4700~km) photometry apertures. The slightly larger amount of inferred dust in 2007 could indicate a higher rate of dust production, but could also be due to different viewing geometries, given that 133P was close to opposition when observed in July 2007. At this position, the antisolar vector for 133P points very nearly directly behind the object as seen from Earth, causing more of the dust trail to be located within the seeing disc of the comet as projected on the sky. Given the limitations of our observations, however, we are unable to disentangle this possible projection effect from any intrinsic increase in dust production. Additionally, 133P's apparent brightness could also have been enhanced by an opposition surge effect from the dust in its coma, though we unfortunately lack observational constraints for quantifying this effect. Given these various possible contributing factors to 133P's enhanced brightness on 2007 July 17 and 20, we are unable to determine whether 133P was more active on these dates compared to 2002 August 19 through 2002 November 7. We can conclude, however, that coma contamination is present in nucleus photometry performed for 133P during both observing periods, and that the measured magnitude enhancements suggest at least comparable levels of activity in each case. 
The remainder of our photometry from 133P's 2007 active phase is derived from incomplete lightcurve information, and as such, coma estimates at these times have much larger uncertainties than at other times. We find no definitive evidence of a coma on 2007 May 19, but find that the inferred comae on 2007 Aug 18 and 2007 Sep 12 are far stronger ($\sim$0.65$A_n$) than in any other observations, a rather unexpected discovery given the minimal amount of time elapsed since our 2007 July observations. We suggest that the large inferred dust contribution to nucleus photometry in August and September could be at least partly due to geometric effects. As can be seen in Figures~\ref{images_133p}q-\ref{images_133p}s, the orientation of the projection of the dust trail appears to change over this period of time. We caution that poor seeing during our August and September observations and the small aperture (1.3~m) of the telescope used to obtain these data mean that the observed morphology (namely, the near-disappearance of the dust trail) cannot be considered entirely reliable. If the observed morphology is believed, however, much of the precipitous increase in 133P's apparent coma strength between July and August could be due to the dust trail becoming almost directly aligned behind the nucleus in August and September, thus becoming unavoidably included within our photometry apertures. We can account for this viewing geometry effect by integrating the scattering surface area of the visible dust trail measured in July data (discussed below; \S\ref{dusttrail}) and then assuming that it all falls within the photometry aperture used to measure the nucleus magnitudes in August and September. The net increase in dust scattering surface area implied by photometry between 2007 July 20 and 2007 August 18 is $\sim$0.40$A_n$. 
The integrated scattering surface area of the dust trail on 2007 July 20 over the first 30~arcsec from the nucleus (the trail becomes too faint to measure reliably beyond this point), however, is $\sim$0.20$A_n$, accounting for only about half of the observed increase in dust contamination between July and August. The remainder of the observed increase could be partly due to distant material in the dust trail that was too diffuse to detect in trail form in July data, but nevertheless contributed positively to nucleus photometry when projected directly behind the nucleus in August and September. It seems unlikely, however, that half of the dust in the trail could go undetected in our July data, and as such, we surmise that at least part of the increase must in fact be due to a real increase in dust production, which would certainly be plausible at this early stage in 133P's active phase. \subsection{The Lightcurve Revisited\label{ltcurverevisit}} \subsubsection{Search for Rotational Colour Variations\label{colorvariations}} During our 2007 NTT run when 133P was active, we observed the comet in continuously cycling filters ($VRI$ on 2007 July 17 and $BVRI$ on 2007 July 20). Observations were made in this way to allow us to obtain deep imaging of 133P in multiple filters and also construct simultaneous lightcurves in each filter. These lightcurves then allowed us to search for surface colour inhomogeneities that, for example, may constrain the position of the localised active site hypothesised by \citet{hsi04}. These lightcurves, phased to a rotational period of $P_{rot}=3.471$~hr \citep{hsi04}, are plotted in Figure~\ref{nttltcurves}. To then assess colour variation as a function of rotational phase for each filter pair, we use linear interpolation to obtain the magnitudes of the object in the second filter at times of observations in the first filter and then plot the differences (Fig.~\ref{nttcolors}), again phased to $P_{rot}=3.471$~hr.
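The filter-pair interpolation scheme described above can be sketched with simple linear interpolation on phased lightcurves. The magnitudes below are synthetic and illustrative, chosen only so that the mean colour comes out near the measured $V-R$:

```python
import numpy as np

def mag(phase, mean, amp=0.2):
    # Illustrative double-peaked lightcurve (phases in rotations).
    return mean + amp * np.sin(4.0 * np.pi * phase)

t_V = np.linspace(0.0, 1.0, 25)   # rotational phases of the V exposures
t_R = t_V + 0.013                 # R exposures taken slightly later

V = mag(t_V, mean=16.16)
R = mag(t_R, mean=15.80)

# Interpolate the R lightcurve onto the V observation times, then difference.
R_at_tV = np.interp(t_V, t_R, R)
colour = V - R_at_tV
print(f"V-R = {colour.mean():.2f} +/- {colour.std():.2f} mag")
```

Because both filters sample the same rotational modulation, the interpolated difference removes the lightcurve signal and isolates the colour.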
We find mean nucleus colours of $B-V=0.65\pm0.03$~mag, $V-R=0.36\pm0.01$~mag, and $R-I=0.32\pm0.01$~mag. These values are somewhat different from the mean colours found for 133P by \citet{hsi04}, but are within the range of individual values measured in that work. We regard the colour measurements presented here as more accurate, since our repeated multifilter observations of 133P allowed us to account for both rotational magnitude variations (via lightcurve interpolation) and minor extinction variability (using field stars as references for making differential photometric corrections). The single sets of multifilter observations used to make 133P's previous colour measurements did not permit either of these corrective measures. Upon examining individual colour measurements, we find no conclusive evidence of rotational colour inhomogeneity. We find maximum colour variations of only $\Delta(B-V)=0.11\pm0.13$~mag, $\Delta(V-R)=0.06\pm0.07$~mag, and $\Delta(R-I)=0.08\pm0.08$~mag, where the non-systematic distribution of even these small variations indicates that they are most likely due to ordinary measurement uncertainties. We note that this result does not rule out the possibility that 133P's active area exhibits a different colour signature from inactive surface material. First, the coma that is likely present (\S\ref{activitydetection}) should act to obscure colour variations on the nucleus surface, with the precise amount of obscuration varying with rotational phase as the ratio of the nucleus's scattering cross-section to the coma's cross-section changes. Furthermore, under the seasonal heating hypothesis \citep{hsi04,hsi06a}, the active site is expected to be illuminated by the Sun at all rotational phases near perihelion (assumed to be close to solstice), which is when these observations were made.
The nucleus orientation at this time allows the active site to receive maximal solar heating but also means that the active site is always in the line of sight as viewed from Earth. We suggest that more favourable conditions for detecting colour inhomogeneities will occur around 133P's next pre-perihelion equinox (likely near $\nu\sim270^{\circ}$). Based on prior observations (\S\ref{monitoring}), the nucleus should be largely coma-free over this portion of the orbit, and at equinox, the active site should pass into and out of the line of sight as the nucleus rotates, maximising any colour variations. We therefore encourage additional rotationally-resolved colour measurements of 133P between late 2011 and early 2012. \subsubsection{Implications for 133P's Pole Orientation} For reference, we remove the estimated dust contamination from both our 2002 and 2007 lightcurve data, and overplot the two sets of lightcurves (Fig.~\ref{ltcurves0207}). Each of the two sets of data is phased self-consistently to $P_{rot}=3.471$~hr, though given the great difficulty of phasing data together that are separated by almost 5 years to such a short rotational period, the 2002 and 2007 data are simply aligned by eye. Due to the two-peaked nature of 133P's lightcurve, though, there is an ambiguity in performing this alignment. In one case (Fig.~\ref{ltcurves0207}a), the data can be aligned such that the lightcurve shape and photometric range appear largely unchanged between the two observation epochs. In the second case (Fig.~\ref{ltcurves0207}b), the data can be aligned such that the photometric range of the lightcurve appears to decline to $\Delta m_R\sim0.25$~mag in 2007 from $\Delta m_R\sim0.35$~mag in 2002. In the latter case, it should be recalled that the coma contribution to the data plotted has already been subtracted, and as such the change in photometric range cannot be attributed to differences in the amount of coma.
Unfortunately, due to the incomplete sampling of the lightcurve in 2007, it is not possible to resolve the ambiguity between these two cases. This ambiguity is significant because of the implications of photometric range behaviour for the orientation of 133P's rotational pole. To gain more insight into how the photometric range of 133P should change depending on pole orientation and observing geometry, we simulate its lightcurve behaviour using the model presented in \citet{lac07}. We assume a simple prolate ellipsoidal shape for the nucleus of 133P and render it at various observing geometries and rotational phases. At each rotational phase, the light reflected back to the observer is integrated to generate lightcurve points. The 2002 September coma-corrected photometric range for 133P was measured to be $\Delta m_R=0.35$~mag, and so we use a nucleus axis ratio of $a/b=10^{0.4\Delta m_R}=1.38$ (it should be noted that this is a lower limit due to the unknown projection angle at the time). We use a Lommel-Seeliger ``lunar'' scattering function \citep[{\it cf}.][]{fai05} which has no free parameters and is appropriate for simulating the low albedo \citep[$p_R=0.05\pm0.02$;][]{hsi09} surface of 133P. To simplify the geometry, we neglect the small orbital inclination ($i=1.4\degr$) of 133P and assume that it is coplanar with the Earth's orbit (i.e., $i=0\degr$). The seasonal heating hypothesis implies that 133P is at solstice when close to perihelion, i.e. has a true anomaly at solstice of $\nu_\mathrm{sol}\approx0\degr$, and also requires that the object have non-zero obliquity ($\varepsilon\neq0\degr$). In principle, $\nu_\mathrm{sol}$ could have any value from $\nu_\mathrm{sol}\approx0\degr$ to $\nu_\mathrm{sol}\approx45\degr$, since the temperature of the hemisphere where 133P's active site is located will begin to rise due to solar heating before the spin axis direction is actually aligned with the Sun.
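For the simplest limiting case, the geometric cross-section of a prolate ellipsoid viewed equator-on (i.e., ignoring the Lommel-Seeliger scattering used in the actual model), the photometric range reduces to $\Delta m_R = 2.5\log_{10}(a/b)$, which is the axis-ratio relation used above. A minimal sketch:

```python
import numpy as np

dm_obs = 0.35                          # 2002 Sep coma-corrected range, mag
axis_ratio = 10.0 ** (0.4 * dm_obs)    # a/b for an equator-on prolate nucleus

def projected_area(phase, a, b=1.0):
    """Geometric cross-section of a prolate ellipsoid (semi-axes a > b = c)
    rotating about a short axis, viewed equator-on; phase in rotations."""
    phi = 2.0 * np.pi * phase
    # Projected ellipse semi-axes: b and sqrt(a^2 sin^2 phi + b^2 cos^2 phi).
    return np.pi * b * np.sqrt(a**2 * np.sin(phi)**2 + b**2 * np.cos(phi)**2)

phases = np.linspace(0.0, 1.0, 2001)
m = -2.5 * np.log10(projected_area(phases, axis_ratio))
dm = m.max() - m.min()                 # equals 2.5 log10(a/b)
print(f"a/b = {axis_ratio:.2f} -> photometric range {dm:.2f} mag")
```

Non-equatorial aspect angles and realistic scattering reduce the range below this value, which is why the observed $\Delta m_R$ gives only a lower limit on $a/b$.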
The seasonal heating hypothesis is inconsistent, however, with pole orientations for which $\nu_\mathrm{sol}\approx90\degr$ or $\varepsilon=0\degr$. We simulate the lightcurve behaviour of 133P for $\nu_\mathrm{sol}=0\degr$, $\nu_\mathrm{sol}=40\degr$ and $\nu_\mathrm{sol}=90\degr$. The first and third pole orientations are limiting cases that are consistent and inconsistent with the seasonal heating hypothesis, respectively. The intermediate geometry, in which solstice is reached approximately half-way through the active portion of the orbit, is meant to test how sensitive we are to the exact longitude of the pole. Because we assume zero orbital inclination, each case sets the ecliptic longitude of the pole, and the ecliptic latitude is defined by the choice of obliquity. We simulate obliquities of $\varepsilon=0\degr$, $\varepsilon=10\degr$, $\varepsilon=20\degr$ and $\varepsilon=30\degr$. Only $\varepsilon=0\degr$ is inconsistent with the seasonal hypothesis. Rendered samples of 133P, where we assume $\varepsilon=30\degr$, are shown in Figures~\ref{ltcurve_nu0}, \ref{ltcurve_nu40} and \ref{ltcurve_nu90}. Figure~\ref{rangevsgeom} shows the expected photometric range in 2002 September and 2007 July for each pole orientation. As expected, the photometric range changes in opposite directions for $\nu_\mathrm{sol}=0\degr$ and $\nu_\mathrm{sol}=90\degr$, whereas the intermediate pole orientation ($\nu_\mathrm{sol}=40\degr$) produces only a small change between the two epochs. The absolute value of $\Delta m_R$ in the figure depends on the assumed axis ratio and is unimportant in this analysis, in which we are primarily concerned with relative changes. The key feature is the variation of the range between the two epochs. The two possible scenarios indicated by the data ({\it cf}.
Fig.~\ref{ltcurves0207}) are: (a) both the 2002 and 2007 photometric ranges are similar ($\Delta m_R\sim0.35$ mag), or (b) the 2002 photometric range ($\Delta m_R\sim 0.35$ mag) is larger than the 2007 range ($\Delta m_R\sim0.25$ mag). Inspection of Figure~\ref{rangevsgeom} shows that the first scenario is consistent with low obliquity ($\varepsilon\lesssim10\degr$) and any of the considered pole orientations. The second scenario is consistent only with a solstice around $\nu_\mathrm{sol}=0\degr$ and significant obliquity ($\varepsilon\gtrsim20\degr$). Both scenarios rule out a pole orientation where $\nu_\mathrm{sol}=90\degr$ if there is also significant obliquity. Clearly, additional and more complete lightcurve observations at different points in 133P's orbit are needed to clarify how 133P's photometric range varies with orbit position, constrain the object's pole orientation, and determine whether the seasonal heating hypothesis remains plausible. Given our current data, we can neither confirm nor reject the plausibility of seasonal activity modulation as described by \citet{hsi04}. While the pattern of activity of 133P along its orbit appears consistent with the seasonal heating hypothesis, the discovery of an incompatible pole solution could indicate that activity is in fact modulated by factors other than obliquity, {\it e.g.}, shadowing of the active site by crater walls or other local topographic features. In Figure~\ref{rangevsorbit}, we use our model to forecast the photometric range behaviour of 133P over 1.5 orbits from its perihelion passage in 2007 August to its aphelion passage in 2016 January. We plot solutions for four pole positions, two consistent with the seasonal heating hypothesis ($\varepsilon=20\degr$, $\nu_\mathrm{sol}=0\degr$, and $\varepsilon=20\degr$, $\nu_\mathrm{sol}=40\degr$) and two inconsistent with that hypothesis ($\varepsilon=20\degr$, $\nu_\mathrm{sol}=90\degr$, and $\varepsilon=0\degr$).
The observability of 133P during this period is also indicated in the figure, and should assist in planning observations that are best suited for discriminating between the various pole orientations that we consider here. \subsection{The Dust Trail Revisited\label{dusttrail}} To produce deep composite images from our 2007 NTT data, we use linear interpolation to shift the multiple images obtained in each filter to align the photocentres of the nucleus in each image, and sum the resulting shifted images. To measure the surface brightness profiles of the dust trail in these composite images, we then rotate the images to make the trail horizontal in the image frames, and measure the net flux in rectangular apertures placed along the length of the trail \citep[{\it cf}.][]{hsi04}. The dimensions of these equally-sized apertures are set to lengths (along the direction of the trail) of 5 pixels, and widths (perpendicular to the trail) of 6 pixels (approximately equal to the FWHM of the trail cross-section on each night). The net fluxes in these apertures are then converted to net fluxes per linear arcsec (as measured along the length of the trail) and normalised with respect to the total net flux of the nucleus. We plot the resulting surface brightness profiles for both 2007 Jul 17 and 2007 Jul 20 in Figure~\ref{trailcolors07}. From these plots, we see that the trail profile does not change significantly between the two nights. We also note that there are minimal differences in the profiles of the trail as observed in different filters, indicating that the colours of the dust along the trail are consistently similar to those of the nucleus. To quantify this observation, we measure the surface brightness of the trail as observed on 2007 July 20 in each filter in a single aperture approximately 5 arcsec (15 pixels, or $\sim$6000~km) in length and 1 arcsec (3 pixels, or $\sim$1200~km) in width placed along the trail.
Seeking to minimise the effect of the nucleus on our trail photometry, we place the nearest edge of this aperture $\sim$3.0~arcsec from the nucleus photocentre. We find surface brightnesses of $\Sigma_B=24.88\pm0.17$~mag~arcsec$^{-2}$, $\Sigma_V=24.35\pm0.05$~mag~arcsec$^{-2}$, $\Sigma_R=24.04\pm0.05$~mag~arcsec$^{-2}$, and $\Sigma_I=23.70\pm0.07$~mag~arcsec$^{-2}$, giving colours of $B-V=0.53\pm0.18$~mag, $V-R=0.31\pm0.07$~mag, and $R-I=0.34\pm0.09$~mag, consistent with the colours of the nucleus found in \S\ref{ltcurverevisit}. We also wish to know how trail morphology changes between 133P's 2002 and 2007 active episodes. The most obvious difference between the two observing epochs is that the dust trail of 133P is significantly shorter in our 2007 data than in 2002 (despite composite images from each epoch being of approximately equivalent effective exposure time), extending only $\sim$30 arcsec from the nucleus in 2007 observations, compared to nearly 3 arcmin in 2002 \citep{hsi04}. In terms of trail width, the mean observed FWHM of the trail on 2002 Sep 07 over the first 10~arcsec of the trail, measured from the edge of the nucleus's seeing disc (taken to be 2.5$\times$ the FWHM seeing), was $\theta_o=1\farcs3$. This observed value corresponds to an intrinsic FWHM of $\theta_i=0\farcs9$ ($\sim$1300~km in the plane of the sky), which is computed using \begin{equation} \theta_i = (\theta_o^2 - \theta_s^2)^{1/2} \label{intrinsicwidth} \end{equation} where the FWHM seeing was $\theta_s=0\farcs9$ on 2002 Sep 07. For comparison, the observed FWHM of the trail on 2007 July 17 was $\theta_o=1\farcs9$, corresponding to $\theta_i=1\farcs3$ ($\sim$1500~km in the plane of the sky), where $\theta_s=1\farcs4$.
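The quadrature seeing correction above reproduces both quoted intrinsic widths directly from the observed values:

```python
import math

def intrinsic_fwhm(theta_obs, theta_seeing):
    """Quadrature removal of seeing from the observed trail FWHM:
    theta_i = sqrt(theta_o^2 - theta_s^2), both in arcsec."""
    return math.sqrt(theta_obs**2 - theta_seeing**2)

# 2002 Sep 07: observed 1.3", seeing 0.9"  ->  ~0.9" intrinsic
# 2007 Jul 17: observed 1.9", seeing 1.4"  ->  ~1.3" intrinsic
print(f"2002: {intrinsic_fwhm(1.3, 0.9):.1f} arcsec")
print(f"2007: {intrinsic_fwhm(1.9, 1.4):.1f} arcsec")
```

This assumes the observed profile is the intrinsic profile convolved with a Gaussian seeing kernel, so the widths add in quadrature.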
Given that viewing geometries (parametrized by out-of-plane viewing angles, $\alpha_{pl}$) in 2002 and 2007 were comparable, we therefore find that the computed intrinsic width of the dust trail is approximately equal in both our 2002 and 2007 observations. As \citet{hsi04} found the primary factor controlling 133P's trail width to be particle ejection velocity, this result suggests that sublimation took place with comparable intensity in both 2002 and 2007. In order to further compare 133P's activity level in 2002 and 2007, we measure the profile of 133P's trail in $R$-band data from 2002 September 07 using the procedure described above, i.e., using rectangular apertures placed along the length of the trail with lengths of 5 pixels and widths of 6 pixels each. We then compare the resulting profile to the mean $R$-band trail profile from 2007 (Fig.~\ref{trailcomparison}), finding that the trail is noticeably weaker in 2007 than it was in 2002. The difference in trail strength in 2002 and 2007 could be due to several factors. The simplest explanation is that the activity was actually weaker in 2007 due to depletion of exposed volatile material on 133P by the previous outburst. This explanation, however, is at odds with our findings of comparable dust ejection velocities for the two observing epochs (above), and comparable dust enhancement of the nucleus brightness (\S\ref{activitydetection}). A more likely explanation is that by the time our 2007 NTT observations were made, 133P was no more than 4 months into its current active phase, whereas it had been active for about a year by the time it was observed on 2002 September 07. Thus, 133P may simply have not yet reached its peak level of activity by the time we observed it with the NTT in 2007. 
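The aperture bookkeeping used for the trail surface brightness profiles (contiguous rectangular apertures of 5 pixels along the trail by 6 pixels across it, converted to flux per linear arcsec and normalised to the nucleus) can be sketched as follows; the synthetic image, pixel scale, and nucleus flux are placeholders, not our data.

```python
import numpy as np

# Synthetic stand-in for a trail-aligned composite image (the real inputs are
# the rotated NTT composites); dimensions and pixel values are arbitrary.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.01, size=(60, 200))
image[27:33, :] += 1.0 / (1.0 + np.arange(200.0) / 40.0)  # mock trail, fading with distance
nucleus_flux = 500.0                                      # mock total nucleus net flux

def trail_profile(img, row0, row1, box_len, pix_scale, nuc_flux):
    """Net flux in contiguous rectangular apertures along a horizontal trail,
    converted to net flux per linear arcsec and normalised to the nucleus."""
    n_boxes = img.shape[1] // box_len
    fluxes = [img[row0:row1, i * box_len:(i + 1) * box_len].sum()
              for i in range(n_boxes)]
    return np.array(fluxes) / (box_len * pix_scale) / nuc_flux

# 6-pixel-wide strip (rows 27..32), 5-pixel-long boxes, hypothetical 0.33''/pix
profile = trail_profile(image, 27, 33, box_len=5, pix_scale=0.33,
                        nuc_flux=nucleus_flux)
```

Plotting such normalised profiles for two epochs against distance from the nucleus gives directly comparable curves like those in Figs.~\ref{trailcolors07} and \ref{trailcomparison}.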
The position of 133P near opposition on 2007 July 17 and 20 also meant that a dust trail pointed in the antisolar direction would be highly projected in the sky, which would additionally explain why the trail appeared to be so much shorter in 2007 than in 2002. \section{SUMMARY} Key results are as follows: \begin{enumerate} \item{Monitoring observations of 133P show no evidence of activity from UT 2003 September 22 through UT 2007 March 21. This result is consistent with the seasonal activity modulation hypothesis proposed by \citet{hsi04} which predicted that, following its 2002 outburst, 133P should remain inactive until approximately late 2007. } \item{A recomputation of 133P's phase function parameters using inactive data yielded the new IAU phase function parameters of $H_R=15.49\pm0.05$~mag and $G_R=0.04\pm0.05$, and linear phase function parameters of $m_R(1,1,0)=15.80\pm0.05$~mag and $\beta=0.041\pm0.005$~mag~deg$^{-1}$. While these new values for $G_R$ and $\beta$ are similar to the values computed by \citet{hsi04}, the values for $H_R$ and $m_R(1,1,0)$ computed here are $\sim$0.2~mag fainter than previously derived values, a discrepancy we attribute to previously undetected dust contamination. } \item{Comparison of 133P's newly-computed IAU phase function with rotationally-averaged magnitudes found during its 2002 active outburst reveals the presence of unresolved coma with a dust scattering surface area on the order of $\sim$0.20 of the nucleus cross-section. Similarly, unresolved coma and trail material on the order of $\sim$0.25 of the nucleus cross-section is found in images taken on 2007 July 17 and 20, increasing to $\sim$0.65 of the nucleus cross-section in August and September as the dust trail appears to become projected almost directly behind the nucleus as viewed from Earth. 
} \item{From NTT observations obtained in 2007, we find mean nucleus colours of $B-V=0.65\pm0.03$, $V-R=0.36\pm0.01$, and $R-I=0.32\pm0.01$, and no evidence of colour inhomogeneities on 133P's surface (though we hypothesize that inhomogeneities will be more effectively searched for between late 2011 and early 2012). Additionally, we find from the same observations that the dust trail shares approximately the same colours as the nucleus. } \item{Examination of coma-corrected lightcurve data for 133P from 2002 and 2007 indicates a possible reduction of photometric range from $\Delta m_R\sim0.35$~mag in 2002 to $\Delta m_R\sim0.25$~mag in 2007, though this result is inconclusive due to incomplete sampling of the lightcurve in 2007. Additional observations will be needed to determine how 133P's photometric range actually varies with orbital position and what implications these variations have for constraining the object's pole orientation. Our present constraints on pole orientation do not currently permit us to confirm or reject obliquity-related seasonal activity modulation as a plausible mechanism for explaining 133P's active behaviour. } \item{While 133P's dust trail appears shorter and weaker in 2007 data as compared to 2002 data, other measures of activity strength (dust ejection velocity and dust contamination of nucleus photometry) during the two outburst events are found to remain roughly constant. We suggest that the weaker trail observed in 2007 could simply be due to the fact that observations were made at an earlier stage in 133P's active phase than in 2002, and find that there is no conclusive evidence of any substantial decrease in activity strength between 2002 and 2007. 
} \end{enumerate} \section*{Acknowledgements} We thank John Dvorak, Dave Brennen, Dan Birchall, Ian Renaud-Kim, and Jon Archambeau at the UH 2.2-m, Greg Wirth, Cynthia Wilburn, and Gary Punawai at Keck, Michelle Buxton and various queue observers at NOAO, and Leonardo Gallegos at the NTT for their assistance with our observations, and Matthew Knight for a prompt and helpful review. We appreciate support of this work through STFC fellowship grant ST/F011016/1 to HHH, NASA planetary astronomy grants to DJ and SCL, a Royal Society Newton Fellowship grant to PL, the National Optical Astronomy Observatory, and the European Southern Observatory.
\section{Introduction} The controllable generation of entangled states has triggered a considerable amount of interest in the physics community. In particular, within cavity-quantum electrodynamics (cavity-QED) contexts, proposals for achieving two-qubit entanglement have flourished. It has been suggested to establish entanglement between two remote atomic qubits by using the cancellation of which-path information associated with spontaneously emitted photons~\cite{whichpath}. The resonant interaction of a cavity field with two qubits has revealed a striking {entangling power} even when the field is prepared in an incoherent state~\cite{myungpeter}. In a similar setup, entanglement can be created through the continuous detection of the field leaking from the cavity containing the atoms~\cite{pleniohuelga}. However, qubit entanglement can also be established in a regime of dissipative dynamics, where the system at hand interacts with a structured environment~\cite{massimo}, and the conditions for entanglement generation between subsystems undergoing purely dissipative dynamics have been studied~\cite{benatti,tanasficek}. Strategies for the effective engineering and simulation of such environments have subsequently been envisaged~\cite{engineered}. Most recently, schemes have been proposed for efficiently inducing discrete-variable entanglement in a bipartite system by {\it transferring} the correlation properties of a continuous-variable state~\cite{ET}. It is easy to recognize the importance of protocols that are able to reliably protect correlations, once they are established, from the unavoidable spoiling effects of decoherence and decay. This has led to proposals of {\it passive} as well as {\it active} schemes~\cite{misto}. The use of decoherence-free subspaces is the prototypical example of passive strategies, where the correlations established in a system can be protected by choosing a proper encoding of the information~\cite{DFS,beige}.
More recently, it has been suggested to use macroscopic quantum jumps as a tool to create an entangled state of two qubits, preparing it in a dark state~\cite{metz}. One could also consider generalized dynamical-decoupling ({bang-bang}) schemes~\cite{bangbang} to actively cancel the effect of the environment on the system of interest. These strategies are appealing and intriguing from a theoretical point of view and proof-of-principle experiments have been performed, for example in a solid-state setup~\cite{solidoDFS}. However, they are still far from complete and too demanding for the current state of the art. In this paper we find a strategy, based on dissipative qubit-bus dynamics, enabling the simultaneous {generation} of two-qubit entanglement and {protection} against both qubit and bus losses. No structured-bath engineering is required in our scheme. The protection is achieved by using a simple global addressing of the register, a feature which relaxes the usually assumed requirement of single-qubit addressing at the center of many dynamical-decoupling protocols and brings our scheme closer to experimental feasibility. While a quantitative analysis is deferred until later, here we briefly provide an intuitive picture of the mechanism behind our proposal. It is known from the study of the so-called Dicke model~\cite{dicke} that a bipartite qubit system, prepared with the qubits in their excited state and exposed to the fluctuations of a common reservoir, soon decays via the channel given by the symmetric state $\ket{s}=(1/\sqrt 2)(\ket{01}+\ket{10})$ (with $\ket{0}$ and $\ket{1}$ the single-qubit logical levels) into the total ground state of the qubits. Thus, there is a transient period in the dynamics of the two qubits when a maximally entangled component is involved in the state of the system. However, the system does not exhibit entanglement because of a competition between the fading symmetric state and the increasingly populated collective ground state. 
In order to reverse this situation, the influence of $\ket{s}$ has to be emphasized and {stabilized} with time. The protection from environmental effects is then achieved by this stabilization and also by simultaneously inducing a relative phase between the qubits. Under proper conditions, this results in part of the population of the symmetric state being moved into the antisymmetric state $\ket{a}=({1/\sqrt 2})(\ket{01}-\ket{10})$, which is decoupled from the decay mechanism (it is a {subradiant} state)~\cite{dicke}. In this paper we show that an entangled steady state is produced by modulating a detuning between the common bus and the register. The results depend on the temporal profile of the modulation, which represents a {dial} that can be turned so as to span a specific sector of entangled steady states. Finally, we address an active protocol based on postselection, which projects the state of the qubits almost completely onto the symmetric state, thereby achieving nearly maximal entanglement. It is important to stress the differences between a bang-bang scheme~\cite{bangbang} and ours. Although both schemes ultimately rely on the control of the interaction between the register and the environment, the two approaches are intrinsically different. Bang-bang schemes keep the state of interest unchanged throughout the evolution by effectively decoupling it from the environment. Our protocol, however, produces entanglement through the exploitation of purely dissipative dynamics. The protection of the entanglement from the influences of the external world is provided by the development of a {subradiant} behavior of the system due to the detuning modulation. Moreover, in a bang-bang protocol the timing is set by the fast switching rate of a control field, which is given by the inverse of the coupling strength between the system and the environment.
Our scheme, on the other hand, is based on a weak-coupling regime between the register and the bus, which sets a slower time-scale than bang-bang schemes. This is not merely a technical difference, but a manifestation of two almost complementary ways of designing protection from an environment. The remainder of the paper is organized as follows. In Section~\ref{system} we address the Bloch equations derived from the qubits' reduced master equation, in a weak-coupling regime and first Born-Markov approximation. This describes the dynamics of the qubits interacting with a leaky bus mode exposed to a bosonic multi-mode reservoir at thermal equilibrium with temperature $T$. The Bloch equations (as well as the master equation) fully account for a time-dependent detuning, which modulates the coupling between the qubits and the bus. Section~\ref{double detuning} explains in some detail why single-qubit addressing is not required in our scheme. This paves the way toward setting our proposal within the context of active (but simple) global-addressing scenarios. In Section~\ref{postselect} we describe the postselection protocol to improve the amount of entanglement established between the qubits by a nearly perfect projection of their joint state onto the symmetric state $\ket{s}$. Finally, Section~\ref{setup} describes a physical system to embody our proposed scheme. We show in some detail how a circuit-QED setup of two superconducting charge qubits incorporated in a microwave planar stripline resonator can be used. This offers practical advantages compared to the standard cavity-QED setup. The details regarding the derivation of the qubits' master equation are given in the Appendix. \section{The system and detuning modulation protocol} \label{system} We consider two qubits, labelled $1$ and $2$, interacting with a one-sided single-mode cavity field described by the bosonic creation (annihilation) operator $\hat{a}^{\dag}$ ($\hat{a}$).
Each qubit is characterized by a transition frequency $E^{0}$ and interacts with the cavity field (frequency $\omega_c$) via a dipole interaction with strength $g$. We assume a leaky cavity exposed to a multi-mode bosonic environment. The cavity mode is in a thermal state with temperature $T$ and average photon number $\bar{n}=(e^{\beta\omega_c}-1)^{-1}$, where $\beta^{-1}=K_{B}T$ and $K_B$ is the Boltzmann constant (we use units such that $\hbar=1$). The strength of the field mode-external bath coupling is given by the cavity decay-rate $\kappa$. Within the rotating wave approximation, we model each qubit-field interaction as $\hat{H}_{cj}=g(\hat{a}^{\dag}\hat{\sigma}^{-}_{j}+h.c.)$ ($j=1,2$), where $\hat{\sigma}^{-}_{j}=(\hat{\sigma}^{+}_{j})^{\dag}=\ket{0}_{j}\!\bra{1}$. The total energy of the system is \begin{figure}[t] \psfig{figure=scheme4.eps,width=6.0cm,height=4.0cm} \caption{(Color online) Two qubits interact with the same cavity mode (in contact with a thermal reservoir, decay rate $\kappa$) acting as a common bus. An inhomogeneous global potential, setting a time-dependent detuning is also shown.} \label{scheme} \end{figure} \begin{equation} \label{sistema} \hat{H}_{sys}=\omega_c\,\hat{a}^\dag\hat{a}+\sum^{2}_{j=1}[\frac{E^{0}_{j}(t)}{2}\hat{\sigma}^{z}_{j}+\hat{H}_{cj}] \end{equation} with $\hat{\sigma}^{z}_{j}$ the $z-$Pauli matrix of qubit $j$. We have considered an implicit time dependence of the single-qubit transition frequency. The detuning between the $j-$th qubit and the cavity field mode is indicated as $\Delta_{j}(t)=E^{0}_{j}(t)-\omega_{c}$. By introducing the transverse-mode energy decay rate $\gamma$ and assuming $\kappa\ll{\omega_{c}}$, the evolution of the qubits-bus system is described by the master equation Eq. (\ref{eq1}) given in the Appendix. Our system is sketched in Fig.~\ref{scheme}. As we are interested in just the dynamics of the qubits, we adiabatically eliminate the bus mode and derive the corresponding Bloch equations. 
This is straightforwardly done by deriving a reduced master equation for the qubits only and projecting it onto the basis $\{\ket{\uparrow=11},\ket{s},\ket{a},\ket{\downarrow=00}\}_{12}$. We refer to the Appendix for the details of the adiabatic elimination. Here we concentrate on the form of the Bloch equations relevant to our work, where we assume the initial state $\ket{\uparrow}_{12}$ is prepared and, for the sake of simplicity, we consider the case of a detuning in amplitude smaller than the cavity decay rate. Using the notation $\varrho_{ij}=\mbox{}_{12}\expect{i}{\varrho}{j}_{12}$ ($i,j=\uparrow,s,a,\downarrow$) and $G^q_p(k\overline{n})=q\gamma+\frac{g^2(k\overline{n}+p)}{\kappa}$ they read \begin{equation} \label{bloch} \begin{split} &\partial_{t}\varrho_{\uparrow\uparrow}=-4G^1_1(\overline{n})\varrho_{\uparrow\uparrow} +2G^0_0(\overline{n})(\varrho_{ss}+\varrho_{aa}),\\ &\partial_{t}\varrho_{ss}=-G^0_1(2\overline{n})[\cos{\Delta(t)}\varrho_{ss}-i\sin{\Delta(t)}\varrho_{sa}+h.c.]\\ &+2G^1_1(\overline{n})(\varrho_{\uparrow\uparrow}-\varrho_{ss})-2G^0_0(\overline{n})(\varrho_{ss}-\varrho_{\downarrow\downarrow})\\ &+2\cos\Delta(t)[G^0_1(\overline{n})\varrho_{\uparrow\uparrow}+G^0_0(\overline{n})\varrho_{\downarrow\downarrow}],\\ &\partial_{t}\varrho_{sa}=-2G^1_1(2\overline{n})\varrho_{sa}+i\sin{\Delta(t)}[2G^0_1(\overline{n})\varrho_{\uparrow\uparrow}\\ &+2G^0_0(\overline{n})\varrho_{\downarrow\downarrow}-G^0_1(2\overline{n})(\varrho_{ss}+\varrho_{aa})]. \end{split} \end{equation} The equation for $\varrho_{aa}$ is given by that for $\varrho_{ss}$ with $\varrho_{ss}\rightarrow\varrho_{aa}$ and $\Delta(t)\rightarrow\pi-\Delta(t)$. The equation for $\varrho_{as}$ is the hermitian conjugate of the one for $\varrho_{sa}$ and the equation for $\varrho_{\downarrow\downarrow}$ is found from $\varrho_{\downarrow\downarrow}=1-\varrho_{\uparrow\uparrow}-\varrho_{ss}-\varrho_{aa}$. 
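A quick sanity check on Eqs.~(\ref{bloch}) is possible in the fully resonant case ($\Delta=0$) with $\gamma=\bar{n}=0$: there the equations close to $\partial_\tau\varrho_{\uparrow\uparrow}=-4\varrho_{\uparrow\uparrow}$ and $\partial_\tau\varrho_{ss}=4(\varrho_{\uparrow\uparrow}-\varrho_{ss})$ in the rescaled time $\tau=g^2t/\kappa$, whose solution $\varrho_{ss}=4\tau e^{-4\tau}$ peaks at $1/e\simeq0.37$, while $\varrho_{aa}$ stays identically zero (the subradiant state is decoupled). The sketch below integrates this reduced system numerically; the integrator and the absolute normalization of the $\tau$ axis are our own conventions and need not match the figures exactly.

```python
import math

def bloch_resonant(y):
    """RHS of Eqs. (bloch) at Delta=0 with gamma=nbar=0, time in units
    tau = g^2 t / kappa; y = (rho_uu, rho_ss, rho_aa). rho_sa stays zero
    at resonance, so it is omitted."""
    r_uu, r_ss, r_aa = y
    return (-4.0 * r_uu,              # superradiant decay of |11>
            4.0 * (r_uu - r_ss),      # |s> fed by |11>, then decaying
            0.0)                      # |a> is decoupled (subradiant)

def rk4(f, y, tau_end, n_steps):
    """Classical fourth-order Runge-Kutta integrator (fixed step)."""
    h, traj = tau_end / n_steps, [tuple(y)]
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k1)))
        k3 = f(tuple(yi + 0.5 * h * ki for yi, ki in zip(y, k2)))
        k4 = f(tuple(yi + h * ki for yi, ki in zip(y, k3)))
        y = tuple(yi + h * (a + 2 * b + 2 * c + d) / 6.0
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4))
        traj.append(y)
    return traj

traj = rk4(bloch_resonant, (1.0, 0.0, 0.0), 3.0, 3000)  # start from |11>
rho_ss_peak = max(p[1] for p in traj)   # analytic maximum of 4*tau*exp(-4*tau) is 1/e
rho_aa_max = max(abs(p[2]) for p in traj)
```

The peak symmetric-state population of $\simeq0.37$ and the vanishing antisymmetric fidelity reproduce the qualitative behavior discussed around Fig.~\ref{fedelta}.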
We have taken $\Delta_1(t)>0$, with $\Delta_{2}(t)=0$ and $\Delta_{1}(t)=\Delta(t)$ for ease of notation. In Section~\ref{double detuning} we consider the generalization of this situation to two-qubit detuning and in the Appendix we provide the form of the reduced master equation. Moreover, we stress that the absence of terms like $\varrho_{\uparrow{a}},\varrho_{\uparrow{s}}$ (and analogous) is due to the specific choice of the initial state. In particular, if any coherence is initially present in the qubit state, the set of Eqs.~(\ref{bloch}) must be complemented by a second closed system of Bloch equations which can easily be derived. From Eqs.~(\ref{bloch}) the initial state $\ket{\uparrow}_{12}$ evolves into \begin{equation} \label{density} \varrho=\sum_{j=\uparrow,a,s,\downarrow}\varrho_{jj}\ket{j}_{12}\!\bra{j}+(\varrho_{as}\ket{a}_{12}\!\bra{s}+h.c.). \end{equation} \begin{figure}[t] \psfig{figure=fidelity.eps,width=6.cm,height=4.0cm} \caption{(Color online) Fidelity of the two-qubit state $\varrho$ for the fully resonant case with the symmetric state $\ket{s}_{12}$ (solid line) and the antisymmetric one $\ket{a}_{12}$ (identical to zero) against the dimensionless interaction time $\tau=g^2t/\kappa$. } \label{fedelta} \end{figure} In order to illustrate the basic features of our proposal, we consider $\gamma=\bar{n}=0$ for the moment. These parameters will be re-introduced later on. As a measure of the entanglement in the bipartite mixed state of qubits $1$ and $2$, we use the concurrence~\cite{wootters} which, for a two-qubit state, can be calculated as ${\cal C}(\varrho)=\max[0,\sqrt{\alpha_{1}}-\sum_{i=2}^{4}\sqrt{\alpha_i}]$ with $\bar{\varrho}=\varrho(\otimes^{2}_{j=1}\sigma^{j}_{y})\varrho^{*}(\otimes^{2}_{j=1}\sigma^{j}_{y})$ ($\varrho^{*}$ is the complex conjugate of $\varrho$) and $\alpha_1\ge\alpha_i\,(i=2,3,4)$ are the eigenvalues of $\bar{\varrho}$.
For a two-qubit state of the form of Eq.~(\ref{density}), we have \begin{equation} \label{concurrence} {\cal C}(\varrho)=\max[0,2(\vert\varrho_{10,01}\vert-\sqrt{\varrho_{\uparrow\uparrow}\varrho_{\downarrow\downarrow}})] \end{equation} with $2\varrho_{10,01}=\varrho_{ss}+\varrho_{sa}-\varrho_{as}-\varrho_{aa}$. In order to gain as much information as we can about the behavior of ${\cal C}(\varrho)$, we relax the $\max$ condition. Thus, in the following plots, entanglement is present only when ${\cal C}(\varrho)>0$. Moreover, we address a physically interesting situation by considering the initial state $\ket{\uparrow}_{12}$ which, as our system is formally equivalent to a Dicke model~\cite{dicke}, decays toward the ground state $\ket{\downarrow}_{12}$ on a time-scale which is faster than the single-qubit relaxation time. Thus, for the steady state of the system, no entanglement is expected between $1$ and $2$. As we stress later, this choice for an initial state is dictated by the global addressing context of this work. The state $\ket{\uparrow}_{12}$ can be prepared via a global potential and without local control~\cite{commento}. As time passes, the superradiant state is rotated toward a mixed state. While still exhibiting no quantum correlations (it never violates the Peres-Horodecki separability criterion~\cite{PH}), the state nevertheless has a good projection onto $\ket{s}_{12}$ and no contribution from the antisymmetric state $\ket{a}_{12}$. This is shown in Fig.~\ref{fedelta} where the fidelities ${\cal F}_{j}=\mbox{}_{12}\expect{j}{\varrho}{j}_{12}=\varrho_{jj}$ ($j=s,a$) are plotted against the dimensionless interaction time $\tau=g^2t/\kappa$. At $\tau\simeq{2.5}$, $\varrho_{ss}\gtrsim{0.37}$ (with $\varrho_{aa}=0$), though the state is still mixed and separable due to the non-zero populations of the fully polarized states $\ket{\uparrow}_{12}$ and $\ket{\downarrow}_{12}$. This suggests a minimization of the influence of these components on~$\varrho$.
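Eq.~(\ref{concurrence}) can be cross-checked numerically against the general Wootters formula on a state of the form of Eq.~(\ref{density}); the populations and coherence below are illustrative numbers only, chosen to give a valid (positive-semidefinite, unit-trace) density matrix, not values from our dynamics.

```python
import numpy as np

# Illustrative populations of |11>, |s>, |a>, |00> and a coherence rho_sa
r_uu, r_ss, r_aa, r_dd, r_sa = 0.05, 0.55, 0.25, 0.15, 0.10j

# Build rho in the computational basis {|11>,|10>,|01>,|00>},
# with |s> = (|01>+|10>)/sqrt(2) and |a> = (|01>-|10>)/sqrt(2).
s = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)
a = np.array([0.0, -1.0, 1.0, 0.0]) / np.sqrt(2.0)
up = np.array([1.0, 0.0, 0.0, 0.0])
dn = np.array([0.0, 0.0, 0.0, 1.0])
rho = (r_uu * np.outer(up, up) + r_dd * np.outer(dn, dn)
       + r_ss * np.outer(s, s) + r_aa * np.outer(a, a)
       + r_sa * np.outer(s, a) + np.conj(r_sa) * np.outer(a, s))

# Full Wootters concurrence: sqrt of eigenvalues of rho.(sy x sy).rho*.(sy x sy)
sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
yy = np.kron(sy, sy)
R = rho @ yy @ rho.conj() @ yy
lam = np.sqrt(np.abs(np.linalg.eigvals(R).real))
lam.sort()
c_wootters = max(0.0, lam[-1] - lam[-2] - lam[-3] - lam[-4])

# Closed form of Eq. (concurrence), with 2*rho_{10,01} = r_ss + r_sa - r_sa* - r_aa
rho_1001 = 0.5 * (r_ss + r_sa - np.conj(r_sa) - r_aa)
c_formula = max(0.0, 2.0 * (abs(rho_1001) - np.sqrt(r_uu * r_dd)))
```

Both routes agree to machine precision, confirming the reduced expression for states with no $\varrho_{\uparrow\downarrow}$ coherence.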
The idea behind our proposal is to exploit this fact and change the dynamical evolution of the qubits, by introducing a detuning $\Delta$, so as to induce an evolution with an initial state which is no longer $\ket{\uparrow}_{12}$ but $\varrho(\tau=2.5)$. \begin{figure}[b] \psfig{figure=condetu1.eps,width=6.cm,height=4.0cm} \caption{(Color online) ${\cal C}(\varrho)$ against $\tau$ for the initial state $\ket{\uparrow}_{12}$ and $\Delta(\tau)=10\Theta(\tau-\tau_0)$. We have considered $\tau_0=1.5, 2.5, 3.5$ (dashed, solid and dotted line respectively). The inset shows the behavior of the detuning functions.} \label{condetu} \end{figure} Thus, at $\tau=2.5$ an external potential is switched on and changes the transition frequency $E^{0}_1(t)$. The dynamics of the qubits are therefore described by Eqs.~(\ref{bloch}) (no component outside the subspace encompassed by Eqs.~(\ref{bloch}) is present in the new initial state) and the concurrence is plotted against $\tau$ in Fig.~\ref{condetu} (solid line), where $\Delta=10g^2/\kappa$ with $g=0.3\kappa$, so that the adiabatic condition is fully respected. We find ${\cal C}>0.3$, stable at the steady state of the qubits. This interesting result can be compared with~\cite{myungpeter}, which does not achieve stable entanglement. The plot results from a transition between the entanglement function of the fully resonant case before $\tau=2.5$ and that of the single-qubit detuning described above for $\tau>2.5$. This situation is equivalent to taking $\Delta(\tau)=10\Theta(\tau-2.5)$ (in units of $g^2/\kappa$), where $\Theta(\tau-\tau_0)$ is the Heaviside function. The steady-state value of the entanglement turns out to depend weakly on the amplitude of $\Delta(\tau)$ but strongly on $\tau_0$.
For instance, in Fig.~\ref{condetu} we show the results of small deviations from the case considered above by plotting the concurrence relative to $\tau_0=1.5$ (dashed line) and $\tau_0=3.5$ (dotted line), which give rise to smaller steady-state entanglement. This results from a smaller $\varrho_{ss}(\tau_0)$ component in the two-qubit state and a disadvantageous competition between $\ket{s}_{12}$ and $\ket{a}_{12}$, which lowers the entanglement. This is strikingly exemplified by increasing $\tau_0$ by one order of magnitude. In this case, the switching on of the detuning occurs when the two-qubit state has mostly decayed to $\ket{\downarrow}_{12}$ ($\varrho_{\downarrow\downarrow}(\tau_0=25)=0.999501$). It is worth stressing that, even though our choice for $g/\kappa$ may seem to put the above example at the boundary of the applicability of the adiabatic elimination, we have checked that for $g/\kappa\sim{0.1}$, no significant change occurs in the entanglement generation process~\footnote{The main implication of choosing a smaller $g/\kappa$ ratio is the shift toward larger values of the instant at which the concurrence starts to be positive.}. In principle, the true evolution of the system, obtained by numerically solving the complete master equation (without adiabatic elimination), should be compared to the situation here at hand. However, this is in general a very hard task, which goes beyond the scope of the present work. Nevertheless, by checking the effects of values for $g/\kappa$ which are largely within the validity of the adiabatic elimination, we can be confident in the validity of the above approach. The system is flexible enough to tolerate a less strict ratio of the different time-scales involved in the problem. The appearance of steady-state entanglement can be shown clearly by the behavior of the density matrix elements.
A careful analysis reveals that the qubit entanglement is due to the presence of $\ket{s}_{12}$ and $\ket{a}_{12}$. The calculation of the fidelities ${\cal F}_{s,a}$ in the presence of the detuning modulation reveals that, as soon as the detuning is switched on, an $\ket{a}_{12}$ component develops which, after a transient period, stabilizes to a steady-state value. This stabilization holds also for the $\ket{s}_{12}$ component, with $\varrho_{ss}\gg\varrho_{aa}$. These behaviors can be seen in Fig.~\ref{fedelta2} for the Heaviside function with $\tau_0=2.5$. However, $\varrho_{\downarrow\downarrow}$ never vanishes, thus affecting the entanglement. \begin{figure}[t] \psfig{figure=fedelta2.eps,width=6.0cm,height=4.0cm} \caption{(Color online) Fidelities ${\cal F}_{s,a}$ for a detuning modulation strategy based on a Heaviside function with $\tau_0=2.5$. The solid line is for ${\cal F}_s$, the dotted one for ${\cal F}_a$. At $\tau=\tau_0$, an antisymmetric state component is developed. For large $\tau$, both the antisymmetric and symmetric state fidelities are stabilized.} \label{fedelta2} \end{figure} This clarifies the mechanism behind the entanglement generation and protection. Without a detuning modulation, the system would never develop any {subradiant} behavior ({\it i.e.} the overlap with $\ket{a}_{12}$ would always be zero). This is not true for a modulated situation, where the dynamical conditions are changed. Once the system has decayed into an incoherent superposition of $\ket{s}_{12}$, $\ket{\uparrow}_{12}$ and $\ket{\downarrow}_{12}$ (for $\tau<\tau_0$), one qubit acquires a relative phase with respect to the other one due to the detuning, which results in the development of a subradiant component. The steady-state entanglement is the result of a competition between $\ket{\downarrow}_{12}$, the antisymmetric and the symmetric component. The assumption of a Heaviside function regulating $\Delta(t)$ is not critical.
In order to relax this assumption, we have checked the results corresponding to a smooth rising edge of the detuning given by the function ${\cal A}[{{1+e^{2b(\tau_0-\tau)}}}]^{-\frac{1}{2}}$ with ${\cal A}$ an amplitude. For proper choices of ${\cal A}$ and $b$, this is a slowly rising function (with respect to the time-scale set by $\kappa$, see the Appendix) producing a concurrence which differs from the result obtained for a Heaviside function by less than $1\%$, as shown in Fig.~\ref{confrontodetu} for $b=3,\tau_0=2.5$ and ${\cal A}=10$. \begin{figure}[b] \psfig{figure=confrontodetu3.eps,width=6.0cm,height=4.0cm} \caption{(Color online) Comparison between the concurrence obtained by using a Heaviside function $10\Theta(\tau-2.5)$ (dashed line) and the smooth detuning function ${10}[{{1+e^{6(2.5-\tau)}}}]^{-\frac{1}{2}}$ (solid line). The inset shows the time behavior of the detuning functions.} \label{confrontodetu} \end{figure} Thus, for definiteness and in order to simplify the calculations, we assume a Heaviside profile of the detuning rising edge~\footnote{The considerations made about the ratio $g/\kappa$ hold also in relation to the smooth rising-edge considered here. A smaller ratio does not change the validity of the conclusions drawn so far.}. It should now be clear that a detuning function with a single rising edge represents the best choice. This can be confirmed by considering the value of ${\cal C}$ as a function of the time width of a single square pulse. As already stressed, the turning on of the detuning corresponds to the maximization of the symmetric state component and the introduction of an $\ket{a}_{12}$ component in $\varrho$. If the detuning is switched off after a time $\delta\tau$, the symmetric component quickly goes to zero (together with any correlation between $\ket{s}$ and $\ket{a}$) while the subradiant part is preserved.
This achieves a non-zero stationary entanglement which nevertheless is reduced with respect to the case of a Heaviside function. Indeed, in the above conditions, at the generic time $\tau_e$, the steady-state entanglement quantitatively corresponds to the fraction of the antisymmetric state being present in $\varrho$, as can be immediately seen by considering a state like $\varrho=\sum_{j=\uparrow,s,a,\downarrow}A_j\ket{j}_{12}\!\bra{j}$. For $\tau_e-\tau_0\gg\delta\tau$, the population of $\ket{\uparrow}_{12}$ is zero and so is the symmetric state fraction. That is, $A_{\uparrow,s}=0$ so that ${\cal C}(\varrho)=A_a$. The entanglement is stable, even though small, due to the sole presence of the subradiant component, developed at the switching-on of the pulse. As soon as $\delta\tau$ becomes larger than $\tau_e$, making the symmetric state fraction (and its correlations with $\ket{a}_{12}$) non-negligible, the entanglement is not only stable but also reaches the asymptotic value corresponding to Fig.~\ref{condetu}, as can easily be seen. As $\tau_e$ is increased, this behavior holds for a larger $\delta\tau$, demonstrating that a large steady-state entanglement is achieved only for a step-like function (as mentioned before, the rising-edge functional behavior is irrelevant). A further case can be considered, namely a periodic modulation. However, this modulation implies the switching on/off of the detuning at instants of time that correspond to smaller fidelities of the state $\varrho$ with the symmetric state. For instance, in Fig.~\ref{ondaquadra} we consider the case of a square wave $\Delta_{sw}(\tau)=10\sum^{N}_{n=1}(-1)^{n+1}\Theta(\tau-2.5 n)$, which produces $N/2$ square pulses of amplitude $10$ (units of $g^2/\kappa$).
\begin{figure}[b] \psfig{figure=ondaquadra.eps,width=6.0cm,height=4.0cm} \caption{(Color online) Concurrence (red curve, left vertical scale) superimposed on the detuning function $\Delta_{sw}(\tau)$ (black curve, right scale) against $\tau$ for the qubits initially prepared in the superradiant state $\ket{\uparrow}_{12}$. In this plot we have considered $N=16$. No entanglement is ever present between the qubits.} \label{ondaquadra} \end{figure} No entanglement results from this detuning modulation strategy, as the switching of the detuning introduces a jagged drop in the fidelity compared to the situation in Fig.~\ref{fedelta2}. While the on part of the square wave always corresponds to a slow {increase} of ${\cal C}$, the off part results in a larger decrease, giving an overall {\it pull down} of entanglement. So far, only the case with no spontaneous emission and a zero-temperature bath has been considered. In order to include the effects of $\gamma,\overline{n}\neq{0}$, we need to solve Eqs.~(\ref{bloch}) for $\gamma,\bar{n}\neq{0}$ and relate them more closely to a physical setup that will implement our protocol. Although more detail will be given later, here we mention that the situation considered is such that $\gamma\ll{g^2/\kappa}$ and $\bar{n}\ll{1}$, which are realistic conditions in several physical systems such as circuit-quantum electrodynamics of superconducting charge qubits integrated in microwave cavities~\cite{ioJJ,schoelkopf}. We will postpone discussion of the order of magnitude of these physical parameters until Section~\ref{setup}. To fix ideas and to be as close as possible to physical reality, we consider $\bar{n}=0.06$ and $\gamma=10^{-3}\kappa$. Moreover, in tackling this analysis, we find it convenient to refer to the computational basis $\{\ket{\uparrow},\ket{10},\ket{01},\ket{\downarrow}\}_{12}$. The essential features of the previously considered case still hold, with the ${\cal F}_s$ function also maximized at $\tau=2.5$.
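The thermal occupation $\bar{n}=0.06$ adopted above corresponds to experimentally reasonable conditions via the Bose-Einstein law $\bar{n}=(e^{\beta\omega_c}-1)^{-1}$ introduced in Section~\ref{system}; as an illustration, the 10~GHz resonator frequency and $\sim$170~mK bath temperature below are our assumptions, not parameters fixed by the scheme.

```python
import math

H = 6.62607015e-34      # Planck constant (J s)
KB = 1.380649e-23       # Boltzmann constant (J/K)

def nbar(freq_hz, temp_k):
    """Bose-Einstein occupation (e^{hf/kT} - 1)^{-1} of a single mode."""
    return 1.0 / math.expm1(H * freq_hz / (KB * temp_k))

# Hypothetical 10 GHz microwave resonator at a ~170 mK dilution-fridge stage
n = nbar(10e9, 0.167)
```

The result, $\bar{n}\simeq0.06$, shows that the value used in our plots is within reach of standard cryogenic circuit-QED operating points.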
The main effect of the non-zero thermal photon number is a reduction in the steady-state entanglement value. This is due to $\bar{n}\neq{0}$, which reduces the coherence $\varrho_{10,01}$, thus preventing the state from mimicking the symmetric state. On the other hand, $\gamma\neq{0}$ introduces a second time-scale in the system, which results in a slow decay of both the populations of states $\ket{10}_{12},\,\ket{01}_{12}$ and of the coherence $\varrho_{10,01}$. This accounts for an overall decay of ${\cal C}$, as shown in Fig.~\ref{condecrease}. \begin{figure}[b] \psfig{figure=condecrease.eps,width=6.0cm,height=4.0cm} \caption{(Color online) Concurrence against $\tau$ for a modulated detuning with $g=0.3\kappa,\,\gamma=10^{-3}\kappa$ and $\bar{n}=0.06$. The time axis has been extended in order to show that a concurrence close to $0.2$ is present for $\tau$ up to $100$. The slow entanglement decay is due to the presence of a non-zero $\gamma$, while the smaller steady-state value is largely due to the thermal nature of the bath ($\bar{n}\neq{0}$).} \label{condecrease} \end{figure} Despite the decrease found for non-zero $\gamma$, the entanglement still remains as large as $0.2$ for interaction times up to $\tau=100$. This corresponds to an effective entanglement protection from both thermal behavior of the bosonic bath and the qubits' spontaneous decay. \section{Double detuning and global addressing} \label{double detuning} In this Section we address the question of whether the assumption of a single-qubit detuning modulation is critical to the proposed protocol. The answer to this provides an effective justification and an {\it a posteriori} motivation for the analysis conducted so far. In order to do this, we re-consider the physical system described in Section~\ref{system} (see the Appendix) with the inclusion of the second qubit detuning $\Delta_2(t)$. 
This modifies the dynamics of the system in such a way that the Bloch equations (\ref{bloch}) remain unaltered with the replacement ${\Delta}(t)\rightarrow\tilde{\Delta}(t)=\Delta_{1}(t)-\Delta_{2}(t)$. Thus, considering both the qubits as detuned in time is equivalent to just considering the energy of qubit $1$ being modulated with an effective time-dependent detuning $\tilde\Delta(t)$. That is, the analysis conducted so far is perfectly general: there is no loss of generality in considering a single-qubit modulation, as this case rigorously encompasses the most general situation of double detuning. Obviously, the $\Delta_{j}(t)$'s cannot, in general, be chosen arbitrarily. This result has two main implications. The first is that in order for the protocol we have described to be effective, we must consider detuning functions which are opposite in sign ({\it i.e.} one detuning has to be positive, the other negative). The second point is pragmatically relevant as we can now put our scheme within the context of global addressing protocols~\cite{sougato}. Indeed, it should be clear after the above discussion that the realization of the detuning-modulation protocol simply requires the appropriate setting of a potential which addresses both the qubits in the correct way (increasing the energy spacing of qubit $1$ with respect to the resonant value $\omega_c$ and reducing the spacing of qubit $2$). No single-qubit addressability is needed, which considerably reduces the experimental effort required for the implementation of the scheme. It is not necessary to require a strongly focused potential applied to just one of the two qubits and having no effect on the dynamics of the other one. In order to fix this idea, one can consider a global magnetic field inducing a {Zeeman-like} effect on the qubits' energy levels, the shifting being different from qubit to qubit because of a gradient in the magnetic potential (see Fig.~\ref{scheme}). 
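That only the difference of the detunings matters can already be seen at the level of the free evolution in the one-excitation sector: shifting both qubit energies by the same amount changes the propagator only by a global phase. The toy sketch below is our own illustration (arbitrary detuning values, basis restricted to $\{\ket{10},\ket{01}\}$, free evolution only), not a solution of the full Bloch equations:

```python
import numpy as np

def evolve(delta1, delta2, rho0, t):
    """Free evolution in the one-excitation sector {|10>, |01>}: H = diag(D1, D2)."""
    u = np.diag(np.exp(-1j * np.array([delta1, delta2]) * t))
    return u @ rho0 @ u.conj().T

s = np.array([1.0, 1.0]) / np.sqrt(2)   # symmetric state in this sector
rho0 = np.outer(s, s)

rho_a = evolve(5.0, -5.0, rho0, t=0.7)  # opposite detunings
rho_b = evolve(10.0, 0.0, rho0, t=0.7)  # same difference, one-sided

print(np.allclose(rho_a, rho_b))        # True: only D1 - D2 enters the state
```

Here $(\Delta_1,\Delta_2)=(5,-5)$ and $(10,0)$ share the same difference $\tilde\Delta=10$, and the resulting density matrices coincide, since the two Hamiltonians differ by a multiple of the identity.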
In Section~\ref{setup} we address the physical mechanism responsible for such a shift by considering a specific experimental setup that can be used in order to implement our proposal. \section{Entanglement improvement by postselection} \label{postselect} As the entanglement is set in a (quasi) steady-state, the required degree of control over the system is reduced. With the exception of the choice for the optimal value of $\tau_0$ at which to switch on the detuning, no fine time control is necessary in order to properly drive the dynamics of the system. Nevertheless, it might be desirable in many situations to raise ${\cal C}$ up to a maximal entanglement of one ebit. We have seen an intrinsic limitation in the amount of establishable entanglement due to the unavoidable presence of the spurious population of $\ket{\downarrow}_{12}$. A procedure which allows us to cut away the unwanted contribution from $\ket{\downarrow}_{12}$ is the postselection of the two-qubit state after some detection event. Recall that the influence of the $\varrho_{\uparrow\uparrow}$ component fades away during the evolution. In the specific case at hand, each time the state of the two qubits is not found to be $\ket{\downarrow}_{12}$, the overlap with $\ket{s}_{12}$ increases, improving the entanglement between the qubits. Thus, by using the positive-operator-valued-measure (POVM) $\{\hat{\Pi}_{0}=\ket{\downarrow}_{12}\!\bra{\downarrow},\hat{\Pi}_{1}=\openone-\hat{\Pi}_{0}\}$ with $\openone$ the identity operator, we can postselect the state resulting from the qubits not being found in the global ground state. This changes $\varrho$ into $\varrho_{p}={\cal N}\hat{\Pi}_{1}\varrho\hat{\Pi}_{1}={\cal N}(\varrho-\varrho_{\downarrow\downarrow}\ket{\downarrow}_{12}\!\bra{\downarrow})$ with ${\cal N}$ a normalization factor. 
As $\varrho_{\uparrow\uparrow}\rightarrow{0}$, this effectively results in a projection of the two-qubit state onto the subspace spanned by $\ket{s,a}_{12}$ with asymptotically $\mbox{}_{12}\expect{s}{\varrho_p}{s}_{12}>{0.9}$. After the analysis in Section II, we know that a large fraction of $\ket{s}_{12}$ implies a large degree of entanglement. This is witnessed by the entanglement properties of the resulting state, which is represented in Fig.~\ref{entpost}. The plot represents the amount of entanglement in the postselected state when the measurement is performed at the instant $\tau$ in the evolution of the two-qubit state. Both the detuning-modulated (solid line) and the full-resonant (dashed line) cases are shown. In both cases there is an improvement in the amount of stationary entanglement. \begin{figure}[t] \psfig{figure=conpostsel.eps,width=6.0cm,height=4.0cm} \caption{(Color online) Comparison between the entanglement in the postselected state with and without the detuning-modulation protocol (solid and dashed curves respectively). A Heaviside modulation is considered in the solid line case.} \label{entpost} \end{figure} While the second case has concurrence stabilized around $0.4$, an entanglement larger than $0.9$ is obtained in the modulated case. However, a full ebit is not possible as the spontaneous emission and the thermal effects of the bath spoil the correlations between the qubits. Indeed, we have checked that almost a complete ebit is achievable if $\gamma=\bar{n}=0$ is considered; in this case too the detuning-modulated protocol outperforms the unmodulated one, its concurrence always remaining above the unmodulated entanglement curve. 
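The effect of the POVM $\{\hat\Pi_0,\hat\Pi_1\}$ can be illustrated on a simple mixture of $\ket{s}_{12}$, $\ket{a}_{12}$ and $\ket{\downarrow}_{12}$. In the sketch below (our own illustration) the populations $(0.5,\,0.1,\,0.4)$ are arbitrary values, not those produced by the actual dynamics; projecting out the ground-state population raises the concurrence:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy).real))
    lam = np.sort(lam)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# basis |00>, |01>, |10>, |11>; |00> plays the role of the ground state
s = np.array([0, 1.0, 1.0, 0]) / np.sqrt(2)
a = np.array([0, 1.0, -1.0, 0]) / np.sqrt(2)
rho = 0.5 * np.outer(s, s) + 0.1 * np.outer(a, a) + 0.4 * np.diag([1.0, 0, 0, 0])

# Pi_1 = identity - |00><00|: postselect on "not in the ground state"
P1 = np.diag([0.0, 1, 1, 1])
rho_p = P1 @ rho @ P1
rho_p /= np.trace(rho_p)

print(concurrence(rho), concurrence(rho_p))  # 0.4 before, 2/3 after
```

For this state the pre-measurement concurrence is $|A_s-A_a|=0.4$, and removing the ground-state weight renormalizes it to $0.4/0.6=2/3$.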
\section{Physical setup} \label{setup} The physical system we suggest to use in order to implement our proposal is given by two superconducting charge qubits, embodied by Superconducting-Quantum-Interference-Devices (SQUIDs)~\cite{schon}, nano-lithographically implanted in a quasi-unidimensional microwave stripline resonator~\cite{zmuidinas}. This system offers advantages in many respects. First, the qubits are stationary within the cavity, so that the requirement for a fine tuning of the transit-time through the cavity (typical of microwave cavity-QED implementations) is no longer an issue. Second, the coupling between the qubits in the register and the cavity bus can easily be arranged so as to satisfy the weak coupling regime required by our proposal. Finally, the manipulability of charge qubits embodied by SQUIDs allows for a detuning modulation in the global addressing fashion depicted in this paper. In detail, we assume the charging regime and the low-temperature limit~\cite{schon} and set each SQUID to work at the charge degeneracy point, where the qubits are encoded in equally-weighted superpositions of the states having zero and one excess Cooper pair on the SQUID island, namely $\ket{\pm}_j=(1/\sqrt{2})(\ket{0}\pm\ket{2e})_j$ ($2e$ being the charge of a Cooper pair). The degeneracy point is set by biasing each SQUID with a dc electric field connected to the superconducting devices via the ground plate of the resonator. The free Hamiltonian of a single SQUID is thus given by $(1/2)E^{0}_{j}(\phi)\hat{\sigma}^{z}_{j}$, with $E^{0}_{j}(\phi)$ the Josephson energy (tuned via an external magnetic flux $\phi$ piercing the SQUIDs). By modulating the magnetic flux, we can change the energy separation between the qubit levels, thus setting the detunings with respect to the cavity mode frequency. A gradient can be incorporated into the external magnetic flux so as to realize a configuration of equal and opposite detunings for the qubits in our register. 
The microstrip resonator can be modelled as a distributed $LC$ oscillator, where $C$ is the capacitance between the plates of the stripline and $L$ is the overall inductance of the device (depending on the length of the resonator, typically in the range of $1$ cm). In this setup, $\gamma/\kappa\simeq{10^{-3}}$ and a cavity quality factor of $\sim100$ are conservative assumptions. At $\omega_{c}/2\pi=6$ GHz and $T\simeq170$ mK we have $\bar{n}\simeq{0.06}$. The coupling between qubits and cavity mode is capacitive and mediated by the electric part of the cavity field~\cite{ioJJ,schoelkopf}. In a second quantization picture, the interaction Hamiltonian can be cast in the form of a Jaynes-Cummings model so that $\hat{H}_{sys}$ in Eq.~(\ref{sistema}) can naturally be embodied by the present setup. A Liouvillian description of SQUID-cavity open systems has been proven to be rigorous up to temperatures well above those assumed in this work. Indeed, for two qubits in a stripline resonator, the optical master equation~(\ref{eq1}) can be derived from the Bloch-Redfield formalism, when the secular approximation is relaxed and a large number of elements of the Redfield tensor are considered~\cite{rau}. Two SQUID qubits (size $\sim\mu$m) can easily be accommodated in the cavity far enough away from each other to achieve negligible cross-talk (in principle due to direct capacitive and inductive coupling). Lithographic techniques allow us to control, within a few percent, the geometric characteristics and the resulting parameters of the device. The two qubits can therefore be manipulated either simultaneously or independently with two separate coils. Due to charged impurities in the vicinity of the devices, separate calibration at the degeneracy points would be required for each qubit. This may be achieved with several adjustments to the design of the setup, for instance by splitting the ground plate and attaching a gate to each part~\cite{ioJJ}. 
Let us now turn briefly to the description of possible ways of implementing the conditional detection scheme described in Section~\ref{postselect}. In principle, a measurement of the qubits' state can be performed by setting a large qubit-cavity field detuning, attaching a detector at the output capacitive gap of the stripline resonator and measuring the shifts induced in the resonance spectrum of a probe beam sent into the cavity through the input capacitive gap. The dispersive nature of the qubit-cavity coupling, which changes the refractive index of the cavity field mode, determines qubit-state dependent shifts in the resonance peak of the probe beam. This allows for the non-demolition detection of the qubit state, following the strategy depicted in~\cite{schoelkopf}. However, in order for these shifts to be detectable, the change of the refractive index has to be larger than the cavity linewidth, a condition which is hard to meet if the bad cavity regime is invoked. Nevertheless, a second strategy is possible, which is more suitable for conditions of large detunings between the cavity and register and a large cavity decay rate. This involves driving a cavity field mode with a coherent state $\ket{\alpha}$ ($\alpha\in{\mathbb C}$). In the situation of a large qubit-cavity detuning, the dispersive dynamics the system undergoes is such that the globally unexcited state $\ket{\downarrow}_{12}$ becomes correlated with the field state $\ket{\alpha{e}^{i\theta}}$~\cite{ionjp,schoelkopf}. That is, in phase space the coherent state acquires an additional phase dependent in general on the ratio $2g^2/\Delta$. On the other hand, the symmetric and antisymmetric components of the density matrix leave the coherent state unchanged~\cite{ionjp}. A homodyne measurement of the cavity field provides a distinction between the states of the register and therefore allows the implementation of the POVM we have described. 
As an additional remark, we stress that in this setup, at the charge degeneracy point, decoherence due to low-frequency modes vanishes to first order. This allows the minimization of the effects of noise sources represented by switching charged impurities in the proximity of the SQUIDs' islands, which constitute a system of bistable fluctuators giving rise to $1/f$ noise~\cite{elisabetta}. Moreover, it is worth stressing that due to the qubit-resonator interaction, the energy levels of our qubits are much less sensitive to these charge fluctuations than isolated qubits at the optimal working point~\cite{ioJJ}. This allows us to neglect any resulting dephasing effects. Finally, it is worth mentioning that the example considered in this Section is just one of the possible physical setups where our proposal could be implemented. Indeed, the formalism we have used in order to describe the main features of our protocol is general enough to be adapted to various situations. For instance, the case of two trapped ions in the Lamb-Dicke regime, placed inside an optical cavity, can be taken into consideration~\cite{walther}. The extension of our analysis to the case of multi-level systems composing the register, on the other hand, will pave the way to the use of two closely-spaced ensembles of cold two-level atoms (confined in vapor cells or magneto-optical traps). The free-space interaction of a laser with the ensembles, each treated as an effective $N/2$-spin (where $N$ is the number of atoms in each ensemble) and within the rotating wave approximation, provides an interaction Hamiltonian which is the generalization of our model to $N+1$ systems~\cite{polzik}. \acknowledgments We thank the Leverhulme Trust (ECF/40157), the UK EPSRC, KRF (2003-070-C00024) and DEL for financial support. MP thanks C. Brukner and J. Kofler for useful discussions.
\section{Introduction} \noindent For a Borel subset $K$ of $\mathbb{R}^n$ and a point $z\in \mathbb{R}^n$, the polar body $K^{*z}$ of $K$ with respect to $z$ is the convex set defined by: $$K^{*z}=\{y\in \mathbb{R}^n; \langle y-z,x-z\rangle \le 1\hbox{ for every $x\in K$}\}.$$ Here $\mathbb{R}^n$ is endowed with the canonical scalar product $\langle\ , \ \rangle$ and the associated Euclidean norm $|\cdot|$. For $z=0$, we simply write $K^\circ$ instead of $K^{*0}$. Denote by $|A|$ the Lebesgue measure of a Borel subset $A$ of $\mathbb{R}^n$. The {\it Santal\'o point} $s(K)$ of $K$ is a point for which $$|K^{*s(K)}|=\min_{z}|K^{*z}|.$$ If $K$ is bounded and not contained in a hyperplane, its Santal\'o point $z$ is characterized by the property that it is the center of mass of $K^{*z}$. The inequality of Blaschke-Santal\'o (Blaschke \cite{Blaschke}, Santal\'o \cite{Santalo}) states that $$|K|\cdot |K^{*s(K)}| \le v_{n}^2:=|B_{2}^n|^2\ ,$$ where $B_{2}^n=\{x\in \mathbb{R}^n; |x|\le 1\}$ is the Euclidean ball.\\ We shall prove here new functional versions of the Blaschke-Santal\'o inequality and give applications which extend the theorem of Ball \cite{BallPhD} as well as the recent result of Artstein, Klartag and Milman \cite{AKM}. Notice that Lutwak and Zhang \cite{Lutwak1} and Lutwak, Yang and Zhang \cite{Lutwak2} gave other very different functional forms of the Blaschke-Santal\'o inequality and recently Klartag and Milman \cite{KlartagMilman}, Klartag \cite{Klartag} and Colesanti \cite{Colesanti} also established functional forms of some other geometric inequalities. 
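As a concrete illustration of these definitions (our own numerical sketch, not part of the original text), one can check the inequality on the square $K=[-1,1]^2$ in the plane, whose polar body is the cross-polytope $\{y;\ |y_1|+|y_2|\le 1\}$, so that $|K|\,|K^\circ|=8\le v_2^2=\pi^2$:

```python
import numpy as np

# K = [-1,1]^2; K^polar = {y : <y,x> <= 1 for all x in K} is determined
# by the vertices of K, since the constraint is linear in x.
verts = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)

n = 1201
xs = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(xs, xs)
pts = np.stack([X.ravel(), Y.ravel()], axis=1)
cell = (xs[1] - xs[0]) ** 2              # area of one grid cell

vol_K = cell * np.sum((np.abs(pts) <= 1).all(axis=1))
vol_Kpol = cell * np.sum((pts @ verts.T <= 1).all(axis=1))

print(vol_K * vol_Kpol)                  # close to 8, below pi^2 ~ 9.87
```

The grid counting slightly overestimates both areas, but the Santaló product stays clearly below $\pi^2$.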
\\ The first main result of this paper generalizes with a new proof an inequality of K.~Ball \cite{BallPhD}; it treats the case of ``centered'' functions: \vskip 1mm \noindent {\bf Proposition} {\em Let $\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}$ and $f_1, f_2:\mathbb{R}^n\to\mathbb{R}_+$ be measurable functions such that $$f_1(x)f_2(y)\le \rho^{2}(\langle x,y\rangle ) \hbox{ for every $x,y\in \mathbb{R}^n$ satisfying $\langle x,y\rangle >0$}.$$ If the star shaped set $K_{1}=\{ x\in \mathbb{R}^n; \int_{0}^{+\infty} r^{n-1} f_1(rx)dr\ge 1\}$ is centrally symmetric (which holds if $f_{1}$ is even), or is a convex body with center of mass at the origin, then $$\int_{\mathbb{R}^n} f_1(x)dx\int_{\mathbb{R}^n} f_2(y)dy\le \left( \int_{\mathbb{R}^n} \rho({|x|^{2}})dx\right)^2.$$ } \vskip 1mm \noindent The idea is to attach bodies $K_1$ and $K_2$ to the functions $f_1$ and $f_2$. From the duality relation on the $f_j$'s, we deduce, using the Pr\'ekopa-Leindler inequality for the geometric mean, that the sets $K_j$ satisfy the inclusion $K_2\subset c_n(\rho)K_1^\circ$, for some constant $c_n(\rho)$. Then the result follows from the Blaschke-Santal\'o inequality for sets. \\ As an application of this proposition, we treat the case of ``non-centered'' functions: \vskip 1mm \noindent {\bf Theorem} {\em Let $\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}$ be measurable and $f:\mathbb{R}^n\to\mathbb{R}_+$ be a log-concave function such that $0<\int f <+\infty$. 
Then there exists $z\in \mathbb{R}^n$ with the following property: for any measurable function $ g: \mathbb{R}^n \mapsto \mathbb{R}_{+}$ satisfying $$f(x)g(y)\le\rho^2\left(\langle x-z, y-z\rangle\right)\ $$ for every $x,y\in \mathbb{R}^n$ with $ \langle x-z, y-z\rangle>0$, one has $$\int_{\mathbb{R}^n} f(x)dx\int_{\mathbb{R}^n} g(y)dy\le \left( \int_{\mathbb{R}^n} \rho({|x|^{2}})dx\right)^2.$$ } \vskip 1mm \noindent In the proof, we attach, for every $z\in\mathbb{R}^n$, the convex body $$K_{z}=\left\{ x\in \mathbb{R}^n;\ \int_{0}^{+\infty} f(z+rx) r^{n-1} dr \ge 1\right\}$$ and show that there exists $z_{0}\in \mathbb{R}^n$ such that the center of mass of $K_{z_{0}}$ is at the origin. Then the result follows from the preceding proposition. The existence of such a $z_0$ is proved using Brouwer's fixed point theorem.\\ The main consequence of this theorem is the following generalization of the results of Artstein, Klartag and Milman \cite{AKM} (who considered only the cases $\rho(t)=e^{-t}$ and $\rho(t)=(1-t)_+^m$) for the Legendre transform $\mathcal L_{z}\phi$ of a convex function $\phi$. \vskip 1mm\noindent {\bf Theorem} {\em Let $\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}$ be a log-concave non-increasing function and let $\phi$ be a convex function such that $0<\int_{\mathbb{R}^n}\rho\left(\phi(x)\right)dx<+\infty$ . Then for some $z\in \mathbb{R}^n$, one has $$\int_{\mathbb{R}^n}\rho\left(\phi(x)\right)dx \int_{\mathbb{R}^n}\rho\left(\mathcal L_{z}\phi(y)\right)dy\le \left(\int_{\mathbb{R}^n}\rho\left(\frac{|x|^{2}}{2}\right)dx\right)^2.$$ } \vskip 1mm \noindent In all these functional forms of Blaschke-Santal\'o inequality, we determine the equality cases and establish some geometric corollaries. 
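This Legendre-transform inequality can be probed numerically. The sketch below is our own illustration, not from the paper: it takes $n=1$, $\rho(t)=e^{-t}$ and the even convex function $\phi(x)=x^4$, for which the choice $z=0$ is admissible by symmetry, and compares $\int e^{-\phi}\int e^{-\mathcal{L}\phi}$ with the bound $\big(\int e^{-x^2/2}dx\big)^2=2\pi$:

```python
import numpy as np

def integ(f, t):
    """Trapezoidal rule on a 1-D grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

x = np.linspace(-6.0, 6.0, 2001)
y = np.linspace(-12.0, 12.0, 2001)
phi = x ** 4
# discrete Legendre transform: L phi(y) = sup_x (x*y - phi(x))
Lphi = np.max(np.outer(y, x) - phi, axis=1)

lhs = integ(np.exp(-phi), x) * integ(np.exp(-Lphi), y)
rhs = integ(np.exp(-x ** 2 / 2), x) ** 2   # ~ 2*pi after truncation

print(lhs, rhs)   # lhs ~ 5.85 < rhs ~ 6.28
```

Here $\mathcal{L}\phi(y)=3(|y|/4)^{4/3}$ in closed form, and the product falls strictly below $2\pi$, consistent with the theorem (equality would require a Gaussian-type $\phi$).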
In particular we investigate the following question: What are the Borel measures $\mu$ on $\mathbb{R}^n$ and the sets $K$ in $\mathbb{R}^n$ which satisfy a Blaschke-Santal\'o type inequality $$\mu(K)\cdot \mu(K^\circ)\le \mu(B_2^n)^2\ ?$$ Cordero-Erausquin (\cite{Cordero}) proved such an inequality in $\mathbb{C}^n$ for plurisubharmonic measures and $\mathbb{C}$-symmetric pseudo-convex sets, using complex interpolation. He also remarked that it holds for the Gaussian measure in $\mathbb{R}^n$ and asked whether it still holds for any symmetric $\log$-concave measures $\mu$ and any symmetric convex body $K$ in $\mathbb{R}^n$. Klartag also established this inequality for a special class of measures in \cite{Klartag}. As corollaries of our functional inequalities, we get that this inequality holds:\\ - for any unconditional $\log$-concave measure $\mu$ and unconditional measurable set $K$\\ - for any rotation invariant $\log$-concave measure $\mu$ and any centrally symmetric measurable set $K$.\\ And we determine the equality cases. \vskip 2mm\noindent The paper is organized in the following way. In Section 2, we treat the case of unconditional functions and sets, where one can apply a multiplicative version of the Pr\'ekopa-Leindler inequality. In Section 3, we prove the proposition stated above concerning the case of ``centered'' functions. Section 4 is devoted to the proof of our theorem on general (not centered) functions. In Section 5, we prove the consequences for Legendre transforms of convex functions. \vskip 2mm\noindent It should be observed that the main difficulty when working with Santal\'o type inequalities for non-symmetric bodies or functions is to find a good center. If $G(K)$ is the center of mass of $K$ ($G(K)=\int_{K}xdx/|K|$), one has as well $$|K|\cdot |K^{*G(K)}| \le v_{n}^2, $$ because Blaschke-Santal\'o inequality can be applied to $K^{*G(K)}$. 
But if $K$ is centrally symmetric, the situation is simpler: $\min_{z}|K^{*z}|$ is reached at $0$, and then $|K|\cdot |K^\circ| \le |B_{2}^n|^2$. We shall also make use of the equality case in Blaschke-Santal\'o inequality: there is equality if and only if $K$ is an ellipsoid. At the end of the paper, we give a new and elementary proof of this result. \vskip 3mm \noindent \section {An inequality for unconditional functions} \vskip 2mm \noindent We say that a function $\varphi: \mathbb{R}^n\mapsto \mathbb{R}$ is {\it unconditional} if $$\varphi(\varepsilon_1 x_1, \dots, \varepsilon_n x_n )=\varphi( x_1, \dots, x_n)$$ for every $(\varepsilon_1,\dots,\varepsilon_n)\in \{-1,1\}^n$ and every $(x_1,\dots,x_n)\in\mathbb{R}^n$. In the same way, a subset $K$ in $\mathbb{R}^n$ is {\it unconditional } if its characteristic function $\chi_{K}$ is unconditional. Observe that an unconditional convex function $W:\mathbb{R}^n\mapsto \mathbb{R}$ is minimal at $0$ and is moreover {\it increasing}, in the sense that $W(x)\le W(y)$ whenever $x=(x_{1},\dots,x_{n})$ and $y=(y_{1},\dots,y_{n})$ satisfy $|x_{i}|\le |y_{i}|$, $1\le i\le n $. \vskip 1mm \noindent In particular, if $W$ is unconditional and convex, one has $$ W(\sqrt{x_1 y_1},\dots, \sqrt{x_n y_n})\le W\left(\frac{x+y}{ 2}\right)\le \frac{W(x)+W(y)}{2}\ , $$ for all $x=(x_{1},\dots,x_{n}),y=(y_{1},\dots,y_{n})\in\mathbb{R}_+^n$ \vskip 3mm \noindent The next proposition is a form of Pr\'ekopa-Leindler inequality for the geometric mean due to Borell (\cite{Borell}), Ball (\cite{Ball}), Uhrin (\cite{Uhrin}). This result is well known and follows from the usual Pr\'ekopa-Leindler inequality. We prove it here for the convenience of the reader. As we shall see in the corollary, this proposition gives a first functional form of Blaschke-Santal\'o inequality. 
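The two-step bound $W(\sqrt{x_1 y_1},\dots,\sqrt{x_n y_n})\le\frac12\left(W(x)+W(y)\right)$ follows from the componentwise AM--GM inequality $\sqrt{x_iy_i}\le (x_i+y_i)/2$ together with the monotonicity and convexity of $W$. A quick numerical sanity check (our own sketch; the particular $W$ below is an arbitrary unconditional convex choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def W(x):
    # an unconditional convex function on R^3 (illustrative choice)
    return np.sum(np.abs(x)) + np.sum(x ** 2)

for _ in range(1000):
    x = rng.uniform(0.0, 5.0, 3)
    y = rng.uniform(0.0, 5.0, 3)
    assert W(np.sqrt(x * y)) <= (W(x) + W(y)) / 2 + 1e-12

print("inequality holds on 1000 random pairs in R_+^3")
```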
\begin{Prop}\label{Prekopageom}{\bf (Pr\'ekopa-Leindler inequality for the geometric mean)}\\ Let $f_1$, $f_2$, $f_3:\ \mathbb{R}^n\to\mathbb{R}_{+}$ be unconditional measurable functions such that $$ f_1(x_1,\dots,x_n) f_2(y_1,\dots,y_n)\le f_3(\sqrt{x_1 y_1},\dots, \sqrt{x_n y_n})^2$$ for every $(x_1,\dots,x_n)$ and $(y_1,\dots,y_n)\in \mathbb{R}_+^n$. Then $$\int_{\mathbb{R}^n} f_1(x)dx\int_{\mathbb{R}^n} f_2(y)dy\le \left(\int_{\mathbb{R}^n} f_3(z)dz\right)^2$$ with equality if and only if there exists a continuous function $\tilde{f_3}:\mathbb{R}_{+}\to\mathbb{R}_{+}$ such that the following two conditions hold: \vskip 1mm \noindent {\bf a.} $f_3=\tilde{f_3}$ a.e. and $\tilde{f_3}(x_1,\dots,x_n)\tilde{f_3}(y_1,\dots,y_n)\le \tilde{f_3}(\sqrt{x_1 y_1},\dots, \sqrt{x_n y_n})^2$ \vskip 1mm \noindent {\bf b.} for some $c_1,\dots, c_n>0$ and $d>0$, one has $$f_1(x_1, \dots,x_n)=d\tilde{f_3}(c_1 x_1, \dots,c_nx_n)\hbox{ and } f_2(x) =\frac{1}{d} \tilde{f_3}\left( \frac{x_1} {c_1}, \dots,\frac{x_n}{c_n}\right) \quad a.e.$$ \end{Prop} \vskip 5mm \noindent {\bf Proof:} Since the $f_{j}$ are unconditional, one has $ \int _{\mathbb{R}^n}f_j= 2^n \int_{\mathbb{R}_+^n} f_{j}$, \ $j=1,2,3$. For $(t_1,\dots, t_n)\in\mathbb{R}^n$, we define $$g_j(t_1,\dots, t_n)=f_j(e^{t_1}, \dots, e^{t_n})\,e^{\sum_{i=1}^n t_i}\ .$$ We get $$\int_{\mathbb{R}_+^n} f_j =\int_{\mathbb{R}^n} g_j$$ and for every $s,t\in \mathbb{R}^n$, $$g_1(s) g_2(t)\le g_3\left(\frac{s+t}{2}\right)^2.$$ Hence the result follows from Pr\'ekopa-Leindler inequality. For the equality case, see \cite{Dubuc}. \hfill\hfill $\Box$ \bigskip \vskip 5mm \noindent As a corollary, we get the following generalized form of Blaschke-Santal\'o inequality for unconditional sets, together with its case of equality. 
\begin{Cor}\label{corinc} Let $W: \mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ be an unconditional convex function and let $\mu$ be the Borel measure on $\mathbb{R}^n$ with density $e^{-W(x)}$ with respect to the Lebesgue measure. Then one has $$ \mu(K)\mu(K^\circ)\le\mu(B_2^n)^2, $$ for every unconditional measurable set $K\subset\mathbb{R}^n$. \noindent If moreover the support of $\mu$ is $\mathbb{R}^n$, there is equality if and only if there exists a diagonal matrix $T$, with diagonal entries $(t_1,\dots,t_n)\in\mathbb{R}_+^n$ such that:\\ - $K=T(B_2^n)$\\ - $W(x)=W(Px)$, for every $ x\in K\cup K^\circ\cup B_2^n$, where $P$ is the orthogonal projection on the subspace spanned by the $(e_i)_{i\in I}$ and $I=\{i ;\ 1\le i\le n,\ t_i=1\}$. \end{Cor} \vskip 3mm \noindent {\bf Proof:} \vskip 1mm \noindent {\bf A. The inequality.} \vskip 1mm \noindent We apply Proposition \ref{Prekopageom} to $$ f_1(x)=e^{-W(x)}\chi_K(x),\ f_2(x)=e^{-W(x)}\chi_{K^\circ}(x),\ f_3(x)=e^{-W(x)}\chi_{B_2^n}(x)\ . $$ The hypotheses are satisfied since for all $x=(x_1,\dots,x_n), y=(y_1,\dots,y_n)\in \mathbb{R}_+^n$, one has $$ \chi_K(x)\chi_{K^\circ}(y)\le\chi_{B_2^n}(\sqrt{x_1 y_1},\dots, \sqrt{x_n y_n})\ $$ and \begin{equation}\label{Wconv} W(\sqrt{x_1 y_1},\dots, \sqrt{x_n y_n})\le W\left(\frac{x+y}{2}\right)\le \frac{W(x)+W(y)}{2}\ , \end{equation} \noindent as explained at the beginning of this section. This gives the inequality. \vskip 2mm \noindent {\bf B. The case of equality.} \vskip 1mm \noindent Assume that the support of $\mu$ is $\mathbb{R}^n$ (hence $W(x)< +\infty$, for every $x\in\mathbb{R}^n$) and that there is equality in the preceding inequality. 
From the equality case in Proposition \ref{Prekopageom}, there exists $t_1,\dots, t_n>0$ and $d>0$, such that if we denote by $T$ the diagonal matrix with diagonal entries $(t_1,\dots, t_n)$, then $$ e^{-W(x)}\chi_K(x)=de^{-W\left(Tx\right)}\chi_{B_2^n}\left(Tx\right) $$ and $$ \ e^{-W(x)}\chi_{K^\circ}(x)=\frac{1}{ d}e^{-W\left(T^{-1}x\right)}\chi_{B_2^n}\left(T^{-1}x\right) . $$ We get $K=T^{-1}(B_2^n)$ and $K^\circ=T(B_2^n)$. Taking $x=0$ gives $d=1$ so that $$ W(x)=W\left(Tx\right)=W\left(T^{-1}x\right)\ \hbox{for every $x\in B_2^n$ .} $$ Let $S=\frac{T+T^{-1}}{ 2}$ be the diagonal matrix with diagonal entries $s_i=\frac{1}{ 2}\left(t_i +\frac{1}{ t_i}\right)$, $1\le i\le n$. One has $s_i>1$ for all $i\notin I:=\{j\ ;\ t_j=1\}$ hence $\lim_{k\rightarrow +\infty}S^{-k}(x)=Px$, for all $x\in\mathbb{R}^n$. Using the inequalities (\ref{Wconv}) for $Tx$ and $T^{-1}x$, we get $$ W(x)\le W\left(\frac{Tx+T^{-1}x}{2}\right)\le\frac{W(Tx)+W(T^{-1}x)}{ 2}=W(x)\ . $$ Hence $W(Sx)=W(x)$ for every $x\in B_2^n$. The result follows from the continuity of $W$. \hfill\hfill $\Box$ \bigskip \vskip 2mm \noindent {\bf Remarks:}\\ {\bf 1) } Actually the proof shows that the inequality of Corollary \ref{corinc} still holds true when the hypothesis that $W$ is convex is replaced with the weaker hypothesis that $$ (t_1,\dots,t_n)\mapsto W(e^{t_1},\dots,e^{t_n})$$ is convex on $\mathbb{R}^n$.\\ {\bf 2) }The Pr\'ekopa-Leindler inequality for the geometric mean was also used in \cite{CFM} to prove that if $K$ is an unconditional convex body and $\mu$ has an unconditional log-concave density with respect to the Lebesgue measure, then $t\mapsto\mu(e^t K)$ is a log-concave function. \section{The Blaschke Santal\'o inequality for centered functions.} In the next result, we generalize with a new proof an inequality obtained by K. Ball \cite{BallPhD} in the special case of even functions, and we characterize the case of equality. 
\begin{Prop}\label{even} Let $\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}$ and $f_1, f_2:\mathbb{R}^n\to\mathbb{R}_+$ be measurable functions such that $$f_1(x)f_2(y)\le \rho^{2}(\langle x,y\rangle ) \hbox{ for every $x,y\in \mathbb{R}^n$ satisfying $\langle x,y\rangle >0$}\ .$$ If the star shaped set $K_{1}=\{ x\in \mathbb{R}^n; \int_{0}^{+\infty} r^{n-1} f_1(rx)dr\ge 1\}$ is centrally symmetric (which holds if $f_{1}$ is even), or if $K_1$ is a convex body with center of mass at the origin, then $$\int_{\mathbb{R}^n} f_1(x)dx\int_{\mathbb{R}^n} f_2(y)dy\le \left( \int_{\mathbb{R}^n} \rho({|x|^{2}})dx\right)^2$$ with equality if and only if for some continuous function $\tilde{\rho}:\mathbb{R}_{+}\to\mathbb{R}_{+}$ one has \vskip 1mm\noindent {\bf a.} $\rho=\tilde{\rho}$ a.e., $\sqrt{\tilde\rho(s)\tilde\rho(t)}\le\tilde\rho(\sqrt{st})$ for every $s,t\geq 0$ and if $n\geq 2$, $\tilde\rho(0)>0$ or $\tilde\rho$ is the null function. \vskip 1mm \noindent {\bf b.} For some positive definite $[n\times n]$ matrix $T$ and for some $d>0$, one has $$f_1(x) = d\tilde\rho( |Tx|^2)\hbox{ and } f_2(x) =\frac{1}{d}\tilde\rho(|T^{-1}x|^2)\quad a.e.$$ \end{Prop} \noindent {\bf Proof:} \vskip 1mm \noindent {\bf A. The inequality.} \vskip 1mm \noindent Let $x_1, x_2\in \mathbb{R}^n$ satisfying $\langle x_1,x_2\rangle >0$. We define $g_j: \mathbb{R}_+\rightarrow \mathbb{R}_+$ by $$g_j(s)= s^{n-1} f_j(sx_j),\ j=1,2\ \hbox{ and }g_3(u)= u^{n-1} \rho (u^{2}\langle x_1,x_2\rangle).$$ Then by hypothesis, one has $g_1(s)g_2(t)\le (st)^{n-1} \rho^{2}(st \langle x_1,x_2\rangle)= g_3^2(\sqrt{st}) \ .$ It follows from Proposition \ref{Prekopageom} ($n=1$) that $$ \int_{\mathbb{R}_+} s^{n-1} f_1(sx_1)ds\int_{\mathbb{R}_+}t^{n-1} f_2(tx_2)dt \le \left(\int_{\mathbb{R}_+} u^{n-1}\rho\left(u^{2}\langle x_1,x_2\rangle\right) du\right)^2$$ $$= \frac{1}{\langle x_1,x_2\rangle^{n}}\left(\int_{\mathbb{R}_+}r^{n-1}\rho(r^{2}) dr\right)^2= \frac{c_{n}(\rho)^n}{\langle x_1,x_2\rangle^{n}} . 
$$ where $c_n(\rho): =\left(\int_{\mathbb{R}_+} r^{n-1}\rho(r^{2}) dr\right)^{\frac{2}{ n}}$. For $j=1,2$, we define $$K_{j} =\{x\in \mathbb{R}^n; \int_{\mathbb{R}_+} r^{n-1} f_j(rx)dr\ge 1\}\ .$$ The sets $K_1$ and $K_2$ are starshaped with respect to the origin. Denote their gauge by $\|\cdot\|_{K_j}$, $j=1,2$. One has $$ \|x\|_{K_j}=\inf\{ \lambda >0; \ x\in \lambda K_j\} = \left(\int_{\mathbb{R}_+} r^{n-1} f_j(rx)dr\right)^{-{\frac{1}{ n}}} \hbox {for all $x\in\mathbb{R}^n$}\ .$$ The preceding inequality may be read as follows: for every $x_1, x_2\in \mathbb{R}^n$ such that $\langle x_1,x_2\rangle >0$, one has \begin{equation}\label{polar} \langle x_1,x_2\rangle\le c_n(\rho)\|x_1\|_{K_1}\|x_2\|_{K_2}\ . \end{equation} This means that $$K_2\subset c_n(\rho)K_1^\circ\ .$$ \medskip\noindent Under our hypotheses, either $K_{1}$ is centrally symmetric, so its closed convex hull is also centrally symmetric and has its center of mass at the origin, or $K_{1}$ is itself a convex body with center of mass at the origin. In both cases, the origin is actually the Santal\'o point of $K_{1}^{\circ}$, and it follows from Blaschke-Santal\'o inequality that $|K_{1}|\ |K_{1}^\circ|\le v_{n}^{2}$. We get thus $$|K_1|\ |K_2|\le c_n(\rho)^n |K_1|\ |K_1^\circ|\le c_n(\rho)^nv_n^2\ .$$ Integrating in polar coordinates for $j=1,2$, one has $$ \int_{\mathbb{R}^n}f_j(x)dx=nv_n\int_{S^{n-1}}\int_{\mathbb{R}_+}s^{n-1}f_j(su)dsd\sigma(u) =nv_n\int_{S^{n-1}}\frac{d\sigma(u)}{\|u\|_{K_j}^n}=n|K_j|\ , $$ where $\sigma$ denotes the rotation invariant probability on the unit sphere $S^{n-1}:=\{u\in\mathbb{R}^n\ ;\ |u|=1\}$. Thus $$ \int_{\mathbb{R}^n} f_1(x)dx\int_{\mathbb{R}^n} f_2(y)dy=n^2|K_1||K_2|\le (nv_n)^2 c_n(\rho)^n= \left( \int_{\mathbb{R}^n} \rho({|x|^{2}})dx\right)^2. $$ {\bf B. The case of equality.} \vskip 1mm \noindent Assume now that there is equality. 
By the case of equality of Blaschke-Santal\'o inequality, $K_1$ is an ellipsoid centered at the origin and $K_2=c_n(\rho)K_1^\circ$. We may and do assume that $K_1=B_2^n$. For every $x\in S^{n-1}$, one has $\langle x,x\rangle =1=c_n(\rho)\|x\|_{K_1}\|x\|_{K_2}$, which means that there is equality in (\ref{polar}) for $x_1=x_2=x$. From the equality case of Proposition \ref{Prekopageom} ($n=1$), it follows that there exists a continuous function $\tilde{\rho}:\mathbb{R}_{+}\to\mathbb{R}_{+}$ such that \vskip 1mm \noindent - $\rho=\tilde{\rho}$ a.e., $\sqrt{\tilde\rho(s)\tilde\rho(t)}\le\tilde\rho(\sqrt{st})$ for every $s,t\geq 0$ \vskip 1mm \noindent - for every $x\in S^{n-1}$, there exists $c=c(x)>0, d=d(x)>0$ such that $$ g_1(s)=dg_3(cs)\hbox{ and }g_2(s)=\frac{1}{ d} g_3\left(\frac{s}{c}\right) \hbox{ for a.e. $s\ge 0$.} $$ Let us prove that $c$ and $d$ are constant functions. Since $$ 1=|x|^{-n}=\|x\|_{K_1}^{-n} =\int_{\mathbb{R}_+} g_1(s)ds=\frac{d(x)}{c(x)}\int_{\mathbb{R}_+} g_3(u)du=(c_n(\rho))^{\frac{n}{2}}\frac{d(x)}{c(x)}\ , $$ we have $d(x)=\frac{c(x)}{ c_n(\rho)^{n/2}}$. Hence for a.e. $s\ge 0$ $$f_1(sx)=\left(\frac{c(x)}{\sqrt{c_n(\rho)}}\right)^n\tilde\rho(c(x)^2s^2)\ ,\ f_2(sx)=\left(\frac{\sqrt{c_n(\rho)}}{c(x)}\right)^n\tilde\rho\left(\frac{s^2}{c(x)^2}\right) $$ By the hypotheses, for every $x,y\in S^{n-1}$ satisfying $\langle x,y\rangle >0$ and $s,t\geq 0$ $$ \left(\frac{c(x)}{c(y)}\right)^n\tilde\rho(c(x)^2s^2)\tilde\rho\left(\frac{t^2}{c(y)^2}\right)\le\tilde\rho^2(st\langle x,y\rangle)\ . $$ If $\tilde\rho(0)\neq 0$, we take $s=t=0$, simplify and get $c(x)\le c(y)$, for any $x,y\in S^{n-1}$. Therefore $c$ is a constant function. \\ If $\tilde\rho(0)= 0$ and $n\ge 2$, we take $x, y\in S^{n-1}$ with $\langle x,y\rangle >0$ arbitrarily small (this is possible since $n\ge 2$); letting $\langle x,y\rangle\to 0^{+}$, the continuity of $\tilde\rho$ shows that the right-hand side tends to $\tilde\rho^{2}(0)=0$, and we get that $\tilde\rho$ is the null function.
\hfill\hfill $\Box$ \bigskip \vskip 3mm \noindent {\bf Remarks:} \vskip 1mm \noindent {\bf 1)} We did not follow here the more natural proof given by K.~Ball in the even case. For the sake of completeness, we outline his proof in the case where $\rho$ is non-increasing. Setting for $t>0$, $i=1,2$, $p_{i}(t)=|\{f_{i}>t\}|$, one has $\int f_{i}=\int_{0}^{+\infty} p_{i}(t)dt$. The hypothesis on $f_{1}$ and $f_{2}$ gives that for every $s,t>0$, one has $\{f_{2}>t\}\subset \rho^{-1} (\sqrt{st})\{f_{1}>s\}^{\circ}$. Now, {\it the fact that $f_{1}$ is even} implies that its level sets are centrally symmetric and this allows us to apply Blaschke-Santal\'o inequality to get for all $s,t>0$, $$p_{1}(s)p_{2}(t)\le \left(\rho^{-1} (\sqrt{st})\right)^n v_{n}^{2}\, ,$$ and the result follows from Proposition~\ref{Prekopageom} applied in dimension $1$. \vskip 1mm \noindent {\bf 2)} The idea of attaching a convex set of the form of $K_1$ to a $\log$-concave function $f_{1}$ to prove a functional inequality was originally used by K.~Ball in \cite{Ball} and is also used by Klartag and Milman in \cite{KlartagMilman}. \vskip 1mm \noindent {\bf 3)} There are many ways to recover the usual Blaschke-Santal\'o inequality for symmetric sets from Proposition~\ref{even}. As noticed by K.~Ball in \cite{BallPhD}, the most natural is to apply it to $f_1=\chi_K$, $f_2=\chi_{K^\circ}$ and $\rho=\chi_{[0,1]}$. But more generally, we get the same result by applying it to $f_1(x)=\rho(\|x\|_K^2)$, $f_2(y)=\rho(\|y\|_{K^\circ}^2)$ and any function $\rho$ such that $t\mapsto\rho(e^t)$ is log-concave and non-increasing on $\mathbb{R}$. This was noticed by Artstein, Klartag and Milman \cite{AKM} in the case when $\rho(t)=e^{-t}$. \vskip 1mm \noindent {\bf 4)} Let $K$ be a convex body whose center of mass is at the origin. If we set $f_1=\chi_K$, $f_2=\chi_{K^\circ}$ and $\rho=\chi_{[0,1]}$, we get $K_{1}=K/n^{1/n}$ so that the center of mass of $K_{1}$ is at the origin.
Hence Proposition~\ref{even} also allows one to recover the general Blaschke-Santal\'o inequality for convex sets. \vskip 3mm \noindent As a corollary of Proposition~\ref{even}, let us prove a generalized form of Blaschke-Santal\'o inequality for symmetric sets and some class of rotation invariant measures. This inequality is known for the Lebesgue measure and the Gaussian measure (see \cite{Cordero}), and also for a special class of measures (see \cite{Klartag}). It was asked in \cite{Cordero} whether it holds for any symmetric $\log$-concave measure. We also give here a partial answer: \begin{Cor} Let $h:\mathbb{R}_+\to\mathbb{R}_+$ be a non-increasing function which satisfies that $t\mapsto h(e^t)$ is $\log$-concave on $\mathbb{R}$. Let $\mu$ be the rotation invariant measure on $\mathbb{R}^n$, with density $h(|x|)$ with respect to the Lebesgue measure. Then, for every centrally symmetric measurable set $K\subset\mathbb{R}^n$, one has $$ \mu(K)\mu(K^\circ)\le\mu(B_2^n)^2. $$ If, moreover, the support of $\mu$ is $\mathbb{R}^n$, there is equality if and only if either $K=B_2^n$, or $K=T(B_2^n)$ for some positive definite matrix $T\neq I$ and $h$ is constant on $[0,\max(\|T\|,\|T^{-1}\|)]$, where $\|T\|=\max_{|x|=1}|Tx|$. \end{Cor} \noindent {\bf Proof:} \vskip 1mm \noindent {\bf A. The inequality.} \vskip 1mm \noindent We apply Proposition~\ref{even} to $$ f_1(x)=h(|x|)\chi_K(x),\ f_2(y)=h(|y|)\chi_{K^\circ}(y)\ {\rm and}\ \rho(t)=h(\sqrt{t})\chi_{[0,1]}(t). $$ The hypotheses are satisfied since for all $x,y\in \mathbb{R}^n$ such that $\langle x,y\rangle>0$, one has $$ f_1(x)f_2(y)\le h^2\left(\sqrt{|x||y|}\right)\chi_{[0,1]}(\langle x,y\rangle) \le h^2(\sqrt{\langle x,y\rangle})\chi_{[0,1]}(\langle x,y\rangle)= \rho^2\left({\langle x,y\rangle}\right)$$ and $f_1$ is even. We get thus $$ \int f_{1}(x)dx\int f_{2}(y)dy=\mu(K)\mu(K^\circ)\le \left( \int_{\mathbb{R}^n} \rho({|x|^{2}})dx\right)^2=\mu(B_2^n)^2. $$ {\bf B.
The case of equality.} \vskip 1mm \noindent Assume that the support of $\mu$ is $\mathbb{R}^n$ (hence $h>0$) and that there is equality. It follows from Proposition~\ref{even} that for some positive definite matrix $T$ and for some $d>0$, one has $$f_1(x) = d\,\rho( |Tx|^{2})\hbox{ and }f_2(y) =\frac{1}{d}\rho(|T^{-1}y|^{2})\hbox{ for all $x,y\in\mathbb{R}^n$\ .}$$ This gives $$ h(|x|)\chi_K(x)=d\,h(|Tx|)\chi_{[0,1]}(|Tx|)\ $$ and $$ h(|y|)\chi_{K^\circ}(y)=\frac{1}{d}h(|T^{-1}y|)\chi_{[0,1]}(|T^{-1}y|)\ . $$ Hence $K=T^{-1}(B_2^n)$, $K^\circ=T(B_2^n)$ and $h(|Tz|)=h(|z|)=h(|T^{-1}z|)$ for every $z\in B_2^n$. If $K\neq B_2^n$, one has $\max(\|T\|,\|T^{-1}\|)>1$. We may assume that $\|T\|>1$. Let $z_0\in S^{n-1}$ satisfy $|Tz_0|=\|T\|$ and let $\lambda\in [0,\|T\|]$. Applying the previous equality to $z={\lambda z_0/\|T\|}$, we get $h(\lambda)=h(|Tz|)=h(|z|)=h(\lambda/\|T\|) .$ Iterating, $h(\lambda)=h(\lambda/\|T\|^{m})$ for every integer $m\ge 1$, and from the continuity of $h$ we get $h(\lambda)=h(0).$ \hfill\hfill $\Box$ \bigskip \section{The general case} We are now in a position to prove the following theorem. \vskip 3mm \noindent \begin{Thm}\label{Main} Let $\rho: \mathbb{R}_{+}\to\mathbb{R}_{+}$ be measurable and $f: \mathbb{R}^n\to\mathbb{R}_+$ be a log-concave function such that $0<\int f <+\infty$.
Then there exists $z\in \mathbb{R}^n$ such that for any measurable function $ g: \mathbb{R}^n \mapsto \mathbb{R}_{+}$ satisfying $$f(x)g(y)\le\rho^2\left(\langle x-z, y-z\rangle\right)\ $$ for every $x,y\in \mathbb{R}^n$ such that $ \langle x-z, y-z\rangle>0$, one has $$\int_{\mathbb{R}^n} f(x)dx\int_{\mathbb{R}^n} g(y)dy\le \left( \int_{\mathbb{R}^n} \rho({|x|^{2}})dx\right)^2$$ with equality if and only if the following two conditions hold: \vskip 1mm\noindent {\bf a.} For some positive definite $[n\times n]$ matrix $T$, some $z\in\mathbb{R}^n$ and some $d>0$, $$f(x) = d\rho\left( |T(x-z)|^2\right)\hbox{ and } g(x) =\frac{1}{d}\rho\left(|T^{-1}(x-z)|^2\right)\quad a.e.$$ \vskip 1mm \noindent {\bf b.} $\sqrt{\rho(s)\rho(t)}\le\rho(\sqrt{st})$ a.e. \end{Thm} \vskip 2mm\noindent {\bf Proof:} \vskip 1mm\noindent For every $z\in \mathbb{R}^n$ let $$K_{z}=\left\{ x\in \mathbb{R}^n;\ \int_{0}^{+\infty} f(z+rx) r^{n-1} dr \ge 1\right\}.$$ Since $f$ is $\log$-concave, it follows from Ball \cite{Ball} that for every $z\in\mathbb{R}^n$, the set $K_{z}$ is a convex body. If we can prove that there exists $z_{0}\in \mathbb{R}^n$ such that the center of mass of $K_{z_{0}}$ is at the origin, we get the result from proposition \ref{even} applied to $f_1(x)=f(x+z_0)$ and $f_2(x)= g(x+z_0)$. \hfill\hfill $\Box$ \bigskip \vskip 2mm \noindent This will be done in the following two lemmas, using Brouwer's fixed point theorem. \vskip 2mm \noindent \begin{Lem}\label{lemr} Let $n\ge 2$ and $f:\mathbb{R}^n\to \mathbb{R}_{+}$ be a log-concave function such that $0< \int f <+\infty$. For $z,x\in \mathbb{R}^n$, define $r_{z}(x)=\left(\int_{0}^{+\infty} f(z+rx) r^{n-1} dr\right)^{\frac{1}{n}}.$ One has then \vskip 1mm\noindent {\bf 1)} For all $\varepsilon>0$ and $\alpha <1$, there exists $M>0$ such that $r_{z}(u)\le \varepsilon$ whenever $u\in S^{n-1}$ and $z\in \mathbb{R}^n$ satisfy $\langle u,z\rangle \ge -\alpha |z|$ and $|z|\ge M$. 
\vskip 1mm\noindent {\bf 2)} $r_{z}\left(-\frac{z}{|z|}\right)\rightarrow +\infty$ when $|z|\rightarrow +\infty$. \end{Lem} \vskip 2mm\noindent {\bf Proof:} \vskip 1mm\noindent From the hypotheses on $f$, it is easy to see that for some $a,b,c,d>0$, one has $$a\chi_{bB_{2}^n}(x)\le f(x)\le d\,e^{-c|x|}\hbox{ for every } x\in \mathbb{R}^n\ .$$ \vskip 1mm\noindent 1) If $u\in S^{n-1}$ and $z\in \mathbb{R}^n$ satisfy \ $-\langle u,z\rangle \le \alpha |z|\ , $ then for every $r\ge 0$ $$|z+ru|^2 \ge |z|^{2}-2\alpha |z| r +r^2 \ge (1-\alpha)(|z|^{2}+r^{2})\ge \frac{1-\alpha}{ 2} (|z|+r)^2.$$ It follows that $$ r_{z}(u)^n \le de^{ -c\sqrt{\frac{1-\alpha}{2} }|z|} \int_{0}^{+\infty} r^{n-1} e^{-c\sqrt{\frac{1-\alpha}{2}} r}dr\rightarrow 0\ \hbox{when}\ |z|\rightarrow +\infty. $$ \vskip 1mm\noindent 2) Let $u=-\frac{z}{ |z|}$. Then $$ r_{z}(u)^n =\int_{0}^{+\infty} r^{n-1}f\left((r-|z|)u\right) dr\ge a\int_{0}^{+\infty}r^{n-1} \chi_{[-b,b]}(r-|z|)\, dr$$ Thus, for $|z|>b$, one has $r_{z}(u)^n \ge \frac{a}{ n}\left((|z|+b)^n -(|z|-b)^n\right)\rightarrow +\infty$ when $|z|\rightarrow +\infty$.\hfill\hfill $\Box$ \bigskip \vskip 2mm\noindent As we have already seen, for every $z\in\mathbb{R}^n$, the set $K_z$ is a convex body. Moreover notice that under our hypotheses, the origin is in the interior of $K_z$ and $r_z$ is the radial function of $K_z$ ($r_z(u)=\max\{\lambda>0\ ;\ \lambda u\in K_z\}$, for every $u\in S^{n-1}$). 
Hence part 1) of the preceding lemma means that for all $\varepsilon>0$ and $\alpha <1$, there exists $M>0$ such that for every $|z|\ge M$, $$\{x\in K_z\ ;\ \langle x,z\rangle\ge -\alpha |z|\}\subset \varepsilon B_2^n\, .$$ \vskip 3mm\noindent \begin{Lem} Let $f: \mathbb{R}^n\to \mathbb{R}_{+}$ be a log-concave function such that $0<\int f <+\infty.$ For every $z\in \mathbb{R}^n$ let $$K_{z}=\left\{ x\in \mathbb{R}^n;\ \int_{0}^{+\infty} f(z+rx) r^{n-1} dr \ge 1\right\}.$$ Then there exists $z_{0}\in \mathbb{R}^n$ such that the convex body $K_{z_{0}}$ has its center of mass at the origin. \end{Lem} \vskip 1mm\noindent {\bf Proof:} \vskip 1mm\noindent Notice first that for $n=1$, the result is easy: one chooses the unique point $z_0\in\mathbb{R}$ such that $$\int_{z_0}^{+\infty}f(r)dr=\int_{-\infty}^{z_0}f(r)dr,$$ then $K_{z_0}$ is a symmetric interval. We assume from now on that $n\ge 2$. It is clear that $z\mapsto K_{z}$ is continuous for the Hausdorff distance, so that if $G(z)$ is the center of mass of $K_{z}$, then $G:\mathbb{R}^n\to \mathbb{R}^n$ is continuous.
\vskip 1mm\noindent {\bf A.} We first show that $$\ |G(z)|\rightarrow +\infty\ \hbox{ and } \ \big\langle \frac{G(z)}{|G(z)|}, \frac{z}{ |z|}\big\rangle \rightarrow -1\ \hbox{ when }|z|\rightarrow +\infty\ .$$ \vskip 1mm\noindent Let $h_{K_{z}}$ be the support function of $K_{z}$ {\em i.e.} $$h_{K_{z}}(y)=\max_{x\in K_{z}}\langle x,y\rangle \hbox{ for every }y\in \mathbb{R}^n.$$ It is well known that one has, for all $u\in S^{n-1}$, $$-h_{K_z}(-u) +\frac { h_{K_z}(u)+h_{K_z}(-u)}{n+1}\le \langle G(z),u\rangle \le h_{K_z}(u)-\frac{ h_{K_z}(u)+h_{K_z}(-u)}{n+1}\ .$$ By part 1) of Lemma \ref{lemr} applied with $\alpha =0$, for every $\varepsilon >0$, there exists $M>0$ such that $$\{x\in K_z\ ;\ \langle x,z\rangle\ge0\}\subset \varepsilon B_2^n, \quad \hbox{for all}\ |z|\ge M.$$ Moreover $K_z$ contains the origin, hence $$h_{K_{z}}\left(\frac{z}{|z|}\right) = \max\left\{ \langle \frac{z}{|z|} ,v\rangle ; v\in K_{z}, \langle z,v\rangle \ge 0\right\}\rightarrow 0\ ,$$ when $|z|\rightarrow +\infty$. By part 2) of Lemma \ref{lemr}, $$h_{K_{z}}\left(-\frac{z}{|z|}\right)\ge r_{z}\left(-\frac{z}{|z|}\right)\rightarrow +\infty\ .$$ \vskip 1mm\noindent It follows that $\langle G(z),{\frac{z}{|z|}}\rangle \rightarrow -\infty$, and thus that $|G(z)|\rightarrow +\infty$ when $|z|\rightarrow +\infty$. \vskip 1mm\noindent But since $K_{z}$ is a convex body, $G(z)\in K_{z}$, and thus $ |G(z)|\le r_{z}\left( \frac{G(z)}{ |G(z)|}\right).$ Since $|G(z)|\rightarrow +\infty$, one has $r_{z}\left(\frac{G(z)}{|G(z)|}\right)\rightarrow +\infty$ when $|z|\rightarrow +\infty$. It follows again from part 1) of Lemma \ref{lemr} that for every $\alpha<1$, there exists $M>0$ such that if $|z|>M$, then $$\langle \frac{G(z)}{ |G(z)|},z \rangle \le -\alpha |z|\ .
$$ This means that $$\big\langle \frac{G(z)}{|G(z)|}, {\frac{z}{ |z|}}\big\rangle \rightarrow -1\hbox{ when }|z|\rightarrow +\infty\ .$$ \vskip 1mm\noindent {\bf B.} Let us prove that there exists $z_{0}\in \mathbb{R}^n$ such that $G(z_{0})=0$: \vskip 1mm\noindent Suppose that $G$ does not vanish. Let $C_{2}^n =\{x\in \mathbb{R}^n; |x|<1 \}$ be the open Euclidean unit ball, and define $z: C_2^n\to\mathbb{R}^n$ by $$ z(x):=\frac{x}{1-|x|} \ . $$ Define also $F: B_{2}^n\to S^{n-1}$ by $$F(x)=\frac{G\left(z(x)\right)}{ |G\left(z(x)\right)|}\hbox{ for }x\in C_2^n, \hbox{ and } F(u)=-u \hbox{ for }u\in S^{n-1}. $$ Let us prove that $F$ is continuous on $B_2^n$: It is clear that $F$ is continuous on $C_2^n$. Let $u\in S^{n-1}$. If $x\rightarrow u$, then $|z(x)|\rightarrow +\infty$ and $\frac{z(x)}{|z(x)|}=\frac{x }{|x|}\rightarrow u$. Whence by {\bf A.}, $$ \big\langle \frac{z(x)}{ |z(x)|}, \frac{G\left(z(x)\right)}{ |G\left(z(x)\right)|}\big\rangle\rightarrow -1,$$ which implies that $$F(x)=\frac{G\left(z(x)\right)}{ |G\left(z(x)\right)|}\rightarrow -u\ .$$ Thus $F:B_{2}^n\to S^{n-1}$ is continuous and satisfies $F(u)=-u$ for every $u\in S^{n-1}$. To conclude, we define $Q:B_{2}^n\mapsto B_{2}^n$, by $$Q(x)=\frac{x+F(x)}{ 2}\hbox{ for every }x\in B_2^n\ .$$ Then $Q$ is continuous, but has no fixed point, which contradicts Brouwer fixed point theorem. \hfill\hfill $\Box$ \bigskip \vskip 3mm \noindent {\bf Remark:} \vskip 1mm \noindent Theorem~\ref{Main} can be generalized in the following way: given $h:(0,+\infty)\to(0,+\infty)$ such that $t\mapsto h(e^t)$ is $\log$-concave and $h(r)r^{n-1}\to+\infty$ when $r\to+\infty$, let $\mu$ be the measure on $\mathbb{R}^n$ with density $h(|x|)$. Let $\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}$ be measurable and $f:\mathbb{R}^n\to\mathbb{R}_+$ be a log-concave function such that $0<\int f d\mu<+\infty$. 
Then there exists $z\in \mathbb{R}^n$ such that for any measurable function $ g: \mathbb{R}^n \mapsto \mathbb{R}_{+}$ satisfying $$f(x)g(y)\le\rho^2\left(\langle x-z, y-z\rangle\right)\ $$ for every $x,y\in \mathbb{R}^n$ such that $ \langle x-z, y-z\rangle>0$, one has $$\int_{\mathbb{R}^n} f(x)d\mu(x)\int_{\mathbb{R}^n} g(y)d\mu(y)\le \left( \int_{\mathbb{R}^n} \rho({|x|^{2}})d\mu(x)\right)^2.$$ \section{Consequences on Legendre transform} Given a function $\phi:\mathbb{R}^n\to \mathbb{R}\cup\{+\infty\}$ and $z\in \mathbb{R}^n$, we recall that the {\it Legendre transform} $\mathcal L_{z}\phi$ of $\phi$ with respect to $z\in \mathbb{R}^n$ is defined by $$\mathcal L_{z}\phi(y) =\sup_{x}\left(\langle x-z,y-z\rangle-\phi(x)\right)\ \hbox{ for all } y\in \mathbb{R}^n\ .$$ For $z=0$, we use the notation $\mathcal L:=\mathcal L_0$. Observe that $\mathcal L_{z}\phi:\mathbb{R}^n\to \mathbb{R}\cup\{+\infty\}$ is convex and that by a classical separation argument, $\mathcal L_{z}(\mathcal L_{z}\phi)=\phi$, whenever $\phi$ is itself convex and $\phi(z)<+\infty$. Notice also that the function $\phi(x)=|x|^{2}/2$ is the unique function which satisfies $\mathcal L\phi=\phi$. As a consequence of Theorem~\ref{Main}, we get the following theorem which generalizes the results of Artstein, Klartag and Milman \cite{AKM} who considered only the cases $\rho(t)=e^{-t}$ and $\rho(t)=(1-t)_+^m$. \vskip 3mm\noindent \begin{Thm}\label{Legendre} Let $\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}$ be a log-concave non-increasing function and let $\phi$ be a convex function such that $0<\int_{\mathbb{R}^n}\rho\left(\phi(x)\right)dx<+\infty$ . 
Then for some $z\in \mathbb{R}^n$, one has $$\int_{\mathbb{R}^n}\rho\left(\phi(x)\right)dx \int_{\mathbb{R}^n}\rho\left(\mathcal L_{z}\phi(y)\right)dy\le \left(\int_{\mathbb{R}^n}\rho\left(\frac{|x|^{2}}{2}\right)dx\right)^2.$$ If $\rho$ is decreasing, there is equality if and only if for some positive definite matrix $T:\mathbb{R}^n\to \mathbb{R}^n$ and some $c\in \mathbb{R}$, one has $$\phi(x)=\frac{|T(x+z)|^2}{ 2} +c\ ,\ \quad\hbox{ for all}\ x\in \mathbb{R}^n,$$ and moreover either $c=0$ or $\rho(t)=e^{at+b}$ for some $a< 0$, some $b\in\mathbb{R}$, and all $t\in [-|c|, +\infty)$. \end{Thm} \vskip 2mm\noindent {\bf Proof.} \vskip 1mm \noindent {\bf A. The inequality.} \vskip 1mm \noindent We apply Theorem~\ref{Main} to the $\log$-concave function $f:=\rho\circ\phi$ to get a convenient $z\in \mathbb{R}^n$. By the definition of $\mathcal L_{z}$ and the fact that $\rho$ is $\log$-concave and non-increasing, one has for every $x,y\in \mathbb{R}^n$ such that $\langle x-z,y-z\rangle>0$, $$\rho\left(\phi(x)\right)\rho\left(\mathcal L_{z}\phi(y)\right)\le \rho^2\left(\frac{\phi(x)+\mathcal L_{z}\phi(y)}{2}\right) \le \rho^{2}\left(\frac{\langle x-z,y-z\rangle}{2}\right).$$ Setting $g(y)=\rho\left(\mathcal L_{z}\phi(y)\right)$, we may apply Theorem~\ref{Main} to get the inequality. \vskip 2mm\noindent {\bf B. The case of equality.} \vskip 1mm \noindent We may assume that $z=0$. Set $\psi=\mathcal L\phi$. If there is equality, we get from Theorem~\ref{Main} that for some positive definite matrix $T:\mathbb{R}^n\to \mathbb{R}^n$ and some $d>0$, one has $$\frac{1}{ d}\rho\left( \phi(T^{-1}x)\right)=d \rho\left(\psi(Tx)\right)= \rho\left( \frac{|x|^2}{2} \right), $$ for every $x\in \mathbb{R}^n$.
Since $\rho$ is $\log$-concave and decreasing one has \begin{eqnarray*} \rho\left(\frac{|x|^{2}}{ 2}\right) & = & \sqrt{\rho\left(\phi(T^{-1}x)\right)\rho\left(\psi(Tx)\right)} \le \rho\left(\frac{\phi(T^{-1}x)+\psi(Tx)}{ 2}\right)\\ & \le & \rho\left(\frac{\langle T^{-1}x,Tx\rangle}{2}\right)= \rho\left(\frac{|x|^{2}}{ 2}\right). \end{eqnarray*} Since $\rho$ is decreasing, we get $\phi(T^{-1}x) +\psi(Tx)={|x|^2}\hbox{ for all $x\in \mathbb{R}^n$ .} $ Thus $$ |x|^2-\phi(T^{-1}x) = \psi(Tx) = \sup_{y}\big(\langle Tx,y\rangle-\phi(y)\big) = \sup_{w}\left(\langle x,w\rangle-\phi(T^{-1}w)\right). $$ We get $\phi(T^{-1}x)-\phi(T^{-1}w)\le |x|^{2} -\langle x,w\rangle$, for every $w,x\in \mathbb{R}^n$. Setting $C(x)= \phi(T^{-1}x)- \frac{|x|^2}{ 2}$, it follows that $$|C(x)-C(w)|\le \frac{|x-w|^{2}}{ 2}\hbox{ for all $x,w\in \mathbb{R}^n$}. $$ Dividing the segment $[w,x]$ into $m$ equal parts and letting $m\to+\infty$, we conclude that $C$ is actually constant, and this gives that for some $c\in\mathbb{R}$, one has $$\phi(x)=\frac{|Tx|^2}{2} +c\quad {\rm and}\quad \psi(x)=\frac{|T^{-1}x|^2}{ 2} -c \ .$$ This implies that $\rho$ satisfies $$\rho\left(\frac{|x|^{2}}{ 2}\right)^2 =\rho\left(\frac{|x|^{2}}{ 2}+c\right) \rho\left(\frac{|x|^{2}}{ 2}-c\right) $$ and using again the log-concavity of $\rho$, either $c=0$ or $\log(\rho)$ is affine on $[-|c|, +\infty)$. \hfill\hfill $\Box$ \bigskip \vskip 3mm\noindent {\bf Remarks:} \vskip 1mm\noindent {\bf 1)} The cases when $\rho(t)=e^{-t}$ or $\rho(t)=(1-t)_+^m$ of Theorem~\ref{Legendre} were proved by Artstein, Klartag and Milman in \cite{AKM} by applying the Blaschke-Santal\'o inequality for sets to a sequence of convex bodies $(K_s(\phi))_{s\in\mathbb{N}}$ in $\mathbb{R}^{n+s}$ and by letting $s\to +\infty$. The use of this sequence makes the case of equality much more difficult than in our proof.
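As a purely numerical illustration of Theorem~\ref{Legendre} (not part of the argument; the grid, the truncation and the two test functions below are arbitrary choices), one can check the inequality in dimension $1$ for $\rho(t)=e^{-t}$, where the right-hand side equals $(\int e^{-x^{2}/2}dx)^{2}=2\pi$ and $z=0$ may be used since both test functions are even:

```python
import numpy as np

# sample phi and its Legendre transform on a symmetric grid (phi even, so z = 0)
x = np.linspace(-15.0, 15.0, 2001)
dx = x[1] - x[0]

def legendre(phi):
    # discrete Legendre transform: (L phi)(x_i) = max_j (x_i * x_j - phi(x_j))
    return np.max(x[:, None] * x[None, :] - phi[None, :], axis=1)

def santalo_product(phi):
    # left-hand side of the functional inequality for rho(t) = exp(-t)
    return (dx * np.exp(-phi).sum()) * (dx * np.exp(-legendre(phi)).sum())

bound = 2.0 * np.pi  # (integral of exp(-x^2 / 2) dx)^2 in dimension 1

gauss = santalo_product(x**2 / 2.0)  # equality case: phi = |x|^2 / 2 is self-dual
cone = santalo_product(np.abs(x))    # strict case: L|x| is 0 on [-1,1], +infty outside

assert abs(gauss - bound) < 1e-2
assert cone < bound
```

The quadratic potential saturates the bound up to discretization error, while $\phi(x)=|x|$ stays strictly below it, in agreement with the equality case of the theorem.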
\vskip 1mm\noindent {\bf 2)} In the case when the function $\rho$ is strictly convex (for example if $\rho(t)=e^{-t}$), then $$\min_z \int_{\mathbb{R}^n}\rho\left(\mathcal L_{z}\phi(y)\right)dy= \min_z \int_{\mathbb{R}^n}\rho\left(\mathcal L\phi(y)-\langle z,y\rangle\right)dy$$ is reached at a unique point $z_{0}$ which satisfies $$z_{0}=\int_{\mathbb{R}^n}y\rho'(\mathcal L_{z_{0}}\phi(y))dy\bigg/ \int_{\mathbb{R}^n}\rho'(\mathcal L_{z_{0}}\phi(y))dy\ .$$ It follows that the inequality of Theorem~\ref{Legendre} is also valid at this point $z=z_{0}$. \vskip 1mm\noindent {\bf 3)} Actually, it is also possible to prove Theorem \ref{Legendre} by following step by step the method used by Meyer and Pajor (\cite{MP}) for proving Blaschke-Santal\'o inequality for convex bodies. The idea is to prove that the quantity $$\min_z \int_{\mathbb{R}^n}\rho\left(\mathcal L_{z}\phi(x)\right)dx$$ increases if we apply to the epigraph $E_\phi:=\{(x,t)\in\mathbb{R}^n\times\mathbb{R}\ ;\ \phi(x)\le t\}$ of the function $\phi$ a well chosen Steiner symmetrization to get a function $\tilde{\phi}$ which is symmetric with respect to the symmetrization hyperplane. After $n$ symmetrizations with respect to mutually orthogonal hyperplanes, the function is unconditional and the result follows from the application of the Pr\'ekopa-Leindler inequality for the geometric mean (Proposition \ref{Prekopageom}). However, this proof is much longer, and seems to require some additional hypotheses on the function $\rho$, namely that $\rho$ is convex and decreasing and that $-\rho'$ is $\log$-concave. \vskip 1mm\noindent {\bf 4) Shortcut for the proof of the equality case in Blaschke-Santal\'o inequality.} There exist different proofs of the equality case of Blaschke-Santal\'o's inequality.
It was first proved in the centrally symmetric case by Saint-Raymond \cite{Saint-Raymond}, using a tricky lemma for functions of one variable, then in the general case by Petty \cite{Petty} with some involved PDE arguments (see also D.~Hug \cite{Hug}). A simpler proof together with a stronger inequality was then given by Meyer and Pajor \cite{MP} using the Steiner symmetrization, a result of \cite{Falconer} and finally the lemma of Saint-Raymond. \vskip 1mm\noindent In fact, one can give the following simpler argument. \vskip 1mm\noindent {\bf a.} If $K$ is unconditional with maximal volume product, we have seen that the case of equality follows easily from the equality case in the one-dimensional Pr\'ekopa-Leindler inequality. \vskip 1mm\noindent {\bf b.} Suppose now that $K$ has maximal volume product and is centrally symmetric. Then for every $u\in S^{n-1}$, after $n$ Steiner symmetrizations with respect to pairwise orthogonal hyperplanes, the last one being with respect to $\{u\}^{\perp}$, we get from $K$ an unconditional body with maximal volume product (recall that a Steiner symmetrization does not decrease the volume product), and thus by {\bf a.} an ellipsoid. To conclude that $K$ is itself an ellipsoid, we use the following elementary lemma, where for $v\in S^{n-1}$, we denote by $S_{v}K$ the Steiner symmetral of $K$ with respect to the hyperplane $v^{\perp}:=\{x\in\mathbb{R}^n\ ;\ \langle x, v\rangle=0\}$. \vskip 3mm\noindent {\bf Lemma.} Let $K$ be a centrally symmetric convex body. Then $K$ is an ellipsoid if and only if for every orthonormal basis $(u_{1},\dots,u_{n})$ of $\mathbb{R}^n$, $S_{u_n}S_{u_{n-1}}\dots S_{u_1}K$ is an ellipsoid. \vskip 3mm\noindent {\bf Proof:} The ``only if'' part is well known. For the ``if'' part, fix $u\in S^{n-1}$ and let $(u_{1},\dots,u_{n})$ be an orthonormal basis such that $u=u_{n}$. Let $L=S_{u_{n-1}}\cdots S_{u_1}K$.
Then $L$ is centrally symmetric (since $K$ is), and symmetric with respect to the $(n-1)$ pairwise orthogonal hyperplanes $u_{i}^{\perp}$, $1\le i\le n-1$. It follows that $L$ is also symmetric with respect to $u_{n}^{\perp}$, so that $L= S_{u_{n}}L= S_{u_n}S_{u_{n-1}}\cdots S_{u_1}K$ is an ellipsoid. Thus for some $a_1,\dots, a_n>0$ one has $$L=\left\{x=x_{1}u_{1}+\cdots +x_{n}u_{n}; \sum_{i=1}^n \frac{x_{i}^{2}}{a_{i}^{2}}\le 1\right\}.$$ Let $h_{K}(u):= \max\{\langle x,u\rangle ; x\in K\} $. It is easy to see that whenever $v\in S^{n-1}$ satisfies $\langle v,u\rangle =0$, then $$h_{K}(u)=h_{S_{v}K}(u)\ \hbox{ and }\ \int_{K}\langle x,u\rangle^{2}dx= \int_{S_{v}K}\langle x,u\rangle^{2}dx.$$ It follows that $a_{n}=h_{L}(u_{n})=h_{K}(u_{n})$ and $$\int_{K}\langle x,u_{n}\rangle^{2}dx= \int_{L}\langle x,u_{n}\rangle^{2}dx =\frac{v_{n}}{ n+2}\cdot a_{1}\dots a_{n}\cdot a_{n}^2\ .$$ Since $|L|=|K|$, one has $v_{n}a_{1}\cdots a_{n}=|K|$. Thus $$h_{K}(u)^{2}=\frac{n+2}{ |K|} \int_{K}\langle x,u\rangle^{2}dx\hbox{ for every } u\in S^{n-1}. $$ It follows that $K^\circ$, and thus $K$, is an ellipsoid. \hfill \hfill $\Box$ \bigskip
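As a quick numerical sanity check of the volume-product inequality $|K|\,|K^{\circ}|\le v_{n}^{2}$ used throughout this section (an illustration only, not part of the argument; the bodies, the sampling box and the sample count below are arbitrary choices), one can estimate both areas by Monte Carlo in the plane:

```python
import numpy as np

rng = np.random.default_rng(0)

def area(gauge, R=4.0, m=400_000):
    # Monte Carlo area of {gauge <= 1} inside the sampling box [-R, R]^2
    p = rng.uniform(-R, R, size=(m, 2))
    return (2.0 * R) ** 2 * np.mean(gauge(p) <= 1.0)

# K = [-1,1]^2 (sup-norm ball); its polar is the l^1 ball: |K| |K°| = 8 < pi^2
square = lambda p: np.max(np.abs(p), axis=1)
cross = lambda p: np.sum(np.abs(p), axis=1)
prod_square = area(square) * area(cross)

# ellipse with semi-axes (2, 1/2); its polar is the ellipse with semi-axes (1/2, 2)
ellipse = lambda p: np.hypot(p[:, 0] / 2.0, 2.0 * p[:, 1])
polar_ellipse = lambda p: np.hypot(2.0 * p[:, 0], p[:, 1] / 2.0)
prod_ellipse = area(ellipse) * area(polar_ellipse)

assert prod_square < np.pi**2              # strict inequality for the cube
assert abs(prod_ellipse - np.pi**2) < 0.8  # equality, up to sampling error
```

The pair (square, cross-polytope) stays strictly below $\pi^{2}$, while the pair of polar ellipses realizes equality up to sampling noise, matching the equality case of Blaschke-Santal\'o inequality.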
\section{Introduction} \label{} Statistical Mechanics (SM) provides useful concepts to study systems with a large number of particles. For example, based on standard SM, Edwards \cite{edw} proposed a thermodynamic description of granular matter in which thermodynamic quantities are computed as flat averages over configurations where the grains are static or jammed, leading to a definition of configurational temperature. A numerical diffusion-mobility experiment on a granular system has supported the Edwards statistical ensemble idea \cite{makse}. Another example is the relation between entropy and the horizon area of a black hole \cite{black}, which provides a new approach for studying black holes and quantum gravity theory. Furthermore, the four laws of black hole mechanics can be demonstrated using this thermodynamic description. The microscopic origin of the black hole entropy, originally calculated thermodynamically, has been explained using string theory \cite{black2}. Recently, Cejnar et al. \cite{cej} analyzed quantum phase transitions in finite systems \cite{bor1} by defining an analog of the absolute temperature scale connected to the interaction parameter of the Hamiltonian. They were thus able to establish a thermodynamic analogy for the quantum phase transition. However, they did not identify the correspondence with statistical mechanics nor, consequently, the new scenario opened by this microscopic analysis. This correspondence and its consequences are the goal of this paper. Here, we use tools developed in SM to study the ground-state of quantum systems. We observe that, for certain classes of quantum systems, taking different intensities of the interaction between particles of the system corresponds to taking different occupation probabilities for the energy levels of the non-interacting microstates.
With this observation we can define an analog of the absolute temperature scale in such a manner that it is possible to give a thermodynamic interpretation of the interaction in the ground-state of quantum systems. The Hubbard Hamiltonian is a typical model to which this approach can be applied. Here, we analyze two exactly solvable limits of the Hubbard model. This paper is organized as follows. The formalism is described in Sec. \ref{form}. The study of the two exactly solvable problems based on the Hubbard model is presented in Sec. \ref{app}. Finally, we present the conclusions in Sec. \ref{concl}. \section{Formalism} \label{form} The scheme of our formalism can be applied to a broad class of Hamiltonians defined as \begin{equation} \label{e1} \hat{H}=\hat{H}_{0} + T \hat{V}, \end{equation} where we assume that $\hat{H}_{0}$ is a one-particle Hamiltonian operator, the interaction term is given by the operator $\hat{V}$, and $T$ is the dimensionless interaction parameter. Here, we must consider that $T \ge 0$ and that the operator $\hat{V}$ is positive semidefinite. In this way we have established that the energy is a concave function of $T$. A good example of this class of Hamiltonians is the one of the Hubbard model \cite{hub}. In this model, which is amongst the most important magnetic ones, the eigenstates of the Hamiltonian in the absence of interaction ($T=0$) are just the non-interacting states $| \phi_{i} \rangle $, whose respective energy eigenvalues $E_i (0)$ are defined through the relation $\hat{H}_{0} | \phi_{i} \rangle =E_i (0) | \phi_{i} \rangle$. The eigenvalues $E_i (T)$ of $\hat{H}$ for nonvanishing $T$ are obtained from the equation $\hat{H} | \psi_{i} \rangle = E_i (T) | \psi_{i} \rangle$. Moreover, the expectation value of an operator $\hat{X}$ on the ground-state $| \psi_{0} \rangle$ is given by $\langle \psi_{0} | \hat{X} | \psi_{0} \rangle$.
Now, we can provide an approach for obtaining expectation values of physical quantities on the ground-state in the basis of non-interacting states. This simply means finding the expectation values of $\hat{X}$ on the ground-state in the $| \phi_{i} \rangle $ representation. The ground-state $| \psi_{0} \rangle$ can be expanded in terms of the non-interacting states $| \phi_{i} \rangle $ as \begin{equation} | \psi_{0} \rangle = \sum_{i} a_i (T) | \phi_{i} \rangle, \end{equation} where the coefficients $a_i (T) = \langle \phi_{i} | \psi_{0} \rangle$. We recall that the quantity $|a_{i} (T) |^{2}$ has a {\em{probabilistic}} interpretation. In other words, we can write $p_{i} (T) \equiv |a_{i} (T) |^{2} \in [0,1]$ and $\sum_{i} p_{i} (T) = \sum_{i} |a_{i} (T) |^{2} = 1$. This establishes the connection to SM: the expectation value of $\hat{H}_{0}$, \begin{equation} \langle \hat{H}_{0} (T) \rangle = \langle \psi_{0} | \hat{H}_{0} | \psi_{0} \rangle = \sum_{i} p_i (T) E_i (0), \end{equation} can be interpreted as a usual average, which is computed over the set of non-interacting energy levels $E_{i}(0)$, each one with probability $p_i (T)$. It is an analog of the mean energy $ \langle E \rangle =\sum_{i}{p_i E_i}$. We can easily verify that if $T \ge 0$ and $\langle \hat{V} (T) \rangle \ge 0$ is a monotonically decreasing function of $T$, then $E_{0} (T_{1}) \le E_{0} (T_{2})$ for $T_{1} \le T_{2} $. In this case, in analogy to SM, for the non-interacting case $T=0$, the system has the lowest energy $E_{0} (0)$ and $p_i (0)= \delta_{i0}$. If $T >0$, like a thermal energy, the interaction favors other energy levels of the non-interacting case. In this description, only the non-interacting microscopic states are used to compute the {\em thermodynamic} properties. This enables us to define an analog of the absolute temperature scale, called ground-state temperature, as $T_{g}=T/k$, where $k$ is a constant measured in Kelvins$^{-1}$.
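To make this correspondence concrete, the following sketch (a hypothetical toy model, not taken from the paper: the non-interacting spectrum and the positive semidefinite interaction matrix are random choices) diagonalizes $\hat{H}=\hat{H}_{0}+T\hat{V}$ with a diagonal $\hat{H}_{0}$, so that the non-interacting states $|\phi_{i}\rangle$ are the canonical basis vectors, and checks the probabilistic interpretation of the weights $p_{i}(T)$:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
levels = np.sort(rng.uniform(-1.0, 1.0, n))  # non-interacting energies E_i(0)
H0 = np.diag(levels)                         # H0 diagonal: |phi_i> = basis vectors
A = rng.normal(size=(n, n))
V = A @ A.T                                  # a positive semidefinite interaction

def ground_state(T):
    w, vecs = np.linalg.eigh(H0 + T * V)     # eigh returns eigenvalues in ascending order
    return w[0], vecs[:, 0]                  # E_0(T) and |psi_0>

E_prev, V_prev = -np.inf, np.inf
for T in (0.0, 0.1, 0.5, 2.0):
    E0T, psi = ground_state(T)
    p = psi**2                               # p_i(T) = |<phi_i|psi_0>|^2
    meanH0 = p @ levels                      # <H0(T)> as a weighted average over E_i(0)
    meanV = psi @ V @ psi                    # <V(T)>
    assert abs(p.sum() - 1.0) < 1e-12        # probabilities sum to one
    assert abs(E0T - (meanH0 + T * meanV)) < 1e-9
    assert E0T >= E_prev - 1e-12             # E_0(T) grows with the interaction
    assert meanV <= V_prev + 1e-12           # <V(T)> is non-increasing in T
    E_prev, V_prev = E0T, meanV
```

Increasing $T$ spreads the weights $p_{i}(T)$ over higher non-interacting levels, exactly as a thermal energy would redistribute populations in the canonical ensemble.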
This description is illustrated in Fig. \ref{fi1}. Similar to the usual canonical ensemble of the SM, we can consider that taking different ground-state temperatures $T_{g}$, i.e, different values of the interaction parameter, the particles of the system fall into non-interacting microstates, corresponding to different occupation probabilities for these energy levels. \begin{figure} \includegraphics[width=85mm]{sgs_f1.eps} \caption{Similar to the usual canonical ensemble of the SM, taking different ground-state temperature $T_{g}$ (interaction parameter), the particles of the system fall into non-interacting microstates, corresponding to different occupation probabilities for these energy levels.} \label{fi1} \end{figure} In addition, an analogy to the standard thermodynamics is also reproduced by this description. We can introduce a so-called ground-state thermodynamics, defining the ground-state internal energy, ground-state free energy and ground-state entropy, respectively, as \begin{equation} \label{ug} U (T_{g}) = \langle E (T_{g}) \rangle = \sum_{i} p_i (T_{g}) E_i (0), \end{equation} \begin{equation} \label{fg} F (T_{g}) = E_{0} (T_{g}) - T_{g} \langle \hat{V} (0) \rangle, \end{equation} \begin{equation} \label{sg} S (T_{g}) = k ( \langle \hat{V} (0) \rangle - \langle \hat{V} (T_{g}) \rangle ). \end{equation} It can be easily seen that $S (T_{g})$ is a non-negative monotonically increasing function of $T_{g}$. We can trivially verify that, using Eqs. (\ref{ug})-(\ref{sg}), the ground-state thermodynamics precisely satisfies the standard thermodynamics relation for the Helmholtz free energy \begin{equation} \label{futs} F (T_{g}) = U (T_{g}) - T_{g} \; S(T_{g}). \end{equation} Furthermore, we can derive the thermal response function, in correspondence to the heat capacity \begin{equation} \label{sh} C (T_{g}) = T_{g} \frac{dS(T_{g})}{dT_{g}} = -T_{g} \frac{d^{2}F(T_{g})}{dT_{g}^{2}}. 
\end{equation} It is interesting to observe that the expression above can be calculated using the Hellmann-Feynman theorem, which allows one to find the ground-state expectation values of a general operator $\hat{X}$ by differentiating the ground-state energy of a perturbed Hamiltonian $\hat{H}_{0}+\lambda \hat{X}$ with respect to $\lambda$ \cite{sor}. \section{Applications} \label{app} To illustrate the approach introduced in this letter, let us study two exactly solvable problems based on the Hubbard model \cite{hub}. The Hamiltonian of the Hubbard model is defined as \begin{equation} \hat{H}= -t \sum_{\langle ij \rangle \alpha} \hat{c}_{i\alpha}^{\dag} \hat{c}_{j\alpha} + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow} , \label{hamil} \end{equation} where $\hat{c}_{i\alpha}^{\dag}$, $\hat{c}_{i\alpha}$ and $\hat{n}_{i\alpha}\equiv \hat{c}_{i\alpha}^{\dag} \hat{c}_{i\alpha}$ are, respectively, the creation, annihilation and number operators for an electron with spin $\alpha$ in an orbital localized at site $i$ on a lattice of $N$ sites; $\langle ij \rangle$ denotes pairs $i,j$ of nearest-neighbor sites on the lattice; $U$ is the Coulomb repulsion that operates when two electrons occupy the same site; and $t$ is the electron transfer integral connecting states localized on nearest-neighbor sites. The first and second terms of Eq. (\ref{hamil}) correspond, respectively, to the one-particle term $\hat{H}_{0}$ and the interaction term of Eq. (\ref{e1}). \begin{figure*} \psfig{figure=sgs_f2a.eps,width=84mm,angle=0} \psfig{figure=sgs_f2b.eps,width=88mm,angle=0} \psfig{figure=sgs_f2c.eps,width=85mm,angle=0} \psfig{figure=sgs_f2d.eps,width=85mm,angle=0} \caption{Ground-state (a) free energy $F(T_{g})$, (b) internal energy $U(T_{g})$, (c) entropy $S(T_{g})$ and (d) heat capacity $C(T_{g})$ versus temperature $T_g$ for the Hubbard model ($k=1$ and $t=1$).
The full line represents the case $N=2$ and two electrons, while the dotted line represents the half-filled band for the one-dimensional case in the thermodynamic limit ($N \rightarrow \infty$).} \label{fi2} \end{figure*} The problem of two electrons on two sites is the simplest example for our approach. By direct calculation, it is easy to obtain the ground-state eigenvalue and eigenfunction, respectively, as \begin{equation} \label{gs2s} E_{0} (U) = \frac{1}{2} (U-\sqrt{U^{2} + (4t)^{2}}), \end{equation} and \begin{equation} \label{es2s} | \psi_{0} \rangle = a_{-} | \phi_{-} \rangle + a_{+} | \phi_{+} \rangle, \end{equation} where $| \phi_{\pm} \rangle $ are eigenfunctions for the case $U=0$, with $a_{-}=\sqrt{1-a_{+}^{2}}$ and \begin{equation} \label{as2s} a_{+} = U \Big/ \sqrt{2\sqrt{U^{2} + (4t)^{2}}\, \big( \sqrt{U^{2} + (4t)^{2}} + 4t \big) }. \end{equation} Thus, we define $T_{g}=U/(kt)$ and, using Eqs. (\ref{ug})-(\ref{sg}) together with Eqs. (\ref{gs2s}) and (\ref{es2s}), we find (from now on, $k=1$ and $t=1$ for simplicity) \begin{equation} \label{f2s} F (T_{g}) = - \frac{1}{2} \sqrt{T_{g}^{2} + 16}, \end{equation} \begin{equation} \label{s2s} S (T_{g}) = \frac{T_{g}}{ 2 \sqrt{T_{g}^{2} + 16} }, \end{equation} \begin{equation} \label{u2s} U (T_{g}) = - \frac{8} { \sqrt{T_{g}^{2} + 16} }, \end{equation} and \begin{equation} \label{c2s} C (T_{g}) = \frac{8T_{g}}{ (T_{g}^{2} + 16)^{3/2} }. \end{equation} Figure \ref{fi2} shows curves (full lines) of $F(T_{g})$, $U(T_{g})$, $S(T_{g})$ and $C(T_{g})$ versus the temperature $T_g$ for the expressions above, representing the case of two electrons on two sites for the Hubbard model. As clearly seen in these figures, the behavior of these new variables is exactly as expected from the usual thermodynamics. Now, let us consider the functional dependence of the entropy given by Eq. (\ref{s2s}) in terms of the occupation probability of the non-interacting quantum states. Using the energetic constraint (Eq.
(\ref{ug})), this dependence generates the concept of a thermostat temperature if we focus on the canonical ensemble of the SM formalism. It is easy to show from Eqs. (\ref{es2s})-(\ref{as2s}) that the occupation probabilities for the eigenfunctions $| \phi_{\pm} \rangle $ of the non-interacting case are \begin{equation} \label{p2s} p_{\pm } (T_{g}) = \frac{1}{2} \mp \frac{2}{ \sqrt{T_{g}^{2} + 16}}. \end{equation} We straightforwardly obtain the entropic form \begin{equation} \label{sp} S(p) = \sqrt{p_{+}p_{-}}, \end{equation} which is a concave function representing the geometric average of the quantum-state probabilities, where certainty corresponds to $S=0$. Here, we can see the difference between the standard SM and the ground-state SM. For the standard SM we always use the Boltzmann-Gibbs entropy $S(p) = -\sum_{i} p_{i} \ln{p_{i}}$, while for the ground-state SM this universality is broken, and the form of the entropy depends on the particular quantum system. On the other hand, this issue does not rule out the possibility that many different systems fall into some basic classes exhibiting qualitatively similar behavior. In Fig. \ref{fi3}, we show the functional forms of the entropies associated with Boltzmann-Gibbs and with Eq. (\ref{sp}), assuming 2 states. \begin{figure} \psfig{figure=sgs_f3.eps,width=85mm,angle=0} \caption{Functional forms of the entropies $S(p)$ associated with Boltzmann-Gibbs (dashed line) and with Eq. (\ref{sp}) for 2 states (full line).} \label{fi3} \end{figure} In what follows, we shall illustrate the above procedure by addressing the exact solution for the half-filled band of the Hubbard model in the one-dimensional case in the thermodynamic limit. This famous solution was obtained by Lieb and Wu in the sixties \cite{lieb} using the Bethe ansatz. Since then, this result has been considered one of the most important ones, owing to the scarcity of exact solutions of the Hubbard model.
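Before turning to the Lieb-Wu solution, the two-site closed forms above can be cross-checked numerically. The short sketch below (with $k=t=1$, as in the text) verifies that $p_{+}+p_{-}=1$, that the geometric average $\sqrt{p_{+}p_{-}}$ reproduces $S(T_{g})$, that the Helmholtz relation $F=U-T_{g}S$ holds, and that $C=T_{g}\,dS/dT_{g}$.

```python
import math

# Numerical cross-check of the two-site closed forms (k = t = 1).
def s(Tg):
    return math.sqrt(Tg * Tg + 16.0)

def p_plus(Tg):  return 0.5 - 2.0 / s(Tg)
def p_minus(Tg): return 0.5 + 2.0 / s(Tg)
def F(Tg): return -0.5 * s(Tg)
def S(Tg): return Tg / (2.0 * s(Tg))
def U(Tg): return -8.0 / s(Tg)
def C(Tg): return 8.0 * Tg / s(Tg) ** 3

for Tg in (0.0, 0.5, 2.0, 10.0):
    assert abs(p_plus(Tg) + p_minus(Tg) - 1.0) < 1e-12   # probabilities sum to 1
    # geometric-average entropy S(p) = sqrt(p_+ p_-) matches the closed form
    assert abs(math.sqrt(p_plus(Tg) * p_minus(Tg)) - S(Tg)) < 1e-12
    # Helmholtz relation F = U - T_g S
    assert abs(F(Tg) - (U(Tg) - Tg * S(Tg))) < 1e-12

# C = T_g dS/dT_g via a central finite difference
Tg, h = 2.0, 1e-6
assert abs(Tg * (S(Tg + h) - S(Tg - h)) / (2.0 * h) - C(Tg)) < 1e-6
```

At $T_{g}=0$ one recovers $p_{-}=1$, $p_{+}=0$ and $S=0$, as expected for the non-interacting ground state.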
The ground-state energy as a function of the electron-electron interaction $U$, for $N$ sites in the limit $N \rightarrow \infty$, is given by \begin{equation} \label{e1d} E_{0} (U) = - 4N \int_{0}^{\infty} \frac{J_{0}(w)J_{1}(w) dw}{w[ 1+ \exp (wU/2)] }, \end{equation} where $J_{0}(w)$ and $J_{1}(w)$ are Bessel functions. It is then simple to obtain the quantities associated with the ground-state thermostatistics: \begin{equation} \label{f1d} F (T_{g})/N = - \frac{T_{g}}{4} -4 \int_{0}^{\infty} \frac{J_{0}(w)J_{1}(w) dw}{w[ 1+\exp (wT_{g}/2)] } , \end{equation} \begin{equation} \label{s1d} S (T_{g})/N = \frac{1}{4} - \frac{1}{2} \int_{0}^{\infty} \frac{J_{0}(w)J_{1}(w) dw}{\cosh^{2} (wT_{g}/4)}, \end{equation} \begin{equation} \label{u1d} U (T_{g})/N = - \int_{0}^{\infty} \frac{J_{0}(w)J_{1}(w) f(w,T_{g}) dw}{w [ 1+\exp (wT_{g}/2)]^{2} } , \end{equation} where $f(w,T_{g})=[4+(4+2wT_{g})\exp (wT_{g}/2)]$ and \begin{equation} \label{c1d} C (T_{g})/N = \frac{T_{g}}{4} \int_{0}^{\infty} \frac{w J_{0}(w)J_{1}(w) \sinh (wT_{g}/4) dw}{\cosh^{3} (wT_{g}/4)}. \end{equation} We show the behavior of $F(T_{g})$, $U(T_{g})$, $S(T_{g})$ and $C(T_{g})$ versus the temperature $T_g$ for the solution of the one-dimensional Hubbard model in Fig. \ref{fi2}. These curves correspond to the dotted lines and, as in the case of two electrons, their behavior is as expected from the usual thermodynamics. \section{Conclusions} \label{concl} In summary, we have introduced an approach to solve problems of quantum mechanics using concepts of statistical mechanics. We can consider that, taking different ground-state temperatures $T_{g}$, i.e., different values of the interaction parameter, the particles of the system fall into non-interacting microstates, corresponding to different occupation probabilities for these energy levels. We found that the functional form of the ground-state entropy depends on the particular quantum system.
The breakdown of universality of the entropy is consistent with the concept of generalized entropies \cite{tsallis} associated with a specific quantum Hamiltonian. Finally, the ideas presented here can eventually provide a mechanism for new approximation methods, such as the use of the geometric average of the quantum-state probabilities in the high-dimensional limit for the Hubbard model. In further work, we envisage studying the possibility that many different systems fall into some basic classes of the ground-state entropy. \section*{Acknowledgement} This work was supported by CNPq (Brazilian Agency). \bibliographystyle{elsarticle-num}
\section{Introduction} In recent years, the Bondi-Metzner-Sachs (BMS) symmetry \cite{BMS1, BMS2, BMS3}, which generates the asymptotic isometries of Minkowski spacetime at null-infinity, has been revisited \cite{Barnich0, Barnich1, Barnich3} and its relevance to field theory has been reconsidered from a modern perspective. This infinite-dimensional symmetry has been found to be relevant in the study of scattering amplitudes of both gravitational and gauge theories in asymptotically Minkowski spacetimes \cite{StromingerLectureNotes}, and its connection to the Weinberg soft theorems and to the memory effects led to a new way of studying processes in flat space \cite{Strominger1, Strominger2, Strominger3}; see \cite{StromingerLectureNotes} and references therein. More recently, infinite-dimensional symmetries like BMS have also appeared in other geometrical setups, such as in Minkowski spacetime at spacelike infinity \cite{Henneaux1, Henneaux2} and in the vicinity of black hole event horizons \cite{Hawking:2015qqa, DGGP, HPS, DGGP2, HPS2,Afshar:2016wfy,Grumiller:2019fmp}. In this paper, we investigate whether BMS-like symmetry may also appear in a scenario that involves black holes in AdS space. More precisely, the question we ask is whether supertranslation symmetry, a proper infinite-dimensional Abelian subalgebra of BMS, emerges in either the near boundary region or the near horizon region of AdS black holes, two regions in which the symmetry algebras are expected to get enhanced. To answer this question, we will consider massive 3-dimensional gravity \cite{NMG}, which has the advantage of admitting a rich black hole phase space, including AdS black holes with a softly decaying hair \cite{Julio, NMG2}.
In order to accommodate such solutions within the space of geometries to be considered, it is necessary to relax the asymptotic conditions near the boundary of AdS$_3$, demanding a fall-off that is weaker than the usual Brown-Henneaux boundary conditions \cite{BH}. This induces an extra current in the near boundary region, which mixes with the boundary local conformal symmetry in a non-trivial way: We derive the corresponding algebra of asymptotic diffeomorphisms and we show that it actually consists of two copies of the Virasoro algebra in semi-direct sum with an infinite-dimensional Abelian ideal. In other words, the asymptotic isometry algebra at the boundary does contain supertranslations. However, we show that, unlike the Virasoro transformations, the supertranslations at the boundary act trivially, i.e. they are pure gauge: By computing the Noether charges associated to the asymptotic diffeomorphisms, we find that the supertranslation charges identically vanish. Then, we refocus our attention on the near horizon region, a second region where infinite-dimensional symmetries are also expected to emerge \cite{Hawking:2015qqa}. Based on the analysis of \cite{DGGP, DGGP2, DGGP3, DGGP4}, adapting it to the higher-curvature model, we show that at the horizon supertranslation symmetry does yield an infinite set of non-vanishing charges, which can be computed using the Barnich-Brandt formalism \cite{BarnichBrandt}. By evaluating these charges on a stationary hairy black hole solution, we find that the zero-mode reproduces the black hole entropy, as it happens in general relativity (GR). However, a remarkable difference with respect to GR exists: Due to the presence of higher-derivative terms in the massive gravity action, the black hole entropy does not obey the Bekenstein-Hawking area law, but it takes a more involved form that depends on the radii of both the internal and the external horizons.
Therefore, a natural question arises as to how such dependence on the internal event horizon can be obtained from the computation near the external horizon. We show that it actually comes from subleading contributions: It turns out that next-to-leading components in the near-horizon expansion, which in the case of GR give no contribution, in the higher-derivative theory do contribute to the charges, yielding the correct entropy formula. The paper is organized as follows: In Section II, we introduce the massive 3D gravity theory in AdS. In Section III, we specify the point of the parameter space at which we will work, and the special features that the theory exhibits there. In Section IV, we discuss the main properties of the hairy black holes and compare them with the hairless BTZ geometry. The asymptotic symmetries at the AdS boundary will be studied in Section V, where we prove that, while an infinite-dimensional commuting algebra appears and mixes with the Virasoro symmetry, the conserved charges associated to the former identically vanish. In Section VI, we consider the near horizon symmetries, where supertranslation isometries also appear, in this case yielding an infinite set of conserved charges. By evaluating these charges explicitly, we show that the zero-mode of the horizon supertranslation corresponds to the Wald entropy. In Section VII, we extend the near horizon analysis to the case of rotating black holes, for which the supertranslation charges are also worked out. We show that, in contrast to GR, in the massive gravity theory new (subleading) terms in the near-horizon expansion happen to contribute to the charges. In Section VIII, we extend the analysis by adding the gravitational Chern-Simons term, which contributes to the Noether charges in a non-trivial manner. Section IX contains our conclusions.
\section{Massive 3D gravity} Let us start with the action of New Massive Gravity (NMG) theory \cite{NMG} \begin{equation} I=\frac{1}{16\pi G}\int d^3x \, \sqrt{-g}\, \Big( R-2 \lambda-\frac{1}{m^2}K \Big) \ , \ \ \text{with} \ \ K=R_{\mu \nu}R^{\mu \nu}-\frac{3}{8}R^2, \label{ActionNMG} \end{equation} which leads to the field equations \begin{equation}\label{eom} \begin{alignedat}{2} &R_{\mu \nu}-\frac 12 Rg_{\mu \nu }+\lambda g_{\mu \nu}-\frac{1}{2m^2}K_{\mu \nu}=0,\\ \end{alignedat} \end{equation} where \begin{equation} K_{\mu \nu}=2\nabla^2R_{\mu\nu}-\frac{1}{2}(\nabla_\mu \nabla_\nu R+g_{\mu \nu}\nabla^2 R)-8 R_{\mu \rho}R^{\rho}_{\,\,\nu}+\frac{9}{2}RR_{\mu\nu}+\frac{1}{8}g_{\mu\nu}\left(24R^{\alpha \beta}R_{\alpha \beta}-13R^2\right), \end{equation} satisfying $K_{\mu \nu}g^{\mu\nu}=K$, so that the problematic mode $\nabla^2 R$ decouples from the trace of the field equations. In the limit $m^2\to \infty$, this theory reduces to GR. The specific linear combination of squared curvature terms in (\ref{ActionNMG}) makes this higher-derivative theory exhibit special features: It propagates two spin-2 helicity states, and at the linearized level it is equivalent to the unitary Pauli-Fierz action for a massive spin-2 field of mass $m$. This implies that action (\ref{ActionNMG}) describes a ghost-free, covariant massive gravity theory that, in contrast to Topologically Massive Gravity (TMG) \cite{TMG}, is parity-even. The relative coefficient of the higher-curvature terms of (\ref{ActionNMG}) coincides with the precise combination of quadratic counterterms that appears in the context of holographic renormalization for $D=3$; see \cite{Qcurvature} and references therein.
Related to this, there is an alternative way of seeing (\ref{ActionNMG}) appear perturbatively: Consider the 3-dimensional Einstein-Hilbert action coupled to matter; namely \begin{equation} I_0=\frac{1}{16\pi G}\int d^3x \, \sqrt{-g}\, R \, + \int d^3x \, \sqrt{-g}\, \mathcal{L}_{\text{matt}}, \label{LaS0} \end{equation} where $\mathcal{L}_{\text{matt}}$ denotes the Lagrangian of matter. Then, we can deform the action $I_0$ by adding to it the irrelevant operator \begin{equation} \delta I= t\, \int d^3x \, \sqrt{-g} \,\Big( T_{\mu \nu } T^{\mu \nu } - \frac 12 T^2 \Big), \label{TT} \end{equation} where $T_{\mu \nu }$ is the stress tensor and $T$ is its trace. Operator (\ref{TT}) can be regarded as the 3-dimensional analog of the $T\bar{T}$-deformation of \cite{TT1, TT2} coupled to gravity. The coupling constant $t$ in (\ref{TT}) has mass dimension $-3$. If one solves the field equations coming from the deformed action $I_{0}+\delta I$ to first order in $t$ and, after that, evaluates the action on-shell, one obtains the NMG action (\ref{ActionNMG}) with $m^2=4\pi G/t$. The presence of a cosmological constant $\lambda $ in (\ref{LaS0}) would result in a $t$-dependent renormalization of it and of the Newton constant $G$. Another interesting feature of the massive theory (\ref{ActionNMG}) is that it has a profuse black hole phase space, including solutions with different asymptotics \cite{Julio, NMG2, Garbarz:2008qn, Lifshitz}. In particular, it allows for black holes in AdS with a softly decaying gravitational hair. Here, we will focus on such solutions. Being a quadratic gravity theory, NMG admits more than one maximally symmetric solution. That is, there generally exist two values of the effective cosmological constant for the solutions; namely \begin{equation} \Lambda_{\pm } = 2m^2 \pm 2m^2\sqrt{1-\frac{\lambda }{m^2} }, \label{Opa} \end{equation} assuming $m^2\geq \lambda$.
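Equivalently, the values (\ref{Opa}) are the roots of the quadratic condition $\Lambda^2-4m^2\Lambda+4m^2\lambda=0$. A quick numerical sketch (the sample values of $m^2$ and $\lambda$ are our own) checks this and the degeneration of the two vacua at $m^2=\lambda$:

```python
import math

# Check of the two maximally symmetric vacua (sample values of m^2 and lambda
# are assumed): Lambda_± are the roots of
# Lambda^2 - 4 m^2 Lambda + 4 m^2 lambda = 0, degenerating at m^2 = lambda.
def vacua(m2, lam):
    # requires lambda/m^2 <= 1 so that the square root is real
    disc = math.sqrt(1.0 - lam / m2)
    return 2.0 * m2 * (1.0 + disc), 2.0 * m2 * (1.0 - disc)

m2, lam = 1.0, -3.0                       # generic sample point
for L in vacua(m2, lam):
    assert abs(L * L - 4.0 * m2 * L + 4.0 * m2 * lam) < 1e-10

# Special point m^2 = lambda: a unique vacuum with Lambda = 2 m^2 (AdS here)
Lp, Lm = vacua(-0.5, -0.5)
assert abs(Lp - Lm) < 1e-12 and abs(Lp - 2.0 * (-0.5)) < 1e-12
```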
That is, the theory has two natural vacua, which can be either flat space and/or (Anti-)de Sitter space, depending on the parameters $m^2$ and $\lambda $. The effective cosmological constant (\ref{Opa}) sets the curvature radius of the solution, $\ell=1/\sqrt{-\Lambda_{\pm}}$, with $\ell^2 >0 $ for AdS. Notice that, while $\Lambda_-$ tends to the GR value $\lambda$ in the limit $m^2\to \infty $, $\Lambda_+$ diverges. The latter can thus be thought of as a non-perturbative solution. \section{Special point} While at a generic point of the parameter space the theory admits two vacua (\ref{Opa}) with different curvature radii, there exists a special point in the parameter space at which these two vacua coincide. This happens when $m^2=\lambda$. At this point, one gets \begin{equation} \Lambda_{+}=\Lambda_{-}=2\lambda =2m^2 = -\frac{1}{\ell^2}\, . \label{punto} \end{equation} When (\ref{punto}) is satisfied, the theory exhibits special properties, the most interesting ones being the existence of: \begin{enumerate} \item A unique maximally symmetric solution. \item Stationary hairy (A)dS black hole solutions \cite{Julio, NMG2}. \item An extra conformal symmetry at linearized level around (A)dS \cite{Gabadadze}. \item Relaxed boundary conditions compatible with AdS/CFT \cite{Julio, JulioYo}. \item Extra local asymptotic Killing vectors in AdS \cite{Julio}. \end{enumerate} In this paper, we will be concerned with the theory at the point (\ref{punto}) and with its special properties. \section{Hairy black holes in AdS} In addition to the BTZ black holes \cite{BTZ}, which are indeed solutions of NMG provided either $\Lambda_+$ or $\Lambda_-$ is negative, at the special point (\ref{punto}) NMG admits a 1-parameter hairy generalization of BTZ.
In the static case, the metric of such a hairy black hole takes the form \begin{equation} \begin{alignedat}{2} ds^2=-\left(\frac{r^2}{\ell^2}+br-\mu\right)dt^2+{\left(\frac{r^2}{\ell^2}+br-\mu\right)^{-1}} {dr^2}+r^2d\phi^2,\label{HBH} \end{alignedat} \end{equation} where $t\in\mathbb{R}$, $\phi \in [0,2\pi ]$ with period $2\pi $, and $r\in\mathbb{R}_{>0}$, and where $\mu $ and $b$ are two integration constants. One can verify that (\ref{HBH}) actually solves the NMG equations of motion \eqref{eom} provided (\ref{punto}) holds. In fact, the solution with $b\neq 0$ exists if and only if $m^2= \lambda $. For a certain range of the parameters $\mu $ and $b$ the solution above describes a black hole, with horizons located at \begin{equation} r_{\pm} = \frac 12 ( - b\ell^2 \pm \sqrt{b^2\ell^4 + 4\mu \ell^2}), \end{equation} whose inverse is \begin{equation} b= -\frac{(r_++r_-)}{\ell^2} \ , \ \ \ \mu=-\frac{r_+r_-}{\ell^2}\, .\label{Relation} \end{equation} In terms of $r_+$ and $r_-$, solution (\ref{HBH}) takes the form \begin{equation} ds^2=-\frac{(r-r_+)(r-r_-)}{\ell^2}\, dt^2 +\frac{\ell^2\, dr^2}{(r-r_+)(r-r_-)} + r^2\, d\phi^2 , \end{equation} and represents a black hole provided $r_+ > 0$. (Without loss of generality, one can consider $r_+\geq r_-$.) The solution {\it looks} similar to the BTZ black hole \cite{BTZ, BTZ2}, although it describes a remarkably different geometry. \begin{figure} \ \ \ \ \ \ \ \includegraphics[width=5.7in]{AdS_BH_Penrose_Diagram.pdf} \caption{Penrose diagram of the static black hole solution with $\mu >0$. } \label{Figure} \end{figure} Let us study the most salient properties of this solution: First, let us summarize some properties that make the $b\neq 0$ black hole different from BTZ. For example: \begin{enumerate} \item It has non-constant curvature, so it is not locally equivalent to AdS$_3$. In fact, the Ricci scalar $R=-6/\ell^2 -2b/r$ diverges at $r=0$ (see Figure 1).
This means that, like higher-dimensional AdS black holes, solution (\ref{HBH}) exhibits a curvature singularity at the origin. Notice also that, provided $b<0$, the curvature $R$ changes its sign at $r=-b\ell^2/3$. \item It may have two horizons for a certain range of parameters, namely for $r_+\geq r_- \geq 0$, despite being a static, uncharged solution. This results in a change of the causal structure and singularity signature, relative to the static BTZ ($r_-=0$). \item It does not obey the Brown-Henneaux asymptotic boundary conditions \cite{BH} but more relaxed ones. This will be important for the discussion in the next section. \item It has an additional parameter, $b$. This parameter is physical, in the sense that it cannot be absorbed by a coordinate redefinition; notice that the curvature invariant depends on it. \end{enumerate} Despite all these differences, spacetime (\ref{HBH}) does share some properties with BTZ. For example: \begin{enumerate} \setcounter{enumi}{4} \item It is regular outside and on the horizon. \item It is conformally flat \cite{conformal}. That is, the Cotton tensor vanishes, $C_{\mu\nu}=0$, which means that solution (\ref{HBH}) is also a solution when theory (\ref{ActionNMG}) is coupled to TMG. \item It has isometry group $\mathbb{R}\times SO(2)$ generated by the Killing vectors $\partial_t$ and $\partial_{\phi}$. \item It is asymptotically locally AdS$_3$ in the sense that the Riemann tensor tends to that of AdS$_3$ at large $r$ \cite{Julio}. This implies that $\lim_{r\to \infty } (R_{\mu\nu }+2\ell^{-2}g_{\mu\nu })=0$. \item Its asymptotics are compatible with a microscopic derivation of its thermodynamics \cite{JulioYo} using the Cardy formula in the dual CFT$_2$ \`a la \cite{Strominger}. \item It represents a black hole for a certain range of parameters, namely for $r_{+}\geq 0$, \cite{Julio, NMG2}. The static BTZ black hole corresponds to $r_+=-r_-$. It means it contains AdS$_3$ as a particular continuously connected case, i.e.
for $b=\mu +1=0$. \item Its metric can be written in a quite simple, manageable expression provided there is no rotation. It admits a stationary, rotating generalization (see (\ref{RotA})-(\ref{RotZ}) below) which can also be written down analytically, although it acquires a cumbersome form \cite{Julio, JulioYo}, cf. \cite{Leston}. We will discuss the stationary solutions below. \item It yields finite conserved charges. \end{enumerate} Regarding the latter point, the mass of black hole solution (\ref{HBH}) can be computed with the Barnich-Brandt method \cite{BarnichBrandt}, which yields \begin{equation} Q[{\partial_t}]=M=\frac{\mu}{4G}+\frac{b^2\ell^2}{16G}= \frac{(r_+ - r_-)^2}{16\ell^2G},\label{M} \end{equation} which, remarkably, depends on both $\mu$ and $b$. Notice that the solution is massless in the extremal case $r_+=r_-$. A quick way to confirm that this is the right value of the mass is the following: one can perform in (\ref{HBH}) a change of coordinates by defining $\hat{r}= r+b\ell^2/2$. Then, the metric takes the form \begin{equation} \begin{alignedat}{2} ds^2=-\left(\frac{\hat{r}^2}{\ell^2}-M\right)dt^2+{\left(\frac{\hat{r}^2}{\ell^2}-M\right)^{-1}} {d\hat{r}^2}+\left(\hat{r}-\frac{b\ell^2}{2} \right)^2 d\phi^2\, , \label{nHBH} \end{alignedat} \end{equation} where $M$ is given by (\ref{M}) (more precisely, the constant appearing in $g_{tt}$ is $\mu+b^2\ell^2/4=4GM$; hereafter we absorb the factor $4G$ and keep denoting it by $M$). In these coordinates, the metric takes a form similar to BTZ, up to subleading contributions $\mathcal{O}(\hat{r})$ in the $g_{\phi\phi}$ component, which now reads $g_{\phi\phi}=\hat{r}^2-b\ell^2\hat{r}+b^2\ell^4/4$. The new $\mathcal{O}(\hat{r})$ and $\mathcal{O}(\hat{r}^0)$ terms in $g_{\phi\phi}$ being subleading, one can ignore them to see the asymptotics, and then simply read the mass from the component $g_{tt}$; this obviously yields $M$. The component $g_{\phi\phi}$ of the metric (\ref{nHBH}) vanishes at $\hat{r}_{--}=b\ell^2/2$, where $r=0$.
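The algebraic relations above are easy to verify numerically. The sketch below (sample values of $b$ and $\mu$, with $\ell=1$, are our own choices) checks the horizon radii, the inversion (\ref{Relation}), and the completed-square rewriting of the lapse used in the change of coordinates:

```python
import math

# Consistency checks for the static hairy metric (sample b, mu and l = 1 are
# assumed values).  f(r) = r^2/l^2 + b r - mu is the function appearing in g_tt.
l = 1.0

def f(r, b, mu):
    return r * r / l**2 + b * r - mu

def horizons(b, mu):
    """Roots r_± of f; real when b^2 l^4 + 4 mu l^2 >= 0."""
    d = math.sqrt(b * b * l**4 + 4.0 * mu * l**2)
    return 0.5 * (-b * l**2 + d), 0.5 * (-b * l**2 - d)

b, mu = -3.0, -2.0                 # gives r_+ = 2, r_- = 1 (two horizons)
rp, rm = horizons(b, mu)
assert abs(f(rp, b, mu)) < 1e-12 and abs(f(rm, b, mu)) < 1e-12
assert abs(b + (rp + rm) / l**2) < 1e-12    # b  = -(r_+ + r_-)/l^2
assert abs(mu + rp * rm / l**2) < 1e-12     # mu = -r_+ r_- / l^2

# Completed square: f(r) = rhat^2/l^2 - (mu + b^2 l^2/4), rhat = r + b l^2/2
M_metric = mu + b * b * l**2 / 4.0
for r in (0.7, 1.9, 4.2):
    rhat = r + b * l**2 / 2.0
    assert abs(f(r, b, mu) - (rhat**2 / l**2 - M_metric)) < 1e-12
```

Here `M_metric` is our name for the constant $\mu+b^2\ell^2/4$, which is $4G$ times the Barnich-Brandt mass (\ref{M}).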
Assuming $b\geq 0$, for this special circle to be inside the horizon one must require $\hat{r}_+=\ell\sqrt{M}\geq \hat{r}_{--}$, which in turn implies $\mu \geq 0$. Then, taking into account relation (\ref{Relation}) and that $r_+\geq r_-$, one concludes that for $b\geq 0$ the condition $\hat{r}_+\geq \hat{r}_{--}$ ultimately implies $r_-\neq 0$. This also implies that in that case the curvature singularity at $r=0$ is timelike. Another interesting possibility is $b< 0$, where $g_{\phi\phi}$ does not vanish for any positive $\hat{r}$. This solution may still represent a black hole, which would contain both internal and external horizons (i.e. $r_{+}\geq \hat{r}> 0$). The black hole solution (\ref{HBH}) exhibits non-trivial thermodynamical properties. Its Hawking temperature is given by \begin{equation} T=\frac{\kappa}{2\pi }= \frac{(r_+-r_-)}{4\pi \ell^2},\label{LaT} \end{equation} while its entropy can be shown to be \begin{equation} S = \frac{2\pi (r_+-r_-)}{4G}. \label{LaS} \end{equation} Notice that the latter formula does not follow the area law; rather, the entropy is given by the difference between the areas of the external and the internal horizons. This behavior is due to the presence of the higher-curvature terms in the action. It can also be thought of as a backreaction effect of the hair parameter $b$ on the geometry. It can easily be checked that the variables $M$, $T$, and $S$ obey a Smarr-like formula $M=\frac 12 \, TS$, which follows from the fact that, for this black hole, $S\propto T$. These variables also obey the first law of black hole mechanics $dM = T\, dS$. Notice that in the extremal case $r_+=r_-$ the solution has all its thermodynamical quantities equal to zero: $M=T=S=0$. \section{Asymptotic symmetries at the boundary} Let us now consider the large $r$ behavior of the geometry (\ref{HBH}).
To do that, let us first study a weakened version of asymptotically AdS$_3$ boundary conditions: Consider perturbations of the AdS$_3$ metric of the form \begin{equation}\label{pert} \begin{alignedat}{2} &\delta g_{rr}=h_{rr}\, r^{-3}+f_{rr}\, r^{-4}+\cdots\\ &\delta g_{ri}=h_{ri}\, r^{-1}+f_{ri}\, r^{-2}+\cdots\\ &\delta g_{ij}=h_{ij}\, r+f_{ij}+\cdots \end{alignedat} \end{equation} where $i,j=t,\phi$ or, using coordinates $x^\pm={t}/{\ell}\pm \phi$, $i,j=+,-$. The functions $h_{\mu \nu}$ and $f_{\mu \nu}$ above are arbitrary functions of all variables but $r$. Notice that these boundary conditions are weaker than the usual Brown-Henneaux asymptotic conditions \cite{BH}. They are even weaker than the boundary conditions proposed by Grumiller and Johansson in \cite{GJ}, which are the ones that hold in the so-called Log-gravity \cite{LogG}. As a matter of fact, the second line in \eqref{pert} also differs from the perturbation given in Eq. (30) of Ref. \cite{Julio}. Still, as we will see below, the weakened fall-off (\ref{pert}) is compatible with the main features of AdS/CFT. Let us begin by studying the local conformal symmetry at the boundary: Consider the asymptotic Killing field $\eta=\eta^\mu \partial_\mu$ \begin{equation}\label{AKV} \begin{alignedat}{2} &\eta^+=L^+(x^+)+\frac{\ell^2}{2r^2}\partial_-^2 L^-(x^-)+\cdots\\ &\eta^-=L^-(x^-)+\frac{\ell^2}{2r^2}\partial_+^2 L^+(x^+)+\cdots\\ &\eta^r=-\frac{r}{2}(\partial_+ L^++\partial_- L^-)+\cdots \end{alignedat} \end{equation} which actually preserves the set of metrics \begin{equation} g_{\mu \nu}=\bar g_{\mu \nu}+\delta g_{\mu \nu}, \end{equation} with $\delta g_{\mu\nu}$ obeying (\ref{pert}) and $\bar g_{\mu \nu}$ being the line element of AdS$_3$, which in coordinates $r,\, x^+, \, x^-$ reads \begin{equation} \bar {ds}^2=-\ell^2(dx^+)^2-\ell^2 (dx^-)^2-2(\ell^2+2r^2)dx^+dx^-+\ell^2( \ell^2+{r^2})^{-1} {dr^2}\, .
\end{equation} Indeed, one can check that \begin{equation}\begin{alignedat}{2} \mathcal L_\eta \, g_{ab}=\mathcal O (r), \ \ \ \mathcal L_\eta \, g_{ar}=\mathcal O (r^{-1}),\ \ \ \mathcal L_\eta \, g_{rr}=\mathcal O (r^{-3}), \end{alignedat} \end{equation} so that it closes on (\ref{pert}). Killing field (\ref{AKV}) generates a Virasoro algebra (see below). Since (\ref{pert}) are weaker than the standard AdS$_3$ boundary conditions, a natural question arises as to whether this set of geometries is preserved by additional asymptotic Killing vectors. It was noticed in \cite{Julio} that the vector field \begin{equation} \zeta=Y(x^+,x^-)\, \partial_r,\label{RTY} \end{equation} also preserves the phase space (\ref{pert}). More precisely, \begin{equation}\begin{alignedat}{2} \mathcal L_\zeta \, g_{aa}=\mathcal O (r^0), \ \ \ \ \mathcal L_\zeta \, g_{ar}=\mathcal O (r^{-2}), \ \ \ \ \mathcal L_\zeta \, g_{rr}=\mathcal O (r^{-3}), \end{alignedat} \end{equation} together with \begin{equation} \mathcal L_\zeta g_{+-}=-4Y\, r + \mathcal O (r^0). \end{equation} The latter variation relates the subleading fluctuation $\delta g_{+-}$ with the arbitrary function $Y(x^+, x^-)$ that appears in $\zeta $. This means that, under the action of $Y$, the following relation between fluctuations holds: \begin{equation} \delta g_{\phi\phi}=-\ell^{-2}\delta g_{tt}=-2Y\,r+\mathcal{O}(r^0), \end{equation} now written in terms of the variables $t,\, \phi$. Therefore, in addition to (\ref{AKV}), the asymptotics (\ref{pert}) admits a local current (\ref{RTY}).
Together, the Killing vectors $\eta $ and $\zeta $ generate two copies of Virasoro \begin{equation}\label{Pocho23} \begin{alignedat}{3} [\eta(L^+_1,L^-_1),\eta(L^+_2,L^-_2)]=\eta(\hat L^+,\hat L^-) \end{alignedat} \end{equation} with \begin{equation} \begin{alignedat}{3} \hat L^+=L^+_1 \partial_+ L^+_2-L^+_2 \partial_+ L^+_1\ , \ \ \ \hat L^-=L^-_1 \partial_- L^-_2-L^-_2 \partial_- L^-_1 \end{alignedat} \end{equation} in semi-direct sum with an infinite-dimensional Abelian ideal: \begin{equation} \begin{alignedat}{3} [\zeta(Y_1),\zeta(Y_2)]=0\ , \ \ \ [\zeta(Y_1),\eta(L^+_2,L^-_2)]=\zeta(\hat Y) \end{alignedat} \end{equation} with \begin{equation}\label{Pocho26} \begin{alignedat}{3} \hat Y=-\frac{1}{2}Y_1(\partial_-L^-_2+\partial_+ L^+_2)-L_2^-\partial_- Y_1 -L_2^+\partial_+ Y_1 \end{alignedat} \end{equation} where $[\, ,\, ]$ stands for the modified Lie brackets \cite{Barnich1}, namely $[\zeta , \eta] = \mathcal{L}_{\zeta } \eta -\delta_{\zeta } \eta + \delta_{\eta } \zeta $, defined to take into account the dependence of the asymptotic Killing vectors upon the functions in the metric components. Algebra (\ref{Pocho23})-(\ref{Pocho26}) generates the asymptotic isometry group, which is infinite-dimensional. While the Noether charge $Q{[\partial_{\eta}]}$, associated to the asymptotic isometries generated by $\eta$, obeys a Virasoro algebra with central charge\footnote{This value of $c$ is twice the value of the Brown-Henneaux central charge for Einstein gravity in AdS$_3$.} \begin{equation} c=\frac{3\ell }{G} , \end{equation} the supertranslation symmetry generated by $\zeta(Y)$ yields a vanishing charge; namely, using the covariant phase space formalism we find that \begin{equation}\label{Pocho28} Q{[\partial_{\zeta}]} = 0. \end{equation} This implies that the supertranslation symmetry generated by (\ref{RTY}) is pure gauge. One can easily verify that a translation in $r$, i.e.
$r\to r+r_0$, does not change the mass of the black hole solution: both metrics (\ref{HBH}) and (\ref{nHBH}) yield the same charge associated to $\partial_t$. Such a shift in $r$ makes the $g_{tt}$ component of the metric acquire the form $g_{tt}=- (r^2/\ell^2 +\hat{b}r-\hat{\mu})$, where $\hat{b}= b+\delta b$ and $\hat{\mu}=\mu + \delta \mu $ with \begin{equation} \delta b = \frac{2r_0}{\ell^2 } \ , \ \ \ \ \ \delta \mu = -\, br_0-\frac{r_0^2}{\ell^2 }\, . \end{equation} Then, we notice that $\hat{M}=(\hat{\mu}+\hat{b}^2\ell^2/4)/(4G)=({\mu}+{b}^2\ell^2/4)/(4G)=M$, i.e. \begin{equation} \delta M = 0. \end{equation} This is consistent with the charge algebra. \section{Horizon symmetries} We have shown above that, despite the extra Killing vector (\ref{RTY}), no supertranslation symmetries act on the boundary gravitons. We will now focus on the black hole horizon, where supertranslation symmetries are also expected to appear \cite{Hawking:2015qqa}. Let us consider the near horizon boundary conditions studied in \cite{DGGP, DGGP2}; namely \begin{equation}\label{ds2} ds^2= f\, dv^2 -2 k\, dv d\rho + 2h\, dv d\phi+R^2d\phi^2, \end{equation} where $v\in \mathbb{R}$, $\rho\geq 0$, and $\phi \in [0,2\pi ]$ with period $2\pi $. The functions $f$, $k$, $h$, and $R$ are of the form \begin{equation} \label{boundaryconditions} \begin{split} f&= -2\kappa \,\rho + \tau(\phi) \,\rho^2+{\mathcal O}(\rho ^3), \\ k&=1+{\mathcal O}(\rho ^2), \\ h&= \theta(\phi)\,\rho+\sigma(\phi)\rho^2+{\mathcal O}(\rho^3 ), \\ R^2&=\gamma^2(\phi)+ \lambda(\phi)\, \rho + {\mathcal O}(\rho^2 ), \end{split} \end{equation} where ${\mathcal O}(\rho^2)$ refers to functions of $v$ and $\phi $ that vanish as fast as or faster than $\rho ^2$, and where the orders that do not appear in \eqref{ds2} are assumed to be $\mathcal{O}(\rho^2)$. 
In the expressions above, $\tau(\phi)$, $\theta (\phi)$, $\gamma (\phi)$, and $\lambda (\phi)$ are arbitrary functions of the coordinate $\phi $; $\kappa $ corresponds to the surface gravity at the horizon and is fixed. As shown in \cite{DGGP}, the near-horizon boundary conditions (\ref{boundaryconditions}) are preserved by a set of asymptotic Killing vectors $\chi=\chi^\mu \partial_\mu$ that generate an infinite-dimensional algebra, consisting of one copy of the Virasoro algebra in semidirect sum with supertranslations. More precisely, \begin{equation}\label{AKVdeH} \begin{alignedat}{2} &\chi^{v}=P(\phi )+\cdots \\ &\chi^{\rho }=\frac{\theta (\phi )}{2\gamma^2(\phi )} \partial_{\phi }P(\phi )\, \rho^2+\cdots \\ &\chi^{\phi }=L(\phi )-\frac{1}{\gamma^2(\phi )} \partial_{\phi }P(\phi )\, \rho + \frac{\lambda (\phi ) }{2\gamma^4(\phi )} \partial_{\phi }P(\phi )\, \rho^2+\cdots \end{alignedat} \end{equation} where the ellipses stand for $\mathcal{O}(\rho^3)$ terms. These asymptotic Killing vectors close under the Lie bracket \begin{equation} \begin{alignedat}{3} [\chi (P_1,L_1),\chi (P_2,L_2)]=\chi (\hat P,\hat L ) \end{alignedat} \end{equation} with \begin{equation}\label{Poc} \begin{alignedat}{3} \hat P=L_1\,\partial_{\phi }\,P_2 - L_2\,\partial_{\phi }\,P_1 \ , \ \ \hat L=L_1\,\partial_{\phi }\,L_2 - L_2\,\partial_{\phi }\,L_1 \, , \end{alignedat} \end{equation} which generates a copy of the Virasoro algebra in semidirect sum with supertranslations, generated by $L$ and $P$ respectively. Under the action of the vector field \eqref{AKVdeH}, the metric functions transform as \begin{equation} \begin{alignedat}{4} &\delta_\chi \tau=L\partial_\phi \tau-\frac{\kappa \theta \partial_\phi P}{\gamma^2},\\ &\delta_\chi \theta=-2\kappa \partial_\phi P+\partial_\phi (\theta P),\\ &\delta_\chi \gamma= \partial_\phi (\gamma P),\\ &\delta_\chi \lambda= 2\theta \partial_\phi P+2\partial_\phi P\frac{ \partial_\phi \gamma}{\gamma}-2\partial_\phi ^2 P+2\lambda \partial_\phi L+L \partial_\phi \lambda. 
\end{alignedat} \end{equation} Now, let us compute the Noether charges associated to the infinite-dimensional isometries derived above: in the covariant formalism \cite{BarnichBrandt}, the functional variation of the conserved charge associated to a given asymptotic Killing vector $\chi $ is given by the expression \begin{equation}\label{ALaCarga} \delta Q[\chi ; g,\delta g]=\frac{1}{16\pi G}\int_0^{2\pi} d\phi\, \sqrt{-g} \,\epsilon_{\mu \nu \phi } \, k^{\mu \nu}_\chi[g,\delta g] , \end{equation} where $g$ is a solution, $\delta g$ a perturbation around it, and $k^{\mu \nu}$ is a surface 1-form potential. The latter is the sum of the GR contribution $k^{\mu \nu}_{\mathrm{GR}}$ and the contributions $k^{\mu \nu}_K$ coming from the quadratic terms of NMG; namely \begin{equation} k^{\mu \nu}= k^{\mu \nu}_{\mathrm{GR}}-\frac{1}{2 m^2}k^{\mu \nu}_K. \end{equation} The explicit expression of the 1-form potential can be found in Appendix A. Evaluating (\ref{ALaCarga}) for the supertranslation symmetry generator $\chi (P)$ yields a set of Noether charges; namely \begin{equation} \delta Q{[\chi (P)]} = \frac{\kappa}{8\pi G} \int_0^{2\pi } d\phi \,P(\phi)\, \delta\Big(\gamma(1+\ell^2 \tau)+\frac{\ell^2(\theta^2+4\kappa \lambda)}{4\gamma} \Big)+\, D, \label{CargaLauGeneral} \end{equation} where $\gamma $, $\tau$, $\theta $ and $\lambda $ in general depend on $\phi $, and where $D$ is given by \begin{equation} D = -\frac{\kappa \ell^2}{8\pi G} \int_0^{2\pi } d\phi \,P(\phi)\, \partial_{\phi}\left( \gamma^{-2}\delta (\theta \gamma ) \right) ,\label{EsDerivadaTotalONo} \end{equation} which is a total derivative for constant $P$, and is an exact variation if $\gamma $ is fixed. In other words, the charge is not generically integrable due to the presence of $D$. This is in contrast to what happens in GR, where the supertranslation charge is integrable provided the generators do not depend on $v$. 
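The statement that $D$ is harmless for the zero-mode can be made explicit in one line; the only input is the $2\pi$-periodicity in $\phi$ stated above:

```latex
% For constant P(\phi)=P_0, the integrand of (\ref{EsDerivadaTotalONo})
% is a total derivative of a single-valued function on the circle:
D \,=\, -\frac{\kappa \ell^2 P_0}{8\pi G} \int_0^{2\pi} d\phi \;
\partial_{\phi}\!\left( \gamma^{-2}\,\delta (\theta \gamma ) \right)
\,=\, -\frac{\kappa \ell^2 P_0}{8\pi G}
\left[ \gamma^{-2}\,\delta (\theta \gamma ) \right]_{0}^{2\pi} \,=\, 0\, .
```

Hence the obstruction to integrability only affects the non-constant supertranslation modes.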
Superrotation charges are found to be \begin{equation} \delta Q{[\chi (L)]} = -\frac{1}{16\pi G} \int_0^{2\pi } d\phi \, L(\phi)\, \delta\left( \theta\gamma\left( 1+5\tau \ell^{2}\right) +\frac{\theta \ell^{2}\left( \theta^{2}+4\lambda\kappa\right) }{4\gamma }+8\ell^2 \kappa \gamma \sigma\right) + \tilde{D}\, , \label{SuperRotation} \end{equation} where $\tilde{D}$ stands for a non-integrable piece that vanishes when $\gamma $, $\theta $, $\lambda $, $\tau$ and $\sigma $ are constant. Notice that the subleading contribution $\sigma $ enters the superrotation charge. It is possible to verify that (\ref{SuperRotation}) exactly reproduces the charge of the rotating BTZ black hole; see appendix B. It is worth mentioning that explicit expressions of solutions of NMG field equations carrying both supertranslation and superrotation charges can be written down. They are the solutions found in \cite{DGGP} (see equations (15)-(16) therein), which persist as exact solutions when the terms $K_{\mu\nu}$ are added to the Einstein equations, provided the radius $\ell $ is taken to be that given in (\ref{punto}). In particular, the charge $Q[\partial_v]$, associated to the zero-mode of the supertranslation vector, in the case where $\gamma $, $\tau$, $\theta $ and $\lambda $ are independent of $\phi $, is given by \begin{equation}\label{QNMG} Q{[\partial_v ]}=\frac{\kappa}{4G}\left(\gamma(1+\ell^2 \tau)+\frac{\ell^2(\theta^2+4\kappa \lambda)}{4\gamma} \right). \end{equation} Now, let us evaluate this charge for the hairy black hole geometry we are interested in: first, we have to take the near horizon limit of the geometry (\ref{HBH}), i.e. look at the hairy black hole close to its external horizon. To do so, it is convenient to define the new variables \begin{equation} \rho = r-r_+ \ , \ \ \ \ v=t-\ell^2 \int^r \frac{dr}{(r-r_+)(r-r_-)} ; \end{equation} that is, \begin{equation} t=v+\frac{\ell^2}{(r_+-r_-)}(\log (r-r_+)-\log (r-r_-)) \, . 
\end{equation} In these coordinates, the near horizon (near $\rho \simeq 0$) region of the black hole takes the form\footnote{In \cite{Cvetkovic:2018dmq} the near horizon limit of the extremal solution $r_+=r_-$ was considered. The analysis is quite different from the non-extremal case, cf. \cite{DGGP2, DGGP3}.} \begin{equation}\label{compatible} ds^2\simeq -\frac{1}{\ell^2}((r_+-r_-) \, \rho \, +\, \rho^2 )dv^2 - 2\,dv\,d\rho + (r_+^2 +2r_+ \rho )\, d\phi^2 + \, ... \end{equation} where the ellipsis stands for subleading terms of the $\rho$ expansion. Metric components (\ref{compatible}) actually obey the near horizon boundary conditions (\ref{ds2}), where the relevant metric functions are given by: \begin{equation} \kappa = \frac{(r_+ - r_-)}{2\ell^2} \ , \ \ \ \ \ \tau=-\frac{1}{\ell^2 } \ , \ \ \ \ \ \gamma = r_+ \ , \ \ \ \ \ \lambda = 2r_+ \ , \ \ \ \ \ \theta=0. \end{equation} Evaluating (\ref{QNMG}) for the above solution, we get \begin{equation}\label{WaldoLaura} Q{[\partial_v ]}=\frac{(r_+-r_-)^2}{8\ell^2 G}= TS, \end{equation} where $T$ is the Hawking temperature (\ref{LaT}) and $S$ is the entropy (\ref{LaS}) of the black hole (\ref{HBH}). We emphasize that entropy (\ref{LaS}) differs from the GR result, as the higher-curvature terms in the action imply that the area law is not necessarily obeyed for all black hole solutions of the theory. An interesting special case is the BTZ solution, which we discuss in detail in the appendix. Formula (\ref{WaldoLaura}) reproduces the black hole entropy from the near horizon perspective, even if the entropy of the hairy black hole does not depend only on the radius of the external horizon, but on the difference between the areas of both external and internal horizons. It is actually the subleading contributions in (\ref{boundaryconditions}) that carry the inner horizon dependence in $S$. 
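The identification $Q[\partial_v]=TS$ can be sketched explicitly, assuming the standard relation $T=\kappa/2\pi$ and taking for $S$ the static ($a=0$, i.e. $\eta=1$) limit of the rotating entropy (\ref{EstaS}) given below:

```latex
T \,=\, \frac{\kappa}{2\pi} \,=\, \frac{r_+-r_-}{4\pi \ell^2}\, , \qquad
S \,=\, \frac{2\pi \left(r_+-r_-\right)}{4G}
\qquad\Longrightarrow\qquad
T\,S \,=\, \frac{\left(r_+-r_-\right)^2}{8\ell^2 G} \,=\, Q[\partial_v]\, .
```

Note that the product depends on $r_+-r_-$ only, consistent with the shift symmetry $r\to r+r_0$ discussed above.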
This is a crucial difference with respect to the near horizon computation in GR, where subleading terms $\lambda $, $\tau$ and $\sigma $ do not enter the charges, cf. \cite{DGGP, DGGP2, DGGP3}; see also Appendix A. \section{Adding rotation: Stationary hairy black holes} A rotating generalization of the hairy black hole (\ref{HBH}) is given by \cite{Julio} \begin{equation} ds^{2}=-N^{2}( r)\, F\left( r\right) dt^{2}+\frac{dr^{2}% }{F\left( r\right) }+\left( r^{2}+r_{0}^{2}\right) \left( N^{\phi}\left( r\right) dt+d\phi\right) ^{2}\ ,\label{RotA} \end{equation} where $N(r)$, $N^{\phi }(r)$ and $F(r)$ are functions of the radial coordinate $r$, given by \begin{align*} F\left( r\right) & =\frac{r^{2}}{\ell^{2}}+\frac{\left( \eta+1\right) b}% {2}r-\mu\eta+\frac{b^{2}\ell^{2}\left( 1-\eta\right) ^{2}}{16}\ ,\\ N^{\phi}\left( r\right) & =\frac{8a\left( br-\mu\right) }{16r^{2}% +\left( 1-\eta\right) \ell^{2}\left( 8\mu+b^{2}\ell^{2}\left( 1-\eta\right) \right) }\ ,\\ N^{2}\left( r\right) & =\frac{\left( 4r+b\ell^{2}\left( 1-\eta\right) \right) ^{2}}{16r^{2}+\ell^{2}\left( 1-\eta\right) \left( 8\mu+b^{2}% \ell^{2}\left( 1-\eta\right) \right) }\ ,% \end{align*} and \begin{equation} r_{0}^{2} =\frac{\ell^{2}\left( 1-\eta\right) \left( 8\mu+b^{2}\ell^{2}\left( 1-\eta\right) \right) }{16}. \label{RotZ} \end{equation} Here $\eta =\sqrt{1-a^{2}/\ell^{2}}$, and $a$ is the rotation parameter. For a certain range of the parameters $\mu$, $a$ and $b$, namely $\mu >-{b^{2}\ell^{2}}/{4}$ and $|a|\leq \ell $, this solution also represents a black hole. When $a=0$, the metric reduces to the static hairy black hole (\ref{HBH}), while for $b=0$ it reduces to the stationary BTZ black hole (\ref{BTZ1})-(\ref{BTZ}). As we see, the expression for the metric of the rotating hairy black hole is notably more involved than that of the static case $a=0$. 
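The static limit quoted above can be checked term by term; setting $a=0$, so that $\eta=1$, the metric functions collapse to

```latex
% a = 0 (\eta = 1): all (1-\eta) terms drop out of the rotating solution
F(r) \,\to\, \frac{r^{2}}{\ell^{2}} + b\,r - \mu\, , \qquad
N^{\phi}(r) \,\to\, 0\, , \qquad
N^{2}(r) \,\to\, \frac{16 r^{2}}{16 r^{2}} \,=\, 1\, , \qquad
r_{0}^{2} \,\to\, 0\, ,
```

so that (\ref{RotA}) reduces to $ds^2=-F\,dt^2+F^{-1}dr^2+r^2 d\phi^2$ with $g_{tt}=-(r^2/\ell^2+br-\mu)$, i.e. the static hairy black hole (\ref{HBH}).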
It can nevertheless be seen that it is consistent with the asymptotic symmetry analysis presented in sections 5 and 6 as follows. The Ricci scalar reveals the presence of a curvature singularity since% \begin{equation} R=-\frac{6}{\ell^{2}}-\frac{2b\eta}{r-r_{s}}\ , \end{equation} where% \begin{equation} r_{s}=-\frac{b\ell^{2}\left( 1-\eta\right) }{4}\ . \end{equation} Provided $r_{+}>r_{-}>r_{s}$, there will be an event and a Cauchy horizon located at $r_{+}$ and $r_{-}$, respectively, given by% \begin{equation} r_{\pm}=-\frac{b\left( 1+\eta\right) \ell^{2}}{4}\pm\frac{\ell\sqrt{\eta\left( b^{2}\ell^{2}+4\mu\right) }}{2}\ . \end{equation} We focus on that case. The change of coordinates% \begin{equation} dt =dv-\frac{dr}{N\left( r\right) F\left( r\right) }\ ,\ \ \ \ d\phi =d\varphi+\frac{N^{\phi}\left( r\right) }{N\left( r\right)F\left(r\right) }dr-N^{\phi}\left( r_{+}\right) dv\ , \end{equation} leads to the metric% \begin{equation} ds^{2}=-N^{2}\left( r\right) F\left( r\right) dv^{2}+2N\left( r\right) drdv+\left( r^{2}+r_{0}^{2}\right) \left( d\varphi +\left( N^{\phi}\left( r\right) -N^{\phi}\left( r_{+}\right) \right) dv\right) ^{2}\ . 
\end{equation} Finally, introducing the Gaussian coordinate $\rho$ as% \begin{equation} \rho\left( r\right)=N\left(r_+\right)\left(r-r_+\right)+\frac{N'\left(r_+\right)}{2}\left(r-r_+\right)^2\ , \end{equation} suffices to recast the near horizon geometry ($r\rightarrow r_{+}$, $\rho\rightarrow 0$) in the form% \begin{eqnarray} ds^{2}&=\left( -2\kappa\rho+\tau \rho^{2}+\mathcal{O}\left( \rho^{3}\right) \right) dv^{2}+2\left( 1+\mathcal{O}\left( \rho^{2}\right) \right) dvd\rho \, +\nonumber \\ &\ \ \ \ 2\left( \theta\rho+\sigma\rho^2+\mathcal{O}\left( \rho^{3}\right) \right) dvd\varphi + \left( \gamma^{2}+\lambda\rho+\mathcal{O}\left( \rho^{2}\right) \right) d\varphi^{2}\ , \end{eqnarray} where% \begin{align*} \kappa & =\frac{\eta}{\ell }\sqrt{\frac{b^{2}\ell^{2}+4\mu}{2\left( 1+\eta\right) }} , \\ \tau & =\frac{\left( (1+\eta)b^{2}\ell^{2}+2b\ell\sqrt{\eta\left( b^{2}\ell^{2}+4\mu\right) }+4\mu\right) \left( 2\ell(\eta+1)b\sqrt{\eta\left( b^{2}\ell^{2}+4\mu\right) }-\eta \ell^{2}\left( \eta+3\right) b^{2}-8\mu\eta\right)}{\left( 1+\eta\right) \left( (1-\eta)\ell^{2}b^{2}+4\mu\right) ^{2}\ell^{2}} , % \\ \theta &=\frac{\sqrt{2(b^2\ell^2+4\mu)(1-\eta)}}{2} , \\ \gamma^{2} & =\frac{\ell^{2}}{8\eta}\left( 1+\eta\right) \left( -b^2\ell^2(1+\eta)-2b\ell\sqrt{\left( b^{2}\ell^{2}+4\mu\right) \eta}+4\mu\right),\\ \lambda & =\sqrt{\frac{\left( 1+\eta\right) }{8\eta}}\left( -b\ell\left( 1+\eta\right) +2\sqrt{\left( b^{2}\ell^{2}+4\mu\right) \eta}\right)\ell , \\ \sigma &=\frac{a\left( 2\ell (b^{2}+4\mu)^{3/2}(\eta-1)+\eta^{1/2}\left( 32\mu^{2}-\ell ^{4}(1-\eta^{2})b^{4}+4b^{2}\mu(1-\eta)\ell ^{2}\right) \right) }{2(\eta+1)(b^{2}\ell ^{2}\eta-\left( b^{2}\ell ^{2}+4\mu\right) )^{2}\eta^{1/2}% \ell ^{2}}\ . 
\end{align*} Evaluating the charge $Q{[\partial_v]}$ yields \begin{equation} Q{[\partial_v]}=\left(\frac{\kappa}{2\pi}\right)\left(\frac{\pi \ell}{4G}\sqrt{2\left(4\mu+b^2\ell^2\right)\left(1+\eta\right)}\right) =\frac{\kappa \sqrt{2} }{8 G}\sqrt{\frac{1+\eta}{\eta}}\left(r_+-r_-\right)\, , \end{equation} which is found to be \begin{equation} Q{[\partial_v]}= TS \ . \end{equation} That is, it reproduces the product of the Hawking temperature $T$ and the black hole entropy $S$. Indeed, the entropy of the rotating black hole has been computed in \cite{Julio}, where it was shown to be \begin{equation} S = \frac{2\pi \left(r_+-r_-\right)}{4G}\, \sqrt{\frac{1+\eta}{2\eta}} \, ,\label{EstaS} \end{equation} which reduces to (\ref{LaS}) when $a=0$ (i.e. $\eta =1$). In \cite{JulioYo}, expression (\ref{EstaS}) was observed to agree with the result of the Cardy formula in the dual CFT$_2$ with the correct value of the central charge, $c=3\ell /G$. \section{Adding the Chern-Simons gravitational term} As mentioned, hairy black holes (\ref{HBH}) are conformally flat and so they are solutions to NMG coupled to TMG \cite{TMG}, which is defined by adding to the gravity action (\ref{ActionNMG}) the gravitational Chern-Simons term \begin{equation} \Delta I = \frac{1}{32\pi G\, q }\int d^{3}x\,\varepsilon ^{\alpha \beta \gamma }\, \Gamma _{\alpha \sigma }^{\rho }\Big( \partial _{\beta }\Gamma _{\gamma \rho }^{\sigma }+\frac{2}{3}\, \Gamma _{\beta \eta }^{\sigma }\Gamma _{\gamma \rho }^{\eta }\Big) , \label{CCC} \end{equation} where $q$ is an arbitrary coupling constant\footnote{In the literature, this coupling is usually denoted by $\mu$, but here we prefer to call it $q$ so that it is not to be mistaken for the mass parameter $\mu $ of the black hole solution.} of mass dimension 1. The contribution of (\ref{CCC}) to the field equations is the addition of the Cotton tensor, which identically vanishes for a geometry that is conformally flat. 
However, (\ref{CCC}) yields a non-trivial contribution to the charge, changing both the mass and the entropy of the hairy black holes. \\ We can obtain the contribution to the entropy coming from the gravitational Chern-Simons term by evaluating% \begin{equation} \Delta S=-\frac{1}{8G\, q}\int_{\Sigma}\, \epsilon_{\ \mu}^{\nu}\, \Gamma_{\ \nu\rho }^{\mu}\, dx^{\rho}\ , \end{equation} on the bifurcation surface \cite{Tachikawa}. The binormal $\epsilon$ is defined in terms of the horizon generator $\xi=\partial_{t}+\Omega_{H}\partial_{\phi}$ as $\kappa\epsilon_{\mu\nu}=\nabla_{\mu}\xi_{\nu}$. The angular velocity of the hairy black hole is% \begin{equation} \Omega_{H}=\frac{1}{\ell }\sqrt{\frac{1-\eta}{1+\eta}}\ . \end{equation} Finally, the contribution to the entropy is found to be% \begin{equation} \Delta S=-\frac{\pi}{8G\, q}\sqrt{2\left( 1-\eta\right) \left( b^{2}% \ell^{2}+4\mu\right) }=-\frac{\pi }{4G\, q \ell }\sqrt{\frac{\left( 1-\eta\right) }{2\eta}}\left( r_{+}-r_{-}\right) \ . \end{equation} On the other hand, we find that the contribution of the gravitational Chern-Simons term to the charge $Q[\partial_{v}]$ in the near horizon geometry is given by \begin{equation}\label{Confi} \Delta Q[\partial_{v}]=\frac{\kappa\, \theta}{8G\, q} = \frac{\kappa }{8G\, q\ell }\sqrt{\frac{1-\eta }{2\eta }} (r_+ - r_-) = T\Delta S\, . \end{equation} Notice that the TMG contribution $\Delta S$ vanishes for static black holes ($\eta =1$). Notice also that (\ref{Confi}) includes, in particular, the result of conformal gravity, which corresponds to the limit $q \to 0$ of the formulae above. \section{Conclusions} We considered stationary black holes in AdS with softly decaying hair. These geometries appear, for example, as solutions of massive 3-dimensional gravity \cite{Julio, NMG2} and of 3-dimensional conformal gravity \cite{conformal}. 
When AdS boundary conditions that are weak enough to accommodate such solutions are considered, the asymptotic isometry group contains, in addition to local conformal symmetry, an infinite-dimensional Abelian ideal. This is a local supertranslation symmetry that acts non-trivially at the level of the asymptotic isometry but yields vanishing Noether charges and, therefore, turns out to be pure gauge. This is related to the fact that the ADM mass of the hairy black holes in AdS, in addition to the standard mass parameter ($\mu $), also depends on the gravitational hair ($b$): the supertranslation transformation at infinity acts as an angle-dependent shift in the radial direction, changing both $\mu$ and $b$ in a way such that the mass remains unchanged. Then, we turned our analysis to the black hole horizon: we studied the supertranslation symmetry that the hairy black hole geometry exhibits in its near horizon region. There, an infinite set of non-trivial supertranslation charges appears. We computed these charges explicitly and we showed that, as happens in Einstein gravity, the zero-mode of the supertranslation charge in the near horizon limit reproduces the entropy of the black hole. This is the case even when the entropy of the hairy black hole depends not only on the radius of the external event horizon but also on the radius of the internal Killing horizon. In other words, the back-reaction of the gravitational hair in the near horizon geometry implies that the entropy (\ref{LaS}) does not obey the area law: the entropy for $b\neq 0$ is actually proportional to the difference between the areas of both external and internal horizons. In contrast to Einstein gravity, in the massive theory the subleading contributions -- namely $\lambda $, $\tau $ and $\sigma $ in (\ref{boundaryconditions}) -- contribute to the Noether charges in such a way that the zero mode reproduces the correct entropy formula; cf. \cite{DGGP, DGGP2, DGGP3}. \[\] L.D. 
is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 746297. The work of G.G. is partially supported by CONICET grant PIP 1109-2017. The work of J.O. is supported by FONDECYT grant 1181047.
\section{Introduction} Investigating the origin of nitrogen in galaxies has been a major topic of research in the past few years. The reason is at least two-fold: {\it i)} There are many processes involved in the computation of the stellar yields of nitrogen and hence there are still many uncertainties present in these calculations (see Meynet \& Maeder 2002a - hereafter MM02). Meynet \& Maeder have shown that stellar rotation and mass loss can affect the predictions of the stellar yields especially for He, C, N and O. Chemical evolution models can thus be used to test and constrain the stellar yields (see Fran\c cois et al. 2004); {\it ii)} a large amount of data is available for the N/O abundance ratio in different environments, ranging from spiral galaxies (e.g. Pilyugin et al 2004 and references therein) to dwarf galaxies (see Mouhcine \& Contini 2002, Larsen et al. 2001 and references therein) and damped Lyman alpha systems, hereafter DLAs (e.g. Centurion et al. 2003, Prochaska et al. 2002, Pettini et al. 2002). In the latter case, the nitrogen evolution carries important information on the still-debated nature of these systems. It has been recently shown (Chiappini, Matteucci \& Meynet 2003, hereafter CMM03) that models of chemical evolution computed with the MM02 yields for the whole range of masses, predict a slower increase of nitrogen than what is obtained with other sets of stellar yields, with important implications for the interpretation of the DLA abundance data. Due to the slower increase of nitrogen in time, the DLA abundance patterns can be reproduced by ``bursting models'' (see also Lanfranchi \& Matteucci 2003) and in this framework, the ``low N/O'' and ``high N/O'' groups of DLAs (first identified by Prochaska et al. 2002) could be explained as systems that show differences in their star formation histories rather than an age difference. 
We were able to obtain models that show both a low log(N/O) and a low [O/Fe] (of the order of [O/Fe]$\sim$0.2-0.3 dex, in agreement with observations - Centurion et al. 2003) during almost all their evolution. Alternative interpretations (e.g. Prochaska et al. 2002; Centurion et al. 2003) of the ``low N/O'' DLAs suggested in the literature would imply [O/Fe] ratios larger than the observed ones. DLAs could also be identified with outer regions of spiral galaxies (Hou et al. 2001; Calura et al. 2003), but in this case DLAs with low log(N/O) would necessarily be quite young systems (younger than $\sim$150 Myr - see figure 11 of CMM03) and no discontinuity in the log(N/O) vs. log(O/H) diagram would be expected. However, as pointed out in CMM03, it remains to be seen to what extent the MM02 yields for $^{14}$N in the intermediate mass star range would increase once hot bottom burning (HBB) is taken into account. Although MM02 did not formally include the third dredge-up and HBB, it is worth studying the effects of their yields on chemical evolution models for the following reasons: a) The MM02 yields for nitrogen at low metallicity result from a new process whose importance for chemical evolution has yet to be studied. In the absence of a real quantitative assessment of the importance of the HBB, it is interesting to study this new process, which produces ``non-parametric'' yields, independently of HBB; and b) this is particularly justified in view of the fact that this new process gives primary nitrogen yields at low metallicity not very different from those obtained from parametric studies such as van den Hoek and Groenewegen (1997 - hereafter vdHG97). This calls the importance of the HBB into question\footnote{Marigo (2003) showed that variable molecular opacities may decrease the efficiency of HBB - or even prevent it in some cases - especially in the more massive AGB stars.}. 
Only by studying the effects separately will it be possible to understand the different consequences of the two processes. In the massive range, the yields of MM02 predict some primary nitrogen production \footnote{We call attention to the fact that the MM02 yields for helium are currently the only ones to ensure a good agreement between chemical evolution models for the Milky Way (hereafter MW) and the solar helium abundance - see CMM03. This is essentially due to mass loss in massive stars. In the massive range, the yields computed by MM02 for He, C, N and O would be essentially unchanged by explosive nucleosynthesis and can thus be considered robust calculations which take into account important physics (i.e. rotation and mass loss - see Hirschi et al. 2004 for a detailed description of these models for massive stars).}. In CMM03 we showed that models for the MW computed with this new set of yields show a plateau in log(N/O), due to massive stars with initial rotational velocities of 300 km sec$^{-1}$, at log(N/O) $\sim-4$. This value is below the value of $-$2.2 dex observed in some DLAs and hence we suggested that in these systems both massive and intermediate mass stars would be responsible for the nitrogen enrichment (in agreement with the conclusions of Chiappini, Romano and Matteucci 2003 and Henry et al. 2000). This is instead at variance with recent claims that massive stars are the only ones to enrich systems that show a log(N/O)$\sim-$2.2. However, one should keep in mind that stellar evolution calculations for N and O in massive stars depend strongly on the adopted rotational velocities and mass loss rates, respectively. More stringent constraints on nitrogen nucleosynthesis come from the study of the nitrogen abundances in stars in the MW since they represent a true evolutionary sequence, where the stars with lower metallicity are the oldest ones (Matteucci 1986). 
Moreover, the very metal-poor halo stars play a fundamental role since, at metallicities below [Fe/H] = $-$3, only Type II supernovae have had time to contribute to the interstellar medium enrichment from which these stars formed, thus offering a way to constrain the nitrogen production in massive stars at low metallicities (the same is true for other elements, as shown by Fran\c cois et al. 2004). On the other hand, an important constraint on the nitrogen production in intermediate mass stars (and thus also on the HBB) is the variation of the N/O abundance ratio with galactocentric distance. As shown by Diaz \& Tosi (1986), in spiral galaxies, the steepness of the abundance gradient of N/O decreases as the primary nitrogen production in intermediate mass stars increases (see also Chiappini, Romano \& Matteucci 2003). When we published our last two papers on the evolution of CNO in galaxies (Chiappini, Romano \& Matteucci 2003 and CMM03), no conclusive data were available for nitrogen in metal-poor halo stars. This situation has now greatly changed. New data on nitrogen abundances in metal-poor stars (by Spite et al. 2005 and Israelian et al. 2004) show a quite surprising result: a high N/O ratio suggestive of high levels of production of primary nitrogen in massive stars. Moreover, the N/O abundance ratios in metal-poor stars show a large scatter (roughly 1 dex, much larger than their quoted error bars), although none of the stars measured so far has N/O ratios as low as the ones observed in DLAs. In the present paper we will study the implications of these new data sets for our understanding of the nitrogen enrichment of our galaxy. In Sec. 2 we briefly present our model for the MW and the adopted stellar yields. Section 3 is devoted to the comparison between our models for the MW and the new data now available for the solar vicinity. We will show that currently there is no set of stellar yields able to explain the very metal-poor data of Spite et al. (2005). 
Invoking the so-called population III stellar yields available in the literature does not solve the problem, as it still leads to inconsistencies if one considers other abundance ratios, for instance C/Fe. This section also includes, for the first time, our predictions for the abundance gradients of N/O and C/O once the MM02 stellar yields are adopted. We will show that the N/O abundance gradient represents a powerful tool to assess the importance of HBB in intermediate mass stars. In Sec. 4 a discussion is presented where we point out ways to account for these new observations and check their implications for our previous conclusions on the nature of DLAs. \section{Stellar yields and chemical evolution model} In the present work we adopt the stellar yields described in MM02 and CMM03\footnote{For type Ia SNe we adopted the stellar yields of model W7 of Thielemann et al. (1993).}. In Fig. 1 we plot the nitrogen yields of MM02. Filled squares show stellar yields resulting from models with rotation (V$_{\rm rot}=$300 km/s), while open symbols stand for models computed with V$_{\rm rot}=$0 km/s. MM02 computed stellar yields for the following metallicities: Z=0.020 (solid lines), Z=0.004 (short-dashed lines) and Z=0.00001 (long-dashed lines). The asterisks connected by the long-dashed line show the stellar yields we adopted for metallicities Z$<$10$^{-5}$ in our heuristic model (see Section 3). Two important findings can be seen in Fig. 1: {\it i)} in the lowest metallicity case, rotation increases the nitrogen production in stars of all masses and {\it ii)} the increase of nitrogen, at Z=0.00001, is especially important in low and intermediate mass stars (at least for the case of V$_{\rm rot}=$300 km/s). However, for other metallicities the $^{14}$N yields of MM02 for the intermediate mass stars are lower than the ones of vdHG97, as MM02 did not formally include the HBB (see CMM03, their Fig. 2). 
\begin{figure} \centering \includegraphics[width=8cm,angle=0]{2292fig1.eps} \caption{MM02 stellar yields for $^{14}$N, for the whole stellar mass range, for different metallicities. The yields of MM02 for stellar models where rotation is not taken into account are shown as open squares. Filled squares stand for models with rotation. Stellar yields are shown for 3 different values of metallicities (solid lines: solar, dashed line: Z=0.004 and long dashed line: Z=0.00001). The asterisks connected by the long-dashed line show the stellar yields adopted only for the lowest metallicity case in our heuristic model (see text).} \end{figure} The adopted chemical evolution model for the MW is the so-called ``two-infall model'' of Chiappini et al. (1997, 2001), where a detailed description can be found. The fundamental idea of this model is that the formation of the MW occurred in two different infall episodes, one forming the halo and part of the thick disk on a relatively short timescale and another one forming the thin-disk on a longer timescale. In this model a threshold gas density is assumed and, as a consequence, the star formation rate becomes zero every time the gas density drops below the threshold value. The two-infall approach, combined with such a threshold, leads to a gap in the star formation before the formation of the thin-disk. During the ``gap'' in the star formation only elements produced by type Ia SNe and low and intermediate mass stars (LIMS), born before the ``gap'', are restored into the ISM. As a consequence this model predicts an increase in the abundance ratios of elements restored on long-timescales (e.g. Fe or C) over $\alpha$-elements (produced mainly by massive short-lived stars) around a metallicity of [Fe/H]$\sim -$0.6 dex (which corresponds to the time of the halt in the SFR, which we predict to occur around 1 Gyr after the start of the halo phase - see Chiappini et al. 1997 for details). 
A star formation halt between the formation of the halo and thin disk is suggested by observations (e.g. Gratton et al. 1996, 2000, 2003; Fuhrmann 1998, 2004). The required amount of infall seems to agree well with current estimates and is supported by recent observations both in our galaxy and in M31 (Sembach et al. 2004 and Thilker et al. 2004). \section{Results} \subsection{The solar vicinity} We will concentrate the following discussion on the log(N/O) vs. log(O/H) diagram, instead of the usual [N/Fe] vs. [Fe/H] one for two main reasons. The first one is that since MM02 did not compute stellar yields for Fe it is more consistent to compare N/O abundance data with our theoretical predictions. The second reason is that in the case of the sample by Israelian et al. (2004) the uncertainties in N and O should cancel out once one considers the N/O ratio which is thus less prone to observational uncertainties compared to the N/Fe abundance ratios. \begin{figure} \centering \includegraphics[width=8cm,angle=0]{2292fig2.eps} \caption{Solar vicinity diagram log(N/O) vs. log(O/H)+12. The data points are from Israelian et al. 2004 (large squares), Spite et al. 2005 (asterisks). Also shown is the very metal-poor star found by Christlieb et al. (2004). Solid curves show the prediction of MW models computed with vdHG+WW yields (thin solid line) and MM02 yields (thick solid line). The latter flattens for log(O/H)+12 $<$ 6.6 due to the contribution by massive stars to the nitrogen production at low metallicities. The dotted curve shows a model computed according to the suggestion of Matteucci (1986), where all massive stars, in all metallicities, contribute a fixed amount of primary nitrogen of 0.065 M$_{\odot}$. The dashed line shows the prescriptions of our heuristic model computed with MM02 yields but assuming that massive stars with metallicities less or equal to 0.00001 produce much more nitrogen than the quantities computed by MM02, as shown in Fig. 
1.} \end{figure} One of the main assumptions when comparing chemical evolution predictions with abundance data is that they represent the pristine abundances from the ISM from which the stars formed. This means that we should avoid using objects that could have undergone mixing processes. Israelian et al. (2004) published homogeneous N/O abundance ratios for a sample of 31 unevolved dwarf metal-poor stars (shown in Fig. 2 as open squares). Spite et al. (2005) obtained nitrogen abundances for a sample of stars of even lower metallicities. Because in this case the abundances were measured in giants, the authors also measured the surface abundance of lithium as a diagnostic for the dredge-up of CNO-processed material to the surface. They were in principle able to select a subsample of ``unmixed'' stars (plotted as asterisks in Fig. 2). In reality this sample should be seen as an upper limit for N/O as some mixing could still have taken place. It should also be noticed that although their abundance sample is unique and extends to metallicities never probed before (especially for N), the absolute value of their data points can still be affected by {\it i)} the 3D corrections applied to their oxygen values and {\it ii)} the fact that the nitrogen abundances were derived from the NH band for most of the stars. These abundances show a systematic shift of $+$0.4 dex with respect to abundances obtained from measurements of the CN band (they had both measurements for 10 stars - see Spite et al. 2005 for details). Fig. 2 also shows the N/O solar ratio (the solar values were taken from Allende Prieto et al. 2001 for oxygen and Holweger 2001 for nitrogen). Also shown in Fig. 2 is the very metal-poor star (giant) of Christlieb et al. (2004). It can be seen that this star has an N/O abundance ratio which is clearly larger than the typical ratios of the other two data samples.
However, in this case a self-enriched scenario or a contamination by a binary companion star still cannot be excluded (see Christlieb et al. 2004 for a detailed discussion). In the same figure we show our model predictions for different assumptions of stellar yields. The solid curves are the same models shown in CMM03 (their figure 7): the thin line represents a model computed with the stellar yields of vdHG97 and Woosley \& Weaver (1995 - hereafter WW95), the thick line shows a model computed with MM02 stellar yields. When comparing these two models two things can be noticed: a) as WW95 do not produce primary nitrogen in massive stars the thin curve computed with their stellar prescriptions does not flatten at low metallicities (contrary to what happens to the thick curve because in this case, according to the prescriptions of MM02, the massive stars produce some primary nitrogen) and b) the increase of the N/O ratio as a function of metallicity in the model represented by the thin curve is faster than the one shown by the thick curve. This is mainly due to the large amount of $^{14}$N produced during the HBB in intermediate mass stars according to the calculations of vdHG97. Before discussing the other two models plotted in Fig. 2, notice that the new points of Israelian et al. (2004) are not far from the thick solid curve computed with the MM02 stellar yields, especially for log(O/H)+12 $>$ 8.0. As discussed in the previous section, because MM02 did not formally include HBB, one would expect the curve to lie much below the data points. The fact that the thick curve is close to the abundance ratios measured by Israelian et al. (2004) in unevolved stars suggests that HBB is less efficient than in vdHG97 models\footnote{Here we adopted their standard models and took their tables where the mass loss parameter varies with metallicity - see CMM03 for details. vdHG97 also computed another set of models where less HBB was assumed.
We also computed a chemical evolution model where the latter stellar prescriptions were adopted. In Fig. 2 this model would fall in between the two solid curves discussed here - it is not shown to make the figure less crowded.}. In fact, the thin curve lies above most of the Israelian et al. (2004) data points\footnote{ Romano \& Matteucci (2003) have shown that by adopting the vdHG97 set of stellar yields with less HBB it is possible to reproduce the trend of $^{12}$C/$^{13}$C, which decreases in time in the solar neighbourhood.}. The dotted curve was computed according to a suggestion made by Matteucci (1986) that all massive stars should produce around 0.065 M$_{\odot}$ of primary nitrogen (it is the same model shown by the thick solid curve in Fig. 2 except that in this case we assume that all massive stars, for all metallicities, contribute a fixed amount of 0.065 M$_{\odot}$ of N). This suggestion was based on the little data available at that time, which seemed to suggest a flat [N/Fe] ratio at low metallicities. As can be seen, the dotted line can reproduce the locus of the Spite et al. (2005) data sample but tends to overproduce nitrogen at higher metallicities (this curve is above most of the Israelian et al. data, even though it can reproduce the solar N/O abundance ratio). The dashed line shows what we call our heuristic model. This model is the same as the thick solid line except that for the metallicities Z $<$ 0.00001 we increased the yields of nitrogen given by MM02 for massive stars in the following way: we added 0.15 M$_{\odot}$ of nitrogen to the table of MM02 for Z=0.00001 (which translates into a factor of 200 increase in $^{14}$N for a 60 M$_{\odot}$ star and around a factor of 40 for a 9 M$_{\odot}$ star). This is shown in Fig. 1 by the asterisks connected by a long-dashed line.
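Concretely, the modification amounts to editing a single row of the $^{14}$N yield table before interpolating in metallicity. A sketch of the bookkeeping follows; the unboosted yield value is a placeholder, chosen only so that a 0.15 M$_{\odot}$ boost reproduces the quoted factor of $\sim$200 for a 60 M$_{\odot}$ star, and is not the published MM02 value.

```python
# Linear interpolation of a 14N yield table in metallicity, with the
# heuristic boost added at the lowest tabulated grid point.  The grid
# mimics the MM02 metallicities; the yield values themselves are invented
# placeholders, not the published table.
GRID_Z = [0.0, 1e-5, 4e-3, 2e-2]                    # metallicity grid
YIELD_N14 = {60.0: [0.0, 7.5e-4, 1.0e-2, 2.0e-2]}   # per stellar mass [Msun]

def n14_yield(mass, z, boost=0.0):
    """14N yield at metallicity z; `boost` [Msun] is added to the
    Z=1e-5 grid point (the heuristic model of the text uses 0.15)."""
    table = list(YIELD_N14[mass])
    table[1] += boost
    if z <= GRID_Z[0]:
        return table[0]
    if z >= GRID_Z[-1]:
        return table[-1]
    for (z0, y0), (z1, y1) in zip(zip(GRID_Z, table),
                                  zip(GRID_Z[1:], table[1:])):
        if z0 <= z <= z1:
            return y0 + (y1 - y0) * (z - z0) / (z1 - z0)

standard = n14_yield(60.0, 1e-5)
heuristic = n14_yield(60.0, 1e-5, boost=0.15)
factor = heuristic / standard   # ~200 for the 60 Msun star, as quoted
```

Because the boost enters a grid point, it automatically propagates to every metallicity between Z=0 and Z=0.00001 through the interpolation, raising the low-Z nitrogen production while leaving the higher-metallicity entries untouched.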
As our code then interpolates (linearly) the stellar yields for metallicities between Z=0.0 and Z=0.00001, this model produces more nitrogen at the beginning of galaxy evolution, leading to large N/O ratios at low metallicities but not changing its behavior for metallicities closer to solar (in fact the dashed curve coincides with the thick curve for oxygen abundances above 8.0). The physical motivation for this heuristic model would be an increase of the rotational velocity in very metal-poor stars. As shown by MM02, the nitrogen yields increase with increasing rotational velocities. It might be that the initial distribution of the rotational velocities is different at different metallicities. There are some indirect indications that, at low metallicities, there are more fast rotators. For instance, the observed fraction of Be stars (which, being very fast rotators, are near the break-up limit) appears to be higher in the SMC than in the MW (Maeder et al. 1999). Keller (2004) found that the rotation velocities of early B-type stars in the LMC are higher than the rotation velocities of comparable stars in the MW. Part of the reason for this is that a zero metallicity star having the same amount of angular momentum as a solar metallicity star rotates much faster due to its greater compactness (see Meynet \& Maeder 2002b). Thus it might be that only stars at low metallicity rotate sufficiently fast to enable massive stars to contribute large amounts of nitrogen. Whether these suggestions are physically plausible remains to be assessed by future stellar evolution models, including rotation and mass loss. If the nitrogen production in very metal-poor massive stars depends strongly on the rotational velocity of the star, this could explain the large scatter observed in N/O at low metallicities.
Moreover, the scatter could be related to the distribution of the stellar rotational velocities as a function of metallicity (being more biased to larger values as the metallicity decreases). Clearly, the above suggestion needs to be confirmed but it can in principle be tested by observations. \subsection{Is there an alternative explanation to rotation?} An alternative way to explain the high N/Fe ratios at low metallicities is to assume that the first stars to enrich the ISM were population III stars (hereafter PopIII). Recently, Akerman et al. (2004) found an upturn in C/O at low O/H, and suggested that this could also be explained by adopting PopIII stellar yields (see figure 8 of Akerman et al. 2004). In particular, they adopted stellar yields computed by Chieffi \& Limongi (2002) for metal-free supernovae which, according to the latter authors, should be C-rich\footnote{Notice that Chieffi \& Limongi (2002) do not include rotation in their computations.} and assumed that PopIII stars were born with a ``top heavy'' IMF. Soon afterwards, Spite et al. (2005) confirmed that the upturn in C/O at low O/H values found by Akerman et al. (2004) extends to lower metallicities (see Fig. 3, where the open squares show the abundances measured by Akerman et al. 2004 and the asterisks show the Spite et al. 2005 data). In this section we will check if current stellar yield calculations for the so-called PopIII stars are able to provide a good fit to the log(N/O) vs. log(O/H) diagram and, at the same time, still explain the almost flat behaviour of [C/Fe] found by Spite et al. (2005). The only set of stellar yields for PopIII able to produce a high enough N/Fe at low metallicities is the one of Chieffi \& Limongi (2002, 2004) (for a more detailed discussion on the role of PopIII stars in the ISM enrichment of several other elements and models adopting different prescriptions for PopIII stars, see Ballero et al. 2005).
Here we find that although the latter stellar yields can explain the C/O vs. O/H upturn at low metallicities they fail to reproduce the almost flat [C/Fe] abundance ratios found to extend down to the low metallicities sampled by Spite et al. (2005). This can be clearly seen in Figures 3 and 4 where the dot-dashed lines show a model similar to the one of Akerman et al. (2004), i.e. a model where the prescriptions for PopIII stellar yields of Chieffi \& Limongi (2002, 2004) are adopted for $Z <$10$^{-6}$. The latter value corresponds to the threshold metallicity ($\simeq$ 10$^{-4}$Z$_{\odot}$) below which the IMF should be ``top-heavy'', as suggested in the literature for the so-called PopIII stars (where PopIII stands not only for ``zero metallicity stars'' but also for stars born with a different IMF, where low and intermediate mass stars did not form - see Ballero et al. 2005 for details). Therefore, this model was computed following the Akerman et al. (2004) prescriptions also for the IMF, i.e. a ``truncated'' Scalo (1986) IMF, with M$_{low}$ $=$ 10 M$_{\odot}$ for $Z <$10$^{-6}$ and a normal Scalo (1986) IMF for higher metallicities. The dot-dashed model can well explain the C/O vs. O/H observations but leads to C/Fe ratios that are above the observed values (see dot-dashed curve in Fig. 4, upper panel - the data are from Cayrel et al. 2004) and is not able to produce the amount of N required at low metallicities to explain the new data points of Spite et al. (2005 - see dot-dashed curve in Fig. 4, bottom panel). \begin{figure} \centering \includegraphics[width=8cm,angle=0]{2292fig3.eps} \caption{Log(C/O) vs. log(O/H) diagram. The data are from Spite et al. (2005 - asterisks), Israelian et al. (2004 - squares), Nissen (2004 - filled pentagons). The different curves show our model predictions computed with different stellar yields as follows: a) dot-dashed curve - a model computed according to the prescriptions of Akerman et al.
(2004); b) thin solid curve - vdHG97 and WW95, where the latter refer to their solar tables and c) dashed line - vdHG97 and WW95 (where in this case the oxygen as a function of metallicity was adopted as suggested by Fran\c cois et al. 2004). This figure shows an alternative to the model suggested by Akerman et al. (2004) to obtain an upturn of C/O at low metallicities.} \end{figure} \begin{figure} \centering \includegraphics[width=8cm,angle=0]{2292fig4.eps} \caption{The data points are from Cayrel et al. (2004), Spite et al. (2005) (asterisks) and Israelian et al. (2004) (squares). The dashed line shows the predictions of our ``heuristic model''. The dot-dashed lines refer to models computed with the Chieffi \& Limongi (2002, 2004) yields for metallicities below 10$^{-6}$ and a top-heavy IMF (see text). In the lower panel a model that assumes a constant N production in massive stars of all metallicities (Matteucci 1986) is also shown. In this figure the models are normalized to the solar N and Fe of Holweger (2001) and to the C of Allende Prieto et al. (2001).} \end{figure} Also shown in Fig. 4 is our heuristic model (dashed line). This model can well explain the C/Fe ratios found in very metal-poor stars by Cayrel et al. (2004) (as expected since our heuristic model is identical to the CMM03 model as far as C and Fe are concerned) and also provides a good agreement with the N/Fe abundance ratios of Spite et al. (2005). However, in this case our heuristic model predicts a ``valley'' at intermediate metallicities (although less pronounced than the one shown by the dot-dashed model where the PopIII contribution was taken into account), whereas the data show a flatter behavior of N/Fe for the whole metallicity range. An easy way to obtain a flatter curve is to assume that the low and intermediate mass stars at low metallicities should also have higher nitrogen yields than the ones computed for Z$=$10$^{-5}$.
It is not excluded that intermediate mass stars of that same low metallicity could also produce large amounts of nitrogen if this production is linked to high rotational velocities (as discussed in Sect. 3.1). However, in this paper we wanted to change as little as possible the already-existing stellar yields of MM02 and thus we increased the nitrogen yields only for massive stars. Our main goal here is to explain the very metal-poor data of Spite et al. (2005) and for this we need massive stars because intermediate mass stars would not have had time to contribute to the ISM enrichment at such low metallicities, given their longer lifetimes. In Fig. 4 we also show a model in which we assumed a constant N production in massive stars of all metallicities (Matteucci 1986 - dotted curve, also shown in Fig. 2). In this case a flat N/Fe vs. Fe/H is also obtained (see Ballero et al. 2005). It is beyond the scope of the present paper to discuss in detail the problems related to carbon nucleosynthesis (for that see our previous papers CMM03 and Chiappini, Romano \& Matteucci 2003), but Fig. 3 illustrates that, for the low metallicity end, an upturn in C/O can also be obtained without the need to invoke PopIII stellar yields/IMF. In this figure the solid thin curve shows model 7 of Chiappini, Romano \& Matteucci (2003). The dashed curve shows the same model but adopting the WW95 stellar yields of oxygen as a function of metallicity (as suggested by Fran\c cois et al. 2004 and Goswami \& Prantzos 2000). In this case a C/O upturn can be obtained at low metallicities. This is because WW95 predict a decrease in oxygen rather than an increase in carbon for Z=0. As a consequence, such a model still fits the [C/Fe] abundance ratios as a function of metallicity \footnote{Some mechanism able to increase $^{14}$N at low metallicities is still needed in this case as models computed with WW95 stellar yields are not able to fit the new Spite et al.
(2005) data for N/O, as shown by the thin solid curve in Fig. 2. WW95 also did not include rotation in their calculations.}. Notice that in this case the model computed with the stellar yields of MM02 cannot fit the C/O observations either, although they provide a good fit for [C/Fe] (see CMM03, their Fig. 6). The above results illustrate the importance of testing PopIII stellar yield predictions simultaneously on different abundance ratios (see Ballero et al. 2005 for a discussion of several other abundance ratios). In summary, our results suggest that a large $^{14}$N yield in massive stars is required to fit the new abundance data for very low metallicities. Rotation seems to be the most promising way to explain the new data and its scatter, whereas current PopIII stellar yields able to produce a high N/Fe at low metallicities tend to overproduce carbon, at variance with the flat C/Fe vs. Fe/H observed. Current stellar models that take into account rotation (MM02) do not provide the required amount of nitrogen to fit the data. However, it is worth noticing that MM02 computed stellar yields down to Z$=10^{-5}$. It remains to be seen whether computations at even lower metallicities will be able to produce more N, at the levels suggested by our results. Another alternative could be an enhanced nitrogen production in massive close binaries (see Wellstein et al. 2001, Langer 2003 and references therein). However, if future stellar evolution models for very low initial metallicities are able to produce large amounts of nitrogen, it should still be checked to what extent C would also be produced. A large production of C at low metallicities would make it difficult to explain the flat behaviour observed in [C/Fe] from solar metallicities to [Fe/H] as low as $-$5.
The results discussed above suggest that a better agreement in all plots would be obtained if the low-Z calculations were able to simultaneously increase the stellar yields of $^{14}$N, keeping C almost unchanged and decreasing the stellar yields of oxygen. \subsection{Present abundance gradients} In this last section we check the effect of the MM02 stellar yields on the abundance gradients predicted for our galaxy. As shown by Prantzos (2003), not much difference is seen when plotting the C/H, N/H and O/H abundances as a function of the galactocentric distance for models computed with WW95+vdHG97 or MM02 stellar yields. We confirm this result. However, as shown by Chiappini, Romano \& Matteucci (2003) this is not the case for the N/O, C/O and C/N abundance ratios. Figure 5 shows our predictions for the variation of these abundance ratios as a function of galactocentric distance obtained with MM02 stellar yields (thick lines) compared with model 7 of Chiappini, Romano \& Matteucci (2003 - thin line), which was computed with WW95+vdHG97 stellar yields. Important differences can be seen when the different sets of yields are adopted. \begin{figure} \centering \includegraphics[width=8cm,angle=0]{2292fig5.eps} \caption{Abundance gradients of C/O, N/O and C/N predicted by models adopting vdHG+WW yields (as model 7 of Chiappini, Romano \& Matteucci 2003 - thin solid line) and the same models computed with MM02 yields (as described in CMM03 - thick solid line). The dashed line barely seen in the bottom diagram corresponds to the predictions of our heuristic model (see text). In the middle panel both models overlap. Therefore, it is clear that the dominant factor in the N/O gradient in the MW is the nitrogen production in LIMS and not the primary nitrogen in massive stars. For the abundance data see Chiappini, Romano \& Matteucci (2003) and references therein. Here we added the recent abundance data of Daflon \& Cunha (2004 - large asterisks).} \end{figure} In Fig.
5 we see a large scatter in the data points, especially for C/O abundance ratios (upper panel). Moreover, in the upper panel, where the open symbols stand for HII regions and the filled symbols and asterisks represent B stars, it can be seen that the latter tend to show systematically lower C/O abundance ratios. More data is necessary to use these abundance gradients as tools to better constrain the carbon and nitrogen nucleosynthesis (especially in intermediate-mass stars). We recall (see Chiappini, Romano \& Matteucci 2003) that the C/O predictions shown by the thin curve were computed with vdHG97 yields for the case where the mass loss parameter increases with metallicity. This leads to a larger C production at lower metallicities\footnote{Lower mass loss rates lead to longer stellar lifetimes. As a consequence the star undergoes more dredge up episodes thus increasing the amount of C brought to the surface and later ejected into the interstellar medium.}. As a consequence this model predicts an increase of the C/O abundance ratio towards the outer parts of the disk, where the contribution of low metallicity stars dominates. The model computed with MM02 stellar yields leads to a flat C/O abundance ratio along the disk (see thick solid curve in the upper panel of Fig. 5). This is because, due to the lack of the third dredge up, intermediate-mass stars contribute in a negligible way to the abundance gradients and what is seen in this case is essentially the result of the enrichment due to massive stars. This also explains why the absolute C/O abundance ratios in this case are systematically lower than in the thin-curve model (see Henry 2004 and CMM03). The middle panel of Fig. 5 shows a very interesting result: a model computed with vdHG97+WW95 stellar yields (thin curve) leads to a flat N/O gradient, whereas the model computed with MM02 (thick curve) leads to a decrease of the N/O abundance ratio as a function of the galactocentric distance. 
This is a very important result. As discussed in Chiappini, Romano and Matteucci (2003), other galaxies, like M101, clearly show a negative N/O gradient. In that paper we showed that if the vdHG97 stellar yields were adopted it was impossible to obtain a negative gradient for M101 and attributed this to the fact that vdHG97 predict too much nitrogen in intermediate-mass stars (due to a very efficient HBB). The fact that the curve computed with the MM02 stellar yields leads to a negative gradient for N/O in the MW (still consistent with the data\footnote{Optical data suggest a flat N/O abundance gradient for the MW, whereas infrared data suggest a negative one (Simpson et al. 1995). Moreover, the recent data by Daflon \& Cunha (2004) also show a gradient in N/O very similar to the one obtained by our model once the MM02 stellar yields are adopted (compare asterisks and thick line shown in the middle panel of Fig. 5).}) again suggests that the quantity of nitrogen ``missing'' in their calculations, due to the fact that these authors do not include the HBB, should be small. Negative abundance gradients for N/O have been observed in many other spiral galaxies as shown recently by a compilation of more than 1000 published spectra of HII regions in spiral galaxies by Pilyugin et al. (2004). Previous determinations of O/H abundance gradients (e.g. Diaz et al. 1991) in galaxies could have been overestimated in inner disks (see Garnett et al. 2004), which would lead to flatter N/O abundance gradients. In the middle and lower panels of Fig. 5 we also plotted our heuristic model (which is essentially like the thick curve but where we increased the yields of $^{14}$N in massive stars at low metallicities - see previous Section). This model (dashed curve) can be barely seen as it almost overlaps with the thick line model. 
This shows that the abundance gradients in the MW depend on the stellar yields in intermediate mass stars (as the metallicities do not reach the low values seen in Fig. 2 or 3 even in the outermost parts of the galactic disk - see also Diaz \& Tosi 1986). \section{Discussion and conclusions} In this paper we computed chemical evolution models for the MW aimed at explaining the new nitrogen abundances measured recently in halo stars (Spite et al. 2005 and Israelian et al. 2004). In particular, we computed what we call our heuristic model for the MW where nitrogen stellar yields of massive stars were increased only for the lowest metallicity with respect to the ones published by MM02. Our main conclusions are: \begin{itemize} \item A mechanism able to produce more $^{14}$N in massive stars at low metallicities relative to the existing stellar yields is necessary in order to explain the new data. If this large nitrogen production is linked to the fact that, at low metallicities, stars should in principle rotate faster (as discussed by Meynet \& Maeder 2002b) it would also offer a way to explain the scatter in N/O measured at these metallicities. \item To also reproduce the observed abundances of C/O and C/Fe in Galactic halo stars, it is important that the production of primary nitrogen in massive stars at metallicities below Z$=$10$^{-5}$ is accompanied by a decrease in oxygen and almost no change in carbon. Whether these suggestions are physically plausible is still to be assessed by future stellar evolution models, including rotation and mass loss. \item Rotation in intermediate mass stars is also able to produce primary nitrogen. We show that even if MM02 did not formally include the HBB, models computed with their stellar yields are not far from the abundance data in the solar vicinity (in the metallicity range where the IMS are supposed to contribute) and are still compatible with the abundance gradient for N/O along the Galactic disk. 
Although the data for the MW are not yet conclusive about the existence of an N/O abundance gradient, abundance gradients are clearly observed in other spiral galaxies. The existence of abundance gradients of N/O in spiral galaxies imposes limits on the efficiency of HBB since for high efficiencies the gradients would vanish (see also Chiappini, Romano \& Matteucci 2003). \end{itemize} If the new case presented here (shown by the dashed curve in Fig. 2) is accurate, then it might be that only stars at such low metallicities rotate sufficiently fast to enable massive stars to contribute large amounts of nitrogen. If this is the case, our interpretation of the two DLA groups observed in the N/O vs. O/H diagram as being the result of different star formation histories rather than an age difference (given in CMM03) would still be possible: it could be that in DLAs the ISM was never as metal poor as the one from which the halo stars studied by Spite et al. (2005) formed. In fact, DLAs show metallicities higher than [Fe/H] $\simeq -$2.5. This could happen if, for instance, the ISM in DLAs suffered a pre-enrichment phase before the start of star formation. This is easier to envisage in the case of outer spiral disks as progenitors of DLAs. As shown by Chiappini et al. (2001), the outer parts of spiral disks could have been pre-enriched by halo/thick disk gas. If this is the case, the large nitrogen production seen in halo stars would not necessarily have taken place in DLAs. In other words, in DLAs very fast rotating massive stars probably never existed and this explains why these systems still show the lowest N/O ever measured. Although the data analyzed here are the best currently available, there is still the possibility that the so-called ``unmixed stars'' receive a minor contribution from CNO-processed material and that the nitrogen abundance could have been overestimated. \begin{acknowledgements} We would like to thank F. Calura, S. Recchi, D. Romano and G.
Meynet for their suggestions on an earlier draft. C.C. and F.M. acknowledge financial support from the Italian MIUR (Ministry for University and Scientific Research) through COFIN 2003, prot. 2003028039. We also thank the referee, Dr. Argast, for his insightful comments that helped to improve this work. \end{acknowledgements}
\section{Introduction} In the past several years one of the main questions in the research activity on high-$T_c$ superconductors has been the identification of the order parameter symmetry \cite{scalapino,vanh,woll,tsuei,kirtley}. The most plausible scenario is that the pairing state is an admixture of a dominant $d$-wave with some small $s$-wave component. This fact is a direct consequence of the orthorhombic distortion of the systems, which makes the $d$-wave and $s$-wave components indistinguishable (they transform according to the identity representation of the group). There is a basic difference in the physics if one takes into account the phase difference between the two parts of the order parameter. The mixing due to orthorhombicity predicts a $d+s$ or equivalently $d-s$ order parameter. This has been analyzed within the Ginzburg-Landau framework, valid close to T$_c$ \cite{betouras}. Experimental observation of this possibility has been clearly realized in photoemission experiments \cite{onellion} and in c-axis tunneling \cite{kouzn}. In addition to the above work, calculations based on BCS weak-coupling theory \cite{musaelian,ren} predict that a mixed symmetry is realized in a certain range of interaction. This state breaks the time-reversal symmetry ${\cal T}$. Such a state is realized in bulk calculations only in the absence of any orthorhombic distortion (the Fermi surface is either circular or tetragonal in the particular examples), which favors a phase difference of $\pi/2$ between the two components as opposed to $\pi$ in its presence. The situation becomes more complicated if we consider surface effects. The observation of fractional vortices on the grain boundary in YBa$_2$Cu$_3$O$_7$ by Kirtley $et$ $al.$ \cite{kirtley2} may indicate a possible violation of the time-reversal symmetry near the grain boundary (because the boundary breaks the bulk orthorhombic symmetry).
Therefore it is interesting to study this symmetry further in the case of interfaces. In the present paper we study the static properties of a one-dimensional junction which contains a twin boundary, where the pair transfer integral between the two superconductors has an extra relative phase in each twin. The maximum current $I_c$ that a junction can carry versus the external magnetic field $H$ in a direction parallel to the plane of the junction is calculated by solving numerically the Sine-Gordon equation. The stability of fractional vortices $f_v$ or antivortices $f_{av}$, which are spontaneously formed as a consequence of the symmetry, is examined in the absence of current and magnetic field for different lengths and relative phases. In the ${\cal T}$-violated state the magnetic interference pattern, as obtained by Zhu $et$ $al.$ \cite{zhu} in the short-junction limit, is asymmetric. They conclude that for a long junction the magnetically modulated critical current is basically identical to that of the conventional 0-0 junction due to the formation of the spontaneous vortex near the center of the junction. Our exact numerical calculations show that there is a ``dip'' near the center of the diffraction patterns even for junctions as long as 10$\lambda_J$. The rest of the paper is organized as follows. In section II we discuss the Josephson effect for a mixed wave symmetry. In section III we present the results for the magnetic flux and the interference pattern. Finally, a summary and discussion are presented in the last section. \section{Josephson effect for a mixed wave symmetry} We discuss the Josephson coupling at the interface between two superconductors ($A$ and $B$), both with a two-component order parameter $(n_1^{A(B)},n_2^{A(B)})$.
We can think of the interface as a Josephson junction, so the Josephson current-phase relation is \cite{bailey} \begin{equation} J=\sum_{i,j=1}^2 J_{cij}\sin(\phi_i^B-\phi_j^A), \end{equation} where $J_{cij}$ is the coupling between the component $n_i^B$ on side $B$ and $n_j^A$ on side $A$ [$n_j^{\mu} = |n_j^{\mu}|\exp(i\phi_j^{\mu})$]. We consider some special cases. ($i$) For $d$-wave symmetry one component of the order parameter vanishes at the interface ($n_2=0$). The Josephson current density becomes $J=|J_{c11}|\sin(\phi+\pi)$ with $J_{c11}<0$. ($ii$) For $d+s$-wave we are restricted to the case where $\phi_1^A-\phi_2^A=\phi_1^B-\phi_2^B=\pi$ is fixed on both sides of the interface. The current density $J$ depends only on one phase difference through the interface, say $\phi=\phi_1^B-\phi_1^A$: \begin{equation} J(\phi)=|\widetilde J_c|\sin(\phi+\theta) \end{equation} \begin{equation} \widetilde J_c=J_{c11}+J_{c22}-J_{c12}-J_{c21} \end{equation} with $\theta=0$ for $\widetilde J_c>0$ and $\theta=\pi$ for $\widetilde J_c<0$. ($iii$) For the $d+is$-wave case the intrinsic phase difference within each superconductor $A$ and $B$ can be assumed to be $\phi_1^A-\phi_2^A = \phi_1^B-\phi_2^B = \pi/2$. The current density $J$ is \begin{equation} J(\phi)=\widetilde J_c \sin(\phi+ \theta) \end{equation} with \begin{equation} \widetilde J_c=\sqrt{(J_{c11}+J_{c22})^2+(J_{c12}-J_{c21})^2}, \end{equation} \begin{equation} \tan(\theta)=\frac{J_{c21}-J_{c12}}{J_{c11}+J_{c22}}. \end{equation} We consider two superconducting sheets $A$ and $C$ which overlap for a distance $L$, in the $x$-direction, with the superconducting sheet $B$ (as shown in Fig.~1). All three superconducting sheets have a dominant $d$-wave symmetry with a small $s$-wave component. The angles between the crystalline $a$ axis of each superconductor $A,B,C$ and the junction interface are defined as $\theta_1, \theta_2, \theta_3$.
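The reduction of the component sum to a single sinusoid in case ($iii$) can be checked numerically. The Python sketch below uses an arbitrary illustrative coupling matrix $J_{cij}$ and fixes the intrinsic phase difference to $\phi_1-\phi_2=-\pi/2$ on both sides, the sign choice for which the phase shift $\tan\theta=(J_{c21}-J_{c12})/(J_{c11}+J_{c22})$ quoted above is reproduced; the opposite choice flips the sign of $\theta$, while the amplitude $\widetilde J_c$ is the same for either choice.

```python
import math

# Illustrative coupling matrix Jc[i][j] between components n_i^B and n_j^A
# (arbitrary values, chosen only to exercise the formulas above).
Jc = [[1.0, 0.3],
      [0.2, 0.8]]

def current_sum(phi, delta=-math.pi / 2):
    """Direct component sum J = sum_ij Jc[i][j] sin(phi_i^B - phi_j^A),
    with intrinsic phase difference phi_1 - phi_2 = delta on both sides."""
    phA = [0.0, -delta]            # phases of (n_1, n_2) on side A
    phB = [phi, phi - delta]       # phases of (n_1, n_2) on side B
    return sum(Jc[i][j] * math.sin(phB[i] - phA[j])
               for i in range(2) for j in range(2))

# Effective amplitude and phase shift of the single-sinusoid form
Jt = math.hypot(Jc[0][0] + Jc[1][1], Jc[0][1] - Jc[1][0])
theta = math.atan2(Jc[1][0] - Jc[0][1], Jc[0][0] + Jc[1][1])

# check J(phi) = Jt * sin(phi + theta) at several phases
for k in range(8):
    phi = k * math.pi / 4
    assert abs(current_sum(phi) - Jt * math.sin(phi + theta)) < 1e-12
```

The same sum with $\delta=\pi$ recovers case ($ii$), with the amplitude reducing to $J_{c11}+J_{c22}-J_{c12}-J_{c21}$.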
We describe the entire junction, of width $w$ small compared to $\lambda_J$ in the $y$ direction and of length $L$ in the $x$ direction, in an external magnetic field $H$ in the $y$ direction. The intrinsic phase difference $\theta(x)$ is $\phi_{c1}$ in $0<x<\frac{L}{2}$ and $\phi_{c2}$ in $\frac{L}{2}<x<L$. The superconducting phase difference $\phi$ across the junction is then the solution of the Sine-Gordon equation \begin{equation} \frac{d^2 { \phi}(x)}{dx^2} = \frac{1}{\lambda_J^2}\sin[{ \phi(x)+\theta(x)}] ,~~~\label{eq01} \end{equation} with the inline boundary condition \begin{equation} \frac{d { \phi}}{dx}\left|_{x=0,L}\right. =\pm \frac{{ I}}{2}+H. ~~~~\label{eq02} \end{equation} The Josephson penetration depth is given by \[ \lambda_J=\sqrt{\frac{\hbar c^2}{8\pi e d \widetilde J_c}} \] where $d$ is the sum of the penetration depths in the two superconductors plus the thickness of the insulator layer. We also assume that $\widetilde J_c$ is constant within each segment of the interface. To check the stability we consider small perturbations $u(x,t)=v(x)e^{st}$ on the static solution $\phi(x)$, and linearize the time-dependent Sine-Gordon equation to obtain: \begin{equation} -\frac{d^2 v(x)}{dx^2} +\cos[\phi(x)+\theta(x)] v(x)= \lambda v(x) ,~~~\label{eq10} \end{equation} under the boundary conditions $\frac{d v(x)}{dx}|_{x=0,L}=0$, where $\lambda=-s^2$. If this eigenvalue problem has a negative eigenvalue, the static solution $\phi(x)$ is unstable. We can also compute the free energy of the solution for zero current and external magnetic field \begin{equation} F = \frac{\hbar \widetilde J_c w}{2 e} \int_{0}^{L} \left[ 1-\cos \left[ \phi(x)+\theta(x) \right] +\frac{\lambda_J^2}{2} \left( \frac{\partial \phi}{\partial x} \right)^2 \right] dx .~~~\label{eq11} \end{equation} Note that the no-vortex solution $\phi=0$ everywhere is not a solution of this problem. When $\phi_{c1}=\phi_{c2}=0$ we have the conventional $s$-wave junction.
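A stable static solution of (\ref{eq01})-(\ref{eq02}) can be found by simple relaxation: over-damped time-stepping of the perturbed equation converges to a nearby stable configuration. The Python sketch below is a minimal illustration, not the method used for the figures; the grid, damping step, initial guess, and the sign convention $\phi'(0)=-I/2+H$, $\phi'(L)=+I/2+H$ are our assumptions. For a symmetric $0$-$\pi$ junction with $L=10\lambda_J$ and $I=H=0$ it converges to the spontaneous fractional vortex carrying flux close to $\Phi_0/2$.

```python
import math

def relax_sine_gordon(L=10.0, n=161, theta_left=0.0, theta_right=math.pi,
                      H=0.0, I=0.0, n_steps=20000):
    """Relax phi'' = sin(phi + theta(x)) (lambda_J = 1) with inline boundary
    conditions phi'(0) = -I/2 + H, phi'(L) = +I/2 + H, by over-damped
    time-stepping phi_t = phi_xx - sin(phi + theta) until it settles."""
    dx = L / (n - 1)
    dt = 0.25 * dx * dx                        # stable explicit step
    theta = [theta_left if i * dx < L / 2 else theta_right for i in range(n)]
    # initial guess: smooth step from 0 to pi centred on the twin boundary
    phi = [math.pi / 2 * (1.0 + math.tanh(i * dx - L / 2)) for i in range(n)]
    for _ in range(n_steps):
        # ghost points enforce the Neumann (inline) boundary conditions
        gl = phi[1] - 2 * dx * (-I / 2 + H)
        gr = phi[n - 2] + 2 * dx * (I / 2 + H)
        new = [0.0] * n
        for i in range(n):
            left = gl if i == 0 else phi[i - 1]
            right = gr if i == n - 1 else phi[i + 1]
            lap = (left - 2 * phi[i] + right) / (dx * dx)
            new[i] = phi[i] + dt * (lap - math.sin(phi[i] + theta[i]))
        phi = new
    flux = (phi[-1] - phi[0]) / (2 * math.pi)  # total flux in units of Phi_0
    return phi, flux

phi, flux = relax_sine_gordon()
print(f"spontaneous flux = {flux:.3f} Phi_0")  # close to 0.5
```

Setting `theta_left=0.01 * math.pi` and `theta_right=1.08 * math.pi` instead gives the ${\cal T}$-violating fractional vortex, with flux somewhat below $\Phi_0/2$.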
In the case $\phi_{c1}=0$, $\phi_{c2}=\pi$ we have the $d$-wave or $d+s$-wave junction. The above cases conserve time reversal symmetry (${\cal T}$-conservation). When $\phi_{c1},\phi_{c2}$ are slightly different from $0$ and $\pi$, we have the $d+is$-wave pairing, which is a broken time reversal symmetry state (${\cal T}$-violation). In this work, the particular parameters we use are $\phi_{c1}=0.01\pi$, $\phi_{c2}=1.08\pi$, and the pairing state is $d+is$. \section{Spontaneous magnetic flux and interference patterns for the ${\cal T}$-violating pairing state} In Fig. 2 we plot the maximum current for a symmetric $0-\pi$ junction as a function of the magnetic flux $\Phi$ (in units of $\Phi_0=\frac{h c}{2 e}$) for different junction lengths: (a) $L=10$, (b) $L=4$, (c) $L=2$, (d) $L=1$ ($\lambda_J=1$). The circles and squares in this figure correspond to the fractional vortex ($f_v$) and antivortex ($f_{av}$) branch. For most of the range of existence of $f_v$ ($f_{av}$) the magnetic flux is positive (negative), while there is a small region where it turns into an antivortex (vortex). A similar calculation has been done in \cite{xu,kirtleymoler}, where only the $f_v$ branch was considered, since in that case the plot is symmetric in $H$. As we can see, there is a ``dip'' at $\Phi=0$ for lengths as long as $L=10$. In Fig. 3 we present our calculations for the ${\cal T}$-violation case where $\phi_{c1}=0.01\pi$ and $\phi_{c2}=1.08\pi$. We also plot for $L=1$ the analytical result (solid line) of Zhu $et$ $al.$ \cite{zhu}. In contrast to the pure $d$-wave case, for small lengths this pattern is asymmetric and the ``dip'' in the maximum current does not occur at $\Phi=0$, but at a finite $\Phi$ value. This behavior also persists for lengths as long as $L=10$. If we plot $I_c$ vs $H$ (and not $\Phi$) then the two branches in Fig. 3 will be almost coincident, and one might draw the conclusion that the behavior for a long junction is the same independent of the symmetry.
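In the short-junction limit the self-field is negligible and the phase varies linearly across the junction, $\phi(x)=\phi_0+2\pi(\Phi/\Phi_0)(x/L)$, so the envelope of the critical current follows from a direct integral: maximizing the total current over $\phi_0$ gives $I_c\propto|\int_0^L e^{i[2\pi\Phi x/(L\Phi_0)+\theta(x)]}\,dx|$. The Python sketch below (our own normalization, with $L=1$ and $I_c$ in units of $\widetilde J_c L$) reproduces the dip at $\Phi=0$ for the symmetric $0$-$\pi$ junction and its displacement to finite $\Phi$ for $\phi_{c1}=0.01\pi$, $\phi_{c2}=1.08\pi$.

```python
import cmath, math

def _seg(k, a, b):
    """Integral of exp(i k x) from a to b, done analytically."""
    if abs(k) < 1e-12:
        return complex(b - a)
    return (cmath.exp(1j * k * b) - cmath.exp(1j * k * a)) / (1j * k)

def ic_short(flux, phi_c1, phi_c2):
    """Short-junction critical current (units of Jc*L) for a junction with
    intrinsic phase phi_c1 on 0 < x < 1/2 and phi_c2 on 1/2 < x < 1."""
    k = 2.0 * math.pi * flux   # phase gradient set by the total flux (L = 1)
    s = (cmath.exp(1j * phi_c1) * _seg(k, 0.0, 0.5) +
         cmath.exp(1j * phi_c2) * _seg(k, 0.5, 1.0))
    return abs(s)

# symmetric 0-pi junction: exact cancellation, i.e. a dip at Phi = 0
assert ic_short(0.0, 0.0, math.pi) < 1e-12
# T-violating phases: the minimum moves to a finite flux value
grid = [i / 100.0 - 1.0 for i in range(201)]
dip = min(grid, key=lambda f: ic_short(f, 0.01 * math.pi, 1.08 * math.pi))
print(f"dip at Phi/Phi_0 = {dip:+.2f}")
```

The sign of the dip location depends on the sign conventions chosen for the field and the intrinsic phases; only its displacement from $\Phi=0$ is significant.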
The proper quantity to consider, though, is the total magnetic flux, which includes both the contribution from the external field and the induced self field. It should be remarked that for an $s$-wave junction the relation between $\Phi$ and $H$ is linear for small $H$, so that the plot of $I_c$ vs $H$ or $\Phi$ does not show any differences for small $H$. For higher $H$, however, the overlapping branches (for long $L$) are unfolded. In the case of a different symmetry even the small-$H$ form can change due to the existence of spontaneous magnetization. In this case, if we consider the zero current solutions and vary the magnetic field, for $L=10$ both $f_v$ and $f_{av}$ are stable, whereas for $L=4$, $L=2$, $L=1$ the stable regions in the magnetic field are separated by unstable ones, and this behavior persists in both the $f_v$ and $f_{av}$ cases. Also, in the short junction limit $(i.e.$ $L=1)$ these two branches coincide at the maximum current. Figure 4 addresses the question of spontaneous flux generation in junctions with broken time reversal symmetry (${\cal T}$-violation) as a function of the reduced length ($L$) and the relative factor of the $s$ and $d$ components. The long dashed line is the result of \cite{zhu}, which compares with our numerical result (solid line). Both cases have $\phi_{c1}=0.01\pi, \phi_{c2}=1.08\pi$. We have also used two other values for $\phi_{c2}$, i.e. $0.9\pi$ (dotted line) and $0.8\pi$ (dashed line). We conclude that as we decrease the value of $\phi_{c2}$ the fractional vortex $f_v$ tends toward a $2\pi$ vortex, whereas the fractional antivortex gradually loses its flux content. In Fig. 5a we have plotted the magnetic flux $\Phi$ (solid line, $\Phi_0=1$) versus the value $\phi_{c2}$ for $L=10$ and $H=0$. We see that as we decrease $\phi_{c2}$ the magnetic flux of the $f_v$ branch increases linearly, from $\Phi \approx 0.45$ for $\phi_{c2}=1.08\pi$ to $\Phi \approx 1$ for $\phi_{c2}=0$.
But this last point is unstable, as seen from the stability analysis, from which the lowest eigenvalue is also displayed (light line). This is expected since it goes to a point in the unstable ($1,2$) branch of the usual $s$-type junction \cite{cap}. The $f_{av}$ branch increases its flux linearly as we decrease $\phi_{c2}$ and goes to the stable ($0,1$) branch. Here we follow the notation of \cite{owen}. This linear dependence of $\Phi$ on $\phi_{c2}$ can also be seen in the analytical result of \cite{zhu} for large lengths, where the approximation they made is valid. On the other hand, as we increase $\phi_{c2}$ from $1.08\pi$ to $2\pi$, the $f_v$ branch decreases its flux and goes to the stable ($0,1$) branch, while the $f_{av}$ branch goes to the unstable ($-2,-1$) branch. When we keep $\phi_{c2}=0$ and change $\phi_{c1}$ from $0$ to $2\pi$, the ($0,1$) branch goes to $f_v$ and then to the unstable ($1,2$), while the unstable ($-2,-1$) branch goes to $f_{av}$ and then to the stable ($0,1$). The situation is somewhat different for small lengths, as can be seen from Fig. 5b where $L=1$, $H=0$. Here the $f_{av}$ branch is unstable for $\phi_{c2}=1.08\pi$ and becomes stabilized as $\phi_{c2}$ decreases, while the $f_v$ branch is stable for $\phi_{c2}=1.08\pi$ and then becomes unstable. This can be seen in Fig. 6, where we plot the ratio $F/F_0$ of the free energy of the state with some spontaneous flux to that of the state with no flux. This ratio becomes larger than one as we decrease $\phi_{c2}$, for the $f_v$ branch, at small lengths. On the other hand, when $F/F_0<1$ the no-flux state is metastable and the final state will be the one with spontaneous flux. Notice that the magnetic flux remains almost constant (and almost zero), which is expected since we are in the short junction limit where self currents are neglected. \section{Conclusions} We have studied the static properties of a one-dimensional junction with $d+is$ order parameter symmetry.
The magnetic interference pattern is asymmetric, and there exists a ``dip'' near $\Phi=0$ for lengths as long as $10\lambda_J$. The diffraction pattern of a junction can thus give us information about the pairing symmetry, at least in the region where the junction is formed. We have followed the evolution of spontaneously formed vortex and antivortex solutions for different mixings between the $s$ and $d$ components of the order parameter. We have shown that for small lengths the fractional vortex becomes unstable as we decrease the extra phase of the pair transfer integral in the right part of the junction. We conclude that when a mixed state symmetry is realized, the fractional vortex and antivortex solutions evolve differently, and this characterizes the $d+is$-wave pairing. We expect these findings to hold even if a bulk $d+s$ state evolves continuously as a function of distance from the interface to a $d+is$ one, as long as there is a well defined region close to the interface where time reversal symmetry is not conserved and the junction is formed.
\section{Introduction} The interaction between a strong stellar magnetic field and an accretion disc can affect both the evolution and observational properties of the star. Close to the star the field is strong enough that the accretion disc is truncated, and mass is channelled along field lines to accrete on to the star's surface. At the inner edge of the truncated disc, the field and disc interact directly over some finite region, allowing angular momentum exchange from the differential rotation between the Keplerian accretion disc and the star. Angular momentum exchange between the field and the disc leads to two different states that can exist for a disc truncated by a magnetic field. The distinction depends on the position of the truncation radius relative to the corotation radius, $r_{\rm c} \equiv (GM_*/\Omega^2_*)^{1/3}$ (where $M_*$ and $\Omega_*$ are respectively the mass and spin frequency of the star), the radius at which the Keplerian frequency in the disc equals the star's rotational frequency. If the disc is truncated inside $r_{\rm c}$ then the field-disc interaction extracts angular momentum from the disc and accretion can proceed. If on the other hand the disc is truncated outside $r_{\rm c}$, the star-field interaction will create a centrifugal barrier that inhibits accretion. This is usually called the `propeller regime', under the assumption that most of the mass in the disc is expelled as an outflow \citep{1975A&A....39..185I}. Accreting stars with strong magnetic fields, such as T Tauri stars and millisecond X-ray pulsars, show a large degree of variability in luminosity (corresponding to changes in accretion rate), which may be ascribable to magnetic activity. For example, EX Lupi, a T Tauri star and the prototype of the `EXor' class, increases and decreases in brightness by several magnitudes every 2--3 years \citep{2007AJ....133.2679H}.
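For orientation on the scales involved, the corotation radius can be evaluated directly from its definition. The minimal sketch below (Python, cgs units) uses illustrative parameters for an accreting millisecond pulsar, a $1.4\,M_\odot$ star spinning with $P_*=2.5$ ms; these numbers are our assumptions rather than values quoted in the text.

```python
import math

# CGS units throughout; the stellar parameters below are illustrative.
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # solar mass [g]

def corotation_radius(m_star, p_spin):
    """r_c = (G M_* / Omega_*^2)^(1/3), with Omega_* = 2 pi / P_spin."""
    omega = 2.0 * math.pi / p_spin
    return (G * m_star / omega ** 2) ** (1.0 / 3.0)

r_c = corotation_radius(1.4 * M_SUN, 2.5e-3)   # 1.4 M_sun, 2.5 ms spin
print(f"r_c = {r_c / 1e5:.1f} km")             # about 31 km for these values
```

A disc truncated inside this radius accretes; one truncated outside it feels the centrifugal barrier discussed above.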
At much higher energies, a 1 Hz quasi-periodic oscillation (QPO) in the accreting millisecond pulsar SAX J1808.4-3658 has been observed during the decay phase of several outbursts \citep{2009ApJ...707.1296P}. The time-scale and magnitude of the variability in both sources suggest changes in accretion rate in the inner regions of the accretion disc, where it interacts with the star's magnetic field. In this paper we revisit a disc instability first suggested in \cite{1977PAZh....3..262S} and developed in \cite{1993ApJ...402..593S} (hereafter ST93), which can lead to episodic bursts of accretion. The instability arises when the magnetic field truncates the disc near the corotation radius. The magnetic field initially truncates the disc outside but close to the corotation radius, thus transferring angular momentum from the star to the disc and inhibiting gas from accreting on to the star (the propeller state). However, close to $r_{\rm c}$, the energy and angular momentum transferred by the field to the gas will not be enough to unbind much of the disc mass from the system and drive an outflow. Instead, the interaction with the magnetic field will prevent accretion \citep{1977PAZh....3..262S}. As gas in the inner regions of the disc piles up, the local gas pressure increases, forcing the inner edge of the disc to move inwards until it crosses $r_{\rm c}$. When the inner region of the disc crosses inside $r_{\rm c}$, the centrifugal barrier preventing accretion disappears (since now the differential rotation between star and disc has changed sign) and the accumulated reservoir of gas is accreted on to the star. Once the reservoir has been accreted, the accretion rate through the disc's inner edge decreases, and the disc will again move outside $r_{\rm c}$, allowing another cycle to start.
We study this process by following the time evolution of a thin axisymmetric viscous disc, with a parametrization of the interaction between the disc and the magnetic field both inside and outside $r_{\rm c}$. This approach allows us to investigate the behaviour of the disc on time-scales much longer than the rotation period of the star. Long time-scales are important since the instability evolves on viscous rather than dynamical time-scales of the disc. We are able to reduce the uncertainties in the detailed MHD interaction between the field and the disc to two free (but constrained) parameters. Using this description we can then investigate the physical conditions for which the instability develops. In this paper we describe in detail the physics that can lead to episodic bursts of accretion and give a brief overview of the observed oscillations. In a later paper we will explore the range of outbursts seen in our simulations in more detail, and discuss their prospects for observability in specific stellar systems. \section{Magnetosphere-Disc Interactions} \subsection{Interaction region between a disc and magnetic field} \label{sec:global} We consider a star with a strong dipolar magnetic field surrounded by a thin Keplerian accretion disc. We assume that the dipole is aligned with both the star's spin axis and the spin axis of the disc, so that the system is axisymmetric. Near the surface of the star the magnetic field will truncate the disc, forcing gas into corotation with the star. This inner region (in which the gas dynamics is regulated by the magnetic field) is called the magnetosphere, and we define the {\em magnetospheric radius}, $r_{\rm m}$, as the radius at which the magnetic field is no longer strong enough to force the disc into corotation \citep{1993ApJ...402..593S}. Outside $r_{\rm m}$ the magnetic field will penetrate the disc and become strongly coupled over some radial extent, which we call the {\em interaction region}, $\Delta r$.
Beyond the interaction region the disc and magnetic field are decoupled, so that the outer parts of the disc are not directly affected by the stellar magnetic field. Figure \ref{fig:field} shows a schematic picture for the magnetic field configuration, with a closed magnetosphere close to the star, and a large region of opened field lines further out. In the interaction region, the differential rotation between the Keplerian disc and star shears the magnetic field, generating an azimuthal component $B_\phi$ from the initially poloidal field. This in turn creates a magnetic stress which exerts a torque on the disc, transferring angular momentum between the disc and star. The torque per unit area exerted by the field on the disc is given by ${\rm d}{\boldsymbol \tau}/{\rm d}r = rS_{z\phi}\,{\bf \hat{z}}$, where \begin{equation} \label{eq:stress} S_{z\phi} \equiv \pm \frac{B_\phi B_z}{4\pi} \end{equation} is the magnetic stress generated by the twisted field lines. The sign of the torque will depend on the location of the coupled disc region relative to the corotation radius, $r_{\rm c} \equiv (GM_*/\Omega_*^2)^{1/3}$. If the coupling takes place inside $r_{\rm c}$ the torque will extract angular momentum from the disc, spinning down the disc (and spinning up the star), while if the coupling is outside $r_{\rm c}$ the torque adds angular momentum to the disc, spinning it up (and spinning down the star). The radial extent of the interaction region has been a point of long-standing controversy in the study of accretion discs. In an early series of influential papers, Ghosh et al. (1977; \citealt{1979ApJ...232..259G,1979ApJ...234..296G}) argued that the coupled region is large ($\Delta r / r \gg 1$), so that the magnetic field exerts a torque over a considerable fraction of the disc with a resulting large influence on the spin evolution of the star.
However, the original model proposed by Ghosh \& Lamb was shown to be inconsistent by \cite{1987A&A...183..257W}, since the magnetic pressure they derived from field winding far from $r_{\rm c}$~is high enough to completely disrupt the majority of the disc. More recent analytical and numerical work has shown that the interaction region is likely much smaller, and much of the disc is disconnected from the star (see \citealt{2004Ap&SS.292..573U} for a recent review). This comes about from the fact that in force-free regions (where the magnetic pressure dominates over the gas pressure), as are likely to exist above an accretion disc, field lines will tend to open up as the twisting increases \citep{1985A&A...143...19A, 1994MNRAS.267..146L}. As the disc and star rotate differentially, the increasing twist $\Delta \phi$ in the field line will only increase the $B_\phi$ component to some maximum $B_\phi \sim B_z$ before the increased magnetic pressure above the disc causes the field lines to become inflated and eventually open, severing the connection between the disc and star. Analytic studies of a sheared force-free magnetic field \citep{1985A&A...143...19A, 1994SSRv...68..299V, 2002ApJ...565.1191U} have shown that the $B_\phi$ component will grow to a maximum twist angle $\Delta \phi \sim \pi$ before opening. The twist angle grows on the time-scale of the beat period, $|\Omega_*-\Omega_K|^{-1}$, which is very short compared to the viscous time-scale in the disc except in a very small region around corotation. To prevent field lines from opening, they must be able to slip through the disc faster than the rate at which the field is being wound up. The rate at which the field can move through the disc is set by the effective diffusivity, $\eta$, of the disc. Like the effective viscosity, $\nu$, that drives the transport of angular momentum, the effective diffusivity is also assumed to be driven by turbulent processes in the disc.
Recent numerical studies of MRI (Magnetorotational Instability) turbulence (believed to be responsible for angular momentum transport in at least the inner regions of accretion discs) have tried to measure $\eta$ directly. In these simulations, an external magnetic field is imposed on a shearing box simulation, and the effective magnetic diffusivity is estimated as the flow becomes unstable. The results suggest that the effective diffusivity and viscosity are of similar size, that is, the effective magnetic Prandtl number, $Pr \equiv \nu/\eta$, is of order unity \citep{2009A&A...507...19F}. A magnetic Prandtl number of this order implies that for realistic disc parameters the magnetic field will not be able to slip through the majority of the disc fast enough to prevent field lines from opening \citep{1995MNRAS.275..244L, 2002ApJ...565.1191U}. Outside this region there will still be some coupling between the disc and the star as the gas moves from Keplerian to corotating orbits, but this estimate suggests that the actual extent of coupling is small ($\Delta r/r < 1$) regardless of where the disc is truncated relative to the corotation radius. Once the field lines are opened, there may be some reconnection across the region above the disc between open magnetic field lines (e.g. \citealt{1990A&A...227..473A,1997ApJ...489..199G,2002ApJ...565.1191U}). The effective size of the interaction region would then depend on the efficiency of reconnection, and could also then become time-dependent (although likely on time-scales of order the dynamical time, which is much shorter than the viscous evolution time-scale). The opening and reconnection of field lines has also been suggested as a possible launching mechanism for strong disc winds and a jet (e.g. \citealt{1990A&A...227..473A,1996ApJ...468L..37H,1997ApJ...489..199G}).
This picture of a small interaction region with some reconnection was first proposed by \cite{1995MNRAS.275..244L}, and has been supported by 2 and 3D simulations of accretion discs interacting with a magnetic field (e.g. \citealt{1997ApJ...489..890M,1997ApJ...489..199G,1996ApJ...468L..37H,2009arXiv0907.3394R}). \begin{figure} \includegraphics[width=\hsize]{figglobal_field.eps} \caption{Global magnetic field configuration for a strongly magnetic star surrounded by an accretion disc. In this picture, the majority of the field exists in an open configuration, and the connected region between the field and the disc is very small. Adapted from Lovelace et al. (1995).\label{fig:field}} \end{figure} In summary, although the extent of the interaction region is uncertain (subject to uncertainties in the effective diffusivity of magnetic field in the disc and its possible reconnection in the magnetosphere, as well as the detailed interaction between the disc and field near the magnetosphere), numerical and analytic work suggests that it is small. Except for very special geometries for the magnetic field (such as \citealt{2000MNRAS.317..273A, 1994ApJ...429..781S}), the low effective magnetic diffusivity in the disc will force the magnetic field into a largely open configuration, and the majority of the accretion disc will be decoupled from the star, in strong contrast to the prediction of the \cite{1979ApJ...232..259G} model. The extent of the interaction region as well as the average magnitude of the $B_\phi$ component generated by the disc-field interaction will depend on the detailed interaction between the disc and the field as the gas moves from Keplerian orbits to corotation with the star, as well as the frequency and magnitude of possible reconnection events. In the present work we therefore assume that the time-averaged $B_\phi$ component generated by field-line twisting will be some constant fraction of $B_z$, so that $B_\phi/B_z \equiv \eta < 1$. 
We also assume that $\Delta r/r $ is small ($< 1$) but leave it as a free parameter. \subsection{Accretion and angular momentum transport} \label{sec:ang_mom} In this paper we describe the evolution of an accretion disc in which the conditions at the inner boundary are changing in time. Before doing this, however, we review how the conditions at the inner boundary affect the angular momentum transport and density structure of a thin accretion disc. In the thin-disc limit the evolution equation for the surface density $\Sigma$ can be written: \begin{equation} \label{eq:sigma_ev} \frac{\partial \Sigma}{\partial t} = \frac{3}{r}\frac{\partial}{\partial r}[r^{1/2}\frac{\partial}{\partial r}(r^{1/2}\nu\Sigma)], \end{equation} where $\nu$ is the effective viscosity in the disc that enables angular momentum transport. In a steady state (in which the accretion rate is constant throughout the disc), the general solution for $\nu\Sigma$ is given by: \begin{equation} \label{eq:steady} \nu\Sigma = \frac{\dot{m}}{3\pi}\left(1-\beta\left(\frac{r_i}{r}\right)^{1/2}\right), \end{equation} where $r_i$ is the inner edge of the disc, $\dot{m}$ is the accretion rate and $\beta$ is a dimensionless measure of the angular momentum flux through the disc per unit mass accreted \citep{1991ApJ...370..604P, 1991ApJ...370..597P}. All accretion discs have a boundary layer at their inner edge that connects the disc with either the surface of the star or the star's magnetosphere. In the boundary layer the gas must transition from Keplerian orbits to orbits corotating with the star in order to accrete. The structure of this boundary layer will determine the value of $\beta$ in (\ref{eq:steady}). In the standard accretion scenario, that is, for accretion on to a slowly-rotating star or on to the star's magnetosphere inside the corotation radius, the gas in the boundary layer will be decelerated, meaning that there will be a maximum in the rotation profile, $\Omega(r)$. 
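The role of $\beta$ in (\ref{eq:steady}) can be seen by evaluating the profile directly. The sketch below is a minimal illustration in Python, in arbitrary units, with the accretion rate and radii chosen only for demonstration.

```python
import math

def nu_sigma_steady(r, r_in, mdot, beta):
    """Steady thin-disc solution: nu*Sigma = (mdot/3pi)(1 - beta sqrt(r_in/r)).
    beta is the dimensionless angular momentum flux per unit accreted mass."""
    return mdot / (3.0 * math.pi) * (1.0 - beta * math.sqrt(r_in / r))

# beta = 1 (standard accretion): nu*Sigma vanishes at the inner edge
assert abs(nu_sigma_steady(1.0, 1.0, 1.0, beta=1.0)) < 1e-12
# beta < 0 (outward angular momentum flux): nu*Sigma is enhanced at small r
assert (nu_sigma_steady(1.0, 1.0, 1.0, beta=-1.0) >
        nu_sigma_steady(1.0, 1.0, 1.0, beta=0.0))
# far from the inner edge all solutions approach mdot / (3 pi)
assert abs(nu_sigma_steady(1e6, 1.0, 1.0, beta=1.0) - 1.0 / (3 * math.pi)) < 1e-3
```

The sign and magnitude of $\beta$ are set by the physics of the boundary layer, which is discussed next.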
At the maximum in $\Omega(r)$, there is no longer an outward transfer of angular momentum from viscous torques, which in the thin-disc approximation will cause the surface density to decrease sharply, so that $\beta$ = 1 in (\ref{eq:steady}) \citep{1972A&A....21....1P, 1973A&A....24..337S}. The maximum in $\Omega(r)$ effectively corresponds to the inner radius of the disc, since inside this radius gas is viscously decoupled from the rest of disc. The gas falling through the inner boundary of the disc will add its specific angular momentum ($\dot{m}r^2_{\rm in}\Omega$) to the star, spinning it up. However, there is in fact a wide range of solutions for the surface density profile of an accretion disc depending on the conditions imposed by the boundary layer, which in turn set the rate of angular momentum transport across the inner boundary of the disc. For a nonmagnetic star spinning close to breakup \citep{1991ApJ...370..597P,1991ApJ...370..604P}, the angular momentum flux can be inward or outward, depending on the accretion history of the star. The dimensionless angular momentum flux $\beta$ can in principle have any value less than 1 in this case. The top panel of Fig. \ref{fig:sigma} shows the steady-state surface density profile for a range of different values of $\beta$ from -1 to 1. \begin{figure} \includegraphics[width=\hsize]{figns.eps} \includegraphics[width=\hsize]{figns1.eps} \caption{Surface density $\nu\Sigma$ of a thin disc as a function of distance from the corotation radius $r_{\rm c}$, for a steady, thin viscous disc. Top: steady accretion at a fixed accretion rate $\dot m$, for inner edge of the disc at corotation. $\beta$ measures the angular momentum flux, $\beta= 1$ corresponding to the standard case of accretion on to a slowly rotating object. For $\beta < 0$ the angular momentum flux is outward (spindown of the star).
Bottom: `quiescent disc' solutions with $\dot{m} = 0$ and a steady outward angular momentum flux due to a torque $f$ applied at the inner edge. The two curves show solutions for $ r_{\rm in}/r_{\rm c} = $ 2 and 4.\label{fig:sigma}} \end{figure} \cite{1977PAZh....3..262S} studied a similar situation in which there is outward angular momentum transport in an accretion disc, and showed that adding angular momentum at the inner edge of the disc can in fact halt accretion altogether. The evolution of the disc in this case depends on the rate at which angular momentum is being injected at the inner edge of the disc compared to the rate at which it is carried outwards via viscous coupling. If angular momentum is injected into the inner boundary of the disc at exactly the same rate as viscous transport carries it outwards, then all accretion on to the star will cease. For a steady state like this to exist, the outward angular momentum flux due to the magnetic torque at the inner edge of the disc has to be taken up at some larger distance. In a binary system, this sink of angular momentum can be the orbit of the companion star. If the disc is sufficiently large, the angular momentum can also be taken up by the outer parts of the disc, while the inner parts of the disc are close to a steady state. The inner edge of the disc then slowly moves outward under the influence of the angular momentum flux. The surface density distribution in this case can be found from (\ref{eq:steady}) by taking the limit $\dot m\rightarrow 0$, while letting $\beta\rightarrow -\infty$ (noting that it measures the angular momentum flux per unit accreted mass). This yields: \begin{equation} \label{eq:steady_sigma} \nu\Sigma = f(r_i)\left(\frac{r_i}{r}\right)^{1/2}, \end{equation} where $f(r_{\rm i})$ is a measure of the torque exerted at the inner edge of the disc. The bottom panel of Fig.
2 shows the surface density, scaled to the value of $f(r_{\rm i})$, for two instances of (\ref{eq:steady_sigma}) with different values of $r_{\rm in}$. \cite{1977PAZh....3..262S} refer to this solution as a `dead disc', since there is no accretion on to the star. In this paper we call non-accreting discs without large outflows `quiescent discs', to avoid confusion with `dead zones' thought to be present in proto-stellar discs (regions in which there is insufficient ionization to drive angular momentum transport via MRI but are too hot for efficient angular momentum transport via gravitational instabilities; e.g. \citealt{1996ApJ...457..355G}). These quiescent discs play a role in the cyclic solutions discussed in Section \ref{sec:model}. In these solutions accreting phases are separated by long intervals in which the inner disc is close to the quiescent state described by (\ref{eq:steady_sigma}). \subsection{Evolution of a disc truncated inside the corotation radius} \label{sec:rin<rcorot} When the accretion disc is truncated by a magnetic field inside the corotation radius, the standard $\beta = 1$ case applies for a steady-state solution. The location of the inner edge of the disc $r_{\rm in}$ will be determined by the interaction between the disc and magnetic field, and change with changing conditions at the inner edge (such as the accretion rate on to the star). Here we estimate the location of $r_{\rm in}$, and use it to show how the inner boundary of the disc will change in a non-steady disc. We define the inner edge of the disc as the point at which material in the disc is forced into corotation with the star. We use the azimuthal equation of motion for gas at the magnetospheric radius to obtain an estimate for $r_{\rm in}$ in a disc (see, e.g. 
ST93): \begin{equation} \label{eq:eq_mo} 2\pi\Sigma \frac{\partial}{\partial t}(rv_\phi)-\frac{\dot{m}_{\rm in}}{r}\frac{\partial}{\partial r}(rv_\phi) + 2\pi rS_{z\phi} = 0, \end{equation} where $\dot{m}_{\rm in} = -2\pi r\Sigma v_r$ is the accretion rate through the inner edge of the disc. Equation (\ref{eq:eq_mo}) neglects viscous angular momentum transport through the inner regions of the disc, under the assumption that it will be much smaller than angular momentum transport from the magnetic field. Using $v_\phi = \Omega_* r$ (since at $r_{\rm in}$ the gas corotates with the star), and assuming a steady-state solution ($\partial/\partial t = 0$), (\ref{eq:eq_mo}) becomes: \begin{equation} \label{eq:rm1} \frac{\dot{m}\Omega_*}{\pi} = r_{\rm in}S_{z\phi} = \frac{r_{\rm in}B_\phi B_z}{4\pi}, \end{equation} where $S_{z\phi}$ is the magnetic stress from the coupling between the disc and star (introduced in Section \ref{sec:global}). As long as the wind-up time for the field is shorter than the time-scale on which $r_{\rm in}$~is changing, $B_{\phi}/B_z$ will be roughly constant, so we make the assumption that $B_\phi = \eta B_z$, where $\eta < 1$ is constant. For a dipole field aligned with the star's axis of rotation ($B_z = \mu/r^3$, where $\mu = B_SR^3_*$ is the star's magnetic dipole moment), (\ref{eq:rm1}) can be re-written: \begin{equation} \label{eq:rm} r_{\rm in} = \left(\frac{\eta\mu^2}{4\Omega_*\dot{m}_{\rm in}}\right)^{1/5}. \end{equation} For $\eta = 0.1$, this estimate gives a value for $r_{\rm in}$ about 40\% smaller than the simple estimate found by equating the magnetic pressure from the field ($B^2/8\pi$) to the ram pressure from spherically-symmetric gas in free-fall on to the star (e.g. \citealt{1972A&A....21....1P}). The derivation for $r_{\rm in}$~above holds for steady accretion.
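Equation (\ref{eq:rm}) can be evaluated directly. The sketch below (Python, cgs units) uses illustrative neutron-star parameters, $B_*=10^8$ G, $R_*=10$ km, $\dot{m}_{\rm in} = 10^{16}$ g s$^{-1}$, $P_*=2.5$ ms, and $\eta=0.1$; these numbers are our assumptions, chosen only to show the scale of the result.

```python
import math

def r_in(mu, omega_star, mdot, eta=0.1):
    """Truncation radius r_in = (eta mu^2 / (4 Omega_* mdot))^(1/5), cgs."""
    return (eta * mu ** 2 / (4.0 * omega_star * mdot)) ** 0.2

mu = 1e8 * (1e6) ** 3            # dipole moment mu = B_* R_*^3 [G cm^3]
omega = 2.0 * math.pi / 2.5e-3   # spin frequency [rad/s]
r = r_in(mu, omega, 1e16)
print(f"r_in = {r / 1e5:.1f} km")  # about 16 km for these parameters
```

For these numbers $r_{\rm in}$ lies inside the corotation radius of a $1.4\,M_\odot$ star at the same spin period (about 31 km), so accretion can proceed; lowering $\dot{m}_{\rm in}$ pushes $r_{\rm in}$ outward as $\dot{m}_{\rm in}^{-1/5}$.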
For the problem studied here the position of the inner edge (set by the location of the magnetosphere) will change in time, which requires a minor reinterpretation of (\ref{eq:rm}). If $r_{\rm in}$~moves in time, the mass flux $\dot{m}_{\rm co}$ in the reference frame comoving with $r_{\rm in}$~differs from the mass flux, $\dot{m}$, measured in a fixed frame: \begin{equation} \label{eq:dotm} \dot{m}_{\rm co} = \dot{m} + 2\pi r\Sigma \dot{r}_{\rm in}, \end{equation} where $\dot{r}_{\rm in}$ is the time derivative of $r_{\rm in}$. Since the torque between the magnetosphere and the disc acts at the inner edge, the mass flux entering the magnetosphere (used in (\ref{eq:rm})) is given by $\dot{m}_{\rm co}$, not $\dot{m}$. As before, $\dot{m}$~itself is given in terms of the surface density by the usual thin disc expression: \begin{equation} \dot{m} = 6\pi r^{1/2}_{\rm in}\frac{\partial}{\partial r}\left(r^{1/2}\nu\Sigma\right)\big|_{r_{\rm in}}. \end{equation} \subsection{Evolution of a disc truncated outside the corotation radius} \label{sec:rin>rcorot} If the star is spinning fast enough, the magnetic field can truncate the disc {\em outside} $r_{\rm c}$. In this case the interaction with the magnetic field will {\em add} angular momentum to the disc, creating a centrifugal barrier that inhibits accretion. This scenario was first described by \cite{1975A&A....39..185I} and is often termed the `propeller' regime, under the assumption that the interaction with the magnetic field will expel the disc at $r_{\rm in}$~as an outflow via the `magnetic slingshot' mechanism \citep{1982MNRAS.199..883B}. However, in order for the gas to be ejected from the system, it must be accelerated to at least the escape speed ($v_{\rm esc} = \sqrt{2GM_*/r}$). At the inner edge of the interaction region the gas is brought into corotation with the star, where $v_{\rm c} = \Omega_*r$. If this is less than the escape speed, the majority of the gas will not be accelerated enough to be expelled.
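The threshold truncation radius beyond which corotating gas reaches the escape speed follows directly from these two expressions; a minimal sketch (our own), in units of $r_{\rm c}$:

```python
def r_eject(r_c):
    """Smallest truncation radius at which corotating gas reaches escape speed.

    With Omega_* = sqrt(G M_* / r_c^3), setting the corotation speed
    Omega_* r equal to v_esc = sqrt(2 G M_* / r) gives r^3 = 2 r_c^3,
    i.e. r = 2^(1/3) r_c ~= 1.26 r_c.
    """
    return 2.0 ** (1.0 / 3.0) * r_c
```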
Setting $v_{\rm esc} = v_{\rm c} = \sqrt{GM_*/r^3_{\rm c}}\,r$ implies that for $r_{\rm in} < 2^{1/3}r_{\rm c} \simeq 1.26 r_{\rm c}$ most of the gas will not be expelled. Part of the disc could still be expelled in an outflow, but while the majority of the gas remains confined in the disc, the disc can act as an efficient sink for angular momentum from the star and accretion can effectively be halted. The open field lines at larger radii could launch a disc wind, which would provide an additional sink for angular momentum and somewhat change the structure of the disc (e.g. \citealt{2005ApJ...632L.135M}). Numerical studies of the field-disc interaction find, for example, that reconnection across field lines can lead to intermittent accretion (e.g. \citealt{1997ApJ...489..199G}; see also Section \ref{sec:discussion}). However, models of disc winds typically include the mass loss rate as a parameter of the problem, so that the amount of mass actually lost to the wind is uncertain. In this paper we make the assumption that the disc becomes quiescent, that is, for $r_{\rm in}> r_{\rm c}$ no accretion or outflows occur. The steady-state disc solution is then given by (\ref{eq:steady_sigma}). In the next section we will derive $f(r_{\rm in})$, the boundary condition for the surface density at the inner edge of a quiescent disc. As for cases when $r_{\rm in} < r_{\rm c}$, we want to study non-steady-state solutions in which $r_{\rm in}$~moves in time. As in the steady-state case, to derive $\dot{r}_{\rm in}$ we consider the difference between the accretion rate at $r_{\rm in}$~in a fixed frame and in a frame comoving with $r_{\rm in}$. Since for a quiescent disc no matter is accreted on to the star, $\dot{m}_{\rm co} = 0$, so that (\ref{eq:dotm}) can be written: \begin{equation} 2\pi r\Sigma\dot{r}_{\rm in} = -6\pi r^{1/2}_{\rm in}\frac{\partial}{\partial r}\left(r^{1/2}\nu\Sigma\right)\big|_{r_{\rm in}}.
\end{equation} Together with (\ref{eq:sigma_ev}), a viscosity prescription and a condition for the outer boundary, we can use the results from this section and the previous one to study the time-dependent behaviour of an accretion disc interacting with a magnetic field. \section{Cyclic accretion} \label{sec:model} The existence of quiescent disc solutions can naturally lead to bursts of accretion. Since there is very little accretion on to the star and little outflow, if mass continues to accrete from larger radii it will pile up in the inner regions of the disc until the gas pressure is high enough to overcome the centrifugal barrier from the magnetic field-disc interaction and accretion can proceed. Once the reservoir has been emptied, the inner edge of the disc will move back outside the corotation radius and the reservoir will start to build up again. In Sections \ref{sec:rin<rcorot} and \ref{sec:rin>rcorot} we showed how the inner radius of a thin viscous accretion disc will evolve inside and outside corotation. To study the time-dependent evolution of a disc, we must connect these two states as the inner edge of the disc passes through the corotation radius. We also require a description for $f(r_{\rm in})$, the inner boundary condition for the disc truncated outside $r_{\rm c}$. \subsection{Surface density profile for $r_{\rm in} > r_{\rm c}$} \label{sec:rin_outsiderc} When the interaction region is outside $r_{\rm c}$, the star is rotating faster than the Keplerian disc and the magnetic field lines lead the disc, adding angular momentum to the material in the inner regions.
As discussed in Section \ref{sec:global}, the torque per unit area exerted on the disc will be $\langle S_{z\phi}\rangle r$, so that the torque exerted across the entire interaction region (assuming it is small) is approximately: \begin{equation} {\mathbf \tau} \simeq 4 \pi \langle S_{z\phi}\rangle r^2_{\rm in} \Delta r {\bf \hat{z}}, \end{equation} where the factor 2 relative to the area $2\pi r_{\rm in}\Delta r$ of a single face comes from coupling to both sides of the disc. As argued in the previous section, if the disc is truncated close to but outside $r_{\rm c}$, the majority of the gas in the interaction region will not be expelled in an outflow. Instead, the angular momentum from the magnetic field is transferred outwards to the rest of the disc. We can derive a relationship between the position of, and surface density at, the inner edge of the non-interacting disc from the conservation of angular momentum across the interaction region. Since the interaction region is small we do not consider its density profile explicitly, focusing instead on its influence on the non-interacting disc. We therefore define $r_{\rm in}$ as the point in the disc just outside the interaction region, where there is no magnetic coupling between the disc and the star. Across the interaction region the density in the disc decreases sharply (since the gas is forced into nearly corotating orbits with the star). We make the simplifying assumption that none of the mass in the disc escapes, either into an outflow or through the magnetosphere on to the star. The inner edge of the interaction region, $r_{\rm in} - \Delta r$, is therefore defined as the point at which the surface density drops to zero. To determine $\Sigma$ at $r_{\rm in}$ we consider the angular momentum flux across $\Delta r$ when $r_{\rm in} > r_{\rm c}$.
The flux of angular momentum must be continuous across $\Delta r$, meaning that the viscous angular momentum transport outside $\Delta r$ must balance the angular momentum flux added by the magnetic field across the interaction region. This balance is written: \begin{eqnarray} \label{eq:ang_mom} \lefteqn{\dot{m} r^2\Omega - 2\pi r(\nu\Sigma)^+r^2\Omega' =}\\ \nonumber&& \dot{m} r^2\Omega - 2\pi r(\nu\Sigma)^-r^2\Omega' + \int^{r_{\rm in}}_{r_{\rm in}-\Delta r} 4 \pi r^2S_{z\phi} dr. \end{eqnarray} In this equation, $\nu^{\pm}$ and $\Sigma^{\pm}$ are the viscosity and surface density inside ($^-$) and outside ($^+$) $\Delta r$, $\dot{m} = -2\pi r(\Sigma v_r)^{\pm}$ is the mass flux through $\Delta r$~(where $v_r$ is the radial velocity of the gas) and $\Omega$ is the orbital frequency at $r_{\rm in}$. The first term on either side of the equation denotes the advection of angular momentum across $r_{\rm in}$, while the second is the angular momentum transported by viscous stresses. The final term on the right-hand side is the angular momentum added by the magnetic field to the coupled region of the disc. The first terms on both sides cancel (enforcing conservation of mass across $\Delta r$), and we make the further assumption that in the interaction region most of the angular momentum is transported by the external magnetic torque rather than by viscous stresses, so that $(\nu\Sigma)^- \ll (\nu\Sigma)^+$. For a small interaction region, the last term in (\ref{eq:ang_mom}) can be re-written: \begin{equation} \label{eq:ang_mom2} \int^{r_{\rm in}}_{r_{\rm in} - \Delta r} 4 \pi r^2S_{z\phi} dr \approx 4\pi \Delta r r^2_{\rm in}\langle S_{z\phi}\rangle. \end{equation} (\ref{eq:ang_mom}) can then be re-written to yield the surface density at $r_{\rm in}$~for $r > r_{\rm c}$: \begin{equation} \label{eq:sigma_in} (\nu\Sigma)^+ = -\frac{2 \langle S_{z \phi}\rangle \Delta r}{r_{\rm in}\Omega'}.
\end{equation} As predicted in Section \ref{sec:rin>rcorot}, (\ref{eq:sigma_in}) shows that the surface density at $r_{\rm in}$~will be large, a consequence of the torque being applied by the disc-magnetic field coupling \citep{1977PAZh....3..262S, 1991ApJ...370..604P, 1991ApJ...370..597P}. (\ref{eq:sigma_in}) corresponds to the function $f(r_{\rm in})$ introduced in Section \ref{sec:ang_mom} for $r_{\rm in} > r_{\rm c}$, that is, the boundary condition at the inner edge of the disc. In a time-dependent system, as gas accretes from larger radii (via viscous torques) it will pile up near $r_{\rm in}$ and the increased gas pressure will push the inner edge of the disc further inwards towards $r_{\rm c}$. \subsection{Transition region} \label{sec:transition} When the inner edge $r_{\rm in}$ is well inside $r_{\rm c}$, conditions at the inner edge are the standard ones for accretion of a thin disc on a slowly rotating object: \begin{equation} \Sigma(r_{\rm in})=0, \end{equation} while the time-dependent position of the inner edge is determined by (\ref{eq:rm}): \begin{equation} \label{eq:mdot_in} r_{\rm in} = \left(\frac{\eta \mu^2}{4\Omega_*\dot{m}_{\rm co}}\right)^{1/5}, \end{equation} where $\dot m_{\rm co}$ is the mass flux in a frame comoving with $r_{\rm in}$ as discussed above. When the inner edge is outside the corotation radius, the magnetosphere does not accrete: \begin{equation} \dot m_{\rm co}=0, \end{equation} while the surface density at $r_{\rm in}$ is determined by a magnetic torque, as discussed above. With the Keplerian value for $\Omega(r_{\rm in})$ and assuming a dipolar magnetic field, the results of Section \ref{sec:rin_outsiderc} can be re-written: \begin{equation} \label{eq:sigma0} (\nu\Sigma)^+ = \frac{\eta\mu^2}{3\pi(GM_*)^{1/2}}\frac{\Delta r}{r^{9/2}_{\rm in}}. \end{equation} To connect these two limiting cases, we assume that the effect of the interaction processes is equivalent to a smooth transition in the conditions. 
This is valid since the time-scales we are interested in are much longer than the orbital time-scale on which the conditions of the transition region between disc and magnetosphere vary. The assumption is thus that the effect of the fast processes in the transition region can be represented by averages. The mass flux on to the magnetosphere is therefore taken to vary smoothly from 0 for $r_{\rm in}$ well outside corotation to the value in (\ref{eq:mdot_in}) valid well inside: \begin{equation} \label{eq:mdot_co} \dot {m}_{\rm co}= y_m \dot m^+, \end{equation} where $\dot{m}^+ = \eta\mu^2/(4\Omega_*r_{\rm in}^5)$ is the magnetospheric accretion rate obtained by inverting (\ref{eq:mdot_in}). For the connecting function $y_m$ we take a simple function that varies from 0 to 1 across the transition: \begin{equation} y_m = \frac{1}{2}\left[1 - \tanh\left(\frac{r_{\rm in}-r_{\rm c}}{\Delta r_2}\right)\right], \end{equation} where $\Delta r_2$ is the nominal width of the disc-magnetosphere transition and a parameter of the problem. Similarly the surface density at the inner edge makes a smooth transition from its value in (\ref{eq:sigma0}) to 0: \begin{equation} \label{eq:sigma_in2} \Sigma_{\rm in}= y_\Sigma \Sigma^+, \end{equation} where the connecting function $y_\Sigma$ is: \begin{equation} y_\Sigma = \frac{1}{2}\left[1 + \tanh\left(\frac{r_{\rm in}-r_{\rm c}}{\Delta r}\right)\right]. \end{equation} All the uncertainties in the transition region are thus subsumed in the parameters $\Delta r$ and $\Delta r_2$. In Section \ref{sec:results} we study the effect of these uncertainties with a parameter survey. The effective widths of the transition of magnetospheric accretion rate and inner-edge surface density need not be the same, and we in fact find that the difference between $\Delta r$ and $\Delta r_2$ is important for the form of the resulting accretion cycles. \subsection{Physical constraints on $\Delta r$ and $\Delta r_2$} \label{sec:constraints} In this paper we treat $\Delta r$ and $\Delta r_2$ as free parameters.
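The two connecting functions above can be sketched as follows (our own minimal Python illustration; the default widths are representative values from the parameter survey of Section \ref{sec:results}):

```python
import math

def y_m(r_in, r_c=1.0, dr2=0.014):
    """Connecting function for the magnetospheric accretion rate:
    ~1 well inside corotation, ~0 well outside (width dr2)."""
    return 0.5 * (1.0 - math.tanh((r_in - r_c) / dr2))

def y_sigma(r_in, r_c=1.0, dr=0.05):
    """Connecting function for the inner-edge surface density:
    ~0 well inside corotation, ~1 well outside (width dr)."""
    return 0.5 * (1.0 + math.tanh((r_in - r_c) / dr))
```

Note that $y_m$ varies over the width $\Delta r_2$ while $y_\Sigma$ varies over $\Delta r$; the two transitions need not have the same width.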
However, a lower limit on both parameters can be set by considering the stability of the inner regions of the disc to the interchange instability. In the quiescent disc, the low-density magnetosphere must support the high-density disc against infall. This configuration will be unstable to the interchange instability (the magnetic analogue of the Rayleigh--Taylor instability), unless the surface density gradient in the interaction region is shallow enough to suppress it. This sets a limit on the minimum width of the interaction region, $\Delta r$, over which the surface density falls from its maximum (at $r_{\rm in}$) to close to zero in the magnetosphere. This instability also sets a limit on the minimum width of $\Delta r_2$, the transition length over which the disc moves from a non-accreting quiescent state to one in which there is accretion through the inner boundary. As $r_{\rm in}$ moves closer to $r_{\rm c}$, the width of the interaction region preventing accretion (i.e. where the field lines are adding angular momentum to the disc) decreases. When the width of the interaction region outside $r_{\rm c}$ becomes smaller than is stable against the interchange instability, accretion through the magnetosphere will begin. $\Delta r_2$ must therefore be greater than or equal to this value, that is, at this minimum distance from $r_{\rm c}$ accretion on to the star will take place. \cite{1995MNRAS.275.1223S} studied the stability of a disc interacting with a magnetic field to interchange instabilities, and derived the following linear stability criterion: \begin{equation} \frac{B_rB_z}{2\pi\Sigma}\frac{\rm{d}}{\rm{d}r}\ln\left|\frac{\Sigma}{B_z}\right| > 2\left(r\frac{\rm{d}\Omega}{\rm{d}r}\right)^2. \end{equation} Assuming that $B_r \sim B_\phi$, in our formulation this inequality becomes: \begin{equation} \frac{3\alpha}{1+\tanh\left(\frac{\Delta r_2}{\Delta r}\right)}\left(\frac{H}{r}\right)^2 > 2\left(1-\left(\frac{r_{\rm in}}{r_{\rm c}}\right)^{3/2}\right)^2.
\end{equation} For $\alpha = 0.1$ and assuming $H/r$ is in the range 0.07--0.1, the range $\Delta r/r = [0.05,0.1]$ will satisfy this inequality for $\Delta r_2/r = [0.01,0.02]$. In this inequality larger values of $\Delta r$ correspond to smaller possible values for $\Delta r_2$, since larger $\Delta r$ correspond to a smaller maximum $\Sigma(r_{\rm in})$ and hence shallower gradients. This instability has recently been studied using 3D numerical simulations by \cite{2008MNRAS.386..673K}, who find approximately the same criterion for stability as \cite{1995MNRAS.275.1223S}. The shaded regions of Figs. \ref{fig:parmap1} and \ref{fig:parmap2} show the values of $\Delta r_2$ and $\Delta r$ that are unstable to the instability studied in this paper. The simple analysis of this section suggests that at least part of the shaded sections in Figs. \ref{fig:parmap1} and \ref{fig:parmap2} will be stable against the interchange instability, so that the larger magnetosphere-disc instability could occur. \section{Numerical Implementation} \label{sec:computation} \subsection{Disc equation and viscosity prescription} \label{sec:disc_eq} To study the surface density evolution of an accretion disc interacting with a magnetic field as outlined in the previous section, we use a time-dependent numerical simulation of a diffusive accretion disc. Our assumption that the interaction region is small ($\Delta r/r < 1$) means that rather than calculating the disc behaviour in the interaction region explicitly we can instead use the physics of the interaction region to derive boundary conditions for the inner edge of the non-interacting disc. We assume that the accretion disc (outside the interaction region) can be treated in the thin-disc limit, so that the evolution equation for the surface density $\Sigma$ is given by (\ref{eq:sigma_ev}).
We assume that the viscosity in the disc follows a power-law dependence, so that: \begin{equation} \label{eq:visc} \nu = \nu_0 r^\gamma, \end{equation} where $\nu_0 = \alpha (GM_*)^{1/2}(H/R)^2$ and $\gamma = 0.5$ following the standard $\alpha$-viscosity prescription \citep{1973A&A....24..337S}. To evolve (\ref{eq:sigma_ev}) in time, we require boundary conditions at $r_{\rm in}$~and $r_{\rm out}$, plus an additional equation to describe the movement of the inner edge of the disc, $\dot{r}_{\rm in}$. We set the outer boundary by defining the mass accretion rate through the outer edge of the disc ($\dot{m}$), which we vary as a parameter of the problem. This defines the time-averaged mass accretion rate in the disc. The surface density at the inner edge of the disc is given by (\ref{eq:sigma_in2}): \begin{equation} \label{eq:sigma_in_final} \Sigma(r_{\rm in}) = \frac{\eta\mu^2}{6\pi(GM_*)^{1/2}\nu_0}\frac{\Delta r}{r^{9/2+\gamma}_{\rm in}}\left[\tanh\left(\frac{r_{\rm in}-r_{\rm c}}{\Delta r}\right) + 1\right]. \end{equation} We calculate the displacement of the inner boundary using the results of Sections \ref{sec:rin<rcorot} and \ref{sec:rin>rcorot}, by considering the difference between the total mass flux at $r_{\rm in}$~in a fixed and a comoving frame of reference: \begin{equation} \dot{m}_{\rm co} = \dot{m} + 2\pi r \Sigma \dot{r}_{\rm in}, \end{equation} where $\dot{m}_{\rm co}$ is given by (\ref{eq:mdot_co}). This expression can be re-written: \begin{eqnarray} \label{eq:mdot,final} 6\pi r^{1/2}_{\rm in}\frac{\partial}{\partial r}\left(r^{1/2}\nu\Sigma\right)\big|_{r_{\rm in}} = -2\pi r_{\rm in}\Sigma(r_{\rm in})\dot{r}_{\rm in} + \\ \nonumber \left[1-\tanh\left(\frac{r_{\rm in}-r_{\rm c}}{\Delta r_2}\right)\right] \frac{\eta\mu^2}{8\Omega_*r_{\rm in}^5}. \end{eqnarray} Taken together, (\ref{eq:sigma_ev}), (\ref{eq:visc}), (\ref{eq:sigma_in_final}), (\ref{eq:mdot,final}) and an outer boundary condition describe the time-dependent evolution of an accretion disc.
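As a consistency check on the pieces assembled above, the viscous diffusion operator can be verified against a known steady state. The sketch below is our own; we assume the standard thin-disc form of (\ref{eq:sigma_ev}) (defined earlier in the paper), and use dimensionless units with $\dot{m} = 1$:

```python
import numpy as np

def viscous_rhs(r, nu_sigma):
    """d(Sigma)/dt for a thin Keplerian disc, assuming the standard form of
    the diffusion equation (eq. sigma_ev):
    dSigma/dt = (3/r) d/dr [ r^(1/2) d/dr (nu Sigma r^(1/2)) ]."""
    g = nu_sigma * np.sqrt(r)
    flux = np.sqrt(r) * np.gradient(g, r)   # proportional to mdot/(6 pi)
    return (3.0 / r) * np.gradient(flux, r)

# The steady accreting profile nu*Sigma = (mdot/3pi)(1 - sqrt(r_in/r))
# should be (numerically) stationary under this operator:
r = np.linspace(1.0, 20.0, 4000)          # r_in = 1 in these units
nu_sigma = (1.0 / (3.0 * np.pi)) * (1.0 - np.sqrt(1.0 / r))
residual = viscous_rhs(r, nu_sigma)       # ~0 away from the boundaries
```

Away from the grid boundaries the residual is at the level of the discretization error, confirming that the steady profile is stationary under the operator.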
\subsection{Steady-State solution} \label{sec:steadystate} From the results of the previous sections, we can calculate the steady-state solutions for a given $\dot{m}$, the average mass accretion rate. For certain values of $\dot{m}$, $\Delta r$ and $\Delta r_2$, this equilibrium is unstable, leading to oscillations in $r_{\rm in}$~and corresponding accretion bursts. In a steady state, the accretion rate is constant throughout the disc, i.e. $\dot{m}_{\rm co} = \dot{m}$: \begin{equation} \label{eq:rin} \dot{m} =\frac{1}{2}\left[1-\tanh\left(\frac{r_{\rm in}-r_{\rm c}}{\Delta r_2}\right)\right]\frac{\eta\mu^2}{4 \Omega_* r^5_{\rm in}}. \end{equation} Solving (\ref{eq:rin}) implicitly for $r_{\rm in}$ yields the inner radius of the disc in a steady-state solution. The general steady-state surface density profile was calculated in Section \ref{sec:ang_mom}, and is given by (\ref{eq:steady_sigma}) with an additional term since $\dot{m}\neq 0$ in the disc. The function $f(r_{\rm in})$ is given by (\ref{eq:sigma0}). The steady-state surface density profile will thus be: \begin{eqnarray} \label{eq:sig} \nu\Sigma = \frac{\dot{m}}{3\pi}\left[1-\left(\frac{r_{\rm in}}{r}\right)^{1/2}\right] \\ \nonumber + \frac{\eta \mu^2 \Delta r}{6\pi r^4_{\rm in}(GM_*r)^{1/2}}\left[1+\tanh\left(\frac{r_{\rm in} - r_{\rm c}}{\Delta r}\right)\right]. \end{eqnarray} The numerical simulations of the evolution of a viscous accretion disc described in the following sections show that the equilibrium solution given by (\ref{eq:rin}) and (\ref{eq:sig}) can become unstable to episodic bursts of accretion by the process outlined in Section \ref{sec:model}. \subsection{Numerical setup} To follow the time-dependent evolution of a viscous accretion disc interacting with a magnetic field we use a 1D numerical simulation, first making a series of mathematical transformations.
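The implicit equation (\ref{eq:rin}) for the steady-state inner radius is easily solved numerically; a minimal sketch (our own), in units where $r_{\rm c} = 1$ and accretion rates are scaled to $\dot{m}_{\rm c} \equiv \eta\mu^2/(4\Omega_*r_{\rm c}^5)$:

```python
import math

def steady_r_in(mdot_ratio, dr2=0.014, lo=0.3, hi=3.0, tol=1e-10):
    """Solve eq. (rin) for the steady-state inner radius by bisection:
    mdot/mdot_c = 0.5 * (1 - tanh((x - 1)/dr2)) * x**(-5),
    with x = r_in/r_c."""
    def f(x):
        return 0.5 * (1.0 - math.tanh((x - 1.0) / dr2)) * x**-5 - mdot_ratio
    assert f(lo) > 0.0 > f(hi), "root not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since the left-hand side decreases monotonically with $r_{\rm in}$, bisection is guaranteed to converge; for accretion rates well below $\dot{m}_{\rm c}$ the steady-state inner edge sits just outside corotation.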
The power-law prescription for the viscosity, (\ref{eq:visc}), allows us to define a new function, $u$, for convenience: \begin{equation} u \equiv \Sigma r^{1/2+\gamma}. \end{equation} To make our results more readily applicable to different magnetic stars (e.g. neutron stars, magnetic white dwarfs and protostars), we adopt scale-free coordinates. The instability studied in this paper varies on viscous time-scales of the inner disc, which are in general much shorter than the time-scale over which the transfer of angular momentum between the star and the disc can substantially change the star's rotation period. A constant rotation period implies a constant corotation radius, making it a natural choice for scaling our variables. We thus scale the radial coordinate to the corotation radius, and the time in terms of the viscous time-scale ($r^2/\nu$) at the corotation radius. Further, since we are most interested in the behaviour of the inner regions of the disc, we adopt a coordinate system comoving with $r_{\rm in}$: \begin{equation} r' \equiv \frac{r-r_{\rm in}}{r_{\rm c}}; t' \equiv t \frac{\nu_0} {r_{\rm c}^{2-\gamma}}. \end{equation} Dropping the prime notation, the surface density evolution equation in the new coordinate system then becomes: \begin{equation} \frac{\partial u}{\partial t} = 3 r^{\gamma - 1/2}\frac{\partial}{\partial r}\left[r^{1/2}\frac{\partial u}{\partial r}\right] + \dot{r}_{\rm in}\frac{\partial u}{\partial r}, \end{equation} with the boundary condition at $r_{\rm in}$~given by: \begin{equation} u(r_{\rm in}) = \frac {\eta\mu^2}{6\pi(GM_*)^{1/2}\nu_0 r_{\rm c}^4}\frac{\Delta r}{r_{\rm in}}r_{\rm in}^{-3}\left[\tanh\left(\frac{r_{\rm in}-1}{\Delta r}\right) + 1\right].
\end{equation} The evolution of the inner edge of the disc is given by: \begin{eqnarray} \label{eq:mdot} \dot{r}_{\rm in} = \left[1-\tanh\left(\frac{r_{\rm in}-1}{\Delta r_2}\right)\right]\frac{\eta\mu^2}{16\pi\Omega_*\nu_0^2r_{\rm c}^{\gamma-3/2}}\frac{r^{-11/2+\gamma}_{\rm in}}{u(r_{\rm in})}\\ \nonumber - \frac{3r^{\gamma}_{\rm in}}{u(r_{\rm in})}\frac{\partial u}{\partial r}\big|_{r_{\rm in}}. \end{eqnarray} Finally, to increase the resolution at the inner edge of the disc we make a further coordinate transformation to an exponentially scaled grid: \begin{equation} \label{eq:exp} x \equiv \frac{1}{a}\left[\ln\left(\frac{r-r_{\rm in}}{r_{out}-r_{\rm in}}\right) + 1\right], \end{equation} where $a$ is a scaling factor that sets the clustering of grid points around $r_{\rm in}$. We calculate the second-order discretization of the spatial derivatives on an equally-spaced grid in $x$. To evolve the resulting system of equations in time requires an algorithm suitable for stiff equations. This is necessary to follow the evolution of the inner boundary, (\ref{eq:mdot,final}). When $r_{\rm in} \gg r_{\rm c}$, (\ref{eq:mdot,final}) reduces to a differential equation that is first order in time. However, for $r_{\rm in} \ll r_{\rm c}$, $\Sigma(r_{\rm in})$ becomes very small, and the equation essentially becomes time-independent. We have formulated the problem so that $\Sigma(r_{\rm in})$ stays small but non-zero for all values of $r_{\rm in}$ (so that the solution is continuous at all values of $r_{\rm in}$), but its small value inside $r_{\rm c}$ means that the differential equation is stiff (since the evolution equation for $r_{\rm in}$ in (\ref{eq:mdot,final}) contains terms of very different sizes). To perform the time evolution, we therefore use the semi-implicit extrapolation method \citep[p.~724]{1992nrca.book.....P}, which is second-order accurate in time and suitable for stiff equations.
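The stretched grid of (\ref{eq:exp}) can be generated by inverting the transformation; a short sketch (our own; the inner cut-off $\epsilon$ is our own regularization choice, since $x \to -\infty$ as $r \to r_{\rm in}$):

```python
import numpy as np

def stretched_grid(r_in, r_out, n, a=1.0, eps=1e-4):
    """Grid equally spaced in the stretched coordinate of eq. (exp),
    x = (1/a) [ln((r - r_in)/(r_out - r_in)) + 1], inverted for r.

    The innermost point is offset from r_in by a fraction eps of the
    domain, since x diverges at r = r_in.
    """
    x0 = (np.log(eps) + 1.0) / a   # innermost grid point
    x1 = 1.0 / a                   # x(r_out)
    x = np.linspace(x0, x1, n)
    return r_in + (r_out - r_in) * np.exp(a * x - 1.0)
```

The spacing between neighbouring points grows exponentially with $x$, so the cells adjacent to $r_{\rm in}$ are smaller than those at $r_{\rm out}$ by roughly the factor $\epsilon$.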
Since the grid comoves with the inner radius, the outer boundary of our disc also moves. We set the accretion rate at the outer boundary to be fixed in the moving coordinate system, so that it changes slightly as the outer boundary moves. The effect is negligible as long as the disc is large enough that the outer parts of the disc are unaffected by the changing inner boundary condition, which we confirm by varying the position of the outer boundary of the disc. The solutions are sensitive to the changing conditions at the inner boundary of the disc. To confirm that our results are robust for the grid we have chosen, we varied the numerical parameters of the problem: the grid resolution, the exponential stretch parameter $a$ at the inner boundary (see (\ref{eq:exp})) and the fractional accuracy of the solution computed by the semi-implicit extrapolation method (which sets the maximum possible timestep). \section{Results} \label{sec:results} Our primary goal in this paper is to study the conditions under which the disc is unstable to episodic outbursts. To do this we follow the evolution of an accretion disc in which the mean mass accretion rate, $\dot{m}$, is a parameter of the problem, set as the accretion rate through the disc's outer boundary. The other system parameters of the problem are the stellar mass, $M_*$, spin frequency, $\Omega_*$, and magnetic moment, $\mu$. The interaction between the magnetic field and the disc introduces three additional parameters: $\eta \equiv B_\phi/B_{\rm z}$, the fractional width of the interaction region $\Delta r/r$, and the length scale $\Delta r_2/r$ over which the inner edge of the disc moves from a non-accreting to an accreting state. Finally, our description of the viscosity, (\ref{eq:visc}), introduces three additional parameters: $\alpha$, the aspect ratio of the disc, $H/R$ (assumed constant), and $\gamma$, the radial power-law dependence of the viscosity.
The problem has two scale invariances, which reduces the number of free parameters. As seen in (\ref{eq:visc}), $\alpha$ and $H/R$ are degenerate. Additionally, the system parameters $\mu$, $M_*$, $\Omega_*$ and $\dot{m}$ can be re-written as the ratio $\dot{m}/\dot{m}_{\rm c}$, where $\dot{m}_{\rm c}$ is the accretion rate in (\ref{eq:rm}) that puts the magnetospheric radius at $r_{\rm c}$. This ratio is equivalent to the `fastness parameter', $\Omega_{\rm in}/\Omega_*$ (where $\Omega_{\rm in}$ is the Keplerian frequency at $r_{\rm in}$) which is sometimes used to describe disc-magnetosphere interactions. For reference, our dimensionless parameter $\dot{m}/\dot{m}_{\rm c}$~ can be expressed in terms of physical parameters appropriate for protostellar systems: \begin{eqnarray} \label{eq:refpar} \frac{\dot{m}}{\dot{m}_{\rm c}} = \left(\frac{\dot{m}}{2.3\times10^{-7}M_\odot \rm{yr}^{-1}}\right)\left(\frac{M_*}{0.6M_\odot}\right)^{5/3}\\ \nonumber \left(\frac{B_s}{2000\rm{G}}\right)^{-2}\left(\frac{R_*}{2.1R_\odot}\right)^{-6}\left(\frac{P_*}{1~\rm{day}}\right)^{7/3}. \end{eqnarray} We assume that the time-averaged $B_\phi$ component will be constant with radius in the coupled region, and set the parameter $\eta = 0.1$. For the viscosity, $\nu = \alpha (GM_*)^{1/2}(H/R)^2 r^{\gamma}$, we take $\alpha = 0.1$ and $H/R = 0.1$ to calculate the magnitude of $\nu_0$, and assume $\gamma = 0.5$ everywhere in the disc. Varying $\alpha$, $H/R$ and $\gamma$ will change the time-scale over which outbursts occur, but will not change the general character of our outburst solutions. This leaves three scale-free parameters in the problem: $\dot{m}/\dot{m}_{\rm c}$, $\Delta r/r$ and $\Delta r_2/r$. We vary each of these parameters to explore the range of unstable solutions. For small values of $\Delta r/r$ ($\sim 0.1$) and $\Delta r_2/r$ ($\sim 0.01$), and $\dot{m}/\dot{m}_{\rm c}$ $<$ 1, the position of the inner boundary quickly becomes unstable and begins oscillating. 
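The normalization of (\ref{eq:refpar}) can be checked directly by evaluating $\dot{m}_{\rm c}$, obtained by setting $r_{\rm in} = r_{\rm c} = (GM_*/\Omega_*^2)^{1/3}$ in (\ref{eq:rm}). The sketch below is our own (cgs units, with the default parameters matching the reference values of (\ref{eq:refpar})):

```python
import math

# cgs constants
G = 6.674e-8        # gravitational constant
MSUN = 1.989e33     # solar mass, g
RSUN = 6.96e10      # solar radius, cm
YEAR = 3.156e7      # year, s

def mdot_c(eta=0.1, M=0.6 * MSUN, B_s=2000.0, R_star=2.1 * RSUN, P=86400.0):
    """Critical accretion rate that places r_in of eq. (rm) at corotation:
    mdot_c = eta mu^2 Omega_*^(7/3) / (4 (G M)^(5/3)), in g/s."""
    mu = B_s * R_star**3
    omega = 2.0 * math.pi / P
    return eta * mu**2 * omega ** (7.0 / 3.0) / (4.0 * (G * M) ** (5.0 / 3.0))
```

For the reference parameters this evaluates to $\approx 2.3\times10^{-7}\,M_\odot\,{\rm yr^{-1}}$, the normalization quoted in (\ref{eq:refpar}).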
Since the position of $r_{\rm in}$ determines the mass accretion rate on to the star, (\ref{eq:mdot_co}), the change in $r_{\rm in}$ leads to an accretion outburst. We use the steady-state solution (given by (\ref{eq:rin}) and (\ref{eq:sig})) as an initial condition for all our simulations. Fig. \ref{fig:margstab} shows the growth of the instability for $\dot{m}/\dot{m}_{\rm c}$ = 1, $\Delta r/r = 0.05$ and $\Delta r_2/r = 0.014$. The solid curve shows the evolution in $r_{\rm in}$, scaled to the corotation radius. The horizontal dashed line shows the steady-state value for $r_{\rm in}$. The right-hand axis plots the accretion rate on to the star as a function of time (the dashed curve). The accretion rate is scaled to units of the steady-state accretion rate, $\dot{m}$. The instability quickly grows out of the equilibrium solution, and saturates into steady oscillations. \begin{figure} \includegraphics[width=\hsize]{figms.eps} \caption{Growth of instability from steady-state solution, (\ref{eq:rin}) and (\ref{eq:sig}), for $\dot{m}/\dot{m}_{\rm c}$ = 1, $\Delta r/r = 0.05$, and $\Delta r_2/r = 0.014$. The inner radius (solid curve) evolves around its steady-state value (dashed horizontal line), causing the net accretion rate on to the star to change as well (dashed curve). \label{fig:margstab}} \end{figure} We observe a wide range of oscillatory solutions that span three orders of magnitude in frequency, depending on the values of $\dot{m}/\dot{m}_{\rm c}$, $\Delta r/r$ and $\Delta r_2/r$. The shape of the accretion burst itself also changes dramatically depending on the system parameters. At large $\dot{m}/\dot{m}_{\rm c}$~the bursts are quasi-sinusoidal oscillations, as in Fig. \ref{fig:margstab} and the bottom panel of Fig. \ref{fig:moderate}. 
As the mean accretion rate is decreased, the bursts take the shape of a relaxation oscillator, in which the bursts are characterized by an initial sharp spike of accretion which then relaxes to a quasi-steady accretion rate for the duration of the burst, before abruptly turning off as the reservoir is emptied and $r_{\rm in}$ quickly moves well outside $r_{\rm c}$. During the outburst phase, higher frequency sub-oscillations are also sometimes seen with varying intensity. \begin{figure*} \includegraphics{fig_678.eps} \caption{Outburst profiles of $r_{\rm in}$ and $\dot{m}$ for moderate values of $\dot{m}/\dot{m}_{\rm c}$. From bottom to top, $\dot{m}/\dot{m}_{\rm c}$ = [0.095, 0.052, 0.031]. For adopted protostellar parameters this corresponds to $\dot{m} = [2.2,1.2,0.73]\times 10^{-8} M_\odot \rm{yr}^{-1}$. The lines are the same as in Fig. \ref{fig:margstab}.\label{fig:moderate} } \end{figure*} \begin{figure*} \includegraphics{fig_lowm.eps} \caption{Outburst profiles of $r_{\rm in}$ and $\dot{m}$ for small values of $\dot{m}/\dot{m}_{\rm c}$. From bottom to top, $\dot{m}/\dot{m}_{\rm c}$ = [0.019,0.0084,0.003,0.0022]. For adopted protostellar parameters this corresponds to $\dot{m} = [4.5,1.9,0.95,0.38]\times 10^{-9}M_{\odot}\rm{yr}^{-1}$. The lines are the same as in Fig. \ref{fig:margstab}. \label{fig:low} } \end{figure*} Figs. \ref{fig:moderate} and \ref{fig:low} show the evolution of $r_{\rm in}$ and the accretion rate as we vary $\dot{m}/\dot{m}_{\rm c}$ while keeping the other parameters fixed. From bottom to top, the panels of Fig. \ref{fig:moderate} show the instability for $\dot{m}/\dot{m}_{\rm c}$ = [0.095, 0.052, 0.031] ($\dot{m} = [2.2,1.2,0.73]\times 10^{-8} M_\odot \rm{yr}^{-1}$ for the parameters in (\ref{eq:refpar})). At the highest mean accretion rate, $r_{\rm in}$ (the solid curve) oscillates with a high frequency around its steady-state value (dashed line), with corresponding bursts of accretion on to the star (dashed curve).
As $\dot{m}/\dot{m}_{\rm c}$~is decreased, the accretion profile changes to much lower frequency outbursts, with long periods of quiescence as $r_{\rm in}$ moves away from $r_{\rm c}$ and accretion ceases completely. The high-frequency oscillation that dominates for $\dot{m}/\dot{m}_{\rm c}$ = 0.095 is superimposed over the low-frequency accretion bursts for lower $\dot{m}/\dot{m}_{\rm c}$. Fig. \ref{fig:low} shows the continuation of Fig. \ref{fig:moderate} for $\dot{m}/\dot{m}_{\rm c}$~= [0.019,0.0084,0.003,0.0022] ($\dot{m} = [4.5,1.9,0.95,0.38]\times 10^{-9}M_{\odot}\rm{yr}^{-1}$). The characteristic accretion burst profile essentially stays the same as $\dot{m}/\dot{m}_{\rm c}$~is decreased, with sharp spikes at the beginning and end of an accretion outburst. The overall amplitude of the outburst decreases only slightly with decreasing mean accretion rate. The initial spike decreases by about 20\% as the mean accretion rate drops from $\dot{m}/\dot{m}_{\rm c}$ = 0.052 to $\dot{m}/\dot{m}_{\rm c}$ = 0.0022. The more significant effect is that the length of time between outbursts increases with decreasing $\dot{m}/\dot{m}_{\rm c}$, since at low average accretion rates it takes longer to build enough mass to drive another outburst. The overall shape of the outburst is otherwise relatively insensitive to changing $\dot{m}/\dot{m}_{\rm c}$, although the outbursts become somewhat shorter as $\dot{m}/\dot{m}_{\rm c}$~decreases. At the lowest accretion rate ($3.8\times 10^{-10} M_\odot\rm{yr}^{-1}$; the top panel of Fig. \ref{fig:low}), the burst consists of only one sharp spike. As we have formulated the problem, the instability will persist down to arbitrarily low accretion rates. Changing the other parameters, $\Delta r/r$ and $\Delta r_2/r$, has a much stronger effect on the shape of the outburst than changing the mean accretion rate. Fig. \ref{fig:deltar} shows the outburst profiles for different values of $\Delta r/r$, setting $\dot{m}/\dot{m}_{\rm c}$ = 0.04 and $\Delta r_2/r = 0.014$.
From bottom to top, $\Delta r/r$ = [0.03,0.05,0.07,0.09], which spans the unstable region of $\Delta r/r$ for the adopted $\dot{m}/\dot{m}_{\rm c}$. For small $\Delta r/r$ the instability manifests itself as repeating short bursts of accretion, with comparatively long quiescent phases. As $\Delta r/r$ increases, the frequency of the outburst decreases, and the duty cycle increases dramatically. For very large $\Delta r/r$ the outburst lasts about 200 times as long as for the minimum $\Delta r/r$, but at a lower accretion rate after the initial spike. The burst profile of the instability is thus sensitive to small changes in $\Delta r/r$, but the range in $\Delta r/r$ over which the instability exists is quite small. We find a similar range of outburst profiles by changing $\Delta r_2/r$ and keeping $\Delta r/r$ fixed, except with the opposite trend: for large $\Delta r_2/r$ the instability manifests as a series of short spiky bursts, becoming longer as $\Delta r_2/r$ decreases. \begin{figure*} \includegraphics{fig_deltar.eps} \caption{Outburst profiles of $r_{\rm in}$ and $\dot{m}$ for changing $\Delta r/r$, with $\Delta r_2/r = 0.014$ and $\dot{m}/\dot{m}_{\rm c}$ = 0.04. From bottom to top, $\Delta r/r = [0.03,0.05,0.07,0.09]$. The lines are the same as in Fig. \ref{fig:margstab}.\label{fig:deltar}} \end{figure*} We next considered the parameter space in $\dot{m}/\dot{m}_{\rm c}$, $\Delta r/r$ and $\Delta r_2/r$ over which the instability occurs. We have briefly explored the effect of varying both $\Delta r/r$ and $\Delta r_2/r$ over a small range in $\dot{m}/\dot{m}_{\rm c}$~and found that, although the outburst profile changes somewhat, the ranges over which $\Delta r/r$ and $\Delta r_2/r$ produce unstable solutions are independent.
We therefore assume that $\Delta r/r$ and $\Delta r_2/r$ vary independently of each other for all $\dot{m}/\dot{m}_{\rm c}$, and consider the range of the instability over the [$\dot{m}/\dot{m}_{\rm c}$, $\Delta r/r$] and [$\dot{m}/\dot{m}_{\rm c}$, $\Delta r_2/r$] spaces separately. \begin{figure} \includegraphics[width=\hsize]{figRmRc1.eps} \caption{Parameter map of instability as a function of $\dot{m}/\dot{m}_{\rm c}$ and width of interaction region $\Delta r/r$, with constant $\Delta r_2/r = 0.014$. The shaded regions denote unstable parameters. \label{fig:parmap1}} \end{figure} Fig. \ref{fig:parmap1} shows the range of unstable solutions (shown as shaded regions) changing $\dot{m}/\dot{m}_{\rm c}$~and $\Delta r/r$, but keeping $\Delta r_2/r$ fixed at 0.014. Although there is a small unstable branch around $\dot{m}/\dot{m}_{\rm c}$~= 1, in general as $\Delta r/r$ increases, a lower $\dot{m}/\dot{m}_{\rm c}$~is required before the instability sets in. \begin{figure} \includegraphics[width=\hsize]{figRmRc2.eps} \caption{Parameter map of instability as a function of $\dot{m}/\dot{m}_{\rm c}$~and accretion transition length $\Delta r_2/r$, with constant $\Delta r/r = 0.05$. The shaded regions denote unstable parameters. \label{fig:parmap2}} \end{figure} Fig. \ref{fig:parmap2} shows the unstable solutions changing $\dot{m}/\dot{m}_{\rm c}$~and $\Delta r_2/r$ but keeping $\Delta r/r$ fixed at 0.05. The opposite trend from Fig. \ref{fig:parmap1} is seen, with a larger range of unstable accretion rates. There is again a range of unstable solutions around $\dot{m}/\dot{m}_{\rm c}$ = 1, although in this case the unstable region extends over the entire $\Delta r_2/r$ parameter space.
The instability likely extends to smaller $\Delta r_2/r$, but we do not explore the region below $\Delta r_2/r = 0.005$ on physical grounds, since such a small transition length will likely be unstable to other instabilities, such as the interchange instability (see Section \ref{sec:constraints}). As with changing $\Delta r/r$, the outburst profile changes substantially over the small range of $\Delta r_2/r$ in which the instability occurs. \section{Discussion} \label{sec:discussion} In this paper we have studied a disc instability first explored by \cite{1977PAZh....3..262S} and ST93, with a more physically motivated and general formulation of the problem than was used in ST93. In particular, we have improved the description of the disc-field interaction when the disc is truncated outside corotation by deriving conditions for a `quiescent' state, in which the angular momentum transferred from the star into the disc halts accretion altogether. In agreement with ST93, we observe a wide range of oscillatory behaviour, and the frequency range of individual outbursts spans three orders of magnitude. The period of the cycle seen in Figs. \ref{fig:margstab}--\ref{fig:low} varies from 0.02 to 20$t_{\rm c}$, where $t_{\rm c}$ is the nominal viscous time-scale at the corotation radius, $t_{\rm c}=r_{\rm c}^2/\nu(r_{\rm c})$. Though cycle times scale with $t_{\rm c}$, this is evidently not the only factor. As discussed in ST93, the viscous time-scale relevant for the cycle period depends on the size of the disc region involved. This itself depends on the cycle period, hence the period must be determined by additional factors. One of these is the mean accretion rate, but the physical conditions in the magnetosphere-disc interaction region have an equally important effect. From Figs. \ref{fig:parmap1} and \ref{fig:parmap2} it appears that there are two different kinds of instability.
One of these operates in a narrow range of accretion rates, around the value where steady accretion would put the inner edge at corotation. The instability in this case is of the type shown in Fig. \ref{fig:margstab}: an approximately sinusoidal modulation, characteristic of a weak form of instability. The inner edge of the disc oscillates about a mean value, but stays inside the width of the transition region. The longer cycles in the upper parts of Figs. \ref{fig:parmap1} and \ref{fig:parmap2} are a strongly non-linear, relaxation type of oscillation. The inner edge is somewhat outside the transition region for much of the cycle with no accretion taking place (the `quiescent' phase), and dips in for a brief episode of accretion before moving back out again. This is the kind of cycle envisaged by \cite{1977PAZh....3..262S}. During the quiescent phase, the disc (\cite{1977PAZh....3..262S} call it a `dead disc') extracts angular momentum from the star by the magnetic interaction at its inner edge. These two forms of instability are merged into a continuum in ST93, as a result of the different (and less realistic) assumptions made there about the interaction between disc and magnetosphere outside corotation. This difference also affects the dependence on the mean accretion rate. Whereas in ST93 cyclic behaviour was found only in a limited range of accretion rates, our results show that cycles can occur in principle at arbitrarily low accretion rates, with steadily increasing cycle period and decreasing duty cycle of the accretion phase. Figs. \ref{fig:moderate} and \ref{fig:low} show that the radius of the inner edge of the disc does not move by more than 10\% around corotation, even at the lowest mean accretion rates. For example, in the case $\dot m/\dot m_{\rm c}=9.5 \times 10^{-2}$ of Fig. \ref{fig:moderate}, the standard `ram pressure' estimate would yield a much larger magnetosphere radius, about $r_{\rm m}=3.6\,r_{\rm c}$.
The difference arises because in our cyclic accretion states the conditions in the inner disc are very different from those assumed in conventional estimates of $r_{\rm m}$; the density in the inner disc, for example, is much higher. At $r_{\rm in}\le 1.1 r_{\rm c}$, the velocity difference between the magnetosphere and the disc is only 5\%, much less than the 40\% that mass would need in order to escape from the system. `Propellering' of mass out of the system is thus unlikely to be effective. This does not exclude the possibility that some mass loss (powered by a magnetic wind from the disc or the interaction region around the inner edge of the disc) may also take place, but our results show that it is not a necessary consequence for a disc in what is traditionally called the `propeller' regime. At sufficiently low accretion rates one would expect, however, that propellering would also be a possible outcome: if the rotation rate of the star is high enough, matter could be ejected before it has the time to form a dense disc. The existence of a cyclic form of accretion at low accretion rates thus suggests that two different accretion states are possible, and that there would be a second parameter determining which of the two is realised. This might simply be the history of the system. If a disc is initially absent and accretion is started, the density will initially be low enough that ejection by propellering can prevent accretion altogether. The cataclysmic variable AE Aqr (e.g. \citealt{1997MNRAS.286..436W}) is likely to be such a case. On the other hand, if a disc is initially in a high accretion state such that the inner edge is inside corotation, a subsequent decline to low accretion rates could lead to the cyclic accretion described here. Such a situation could be at work in the T Tauri star EX Lupi (where the initial high accretion phase has ended).
It could also be appropriate for the X-ray millisecond pulsar SAX J1808.4-3658, which has shown a 1-Hz QPO in the decline phase of several outbursts \citep{2009ApJ...707.1296P}. The pile-up of mass at the magnetosphere will maintain the disc in this state, and prevent propellering even when the mean accretion rate drops to very low values. The instability studied in this work has not yet been observed in numerical simulations, partly because most numerical simulations do not run for long enough to observe it, but mainly because most simulations have focused on either accreting or strong propeller cases. However, in virtually all numerical simulations outflows and variability in the disc are observed, with an intensity that varies between different simulations. Gas pile-up at the inner edge of the disc is also observed, with the amount of pile-up tied to the effective diffusivity of magnetic field at the inner edge of the disc (e.g. \citealt{2004ApJ...616L.151R}). The process of closing and opening field lines provides a source of mass to launch both a weakly-collimated outflow (the disc wind) and a well-collimated jet (e.g. \citealt{1996ApJ...468L..37H, 1997ApJ...489..199G, 2009arXiv0907.3394R}). The whole cycle takes place on time-scales that can vary between the dynamical and viscous time-scales at the inner edge of the disc, but are generally of higher frequency than the disc instability studied in this paper. The inner edge of the disc also oscillates significantly (although it remains on average outside corotation), from between a few stellar radii \citep{2009arXiv0907.3394R} up to 30 stellar radii \citep{1997ApJ...489..199G}. Even if such variability is present, the instability studied in this paper can still occur provided the outflows/accretion bursts generated by field lines opening are not strong enough to fully empty the reservoir of matter accumulating just outside $r_{\rm c}$.
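The kinematic side of the propeller argument in this discussion can be made explicit with a back-of-envelope sketch. Assuming a magnetosphere in strict corotation with the star, and working in scaled units where $r_{\rm c}=1$ and the Keplerian speed at corotation is 1 (the function name and scalings below are ours, purely illustrative):

```python
import math

# Corotating material at radius x = r / r_c moves at v_phi = Omega_* r,
# while the local Keplerian speed is v_K = sqrt(GM/r); in these scaled
# units their ratio is x**1.5. The escape speed is sqrt(2) * v_K.

def corotation_over_keplerian(x):
    """Ratio of the corotation speed to the local Keplerian speed."""
    return x**1.5

# Radius at which strictly corotating material first reaches escape speed:
x_esc = 2.0**(1.0 / 3.0)          # about 1.26 r_c
# Shortfall of the Keplerian speed relative to escape at any radius:
margin = math.sqrt(2.0) - 1.0     # about 0.41, i.e. the ~40% quoted above
```

So even material dragged into strict corotation at the inner edge falls well short of the escape speed until $r_{\rm in}$ exceeds roughly $1.26\,r_{\rm c}$, consistent with propellering being ineffective when the inner edge hovers near corotation.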
\section{Conclusions} \label{sec:conclusion} We have studied the accretion of a thin viscous disc on to the magnetosphere of a magnetic star, under the influence of the magnetic torque the star exerts on the disc. We focused in particular on cases with low accretion rates. For high accretion rates such that the inner edge $r_{\rm in}$ of the disc is inside the corotation radius, standard steady thin viscous disc solutions are recovered. However, when the inner edge is near corotation we find that the accretion becomes time-dependent, and takes the form of cycles consisting of alternating accreting and non-accreting (`quiescent') states. The period of this cycle varies from a small fraction of the characteristic viscous time-scale in the inner disc, $r_{\rm in}^2/\nu$, to a large multiple of it, depending on the mean accretion rate as well as on the precise conditions assumed at the magnetosphere. These cyclic accretion solutions continue to exist indefinitely with decreasing accretion rate. The cycle period increases, while the duty cycle of the accreting phase decreases with decreasing accretion rate. In the quiescent phase after a burst of accretion, the inner edge of the disc moves outward, and mass starts piling up in the inner regions of the disc. In response, the inner edge eventually starts moving back in again and accretion picks up as $r_{\rm in}$ crosses the corotation radius. This empties the inner regions of the disc, causing the inner edge to move outward again. The cycle thus has the properties of a relaxation oscillator, as found before in ST93. The reservoir involved is the mass in the inner region of the disc. These results (as well as those of \cite{1977PAZh....3..262S} and ST93) show that accretion without mass ejection can occur at accretion rates well inside what is usually called the `propeller' regime. Instead of being ejected, the accreting mass can stay piled up at high surface density in the inner disc, just outside corotation.
We have suggested that systems with very low accretion rates can be in either of these states. Propellering would occur when a disc is initially absent and mass transfer is first initiated (the case of AE Aqr, for example), while a system with an accretion rate that drops from an initially high value would end in the cyclic accretion state described in this paper. This would apply to most cataclysmic variables and X-ray binaries, as well as some T Tauri stars. \section{Acknowledgments} CD'A would like to thank Stuart Sim for useful scientific discussion, and acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada. \bibliographystyle{mn2e}
\section{Introduction} Fluids composed of non-spherical molecules have been studied by using different methods and approaches. Liquid crystals, rod-like polymers, aqueous suspensions of tobacco mosaic virus (TMV) and disk-like particles have a high degree of shape anisotropy. This shape anisotropy allows them to exhibit a rich variety of structural phases as density and temperature are changed. In some thermotropic liquid crystals (for instance, 4-n-pentylbenzenethio-4$'$-n-decyloxybenzoate), as temperature is increased the fluid undergoes a sequence of transitions from the crystal through the smectic and nematic phases to the isotropic phase \cite{book1}. Due to their orientational and positional degrees of freedom these fluids exhibit phenomena not present in fluids of spherical particles \cite{rod-like}. They are used in many applications with different purposes, ranging from the well-known display technologies to medical devices in biological systems \cite{book1,app1}, such as the self-assembly of viruses in aqueous suspensions \cite{tmv1,tmv2,tmv3}. Experiments, theoretical models and computer simulation studies have been conducted in recent years for pure fluids and mixtures of non-spherical particles \cite{mix1,mix2,mix3,mix4}. However, these studies are scarce compared to the research that has been done on fluids of spherical particles, where phase diagrams, static structure and dynamic properties have received wide attention. On the theoretical side many studies have been undertaken on these systems \cite{teo1,teo2,teo3}; however, it is often more difficult to include orientational degrees of freedom and geometrical shape in theories than in numerical simulations \cite{guille}. Theoretical works have employed the Fokker-Planck equation \cite{guille,doi}, the generalized Langevin equation \cite{coffey,medina-01,medina-02}, the Onsager theory \cite{varga}, the density functional theory \cite{enrique4} and a generalized Van der Waals description \cite{VdWtheory}, among others.
The non-spherical feature can be treated as an internal degree of freedom, for instance a dipole orientation, whereas the geometrical shape often enters through some physical parameter, such as the diffusion coefficients, without an explicit account of the particle shape. Concerning computer molecular simulations, different techniques such as Monte Carlo (MC) and molecular dynamics (MD) have allowed the calculation of phase behavior, thermodynamics, structure and dynamic properties of pure fluids of flexible, rigid and axially symmetric molecules \cite{allen-etal}. Fluids composed of spherocylinders (hard cylinders with hemispherical caps), another model for non-spherical particles, are mainly studied by MC techniques \cite{carlos}, while models where the interaction potential is a continuous function of distance are suitable for MD calculations. From an atomistic point of view the intramolecular structure has been considered by including intramolecular sites interacting through bond, bending and torsional interactions. Other paths have employed the Yukawa and Lennard-Jones (LJ) potentials between sites \cite{tmv,sitesYuk2}. However, the atomistic approach often increases the number of interaction sites and hence the computational size of the system. Among the models employed in the literature, the Gay-Berne (GB) potential has played a crucial role in the description of mesophases in fluids of non-spherical particles \cite{pgb}. The GB model is flexible enough to allow the description of long ellipsoids, passing through spheres and ending in discotic particles, using one site per particle. This potential depends on four parameters for a pure fluid, usually denoted as ($\kappa,\kappa',\mu,\nu$), which are closely related to the shape of the particles and the strength of the interaction between them. In this sense the GB model constitutes a family of potentials.
From all possible sets of parameters the most studied is the ($\kappa=3, \kappa'=5, \mu=2, \nu=1$) GB fluid for ellipsoids, whose phase diagram, second rank orientational order parameter and pair distribution functions are already known \cite{enrique1}. Other properties have also been studied for this case, such as: the velocity autocorrelation function \cite{enrique2}, bulk and shear viscosities \cite{viscos}, elastic constants \cite{elastic1}, enthalpy and free energies \cite{enrique4}, the isotropic-nematic transitions \cite{trans1} and the liquid-vapor coexistence \cite{trans2}. The GB potential has also been used to obtain the viscosities and stress \cite{no-new1} and the self-diffusion coefficient \cite{no-new2} in the non-Newtonian regime of different liquid crystal models. Recently, computer simulations have been used to understand the nematic-vapour interface of the GB model for prolate molecules with $\kappa=4$ and $6$, and for oblate molecules with $\kappa=0.3$ and $0.5$, with different values of $\kappa'$ for each $\kappa$ \cite{trans2}; this, together with the elastic properties of the liquid crystal, is determinant in understanding the formation of nematic droplets \cite{Rull_2012, Vanzano_2012, Vanzano_2016}. Other sets of parameters have been explored under very specific conditions. In this direction, Mori et al. \cite{shear-flow} examined the effect of changing $\nu$=1.8, 2.0, 2.2 on the orientational order parameter and the viscosities under a shear flow. By setting the values $\mu$=1 and $\nu$=3, Germano et al. obtained elastic constants \cite{elastic2}. Bates and Luckhurst \cite{BL-1, BL-2} explored the values $\kappa=4.4, \kappa'=20, \mu=1, \nu=1$ and calculated the diffusion coefficients in the smectic A phase, while De Miguel et al. \cite{enrique3} obtained stable smectic phases for the same parameters and also calculated the pair distribution functions, phase diagrams and orientational order parameters.
Satoh investigated the rotational viscosity coefficients \cite{satoh1} and studied the effect of external magnetic fields \cite{satoh2}. In the case of very long particles ($\kappa=15$) the isotropic-nematic region was explored \cite{k15-01, k15-02}. The parameter $\kappa$ has also been varied to analyze the isotropic-nematic region \cite{k-var3} by calculating orientational correlation functions. The phases of discotic fluids have been explored by constructing columnar states \cite{disc-04}, by varying the parameter $\kappa$ \cite{disc-05,disc-06}, or by varying both the energy strength and the geometrical parameters $\kappa$ and $\kappa'$, for which the phase diagrams were obtained \cite{dicotic-01}. Other cases remain to be explored. In this direction, the route that many studies have taken is to parametrize the Gay-Berne potential for a particular type of molecule, adjusting the set of parameters to reproduce the geometry or the interaction between pairs \cite{golubkov}. Computer simulations using the GB potential have been employed to study discotic liquid crystals with $\kappa=0.345$, $0.2$, $1.0$, and $2.0$ \cite{Cienega_2014}, which are promising materials for technological applications in films that increase the viewing angle of liquid crystal displays \cite{Bushbyand_2011}. The variation of the parameter $\kappa'$ for ellipsoids has been analyzed taking the values $\kappa'$=1, 5, 6.63 and 8.33 to study the liquid-vapor region, with $\kappa$ set to $\kappa$=3 \cite{kp-var1}. The studied temperatures were $T^*=$0.5, 0.6, 0.65, 0.7 and 0.8. Another study varying $\kappa'$ is found in \cite{kp-var2}, where the pressure and the order parameters were obtained for $\kappa'=$5, 10, 25 at $T^*=$0.7, and the authors analyzed the liquid-vapor region for $\kappa'=$5, 2.5, 1.25, 1.
However, a systematic study in terms of $(\kappa,\kappa',\mu,\nu)$ under conditions of density and temperature different from those already mentioned has not been performed; such a study is quite desirable to draw the general phase behavior of this model. In this work we have undertaken an extensive numerical study of fluids of non-spherical particles by changing the interaction strength in the Gay-Berne model in a systematic way, covering regions where this has not been done. This allows us to quantify its effects on the pressure-density phase diagrams, the order parameter, the perpendicular and parallel correlation functions and the translational diffusion coefficients, both parallel and perpendicular to the director. These properties give us information about the smectic phase. The rest of the paper is organized as follows: Section II contains a brief description of the Gay-Berne interaction potential, section III summarizes details concerning the procedure followed in the simulations. In section IV we present the definitions of the calculated properties. The results on phase diagrams, order parameter, radial distribution functions and diffusion coefficients are presented and discussed in section V. Finally, conclusions are given in section VI. \section{Gay-Berne potential model} The Gay-Berne potential was introduced as a model to simulate the interaction between two elongated four-site Lennard-Jones molecules through an effective pair potential between two particles with no internal structure. This model was proposed by Gay and Berne \cite{pgb} as a modification of the earlier Berne-Pechukas potential \cite{pechukas}; since then, the GB model has played a crucial role, serving as a benchmark that accounts reasonably well for the shape of non-spherical particles.
In this model two particles interact according to \begin{equation} u_{ij}(\hat{n}_{i},\hat{n}_{j},{\bf r})=4\epsilon(\hat{n}_{i}, \hat{n}_{j},{\bf r})\left[ \left( \frac{\sigma_{o}}{r-\sigma+\sigma_{o}} \right)^{12} - \left( \frac{\sigma_{o}}{r-\sigma+\sigma_{o}} \right)^{6} \right], \label{potencial} \end{equation} \noindent where $\hat{n}_i$ is the orientational axial vector of particle $i$ and $r = |{\bf r}|=|{\bf r}_i-{\bf r}_j|$ is the separation between the centres of mass of particles $i$ and $j$. The length $\sigma$ and strength $\epsilon$ are functions of the orientational vectors $\hat{n}_{i},\hat{n}_{j}$ and the separation $r$. These are given by \begin{eqnarray} \sigma & = & \sigma_{\mathrm{o}}\left\{ 1 - \frac{\chi}{2r^{2}} \left[ \frac{\left({\bf r}\cdot\hat{n}_{i}+ {\bf r}\cdot\hat{n}_{j} \right)^{2}}{1+\chi\left( \hat{n}_{i}\cdot\hat{n}_{j} \right) } + \frac{ \left({\bf r}\cdot\hat{n}_{i}-{\bf r} \cdot\hat{n}_{j} \right)^{2}}{ 1-\chi\left( \hat{n}_{i}\cdot\hat{n}_{j} \right) } \right] \right\}^{-1/2}\label{sigma},\\ \epsilon(\hat{n}_{i},\hat{n}_{j},{\bf r}) & = & \epsilon_{\mathrm{o}}\epsilon^{\nu}(\hat{n}_{i},\hat{n}_{j}) \epsilon^{\prime\mu}(\hat{n}_{i},\hat{n}_{j},{\bf r}),\label{epsilon} \end{eqnarray} \noindent where $\sigma_{\mathrm{o}}$ and $\epsilon_{\mathrm{o}}$ have length and energy units, respectively, and are used to make real quantities dimensionless; for spheres they reduce to the usual LJ parameters.
Additionally, \begin{eqnarray} \epsilon(\hat{n}_{i},\hat{n}_{j}) & = & \left[ 1 - \chi^{2} \left( \hat{n}_{i}\cdot\hat{n}_{j} \right)^{2} \right]^{-1/2}, \label{epsilon1}\\ \epsilon^{\prime}(\hat{n}_{i},\hat{n}_{j},{\bf r}) & = & 1 - \frac{\chi^{\prime}}{2r^{2}} \left[ \frac{\left({\bf r}\cdot \hat{n}_{i}+{\bf r}\cdot\hat{n}_{j} \right)^{2}}{1+\chi^{\prime} \left( \hat{n}_{i}\cdot\hat{n}_{j} \right) } + \frac{ \left({\bf r}\cdot\hat{n}_{i}-{\bf r}\cdot\hat{n}_{j} \right)^{2}}{ 1-\chi^{\prime}\left( \hat{n}_{i}\cdot\hat{n}_{j} \right) } \right], \label{epsilon2} \end{eqnarray} \noindent where $\chi$ and $\chi^{\prime}$ are defined as \begin{equation} \chi=\frac{\kappa^2-1}{\kappa^2+1} \hspace{0.5cm}\mbox{and}\hspace{0.5cm} \chi^{\prime}=\frac{\kappa^{\prime(1/\mu)}-1}{\kappa^{\prime(1/\mu)}+1}, \end{equation} \noindent $\chi$ is a function of the shape anisotropy parameter $\kappa$, which describes the particle shape (rod-like $\kappa > 1$, disc-like $\kappa < 1$, spheres $\kappa =1$), and $\chi^{\prime}$ is a function of the energy anisotropy parameter $\kappa^{\prime}$; the latter provides the ratio between the potential well depths for the side-side and end-end configurations. In this way the Gay-Berne potential is usually specified in the form GB ($\kappa$,$\kappa'$,$\mu$,$\nu$). The exponents $\mu$ and $\nu$ usually take the values 2 and 1, respectively \cite{pgb}, although other values have also been used. \vspace{1.0cm} \begin{figure}[htb!] \centerline{\epsfig{figure=Figure1.eps,width=4.0in}} \caption{The Gay-Berne potential for the side-side, ``T'' and end-end configurations at $\kappa'=5$ (dashed lines) and $\kappa'=20$ (solid lines).
The inset shows the Gay-Berne potential for the ``T'' and end-end configurations at $\kappa'=5$ (dashed lines), $\kappa'=10$ (dotted lines), $\kappa'=15$ (dot-dashed lines) and $\kappa'=20$ (solid lines).} \label{pgb-01} \end{figure} In this work we study GB fluids for different values of $\kappa'$ and fixed $\kappa$ = 3, $\mu$ = 2, $\nu$ = 1; we denote them by GB (3,$\kappa'$,2,1). We explore regions of temperature where this has not been done, in particular we set $\kappa'$ = 5, 10, 15, 20 at temperatures $T^*$ = 0.5, 0.75, 1.0, 1.25. The case $\kappa'$ = 5 was studied for comparison with previous work \cite{enrique1}. The GB potential is shown in Fig. \ref{pgb-01} for $\kappa'$ = 5, 20; the inset of the figure displays the ``T'' and end-end configurations for $\kappa'$ = 5, 10, 15, 20. The side-side configuration (not shown) does not change when varying the $\kappa'$ parameter. We calculated the pressure-density phase diagram, the order parameter, the parallel and perpendicular correlation functions and the translational diffusion coefficients for these four GB fluids. \noindent In order to solve Newton's equations of motion in the molecular dynamics, the force between a pair of GB particles $i$ and $j$ is calculated according to \begin{equation} {\bf F}_{ij}=-\frac{\partial u}{\partial r}{\bf r}_{ij}- \frac{\partial u}{\partial a}\hat{n}_{i}- \frac{\partial u}{\partial b}\hat{n}_{j}, \label{fuerza1} \end{equation} \noindent where we have defined $a={\bf r}\cdot\hat{n}_{i}$, $b={\bf r}\cdot\hat{n}_{j}$, $c=\hat{n}_{i}\cdot\hat{n}_{j}$.
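As a concrete reference, the pair energy defined by the expressions above can be transcribed in a few lines; the sketch below is our own (function and variable names are not from the paper) and evaluates only the energy, not the forces:

```python
import numpy as np

def gb_energy(r_vec, ni, nj, kappa=3.0, kappa_p=5.0, mu=2.0, nu=1.0,
              sigma0=1.0, eps0=1.0):
    """Gay-Berne pair energy for centre-to-centre vector r_vec and unit
    orientation vectors ni, nj (a direct transcription of the expressions
    in the text; names are ours)."""
    chi = (kappa**2 - 1.0) / (kappa**2 + 1.0)
    chi_p = (kappa_p**(1.0 / mu) - 1.0) / (kappa_p**(1.0 / mu) + 1.0)
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    a, b, c = rhat @ ni, rhat @ nj, ni @ nj
    # anisotropic contact distance sigma(n_i, n_j, r)
    sig = sigma0 / np.sqrt(1.0 - 0.5 * chi * ((a + b)**2 / (1.0 + chi * c)
                                              + (a - b)**2 / (1.0 - chi * c)))
    # orientation-dependent well depth eps(n_i, n_j, r)
    eps1 = 1.0 / np.sqrt(1.0 - (chi * c)**2)
    eps2 = 1.0 - 0.5 * chi_p * ((a + b)**2 / (1.0 + chi_p * c)
                                + (a - b)**2 / (1.0 - chi_p * c))
    eps = eps0 * eps1**nu * eps2**mu
    # shifted Lennard-Jones core
    R = sigma0 / (r - sig + sigma0)
    return 4.0 * eps * (R**12 - R**6)
```

Two quick checks of such a transcription: for $\kappa=\kappa'=1$ the expression collapses to the ordinary Lennard-Jones potential, and the ratio of the side-side to end-end well depths equals $\kappa'$, as stated in the text.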
The force ${\bf F}_{ij}$ obeys ${\bf F}_{ij}=-{\bf F}_{ji}$ and the torque has to be calculated as \cite{antypov} \begin{eqnarray} \boldsymbol{\tau}_{ij} & = & -\hat{n}_{i}\times\left( \frac{\partial u}{\partial a}{\bf r}_{ij} + \frac{\partial u}{\partial c}\hat{n}_{j} \right)\label{torca1},\\ \boldsymbol{\tau}_{ji} & = & -\hat{n}_{j}\times\left( \frac{\partial u} {\partial b}{\bf r}_{ij} + \frac{\partial u} {\partial c}\hat{n}_{i} \right)\label{torca2}, \end{eqnarray} \noindent where the partial derivatives are given by \begin{eqnarray} \frac{\partial u}{\partial r} & = & 4 \epsilon(\hat{n}_{i},\hat{n}_{j}, {\bf r}) \left\{ \frac{A\mu\chi^{\prime}}{\epsilon^{\prime} (\hat{n}_{i},\hat{n}_{j},{\bf r})r^{4}} \left[ \frac{(a+b)^{2}} {1+\chi^{\prime}c} + \frac{(a-b)^{2}}{1-\chi^{\prime}c} \right] \right.\nonumber\\ & & \left. - \frac{\sigma^{3}\chi B}{2r^{4}} \left[ \frac{(a+b)^{2}}{1+\chi c} + \frac{(a-b)^{2}}{1-\chi c} \right] - \frac{B}{r} \right\}, \end{eqnarray} \begin{eqnarray} \frac{\partial u}{\partial a} & = & 4 \epsilon(\hat{n}_{i}, \hat{n}_{j},{\bf r}) \left\{ -\frac{A\mu\chi^{\prime}} {\epsilon^{\prime}(\hat{n}_{i},\hat{n}_{j},{\bf r})r^{2}} \left[ \frac{a+b}{1+\chi^{\prime}c} + \frac{a-b}{1-\chi^{\prime}c} \right]\right.\nonumber\\ & & \left. + \frac{\sigma^{3}\chi B}{2r^{2}} \left[ \frac{a+b}{1+\chi c} + \frac{a-b}{1-\chi c} \right] \right\}, \end{eqnarray} \begin{eqnarray} \frac{\partial u}{\partial b} & = & 4 \epsilon(\hat{n}_{i},\hat{n}_{j},{\bf r}) \left\{ -\frac{A\mu\chi^{\prime}}{\epsilon^{\prime} (\hat{n}_{i},\hat{n}_{j},{\bf r})r^{2}} \left[ \frac{a+b}{1+\chi^{\prime}c} - \frac{a-b}{1-\chi^{\prime}c} \right]\right.\nonumber\\ & & \left. 
+ \frac{\sigma^{3}\chi B}{2r^{2}} \left[ \frac{a+b}{1+\chi c} - \frac{a-b}{1-\chi c} \right] \right\}, \end{eqnarray} \begin{eqnarray} \frac{\partial u}{\partial c} & = & 4 \epsilon(\hat{n}_{i},\hat{n}_{j},{\bf r}) \left\{ \frac{A\mu\chi^{\prime 2}}{\epsilon^{\prime}(\hat{n}_{i}, \hat{n}_{j},{\bf r})2r^{2}} \left[ \left(\frac{a+b}{1+ \chi^{\prime}c}\right)^{2} - \left(\frac{a-b}{1-\chi^{\prime}c} \right)^{2} \right]\right.\nonumber\\ & & \left. + \frac{\sigma^{3}\chi^{2} B}{4r^{2}} \left[ \left(\frac{a+b}{1+\chi c}\right)^{2}- \left(\frac{a-b}{1-\chi c}\right)^{2} \right] \right.\nonumber\\ & & \left. + A\nu\chi^{2}(\hat{n}_{i}\cdot\hat{n}_{j}) \epsilon^{2}(\hat{n}_{i},\hat{n}_{j})\right\}, \end{eqnarray} \noindent and the quantities $A$ and $B$ are defined as \begin{eqnarray} A&=&\left(\frac{\sigma_{o}}{r-\sigma+\sigma_{o}}\right)^{12}- \left(\frac{\sigma_{o}}{r-\sigma+\sigma_{o}}\right)^{6},\nonumber\\ B&=&12\left(\frac{\sigma_{o}}{r-\sigma+\sigma_{o}}\right)^{13}- 6\left(\frac{\sigma_{o}}{r-\sigma+\sigma_{o}}\right)^{7}. \end{eqnarray} Given the force between a pair of particles, the equations of motion, both translational and orientational, are solved to perform molecular dynamics of Gay-Berne fluids. \section{Computer simulations} We developed an MD simulation program at constant volume, number of particles and temperature \cite{allen}. The units of mass, length, and energy were chosen as $m$, $\sigma_{\mathrm{o}}$, and $\epsilon_{\mathrm{o}}$, respectively. We allocated $N=500$ particles in a cubic simulation box of volume $V^*=L^3$ $(V^*=V/\sigma_0^3)$. All particles were assigned a moment of inertia $I^*$=1 ($I^*=I(m\sigma_0^2)^{-1}$). Periodic boundary conditions and the minimum image convention were also employed; the cut-off distance was set to $r_c^*=L^*/2$ $(r_c^*=r_{c}/\sigma_0)$ in all cases. The temperature was kept constant by rescaling the velocities after each time step.
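This velocity-rescaling thermostat amounts to multiplying all translational and angular velocities by a common factor after each step. A minimal sketch (variable names are ours; we assume linear particles, i.e. 3 translational and 2 rotational degrees of freedom each):

```python
import numpy as np

def rescale_velocities(v, w, m, I, T_target, kB=1.0):
    """One isokinetic velocity-rescaling step.
    v: (N, 3) translational velocities; w: (N, 3) angular velocities.
    Rescales both by a common factor so the instantaneous kinetic
    temperature equals T_target."""
    N = len(v)
    # kinetic energy: translational + rotational
    ke = 0.5 * m * np.sum(v * v) + 0.5 * I * np.sum(w * w)
    # 5 degrees of freedom per linear particle (assumption)
    T_inst = 2.0 * ke / (5.0 * N * kB)
    lam = np.sqrt(T_target / T_inst)
    return lam * v, lam * w
```

More sophisticated thermostats (e.g. Nosé-Hoover) sample the canonical ensemble properly; simple rescaling as described in the text only constrains the instantaneous kinetic temperature.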
The integration of the orientational and translational equations of motion was performed using the Leap-Frog algorithm developed by Hockney and Potter \cite{hockney,potter} for the translational equations and by Fincham \cite{fincham} for the orientational motion. A time step of $\Delta t^*$ = 0.0015 $(\Delta t^*=\Delta t(m\sigma_0^2/\epsilon_0)^{-1/2})$ was used to integrate the equations of motion. The initial configuration for each isotherm was prepared with particles fixed in an fcc lattice at a low density $\rho^*$ = 0.005 $(\rho^*=\rho\sigma^3_0)$. Their random initial velocities obeyed the Maxwell-Boltzmann distribution \cite{max-boltz}. The unit orientation vectors and their derivatives were assigned randomly, obeying a Gaussian distribution \cite{enrique1}. We used $1\times10^{4}$ time steps for the equilibration period and an additional $2\times10^{4}$ iterations for calculating average properties. The data for each isotherm were generated starting from a low-density state at $\rho^*=0.005$; once equilibrium was reached, a production run was conducted and the average properties were measured. Starting from the final configuration, the system was then compressed to obtain a new state of higher density. This procedure was repeated to obtain a full isotherm. \vspace{1.0cm} \begin{figure}[htb] \centerline{\epsfig{figure=Figure2.eps,width=3.0in}} \caption{Pressure-density phase diagrams for the (3,5,2,1) GB fluid for temperatures $T^*$ = 0.50 (diamonds), 0.75 (circles), 1.00 (triangles) and 1.25 (squares).
The comparison with results from \cite{enrique1} for $T^*=0.5$ is shown with stars.} \label{phase-3521} \end{figure} \section{Calculated properties} \subsection{Pressure} The pressure was calculated according to the virial expression as the sum of two contributions, \begin{equation} \langle P \rangle= \langle P^{kin} \rangle + \langle P^{int} \rangle, \end{equation} \noindent where $ P^{kin}$ and $P^{int}$ are the kinetic contribution and the contribution due to the forces between particles, given by \begin{equation} P^{kin} = \frac{1}{3V}\sum^N_{i=1}\left(m_i{\bf v}^2_i + I_i\boldsymbol{\omega}^2_i\right), \end{equation} \noindent where ${\bf v}_i$ and $\boldsymbol{\omega}_{i}$ are the translational and angular velocities of particle $i$, and \begin{equation} P^{int} \, = \frac{1}{V}\sum_{i=1} ^{N-1} \, \sum_{j>i} ^{N}{\bf r}_{ij}\cdot{\bf F}_{ij}~. \label{virial} \end{equation} \subsection{Orientational order parameter} The second-rank orientational order parameter $P_2(t)$ measures the bulk orientational order of the particles; it takes values between 0 and 1. When $P_2=1$ the molecules are perfectly aligned along a common direction, whereas for $P_2=0$ the system is in an isotropic phase. The definition of $P_2(t)$ is given by \begin{equation} \langle P_2(t) \rangle=\left< \frac{1}{N}\sum^N_i P_2(\hat{n}_i(t)\cdot\hat{n}_d(t)) \right>, \end{equation} \noindent where $P_2$ is the second Legendre polynomial, the vector $\hat{n}_d$ is the director of the phase and $\left< ... \right>$ denotes time averages. To obtain the director and the order parameter we maximize $P_{2}$ with respect to all rotations of $\hat{n}_d$ by writing $P_{2}=\hat{n}_d\cdot{\mathbb Q}\cdot\hat{n}_d$, where ${\mathbb Q}$ is the ordering matrix.
Thus we diagonalize the matrix ${\mathbb Q}$ that represents the orientational tensor; its $\alpha\beta$ element is defined by \begin{equation} Q_{\alpha\beta}=\frac{1}{2N}\sum^N_i\left(3n_{i\alpha}n_{i\beta}-\delta_{\alpha\beta}\right), \end{equation} \noindent where $n_{i\alpha}$ is the $\alpha$-component ($\alpha=x,y,z$) of $\hat{n}_i$ and $\delta_{\alpha\beta}$ is the Kronecker delta. The largest eigenvalue obtained by diagonalizing $Q_{\alpha\beta}$ is the order parameter and its corresponding eigenvector is the director. \begin{figure}[htp] \centering \mbox{\includegraphics[width=4in]{Figure3.eps}} \caption{Pressure-density diagrams at two fixed temperatures: a) $T^*=$ 0.50 and b) $T^*=$ 1.00, and order parameter as a function of density for the same temperatures: c) $T^*=$ 0.50 and d) $T^*=$ 1.00, for $\kappa'=$ 5 (diamonds), 10 (circles), 15 (triangles) and 20 (squares).} \label{ppo-5101520} \end{figure} \begin{figure}[htp] \centering \mbox{\includegraphics[width=4in]{Figure4.eps}} \caption{ a) Pressure-density diagram and c) Order parameter for the (3,20,2,1) GB fluid at temperatures $T^*$ = 0.50 (diamonds), 0.75 (circles), 1.00 (triangles) and 1.25 (squares). b) Pressure and d) Order parameter as functions of density at fixed temperatures $T^*$ = 0.50 (diamonds) and $T^*$ = 1.00 (triangles) for the (3,5,2,1) GB fluid (open symbols) and the (3,20,2,1) GB fluid (full symbols).} \label{ppo-32021} \end{figure} \subsection{Pair correlation functions} Besides the order parameter, a quantity useful in the classification of structural phases in fluids of non-spherical particles is the pair correlation function, $g(r)$ \cite{allen}, which quantifies positional correlations. This function can be split into parallel and perpendicular contributions, measured along directions parallel and perpendicular to the director and denoted by $r^*_{\|}$ and $r^*_{\bot}$, respectively.
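The splitting just described amounts to decomposing each pair separation ${\bf r}_{ij}$ into its components parallel and perpendicular to the director; histogramming the corresponding moduli over all pairs yields $g(r^*_{\|})$ and $g(r^*_{\bot})$. A minimal sketch of the decomposition (function names are illustrative):

```python
import numpy as np

def split_separation(r_ij, director):
    """Moduli of the components of a pair separation r_ij parallel and
    perpendicular to the director n_d (the director is normalized internally)."""
    n = np.asarray(director, dtype=float)
    n = n / np.linalg.norm(n)
    r_ij = np.asarray(r_ij, dtype=float)
    r_par = np.dot(r_ij, n)        # signed projection along n_d
    r_perp = r_ij - r_par * n      # in-layer component
    return abs(r_par), np.linalg.norm(r_perp)
```

The two moduli satisfy $r_{\|}^{2}+r_{\bot}^{2}=|{\bf r}_{ij}|^{2}$, which provides a simple check when accumulating the histograms.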
The parallel correlation function, $g(r^*_{\|})$, is useful for identifying smectic phases, because their layer structure shows up as a periodic variation of $g(r^*_{\|})$, while $g(r^*_{\bot})$ identifies smectic phases with in-layer order. Both are calculated for different states of the GB fluids. \subsection{Translational diffusion coefficients} Other quantities of interest in this work are the translational diffusion coefficients, which can be calculated in two different ways: from the mean square displacement (MSD) or via the velocity autocorrelation function (VACF) \cite{allen,frenkel}. We use the VACF to calculate the total, parallel and perpendicular translational diffusion coefficients with respect to the director. The total diffusion coefficient, defined as \begin{equation} D_{tr}=\frac{1}{3}\int^{\infty}_0 dt \left\langle {\bf v}_i(t)\cdot {\bf v}_i(0) \right\rangle, \label{Dtr} \end{equation} \noindent can be split into parallel and perpendicular contributions, which in turn define the parallel and perpendicular diffusion coefficients given by \begin{eqnarray} D_{tr}^{\|}&=&\int^{\infty}_0 dt \left\langle {\bf v}^{\|}_i(t)\cdot {\bf v}^{\|}_i(0) \right\rangle,\label{Dtr(par)}\\ D_{tr}^{\bot}&=&\frac{1}{2}\int^{\infty}_0 dt \left\langle {\bf v}^{\bot}_i(t) \cdot {\bf v}^{\bot}_i(0) \right\rangle,\label{Dtr(per)} \end{eqnarray} \noindent where ${\bf v}^{\|}$ and ${\bf v}^{\bot}$ are the components of the velocity ${\bf v}$ parallel and perpendicular to the director $\hat{n}_d$, given by \begin{equation} {\bf v}^{\|}=({\bf v}\cdot\hat{n}_d)\hat{n}_d, \hspace{0.5cm} \mbox{} \hspace{0.5cm} {\bf v}^{\bot}={\bf v}-{\bf v}^{\|}. \label{v(par-per)} \end{equation} \noindent We have divided expression (\ref{Dtr(per)}) by 2 to obtain an average over the two degrees of freedom of ${\bf v}^{\bot}$; the expression for $D_{tr}^{\|}$ contains the contribution of only one degree of freedom.
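The velocity decomposition of Eq. (\ref{v(par-per)}) and the integrals of Eqs. (\ref{Dtr})--(\ref{Dtr(per)}) can be sketched as follows for VACFs already sampled on a uniform time grid (the names are illustrative and the trapezoidal rule stands in for the summation over the correlation time):

```python
import numpy as np

def split_velocity(v, director):
    """v_par = (v . n) n and v_perp = v - v_par with respect to the director."""
    n = np.asarray(director, dtype=float)
    n = n / np.linalg.norm(n)
    v = np.asarray(v, dtype=float)
    v_par = np.dot(v, n) * n
    return v_par, v - v_par

def trapezoid(c, dt):
    """Trapezoidal-rule integral of a sampled autocorrelation function."""
    c = np.asarray(c, dtype=float)
    return dt * (0.5 * c[0] + c[1:-1].sum() + 0.5 * c[-1])

def diffusion_coefficients(vacf_tot, vacf_par, vacf_perp, dt):
    """Total, parallel and perpendicular coefficients, with the 1/3 and 1/2
    prefactors of the corresponding expressions in the text."""
    return (trapezoid(vacf_tot, dt) / 3.0,
            trapezoid(vacf_par, dt),
            0.5 * trapezoid(vacf_perp, dt))
```

The decomposition guarantees ${\bf v}^{\|}\cdot{\bf v}^{\bot}=0$ and ${\bf v}^{\|}+{\bf v}^{\bot}={\bf v}$, which is a convenient sanity check before accumulating the correlation functions.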
In the calculation of the diffusion coefficients we first obtained the parallel and perpendicular velocities defined in Eqs. (\ref{v(par-per)}); the integrals involved in Eqs. (\ref{Dtr}), (\ref{Dtr(par)}), and (\ref{Dtr(per)}) were then evaluated by summation over a correlation time of 300 $\Delta t^*$. \section{Main results} In this section we present the results obtained in this work. Reduced units will be assumed hereafter. Pressure-density phase diagrams were obtained for Gay-Berne fluids along different isotherms as a function of the parameter $\kappa'$. In order to validate the program we simulated the $(3,5,2,1)$ GB fluid studied by De Miguel et al. \cite{enrique1}. Results are presented in Fig. \ref{phase-3521} for temperatures $T^*=0.5$, 0.75, 1.00, and 1.25. We compared our results with those of Ref. \cite{enrique1} for the temperatures reported there and good agreement was found in all cases. In particular, the isotherm $T^*=0.5$, taken from \cite{enrique1}, is shown with stars in Fig. \ref{phase-3521} for comparison. At the lower temperature $T^*=0.5$, as the density increases the pressure increases for densities below $\rho^*\sim 0.275$; the pressure then decays over an intermediate region and eventually increases again. For isotherms at higher $T^*$ a similar behavior is observed; however, the decay of the pressure is shifted to regions of higher density and more than one decay can occur \cite{enrique1}. This effect is observed for $T^*=0.75$ and $T^*=1.0$ in the same figure. \begin{figure}[htp] \centering\mbox{\includegraphics[width=4.0in]{Figure5.eps}} \caption{ Parallel pair correlation function $g(r^*_{\|})$ at temperature $T^*$ = 0.50 and densities: a) $\rho^*$ = 0.30 and b) $\rho^*$ = 0.35, for the PGB(3,$\kappa'$,2,1) with $\kappa'=5,10,15,20$.
Perpendicular pair correlation function under the same conditions as the parallel one, at densities: c) $\rho^*$ = 0.30 and d) $\rho^*$ = 0.35.} \label{gdr-3k21-T050} \end{figure} We explored the effect of changing $\kappa'$ in a systematic way: the pressure was calculated for $\kappa'=5$, 10, 15, 20 at the temperatures studied in Fig. \ref{phase-3521}: $T^*=0.5$, 0.75, 1.00, and 1.25. Figure \ref{ppo-5101520} a) shows the pressure as a function of density for $\kappa'=5$, 10, 15, 20 at the lower temperature $T^*=0.5$. As we increase $\kappa'$ the isotropic region shifts to lower densities, meaning that the phase transition occurs earlier; for instance, for $\kappa'=20$ the decay of the pressure occurs at $\rho^* \sim 0.22$ while it occurs around $\rho^* \sim 0.27$ for $\kappa'=5$, as seen from Fig. \ref{ppo-5101520} a). This effect is enhanced at low temperatures, as can be seen by comparing the pressure at temperature $T^*=0.5$ in Fig. \ref{ppo-5101520} a) with that at the higher temperature $T^*=1.0$ in Fig. \ref{ppo-5101520} b). For the same value of $\kappa'$, say $\kappa'=5$, we observe that the decay in pressure takes place at $\rho^*\sim 0.29$ for $T^*=0.5$, while for $T^*=1.0$ it occurs at $\rho^* \sim 0.31$. The order parameter $\langle P_2\rangle$ was evaluated for the isotherms already discussed and the values $\kappa'=5$, 10, 15, 20. This quantity is shown in Figs. \ref{ppo-5101520} c) and \ref{ppo-5101520} d) at temperatures $T^*=0.5$ and $T^*$ = 1.00, respectively. In Fig. \ref{ppo-5101520} c), as $\kappa'$ increases the order parameter takes higher values at a fixed density. Along a given isotherm, $\langle P_2\rangle$ increases monotonically in the isotropic phase at low densities; then a sudden increase takes place at the densities where the pressure decays, after which $\langle P_2\rangle$ keeps increasing and eventually reaches unity. For the largest value of $\kappa'$ this increase takes place at slightly lower densities.
The differences found in the order parameter between states of equal density and different $\kappa'$ are more pronounced at low temperatures, as can be observed by comparing the results at $T^*$ = 0.5 (Fig. \ref{ppo-5101520} c)) and $T^*$ = 1.0 (Fig. \ref{ppo-5101520} d)): the differences in the order parameter as $\kappa'$ increases are significantly reduced at temperature $T^*$ = 1.0, as shown in Fig. \ref{ppo-5101520} d). \begin{figure}[htp] \centering\mbox{\includegraphics[width=3.5in]{Figure6.eps}} \caption{Same as Fig. \ref{gdr-3k21-T050} at a higher temperature, $T^*$ = 1.00: parallel pair correlation function at densities a) $\rho^*$ = 0.30 and b) $\rho^*$ = 0.35 for the PGB(3,$\kappa'$,2,1) with $\kappa'=5,10,15,20$, as indicated in the inset; perpendicular pair correlation function under the same conditions at densities c) $\rho^*$ = 0.30 and d) $\rho^*$ = 0.35.} \label{gdr-3k21-T100} \end{figure} As an example of temperature effects, Fig. \ref{ppo-32021} a) shows the pressure-density curves for the (3,20,2,1) GB fluid at temperatures $T^*$ = 0.5, 0.75, 1.00, and 1.25. The corresponding order parameter is shown in Fig. \ref{ppo-32021} c); its behavior is consistent with the decays in the pressure shown in Fig. \ref{ppo-32021} a). A behavior similar to that of the $(3,5,2,1)$ GB fluid was found for this $(3,20,2,1)$ GB fluid; however, the density region of the isotropic phase shrinks and, after the decay, the pressure takes lower values for $\kappa'=20$ than for $\kappa'=5$, as can be seen by comparing both cases in Fig. \ref{ppo-32021} b) at $T^*$ = 0.5 and $T^*$ = 1.0. A comparison of the order parameter for the (3,5,2,1) and (3,20,2,1) GB fluids is shown in Fig. \ref{ppo-32021} d). The observed behavior confirms the findings shown by the pressure. The parallel and perpendicular pair correlation functions, $g(r^*_{\|})$ and $g(r^*_{\bot})$, were obtained for different conditions of density, temperature and $\kappa'$.
Figures \ref{gdr-3k21-T050} a) and b) show $g(r^*_{\|})$ at $\rho^*=0.3$ and $\rho^*=0.35$, respectively, at temperature $T^*$ = 0.5 for $\kappa'=5, 10, 15, 20$ as indicated in the inset. The general trend is that the systems have already developed a layered structure at these conditions. At the lower density $\rho^*=0.3$ the $\kappa'=5$ and $\kappa'=20$ data for the pair correlation $g(r^*_{\|})$ did not show significant differences; something similar was found for the pressure, as can be verified in Fig. \ref{ppo-32021} a). At this density the order parameter takes a slightly lower value for $\kappa'=5$ than for $\kappa'=20$. From this set, the $\kappa'=10$ fluid showed a larger tendency to form the smectic phase at these conditions of density and temperature than the others; however, the distance between layers is shorter in the $\kappa'=15$ fluid. A different situation was found at $\rho^*=0.35$, Fig. \ref{gdr-3k21-T050} b): for $\kappa'=20$ the system shows a regular structure in $g(r^*_{\|})$, the maxima have the same height and the order parameter is closer to unity; however, the inter-layer spacing is still shorter for $\kappa'=15$, or the structure is less defined. At this density the system with $\kappa'=5$ has less structure parallel to the director, which is opposite to the finding pointed out in Fig. \ref{gdr-3k21-T050} a). \begin{figure}[htp] \centering\mbox{\includegraphics[width=4.0in]{Figure7.eps}} \caption{Parallel pair correlation function $g(r^*_{\|})$ at temperatures $T^*$ = 0.40, 0.50, 0.75, 1.00 and 1.25 and densities: a) $\rho^*$ = 0.30 and b) $\rho^*$ = 0.35, both for the PGB(3,20,2,1). Perpendicular pair correlation function $g(r^*_{\bot})$ under the same conditions as $g(r^*_{\|})$ at densities: c) $\rho^*$ = 0.30 and d) $\rho^*$ = 0.35.} \label{gdr-32021-T} \end{figure} The corresponding perpendicular correlation functions for the systems presented in Figs.
\ref{gdr-3k21-T050} a) and b) are shown in Figs. \ref{gdr-3k21-T050} c) and d) at densities $\rho^*=0.3$ and $\rho^*=0.35$, respectively. These perpendicular correlations measure the intra-layer structure of the fluids. The structure inside a layer appears at shorter distances and has a longer range for $\kappa'=20$ than for $\kappa'=5, 10, 15$. At the higher density $\rho^*=0.35$ this effect is much more visible: the structure is well defined for $\kappa'=20$ and a double shoulder at $r^*=1.8$ can be seen. In terms of the interaction, at this temperature $T^*=0.5$ the dominant configuration is the side-by-side one, as can be seen in Fig. \ref{pgb-01}, where the well depth is smaller for $\kappa'=20$ than for $\kappa'=5$. Figures \ref{gdr-3k21-T100} a) and b) show $g(r^*_{\|})$ for the same systems as in Figs. \ref{gdr-3k21-T050} a) and b) but at a higher temperature, $T^*=1.0$. At this temperature the layer structure is absent for all $\kappa'$ values studied, as shown in Fig. \ref{gdr-3k21-T100} a) for $\rho^*=0.3$; just at this density the pressure loses its monotonic increase, as can be observed in Fig. \ref{ppo-5101520} b), and the order parameter increases, see Fig. \ref{ppo-32021} b). At the higher density $\rho^*=0.35$ the layer structure develops for $\kappa'=10, 15, 20$, while for $\kappa'=5$ it is totally absent. For the intra-layer structure we analyzed the perpendicular pair correlation $g(r^*_{\bot})$ for the same systems as in Figs. \ref{gdr-3k21-T050} c) and d) at temperature $T^*=1.0$. This is shown in Fig. \ref{gdr-3k21-T100} c) at $\rho^*=0.3$; at these conditions the structure is almost absent for all $\kappa'$ values, while at $\rho^*=0.35$ the intra-layer structure is already well defined at shorter distances for $\kappa'=10$, is about the same for $\kappa'=15$ and $\kappa'=20$, and is considerably reduced for $\kappa'=5$, as shown in Fig. \ref{gdr-3k21-T100} d).
\begin{figure}[htp] \centering\mbox{\includegraphics[width=3.50in]{Figure8.eps}} \caption{Parallel pair correlation function $g(r^*_{\|})$ at temperatures a) $T^*$ = 0.50 and b) $T^*$ = 1.00 for densities between $\rho^*=0.26$ and $0.38$ for the (3,20,2,1) GB fluid. Arrows in panel a) indicate regions where the amplitude of the function changes, indicative of different transitions. Perpendicular pair correlation function $g(r^*_{\bot})$ at temperatures c) $T^*$ = 0.50 and d) $T^*$ = 1.00 under the same conditions as the parallel one. The densities are indicated in the figure.} \label{gdr-32021-RHO} \end{figure} In order to investigate the effect of temperature we considered isotherms at $T^*=0.4, 0.5, 0.75, 1.00, 1.25$ for the $(3,20,2,1)$ GB fluid. The parallel and perpendicular pair correlation functions are presented in Figs. \ref{gdr-32021-T} a) and c), respectively, as a function of temperature for density $\rho^*=0.3$, and in b) and d) for $\rho^*=0.35$. At $T^*=0.4$ and $\rho^*=0.3$ the layers can be well identified, as can be seen from Fig. \ref{gdr-32021-T} a); as temperature increases the layer order decreases and eventually disappears; for instance, at $T^* \ge 1.0$ it is completely absent. Looking at the intra-layer structure with $g(r^*_{\bot})$ in Fig. \ref{gdr-32021-T} c), we observe a well-developed structure which appears at shorter distances for $T^*=0.5$ than for $T^*=0.4$, although at this lower temperature the in-layer order is quite considerable compared to higher temperatures. In Fig. \ref{gdr-32021-T} b) we observe that at density $\rho^*=0.35$ the layer order manifests at temperatures $T^*=0.5, 0.75, 1.00$, although it is not well developed at $T^*=0.4$ and it disappears at $T^*=1.25$. The intra-layer order manifests itself at all temperatures except the lowest, $T^*=0.4$, as seen in Fig. \ref{gdr-32021-T} d). The parallel pair correlations at these conditions are shown in Fig.
\ref{gdr-32021-RHO} at a) $T^*=0.5$ and b) $T^*=1.0$ for densities $\rho^*=0.26-0.38$. The arrows in Fig. \ref{gdr-32021-RHO} a) indicate regions where the function $g(r_{\|})$ shows an increase in its amplitude as an indication of a smectic B phase in the system \cite{enrique1}. The perpendicular pair correlations are shown in Fig. \ref{gdr-32021-RHO} for c) $T^*=0.5$ and d) $T^*=1.00$, for densities $\rho^*=0.26-0.38$ as indicated in each figure. As the density increases the intra-layer order increases, being more noticeable at $T^*=0.5$. \begin{figure}[htp] \centering\mbox{\includegraphics[width=2.0in]{Figure9.eps}} \caption{Parallel $D_{tr(\|)}$ (open symbols) and perpendicular $D_{tr(\bot)}$ (full symbols) diffusion coefficients as functions of density at two fixed temperatures: a) $T^*$ = 0.50 and b) $T^*$ = 1.00, for $\kappa'$ = 5 (diamonds) and $\kappa'=$ 20 (circles).} \label{difusion} \end{figure} Finally, the parallel and perpendicular diffusion coefficients, $D_{tr\|}$ and $D_{tr\bot}$, are presented as functions of density for $\kappa'=5$ and 20 in Figs. \ref{difusion} a) at temperature $T^*=0.5$ and b) at $T^*=1.00$. As density increases, both $D_{tr\|}$ and $D_{tr\bot}$ decrease. However, some particular features can be observed; for instance, for $\kappa'=20$ at densities $\rho^* < 0.22$, $D_{tr\|}$ and $D_{tr\bot}$ take the same value within the statistical error, but in the region $\rho^* = 0.22 - 0.28$ the diffusion is larger in the perpendicular than in the parallel direction. For $\kappa'=5$ the opposite behavior is observed. At $\rho^* > 0.28$ the diffusion coefficients decay fast in both directions. At temperature $T^*=1$ a more complex behavior is found: again there is a region where structural transitions take place and a non-systematic behavior of the diffusion is observed. \section{Conclusions} In this work we have studied Gay-Berne fluids by molecular dynamics simulations.
Extensive simulations were performed to generate data for the pressure-density phase diagram, the orientational order parameter, the pair correlation functions and the translational diffusion coefficients via the velocity autocorrelation function. We studied Gay-Berne fluids with $\kappa=3$, $\mu=2$, $\nu=1$ and $\kappa'=5, 10, 15, 20$ at different conditions of density and temperature. The structure was analyzed in terms of the order parameter and the pair correlations, both parallel and perpendicular to the director. We explored the dependence of the thermodynamic and structural properties on the energy parameter $\kappa'$. Along a given isotherm, as density increases while the parameters ($\kappa$,$\kappa'$,$\mu$,$\nu$) are kept constant, the system is driven to order, undergoing transitions to different ordered phases. Concerning the pressure at fixed $T^*$, the general behavior is a monotonic increase in the low-density region, followed by several decays, the number of which depends on the $\kappa'$ value. The first decay is shifted to lower densities as $\kappa'$ increases. This effect was observed for all $\kappa'$ studied. It is in agreement with the behavior of the order parameter $\langle P_2 \rangle$ and the pair distribution functions, and was also confirmed by the diffusion coefficients, shown in Fig. \ref{difusion} for $\kappa'=5$ and $20$. In addition, we explored the effect of changing the temperature at constant $\rho^*$ with the GB parameters ($\kappa$,$\kappa'$,$\mu$,$\nu$) fixed. We observed that the increase in temperature can suppress some of the structural phases, which in turn leads the pressure to exhibit fewer decays, as can be seen by comparing Fig. \ref{ppo-5101520} a) at the lower temperature with the results shown in Fig. \ref{ppo-5101520} b).
From the results on the parallel pair distribution function we believe that for high values of $\kappa'$ more than one smectic B phase can occur. In particular, when we simulated the GB fluid with $\kappa'=20$, the parallel pair correlation function showed increases in the amplitude of the maxima as the density was increased. A non-monotonic behavior of $g(r_{\|})$ surrounding two regions of higher values is seen in the second maximum, as indicated with arrows in Fig. \ref{gdr-32021-RHO} a). Results on both parallel and perpendicular translational diffusion coefficients were obtained for a wide range of densities under different conditions of temperature and for different values of $\kappa'$; they can help in the description of the ordered phases. We would like to mention that all the data reported here can be used to classify the different structural phases present in Gay-Berne fluids. Additional work is needed to complete this task; nevertheless, the data can be used in combination with thermodynamic integration for this purpose. \section*{Acknowledgements} The authors gratefully acknowledge the supercomputer facilities of Laboratorio Nacional de Superc\'omputo del Suroeste de M\'exico (LNS), projects 201801014N1R and 201901004N.
\section*{\refname}} {\begin{multicols}{2}[\section*{\refname}]}{}{} \patchcmd{\endthebibliography}{\endlist}{\endlist\end{multicols}}{}{} \patchcmd{\thebibliography}{\section*{\refname}}{}{}{} \usepackage{multicol,caption,setspace} \captionsetup[figure]{font={stretch=1.0}} \usepackage{etoolbox} \usepackage[usenames, dvipsnames]{color} \usepackage[normalem]{ulem} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{amsmath} \usepackage{mathrsfs} \usepackage{dsfont} \usepackage{bbm} \usepackage{subfigure} \newcommand{\mathbb R}{\mathbb R} \usepackage{float} \usepackage{flushend} \usepackage{scrextend} \newenvironment{Figure} {\par\medskip\noindent\minipage{\linewidth}} {\endminipage\par\medskip} \newcommand{\comment}[1]{} \newcommand{\blue}[1]{\textcolor[rgb]{0,0,1}{#1}} \definecolor{marina}{rgb}{1, 0, 0.5} \newcommand{\discdMMG}[1]{\textcolor{marina}{\sout{#1}}} \newcommand{\chngMMG}[1]{\textcolor{marina}{#1}} \newcommand{\vect}[1]{\boldsymbol{#1}} \title{Appropriate kernels for Divisive Normalization \\ explained by Wilson-Cowan equations\footnote{Parts of this work have been presented at MODVIS-18, and at the Celebration of Cowan's 50th Anniv. Univ. Chicago}} \date{\vspace{-5ex}} \author[1]{\vspace{-0.3cm}J. Malo\footnote{Correspondence: jesus.malo@uv.es}} \author[2]{M. Bertalmio\vspace{-0.3cm}} \affil[1]{\vspace{-0.00cm}\small{Image Processing Lab. Parc Científic, Universitat de València, Spain}} \affil[2]{\vspace{-0.00cm}Dept. Tecnol. Inf. Comunic., Universitat Pompeu Fabra, Barcelona, Spain} \renewcommand\Authands{ and} \begin{document} \maketitle \thispagestyle{empty} \renewcommand{\baselinestretch}{1.2} \vspace{-0.0cm} \begin{abstract} The interaction between wavelet-like sensors in Divisive Normalization is classically described through Gaussian kernels that decay with spatial distance, angular distance and frequency distance. 
However, simultaneous explanation of (a)~distortion perception in natural image databases and (b)~contrast perception of artificial stimuli requires very specific modifications in classical Divisive Normalization. First, the wavelet response has to be high-pass filtered before the Gaussian interaction is applied. Then, distinct weights per subband are required after the Gaussian interaction. In summary, the classical Gaussian kernel has to be left- and right-multiplied by two extra diagonal matrices. In this paper we provide a lower-level justification for this specific empirical modification required in the Gaussian kernel of Divisive Normalization. Here we assume that the psychophysical behavior described by Divisive Normalization comes from neural interactions following the Wilson-Cowan equations. In particular, we identify the Divisive Normalization response with the stationary regime of a Wilson-Cowan model. From this identification we derive an expression for the Divisive Normalization kernel in terms of the interaction kernel of the Wilson-Cowan equations. It turns out that the Wilson-Cowan kernel is left- and right-multiplied by diagonal matrices with high-pass structure. In conclusion, symmetric Gaussian inhibitory relations between wavelet-like sensors wired in the lower-level Wilson-Cowan model lead to the appropriate non-symmetric kernel that has to be empirically included in Divisive Normalization to explain a wider range of phenomena. \end{abstract} \section{Introduction} The general discussion on the circuits and mechanisms underlying Divisive Normalization addressed in \cite{Carandini12} suggests that there may be different architectures leading to this specific computation. Recent results suggest specific mechanisms for Divisive Normalization in certain situations \cite{Carandini16}, but the general debate on the different physiological implementations that may occur is still open.
On the other hand, several lines of evidence and functional advantages suggest that the interaction kernel in Divisive Normalization should be adaptive (i.e. signal or context dependent) \cite{Schwartz09,Schwartz11,Coen12,Coen13}. Therefore, it is interesting to relate this successful adaptive gain control computation to other models of interaction in neural populations to explore alternative implementations or new interpretations of signal dependence in the kernel. An interesting possibility to consider is the classical Wilson-Cowan model \cite{Wilson72,Wilson73}, which is subtractive in nature. Subtractive and divisive adaptation models have been qualitatively related \cite{Wilson93,Cowan02}. Both models have been shown to have similar advantages in information-theoretic terms: univariate local histogram equalization in Wilson-Cowan \cite{Bertalmio14} and multivariate probability density factorization in Divisive Normalization \cite{Malo10}. Additionally, both models provide similar descriptions of pattern discrimination \cite{Wilson93,Bertalmio17}. However, despite all these similarities and relations, no direct analytical correspondence between these models has been established yet. In this paper, we assume that the psychophysical behavior described by Divisive Normalization comes from neural interactions following the Wilson-Cowan equations. In particular, we identify the Divisive Normalization response with the stationary regime of a Wilson-Cowan model. From this identification we derive an expression for the Divisive Normalization kernel in terms of the interaction kernel of the Wilson-Cowan equations. Interestingly, this relation explains (or is consistent with) the sort of empirical modifications that have to be included ad hoc in Divisive Normalization based on Gaussian kernels to account for a variety of phenomena.
The structure of the paper is as follows: in Section \ref{motivation} we recall some results presented in \cite{Martinez17b}, where we showed the ad-hoc modifications required in classical Gaussian kernels in Divisive Normalization for a proper balance of the different subbands in contrast perception. In Section \ref{equivalence} we derive the analytical relation between the kernel of Divisive Normalization and the kernel in the Wilson-Cowan equations so that these models are equivalent. In Section \ref{analysis} we illustrate the elements and the effect of the analytical result using a specific input image. Finally, in Section \ref{discussion} we discuss the consequences of the result. \section{Motivation: empirically tuning the Divisive Normalization} \label{motivation} Cascades of Linear+NonLinear Divisive Normalization transforms \cite{Heeger92,Carandini94,Carandini12} can be easily tuned using the derivatives introduced in \cite{Martinez17a} to reproduce the perception of image distortion in naturalistic environments. Previous brute-force explorations \cite{Watson02,Malo10,Laparra10a} suggested that spatial interactions in divisively normalized wavelets are more relevant to reproduce subjective opinion than scale and orientation interactions. The good results obtained from the optimization of such spatial-only kernels confirm this \cite{Martinez17a}. In this intraband-only Divisive Normalization the vector of V1-like activations, $\vect{x}$, depends on the energy of linear wavelet responses, $\vect{e}$, dimension-wise normalized by a sum of neighbor energies, \vspace{-0.0cm} \begin{equation} \vect{x} = \frac{\vect{e}}{\vect{b} + H^{\vect{p}} \cdot \vect{e}} = \mathbb{D}^{-1}_{\left( \vect{b} + H^{\vect{p}} \cdot \vect{e} \right)} \cdot \vect{e} \label{DNormA} \end{equation} where the kernel $H^{\vect{p}}$ only considers the departure in spatial position, $\Delta \vect{p}$, between sensors of the same subband.
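A minimal numerical sketch of Eq.~\ref{DNormA}, only to verify that the element-wise and diagonal-matrix forms coincide (the values of $\vect{e}$, $\vect{b}$ and $H^{\vect{p}}$ below are random toy values, not fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                              # toy number of sensors in the subband
e = rng.random(d)                  # energies of the linear wavelet responses
b = 0.1 + rng.random(d)            # semisaturation constants
H = rng.random((d, d))             # toy interaction kernel H^p

# Element-wise (Hadamard) form of Divisive Normalization
x_hadamard = e / (b + H @ e)

# Equivalent matrix form, D^{-1}_{(b + H e)} . e
x_matrix = np.linalg.inv(np.diag(b + H @ e)) @ e

assert np.allclose(x_hadamard, x_matrix)
```

The diagonal-matrix form is convenient because standard matrix calculus then applies directly to the normalized responses.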
The division in Eq.~\ref{DNormA} is a Hadamard (element-wise) operation. Further computations are more intuitive if these Hadamard operations are substituted by regular products of diagonal matrices and vectors: $\vect{a} \odot \vect{b} = \mathbb{D}_{\vect{a}} \cdot {\vect{b}}$, where $\mathbb{D}_{\vect{a}}$ is a diagonal matrix with the vector $\vect{a}$ in the diagonal \cite{Minka00}. This is what leads to the (more convenient) matrix form of Divisive Normalization in the right-hand side of Eq. \ref{DNormA} \cite{Martinez17a}. We will refer to this intraband-only model as \textbf{Model A}. Fig.~\ref{param_perform_A} shows the intraband kernel and semisaturation required for the Divisive Normalization to reproduce perceived distortion in naturalistic databases. \vspace{-0.0cm} \paragraph{Obvious limitations of intraband kernels.} Despite the successful optimization of \textbf{Model A} over large naturalistic image quality databases \cite{Martinez17a}, some basic effects with artificial stimuli may be poorly reproduced \cite{Martinez17b}: while \textbf{Model A} explains cross-orientation and cross-scale masking for low frequency tests seen on high frequency backgrounds, it fails to do so the other way around. Fig.~\ref{failureA} shows series of synthetic data that illustrate the nonlinear and adaptive nature of contrast responses. Moreover, it shows the failures of \textbf{Model A} for high frequency tests. To fix this, a more balanced interaction between subbands in the denominator of Eq. \ref{DNormA} is required, \emph{which cannot be introduced in intraband-only kernels}. \vspace{-0.0cm} \begin{figure}[!b] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{-0.0cm} \includegraphics[width=\textwidth]{parameters_performance_A.JPG} \\ \end{tabular} \vspace{-0.15cm} \caption{\emph{Parameters of MODEL-A (left) and performance on large scale naturalistic database (right)}.
The parameters are: the interaction kernel (matrix on top), and the semisaturation per subband vector (plotted as a function of the wavelet coefficient, from high to low frequencies).} \label{param_perform_A} \vspace{-0.15cm} \end{figure} \begin{figure}[!t] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{0.5cm} \includegraphics[width=0.7\textwidth]{Efecto_celula.JPG} \\[0.1cm] \hline \\[0.1cm] \hspace{-0.0cm} \includegraphics[width=0.75\textwidth]{Failure2.jpg} \\ \end{tabular} \vspace{-0.15cm} \caption{Experimental response of V1 neurons (mean firing rate) in masking situations. Adapted from Schwartz and Simoncelli (2001); Cavanaugh (2000). It is important to stress the decay in the response when test and mask have the same spatio-frequency characteristics, as opposed to the case where they do not (difference in the circles in green). \emph{Relative success and failures of the optimized model}. Model-related construction of stimuli simplifies the reproduction of results from model outputs and the straightforward interpretation of results. }\label{failureA} \vspace{-0.15cm} \end{figure} \paragraph{Solution goes beyond Watson \& Solomon kernels.} The first guess to fix the imbalance is substituting the spatial-only kernel $H^{\vect{p}}$ in Eq. \ref{DNormA} by more general kernels, such as the one proposed by Watson \& Solomon, $H^{ws} = H^{\vect{p}} \odot H^{f} \odot H^{\phi}$, which depends not only on departures in position, $\vect{p}$, but also in frequency, $f$, and in orientation, $\phi$ \cite{Watson97}. We will call this first guess for correction \textbf{Model B - naive}. However, it turns out that Gaussian $H^{ws}$ may not provide the appropriate balance either: low frequency backgrounds may still have too much energy and bias the result for high frequency tests.
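A minimal sketch of such a Watson \& Solomon-style kernel, $H^{ws} = H^{\vect{p}} \odot H^{f} \odot H^{\phi}$, built as an element-wise product of Gaussians over departures in position, frequency (in octaves) and orientation (the sensor grid and all widths are illustrative assumptions, not the fitted values):

```python
import numpy as np

# Toy set of four sensors: (position, frequency in cpd, orientation in deg)
positions = np.array([0., 1., 0., 1.])
freqs     = np.array([1., 1., 4., 4.])
orients   = np.array([0., 0., 0., 90.])

def gauss(d, sigma):
    """Unnormalized Gaussian over a departure d."""
    return np.exp(-d**2 / (2 * sigma**2))

dp   = positions[:, None] - positions[None, :]     # spatial departures
df   = np.log2(freqs[:, None] / freqs[None, :])    # departures in octaves
dphi = orients[:, None] - orients[None, :]         # orientation departures

# Hadamard product of the three Gaussian factors (assumed widths):
Hws = gauss(dp, 1.0) * gauss(df, 1.5) * gauss(dphi, 30.0)

# With a roughly 1/f energy distribution (typical of natural images),
# the interaction H^ws · e on a high-frequency sensor is dominated
# by the low-frequency terms -- the imbalance described in the text:
e = 1.0 / freqs
masking = Hws @ e
```

In this toy example, the low-frequency contributions to the masking of the high-frequency sensors exceed their own-band contribution, which is exactly why purely Gaussian $H^{ws}$ may bias the result for high frequency tests.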
In \cite{Martinez17b} we showed that this may be fixed ad-hoc by \emph{left} and \emph{right} multiplication of the Watson \& Solomon kernel with extra diagonal matrices: \begin{equation} H = \mathbb{D}_{\vect{l}} \cdot H^{ws} \cdot \mathbb{D}_{\vect{r}} \label{new_kernel_eq} \end{equation} While $\mathbb{D}_{\vect{r}}$ pre-weights the subbands of $\vect{e}$ to moderate the effect of low frequencies before computing the interaction, $\mathbb{D}_{\vect{l}}$ tunes the relative weight of the masking for each sensor, moderating low frequencies again. In addition to the changes in $H$ to account for the artificial stimuli, the Model B variants include an extra constant that keeps the output dynamic range as in the simpler model of Eq.~\ref{DNormA}, in order to preserve the good performance of \textbf{Model A} for naturalistic stimuli. We will refer to this empirically tuned model as \textbf{Model B - fine-tuned}. Fig.~\ref{new_parameters} compares the parameters of Model B - naive and Model B - fine-tuned, while Fig.~\ref{successB} shows how the fine-tuning solves the problem for artificial stimuli and preserves the good behavior on the naturalistic database. \begin{figure}[!t] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{-1cm} \includegraphics[width=1.1\textwidth]{parameters_B.JPG} \\ \end{tabular} \vspace{-0.15cm} \caption{\emph{Parameters of the modified models}. The left panel shows the interaction matrix and the semisaturation vector of the first guess for Model B. It is called \emph{naive} because the semisaturation and amplitudes of the kernel are imported from the optimized case. The panel at the right shows the corresponding parameters for the fine-tuned version of Model B.
}\label{new_parameters} \vspace{-0.15cm} \end{figure} \begin{figure}[!t] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{-1.0cm} \includegraphics[width=1.1\textwidth]{performance_resp_B.JPG} \\ \end{tabular} \vspace{-0.15cm} \caption{\emph{Responses for artificial stimuli (left) and performance in the natural image database (right) of the naive model (top) and fine-tuned model (bottom)}. }\label{successB} \vspace{-0.15cm} \end{figure} \paragraph{Question: where does the fine-tuned solution come from?} Summarizing, in order to account simultaneously for the perception of distortion in naturalistic databases and for contrast perception of synthetic stimuli, the response of \textbf{Model B - fine-tuned} is: \begin{equation} \vect{x} = \mathbb{D}_{\vect{k}} \cdot \mathbb{D}^{-1}_{\left( \vect{b} + H \cdot \vect{e} \right)} \cdot \vect{e} \label{DN_B} \end{equation} where the interaction kernel $H$ requires a specific structure, i.e., Eq.~\ref{new_kernel_eq}, and the vectors $\vect{l}$ and $\vect{r}$ have a high-pass nature (see Fig.~\ref{new_parameters}). \vspace{0.0cm} The question is: \emph{where does the structure in Eq. \ref{new_kernel_eq} come from?} \vspace{-0.0cm} In order to explain this structure, here we hypothesize that the psychophysical behavior described by Divisive Normalization comes from (lower-level) neural interactions in V1 (e.g. the classical Wilson-Cowan equations \cite{Wilson72,Wilson73}). In particular, we identify the Divisive Normalization response with the stationary regime of a Wilson-Cowan model. From this identification, in Section \ref{equivalence} we derive a novel expression for the Divisive Normalization kernel in terms of the interaction kernel of the Wilson-Cowan equations. As shown below, this will explain the structure in Eq. \ref{new_kernel_eq}.
\section{Equivalence between Divisive Normalization and Wilson-Cowan} \label{equivalence} The Divisive Normalization model \cite{Carandini94,Carandini12} and the Wilson-Cowan model \cite{Wilson72,Wilson73} are alternative (divisive versus subtractive) formulations of the interactions among sensors in neural populations. In this work we assume that the psychophysical behavior described by the Divisive Normalization response is the stationary solution of the dynamic system defined by the Wilson-Cowan equations, which leads to a relation between the parameters that describe the interaction and the auto-saturation in these models. This relation is relevant because it explains the kind of empirical modifications that had to be introduced ad-hoc in \cite{Martinez17b} in the standard Gaussian kernels of Divisive Normalization to simultaneously reproduce the perception of distortions on naturalistic and artificial environments. \subsection{Modelling cortical interactions.} In the case of the V1 cortex, we refer to the set of responses of a population of simple cells as the vector $\vect{y}$. The considered models (Divisive Normalization and Wilson-Cowan) define a nonlinear mapping that transforms the input vector $\vect{y}$ (before the interaction among neurons) into the output vector $\vect{x}$ (after the interaction), \vspace{-0.2cm} \begin{equation} \xymatrixcolsep{2pc} \xymatrix{ \vect{y} \,\,\,\, \ar@/^0.7pc/[r]^{\scalebox{0.85}{$\mathcal{N}$}} & \,\,\,\, \vect{x} } \label{global_response} \end{equation} In this setting, responses are called \emph{excitatory} or \emph{inhibitory}, depending on the corresponding \emph{sign} of the signal: $\vect{y} = \textrm{sign}(\vect{y}) |\vect{y}| $, and $\vect{x} = \textrm{sign}(\vect{x}) |\vect{x}|$. The map $\mathcal{N}$ is an adaptive saturating transform, but it preserves the sign of the responses (i.e. $\textrm{sign}(\vect{x})=\textrm{sign}(\vect{y})$). 
Therefore, the models care about cell activation (the modulus $|\cdot|$) but not about the excitatory or inhibitory nature of the sensors (the $\textrm{sign}(\cdot)=\pm$). We will refer to the vector $\vect{e} = |\vect{y}|^\gamma$ as the \emph{energy} of the input responses, where this is an element-wise exponentiation of the amplitudes $|y_i|$. Given the sign-preserving nature of the nonlinear mapping, for the sake of simplicity in notation, in the rest of the paper the variables $\vect{y}$ and $\vect{x}$ refer to the activations $|\vect{y}|$ and $|\vect{x}|$. \vspace{-0.0cm} \subsection{The Divisive Normalization model} \paragraph{Forward transform.} The input-output transform in the Divisive Normalization is given by Eq.~\ref{DN_B}: the output vector of nonlinear activations in V1, $\vect{x}$, depends on the energy of the input linear wavelet responses, $\vect{e}$, which are dimension-wise normalized by a sum of neighbor energies. The normalization by $\vect{b} + H \cdot \vect{e}$ and the non-diagonal nature of the interaction kernel $H$ imply that a specific response will be attenuated if the activity of the neighbor sensors is high. Each row of the kernel $H$ describes how the energies of the neighbor sensors attenuate the activity of each sensor after the interaction. Each element of the vectors $\vect{b}$ and $\vect{k}$ respectively determines the semisaturation and the dynamic range of the nonlinear response of each sensor. \vspace{-0.0cm} \paragraph{Inverse transform.} The relation between the two models is easier to obtain by identifying the corresponding decoding transforms in both models.
In the case of Divisive Normalization, the analytical inverse is \cite{Malo06a,Martinez17a}: \begin{equation} \vect{e} = \left( I - \mathbb{D}^{-1}_{\vect{k}}\cdot\mathbb{D}_{\vect{x}}\cdot H \right)^{-1} \cdot \mathbb{D}_{\vect{b}} \cdot \mathbb{D}^{-1}_{\vect{k}} \cdot \vect{x} \label{invDN} \end{equation} \vspace{-0.0cm} \subsection{The Wilson-Cowan model} \paragraph{Dynamical system.} In the Wilson-Cowan model the variation of the activation vector, $\vect{\dot{x}}$, increases with the energy of the input, $\vect{e}$, but, for each sensor, this variation is also moderated by its own activity and by a linear combination of the activities of the neighbor sensors, \vspace{-0.0cm} \begin{equation} \vect{\dot{x}} = \vect{e} - \mathbb{D}_{\vect{\alpha}} \cdot \vect{x} - \vect{W} \cdot f(\vect{x}) \label{EqWC} \end{equation} where $\vect{W}$ is the matrix that describes the damping factor between sensors, and $f(\vect{x})$ is a dimension-wise saturating nonlinearity. Different convenient approximations can be taken for that saturation (either piece-wise or continuous, see Fig.~\ref{f_x}), for instance (a) $f(\vect{x}) \approx \vect{x}$, or (b) $f(\vect{x}) \approx \vect{x}^\beta$. Note that in Eq.~\ref{EqWC} both the inhibitory and the excitatory cells are included in the same vector, thus the two traditional Wilson-Cowan equations are represented here by a single expression.
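The dynamics of Eq.~\ref{EqWC} can be integrated numerically. The following minimal sketch (toy sizes, an assumed Gaussian wiring $\vect{W}$, the linear approximation $f(\vect{x}) \approx \vect{x}$, and plain forward-Euler steps, all our assumptions) lets the system settle into the stationary state $\vect{\dot{x}} = 0$, where simple algebra gives $\vect{e} = \mathbb{D}_{\vect{\alpha}} \cdot \vect{x} + \vect{W} \cdot \vect{x}$:

```python
import numpy as np

np.random.seed(1)
n = 6
idx = np.arange(n)
e = np.abs(np.random.randn(n))            # input energies
alpha = np.ones(n)                        # auto-attenuation (illustrative)
W = 0.1 * np.exp(-(idx[:, None] - idx[None, :])**2 / 2.0)  # assumed wiring

x = np.zeros(n)
dt = 0.05
for _ in range(5000):                     # forward-Euler integration
    x = x + dt * (e - alpha * x - W @ x)  # f(x) ~ x (approximation a)

residual = e - (alpha * x + W @ x)        # ~ 0 at the stationary state
```

The integration is stable here because the eigenvalues of $\mathbb{D}_{\vect{\alpha}} + \vect{W}$ are positive and the step is small relative to them.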
\paragraph{Steady state and inverse.} Under the approximation (a) of the saturation, the stationary solution of the above differential equation, $\vect{\dot{x}} =0$ in Eq.~\ref{EqWC}, leads to the following decoding (input-from-output) relation: \begin{equation} \vect{e} = \left( \mathbb{D}_{\vect{\alpha}} + \vect{W} \right) \cdot \vect{x} \label{invWC} \end{equation} \begin{Figure} \centering \small \includegraphics[width=4cm,height=3.5cm]{f_x.JPG} \captionof{figure}{\emph{\textbf{Saturating function in Wilson-Cowan.} Sensible choices for this function include piece-wise linear functions (example in red) or saturating exponentials (example in orange).}} \label{f_x} \end{Figure} \vspace{-0.0cm} \subsection{Analytical relation between models} In this work we derive the equivalence between the models through the identification between the decoding equations in both cases (Eqs. \ref{invDN} and \ref{invWC}). This allows us to establish a relation between the parameters that describe the interaction and the auto-attenuation in both models. The identification is simpler by taking the series expansion of the inverse in Eq. \ref{invDN}. This expansion was used in \cite{Malo06a} because it clarifies the condition for invertibility of Divisive Normalization: \begin{equation} \left( I - \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{x}} \cdot H \right)^{-1} = I + \sum_{n=1}^{\infty} \left( \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{x}} \cdot H \right)^n \nonumber \end{equation} The inverse exists if the eigenvalues of $\mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{x}} \cdot H$ are smaller than one so that the series converges. In fact, if the eigenvalues are small, the inverse can be well approximated by a small number of terms in the series.
\begin{eqnarray} \vect{e} & = & \mathbb{D}_{\vect{b}} \cdot \mathbb{D}^{-1}_{\vect{k}} \cdot \vect{x} + \left( \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{x}} \cdot \vect{H} \right) \cdot \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{b}} \cdot \vect{x} + \nonumber \\ & & + \left( \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{x}} \cdot \vect{H} \right)^2 \cdot \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{b}} \cdot \vect{x} + \nonumber \\ & & + \left( \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{x}} \cdot \vect{H} \right)^3 \cdot \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{b}} \cdot \vect{x} + \cdots \nonumber \\[0.4cm] \vect{e} &\approx& \left( \mathbb{D}_{\vect{b}} \cdot \mathbb{D}^{-1}_{\vect{k}} + \mathbb{D}^{-1}_{\vect{k}} \cdot \mathbb{D}_{\vect{x}} \cdot \vect{H} \cdot \mathbb{D}_{\vect{b}} \cdot \mathbb{D}^{-1}_{\vect{k}} \right) \cdot \vect{x} \label{approx_invDN} \end{eqnarray} Now, the identification of Eqs.~\ref{approx_invDN} and \ref{invWC} (i.e. the first order approximation of the inverse of the Divisive Normalization, and the decoding assuming a piece-wise linear approximation of $f(\vect{x})$ in the Wilson-Cowan model) is straightforward. As a result, we get the following relations between the parameters of both models: \begin{eqnarray} \vect{b} &=& \vect{k} \odot \vect{\alpha} \nonumber \\ H &=& \mathbb{D}_{\left(\frac{\vect{k}}{\vect{x}}\right)} \cdot \vect{W} \cdot \mathbb{D}_{\left(\frac{\vect{k}}{\vect{b}}\right)} \label{relation_W_H} \end{eqnarray} Note that the resulting $H$ in Eq. \ref{relation_W_H} has exactly the structure that had to be introduced ad-hoc in Eq.~\ref{new_kernel_eq}. Both models are equivalent if the Divisive Normalization kernel inherits the structure from the Wilson-Cowan kernel modified by these pre- and post- diagonal matrices.
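The identification in Eq.~\ref{relation_W_H} can be checked numerically. A minimal sketch (all values illustrative) builds the Divisive Normalization parameters from the Wilson-Cowan ones and verifies that the first-order inverse of the Divisive Normalization coincides with the Wilson-Cowan decoding:

```python
import numpy as np

np.random.seed(2)
n = 5
k = np.random.uniform(1.0, 2.0, n)        # dynamic-range vector of DN
alpha = np.random.uniform(0.5, 1.5, n)    # Wilson-Cowan auto-attenuation
W = 0.05 * np.random.rand(n, n)           # Wilson-Cowan interaction kernel
x = np.random.uniform(0.1, 1.0, n)        # a given response vector

# Divisive Normalization parameters from the derived relations:
b = k * alpha                             # b = k (Hadamard) alpha
H = np.diag(k / x) @ W @ np.diag(k / b)   # H = D_(k/x) . W . D_(k/b)

# First-order inverse of Divisive Normalization ...
Dk_inv = np.diag(1 / k)
e_dn = (np.diag(b) @ Dk_inv
        + Dk_inv @ np.diag(x) @ H @ np.diag(b) @ Dk_inv) @ x
# ... versus the Wilson-Cowan decoding:
e_wc = (np.diag(alpha) + W) @ x
```

With these substitutions the interaction term reduces exactly to $\vect{W}$ and the diagonal term to $\mathbb{D}_{\vect{\alpha}}$, so both decoders agree.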
Note that the weights after the interaction (the diagonal matrix at the left) are signal dependent, which implies that the interaction kernel in Divisive Normalization should be adaptive. In the next Section we show that the vectors (Hadamard quotients) $\vect{k}/\vect{x}$ and $\vect{k}/\vect{b}$ do have the high-pass frequency nature that explains why the low frequencies in $\vect{e}$ had to be attenuated by $\vect{r}$ and $\vect{l}$. \section{Analysis of the equivalence} \label{analysis} In this section we consider an illustrative signal and sensible values for the parameters involved in Eq. \ref{relation_W_H} ($\vect{k}$, $\vect{b}$ and $\vect{W}$) to analyze the effects on the Divisive Normalization kernel and compare with the hand-crafted kernel in Eq. \ref{new_kernel_eq}. Here we compare the empirical filters (vectors $\vect{l}$ and $\vect{r}$) presented in Section 1 (Eq. \ref{new_kernel_eq}) with the corresponding vectors in Eq. \ref{relation_W_H}. We also compare the masking term in the denominator of Divisive Normalization using (1) the Gaussian kernel, $H^{ws} \cdot \vect{e}$, (2) the empirically modified kernel, $\mathbb{D}_{\vect{l}} \cdot H^{ws} \cdot \mathbb{D}_{\vect{r}} \cdot \vect{e}$, and (3) the theoretically derived kernel obtained from Eq. \ref{relation_W_H}, $\mathbb{D}_{\left(\frac{\vect{k}}{\vect{x}}\right)} \cdot \vect{W} \cdot \mathbb{D}_{\left(\frac{\vect{k}}{\vect{b}}\right)} \cdot \vect{e}$. In this comparison we assume a Gaussian wiring in $\vect{W}$. Before doing so, given an illustrative input image, Fig.~\ref{explanation} shows the corresponding responses of linear and nonlinear V1-like sensors based on steerable wavelets. Typical responses for natural images are low-pass signals. Fig.~\ref{filters} compares the empirical vectors with those based on the relation with the Wilson-Cowan model: both show a similar high-pass nature. Fig.
\ref{kernels} compares different individual interaction kernels (rows of the kernel matrix) for the two sensors highlighted in yellow and blue in Fig. \ref{explanation}. Fig. \ref{masking} compares the masking terms: the one derived from the proposed relation is more similar to the (more general) empirically tuned result than to the one based on the (more limited) Gaussian kernel. \begin{figure}[!t] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{-2.0cm} \includegraphics[width=1.2\textwidth]{response_explanation.JPG} \\ \end{tabular} \vspace{-0.15cm} \caption{\emph{Responses of the model in V1: localized oriented filters at different scales}. Natural images typically have low-pass responses. Divisive Normalization implies adaptive saturating nonlinearity depending on the neighbors (i.e. a family of sigmoid functions in the input-output scatter plots). Note the response of the medium frequency horizontal sensors tuned to different positions highlighted in yellow and blue. }\label{explanation} \vspace{-0.15cm} \end{figure} \begin{figure}[!t] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{-0.0cm} \includegraphics[width=0.8\textwidth]{filters.JPG} \\ \end{tabular} \vspace{-0.15cm} \caption{\emph{Vectors in the left- and right- diagonal matrices that multiply the Gaussian kernel in the empirical tuning represented by Eq. \ref{new_kernel_eq} (top) and in the theoretically derived Eq. \ref{relation_W_H} (bottom)}. }\label{filters} \vspace{-0.15cm} \end{figure} \begin{figure}[!t] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{-0.0cm} \includegraphics[width=0.7\textwidth]{kernelsV2.JPG} \\ \end{tabular} \vspace{-0.15cm} \caption{\emph{Interaction kernels for the sensors highlighted in yellow and blue in Fig. \ref{explanation}}. In the Gaussian case (top) both low and high frequencies symmetrically affect each sensor. 
This implies that the higher energy of low-frequency components biases the response and ruins the masking curves. This was corrected ad-hoc using right- and left- multiplication in Eq. \ref{new_kernel_eq} by hand-crafted high-pass filters. This leads to the empirical kernels in the center. In both cases (Gaussian and hand-crafted, top and center) the size of the interaction neighborhood is signal independent (the same for both locations). The kernels at the bottom are those obtained from Eq. \ref{relation_W_H}. These theoretically-derived kernels remove the bias due to the low frequencies (just as the hand-crafted kernel), but also introduce an extra signal dependence: note that the interaction neighborhood now depends on the location (bigger for the sensor in blue) due to the different value of the signal. }\label{kernels} \vspace{-0.15cm} \end{figure} \begin{figure}[!t] \centering \small \setlength{\tabcolsep}{2pt} \begin{tabular}{c} \hspace{-0.0cm} \includegraphics[width=0.7\textwidth]{maskingsV2.JPG} \\ \end{tabular} \vspace{-0.15cm} \caption{\emph{Masking term in the denominator of the Divisive Normalization for various kernel choices.} Gaussian kernel (top), empirically tuned hand-crafted kernel (center), and theoretically derived kernel (bottom).}\label{masking} \vspace{-0.15cm} \end{figure} \section{Final remarks} \label{discussion} This relation between models has a range of consequences. First, assuming a fixed (hard-wired) interaction between the sensors in the Wilson-Cowan model, Eq.~\ref{relation_W_H} implies that the required kernel in Divisive Normalization, $H$, not only inherits the wiring in $\vect{W}$, but should also be signal-dependent. Second, functional forms depending on \emph{proximity} (as in the Watson-Solomon kernel $H^{ws}$) seem sensible choices for the wiring in $\vect{W}$, which would justify the hand-crafted trick in Eq. \ref{new_kernel_eq}.
Last, and most importantly, Eq.~\ref{relation_W_H} implies that the variety of dynamic analyses already done for Wilson-Cowan systems \cite{Sejnowski09} can also be applied to the wide range of phenomena described by Divisive Normalization. \section*{References} \bibliographystyle{unsrt} \renewcommand{\baselinestretch}{1} {\footnotesize
\section{Introduction} \noindent Over the last decade, the amount of video data that needs to be processed has increased exponentially due to the frequent use of cameras everywhere \cite{lu2016optasia, xu2019vstore, moll2020exsample, poms2018scanner}. A wide range of promising applications such as intelligent transportation systems, security systems, augmented/virtual reality, and advanced driving assistance systems rely on video analytics. Video analytics deals with the automatic recognition of temporal and spatial events in videos. For instance, video analytics recognizes and detects humans and objects automatically in a video stream. Moreover, it recognizes human movements and actions, describes and captions videos, and classifies activities and objects. Recently, machine learning and computer vision have been used in multiple domains and they have revolutionized the field of video analytics. Applications use video analytics to take immediate actions based on their decisions without any human interaction. Besides, several database management systems that work with data and query processing on videos offer advanced features \cite{daum2020tasm} to support video analytics. \noindent Although video analytics is a popular research area now, it is still challenging to reach the required accuracy and performance with the algorithms available to date. Query processing in video streams needs to be fast, precise, and scalable. It is important for the algorithms used in video analytics to be efficient enough for huge amounts of data. For instance, while many applications require query results in real time, others can permit a lag of even many minutes to process queries. This allows for temporarily reallocating some resources from the lag-tolerant queries during an interim shortage of resources. Such shortages happen due to a burst of new video queries or “spikes” in resource usage of existing queries (e.g.
due to an increase in the number of cars to track on the road) \cite{zhang2017live}. \subsection{Motivation} \subsubsection{Motivation for using Video Analytics} \noindent Motion detection, object tracking, and scene analysis are all considered video analytics. Motion detection compares the current image with the static background of the scene. Object tracking finds the object in the current frame that corresponds to the object in the next frame. Scene analysis recognizes actions in the scene. Video analytics helps in understanding the situation in a video and predicting the next steps by tracking objects in the video. The detected behaviour of the object in the video helps users to take actions accordingly. Another example of video analytics applications is autonomous vehicles. Self-driving cars have to detect objects in real time to avoid accidents and collisions. Therefore, optimizations in video analytics are required to reach higher accuracy and lower latency. \noindent Moreover, researchers use video analytics techniques to recognize the most commonly occurring road factors that influence the interaction between autonomous vehicles and other traffic participants \cite{madigan2019understanding}. Video analytics is also crucial in improving safety and security. For instance, behavior detection by determining a person’s posture is used in safety and security applications to detect if a person has fallen, crouched down, or jumped over barriers \cite{okita2020ai}. \subsubsection{Motivation of our work (differences from other surveys)} \noindent The current video analytics surveys adopt spatio-temporal and content-based viewpoints \cite{shih2017survey}. Some papers such as \cite{wang2003video, zhang2018physics, kong2018human} focus on video analytics technologies for human behavior and actions. Other works such as \cite{shih2017survey, cuevas2020techniques} focus on video analytics for sports.
\cite{olatunji2019video} is a survey that focuses on video analytics and techniques for surveillance camera data and categorizes video analytics subdomains as behavior analysis, moving object classification, video summarization, object detection, object tracking, and congestion analysis. \cite{zhang2019edge} is another survey that focuses on reviewing the video analytics algorithms used in public safety. \cite{zhang2020machine} is a survey that focuses on techniques and methods for optimizing video coding, which is a video content representation format used for storing and transmitting video data. Moreover, there is a survey on human group activity recognition by analyzing person actions from video sequences using machine learning techniques \cite{kulkarni2020survey}. In \cite{premkumar2021video}, modern deep learning-based video analytics approaches are compared with standard Computer Vision-based approaches for Internet of Things (IoT) devices. \noindent As mentioned earlier, the previous surveys have reviewed application-specific methods; however, they do not include general-purpose video analytics techniques that are focused on optimizing the end-to-end performance of video analysis. In this survey paper, we focus only on Performance Optimization in Video Analytics Systems and we review how these systems work and the optimization strategies that they use. Other works such as \cite{suprem2020odin, ran2018deepdecision, bastani2020vaas, poddar2020visor} focus on improving accuracy and privacy; hence, they are beyond the scope of this survey paper and we do not include them in our review. \subsection{Contributions} \label{sec:contributions} \noindent In this paper, we present an in-depth review of the recent advances in Optimization-based Video Analytics techniques. Our contributions are listed as follows: \begin{itemize} \item We perform an in-depth review of Optimization-based Video Analytics techniques.
Our review consists of the definitions, the workflow, and the ideas proposed in each technique to improve the end-to-end performance. \item We categorize the reviewed techniques based on the optimization strategy that they utilize. \end{itemize} \subsection{Paper Organization} \label{sec:organization} \noindent The remainder of the paper is organized as follows: In Section \ref{sec:taxonomy}, we present a detailed review of the Video Analytics Techniques that are proposed by different authors to optimize and improve the performance in various applications. Finally, we conclude the paper in Section \ref{sec:conclusions}. \section{Optimization-based Video Analytics Techniques} \label{sec:taxonomy} \noindent Traditional and naive video analytics systems require multiple queries to the learned deep models and suffer from high computational costs. Over the past few years, researchers have proposed various optimization techniques to improve the performance of video analytics systems. In this section, we review the optimization-based methods that are proposed in video analytics systems to improve performance. We categorize the proposed techniques based on their type of optimization. Note that techniques in each category are sorted by their year of publication. A diagram of the categorization is shown in Figure \ref{fig:taxonomy}.
\begin{figure} \centering \begin{forest} forked edges, for tree={% thick, anchor=center, drop shadow, node options={ draw, font=\sffamily }, where level=0{ parent }{ folder, grow'=0, }, where level=1{ minimum height=1cm, child, for descendants={% grandchild, minimum height=0.6cm, }, for children={ before computing xy={s+=5mm}, } }{}, } [Optimization-based Video Analytics Techniques, xshift=3.6em [Query\\ Optimization [BlazeIt] [Rekall] ] [Inference\\ Optimization [NoScope] [Probabilistic Predicates] [Focus] [Chameleon] [Tahoma] [Panorama] [Smol] [THIA] ] [Frame-level\\ Optimization [SVQ] [MARLIN] [ExSample] [MIRIS] [Reducto] [AQuA] ] [Storage\\ Optimization [LightDB] [Vstore] [TASM] ] [Parallel\\ Processing [Optasia] [VideoStorm] [Scanner] [VideoEdge] [CONVINCE] [Distream] [RES] [Spatula] ] ] \end{forest} \caption{Categorization of Video Analytics Techniques based on their optimization strategy} \label{fig:taxonomy} \end{figure} \subsection{Query Optimization} \subsubsection{BlazeIt} \noindent BlazeIt \cite{kang2018blazeit} is a video analytics system with a declarative query language (called FRAMEQL), an aggregation algorithm, and an algorithm for limit queries. FRAMEQL is mainly used to retrieve the spatio-temporal information of video objects. The aggregation algorithm improves aggregation efficiency by up to 14× compared to existing approximate query processing techniques by using control variates to leverage specialized neural networks. The algorithm for limit queries achieves up to an 83× speedup over recent work in video analytics and random sampling by using specialized neural networks. BlazeIt optimizes query execution time by avoiding materialization using proxy models. \noindent Moreover, the authors mention that systems like NoScope \cite{kang2017noscope} and Focus \cite{hsieh2018focus} are inflexible and cannot adapt to the user queries.
The authors mention that these systems do not support the extension of specialization to novel optimizations for aggregation and limit queries, which BlazeIt supports. \subsubsection{Rekall} \noindent Rekall \cite{fu2019rekall} is a library that exposes a data model and programming model for compositional specifications of video events. The compositional specifications of video events are proposed as a human-in-the-loop approach to quickly detect new interesting events in the video by adapting ideas from multimedia databases and complex event processing over temporal data streams. The experiments show that users who use Rekall can develop queries to retrieve new events in a video in only one hour. \subsection{Inference Optimization} \subsubsection{NoScope} \noindent NoScope \cite{kang2017noscope} is a system for querying videos that accelerates neural network analysis over videos using inference-optimized model search. Given an input video, reference neural network, and the target object, it automatically searches for and trains several models. NoScope takes advantage of two types of models. The first type is specialized models that dispense with the generality of standard neural networks in exchange for much faster inference. The second type is difference detectors that identify temporal differences across frames. NoScope combines these two models by performing efficient cost-based optimization to select the model architecture and thresholds for each model to maximize throughput subject to a specified accuracy target. The NoScope prototype demonstrates speedups of two to three orders of magnitude for binary classification on fixed-angle video streams, with a 1-5\% loss in accuracy.
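The cascade just described can be illustrated schematically. In the following sketch, the three components are stand-in stubs (the threshold values and the toy scoring functions are our assumptions, not NoScope's actual models): a difference detector skips near-identical frames, a cheap specialized model answers confident cases, and the expensive reference model is invoked only for uncertain scores:

```python
import numpy as np

def difference_detector(prev_frame, frame, threshold=10.0):
    """Cheap filter: does this frame differ enough from the previous one?"""
    return np.mean(np.abs(frame - prev_frame)) > threshold

def specialized_model(frame):
    """Stand-in for the cheap specialized CNN (returns a score in [0, 1])."""
    return float(frame.mean() / 255.0)    # placeholder logic

def reference_model(frame):
    """Stand-in for the expensive reference network (binary label)."""
    return bool(frame.mean() > 127)       # placeholder logic

def cascade(frames, low=0.2, high=0.8):
    """NoScope-style cascade: reuse the previous label for near-identical
    frames, answer confident scores with the specialized model, and fall
    back to the reference model only for uncertain frames."""
    results, prev = [], None
    for frame in frames:
        if prev is not None and not difference_detector(prev, frame):
            results.append(results[-1])   # temporal redundancy: reuse label
        else:
            s = specialized_model(frame)
            if s <= low:
                results.append(False)
            elif s >= high:
                results.append(True)
            else:
                results.append(reference_model(frame))  # expensive fallback
        prev = frame
    return results
```

The efficiency comes from how rarely the reference model runs: static stretches of video are answered by label reuse, and most remaining frames fall outside the uncertain score band.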
\subsubsection{Probabilistic Predicates} \noindent In \cite{lu2018accelerating}, the authors focus on accelerating machine learning inference queries using Probabilistic Predicates, which are executed over the raw input without the need for the predicate column to mirror the original query predicates. The primary use of probabilistic predicates in this work is to filter data blobs that do not satisfy the query predicate. This filtering is parameterized to different target accuracies. The results show that the proposed method has as much as 10× better performance compared to machine learning queries on various large-scale datasets. Moreover, the authors mention that their Probabilistic Predicates strategy differs from NoScope \cite{kang2017noscope} in supporting a wider range of queries and datasets. \subsubsection{Focus} \noindent Focus \cite{hsieh2018focus} is a new system that flexibly divides the query processing work between ingest time and query time. At ingest time, Focus builds an approximate index of all possible classes of objects in each frame using cheap convolutional neural networks (CNNs). At query time, it leverages the approximate indices and uses expensive CNNs to compensate for the lower precision. Focus reduces the ingest cost on average by 48× (up to 92×) compared to using expensive CNNs. Also, Focus makes queries on average 125× (up to 607×) faster than a state-of-the-art video querying system like NoScope \cite{kang2017noscope}. \subsubsection{Chameleon} \noindent Chameleon \cite{jiang2018chameleon} is a video analytics system that reduces profiling costs, optimizes resource consumption, and improves inference accuracy of video analytics pipelines. Optimization in Chameleon is done by adapting the configuration of existing neural network-based video analytics pipelines in real time.
The key insight behind adapting the configuration is that the underlying characteristics that determine the best configuration have enough spatio-temporal correlation to amortize the search cost over time and across multiple video feeds. \noindent Analysis shows that continuously adapting neural network configurations saves computation resources by up to 10× and improves accuracy by up to 2× compared to one-time tuning. Moreover, evaluation on the feeds of five cameras shows that Chameleon can improve accuracy by 20\% to 50\% with the same amount of resources, or achieve the same accuracy with only 30\% to 50\% of the resources, compared to a baseline that picks a single optimal configuration offline. \subsubsection{TAHOMA} \noindent TAHOMA \cite{anderson2019physical} is a system that accelerates content extraction from large visual datasets to support visual analytics queries. TAHOMA generates and evaluates many cascade classifiers, jointly optimizing the convolutional neural network (CNN) architecture and the input data representation. Concretely, it constructs a large number of cascade classifiers from a wide variety of CNN-based classification models and chooses the cascade that implements a relational predicate over images at the desired quality. \noindent The authors show that TAHOMA speeds up classifier cascades through input data transformations by up to 35×. Moreover, it speeds up the ResNet50 image classifier by 98× with no accuracy loss. \subsubsection{Panorama} \noindent Panorama \cite{zhang2019panorama} is the first information system architecture for unbounded-vocabulary queries over video. The authors devise a new multi-task convolutional neural network (CNN) architecture called PanoramaNet, which supports unbounded vocabularies in a unified and unsupervised manner based on embedding extraction and content-based image retrieval (CBIR). 
Moreover, they devise a new self-short-circuiting configuration scheme for PanoramaNet that enables practical trade-offs between accuracy and efficiency. \noindent The results show that Panorama offers between 2× and 20× higher throughput at competitive accuracy for in-vocabulary queries, while also generalizing well to out-of-vocabulary queries. As the vocabulary grows, Panorama spares users the difficulty of retraining models after deployment. The authors also contrast Panorama with NoScope \cite{kang2017noscope}: NoScope supports only a binary vocabulary (``yes'' or ``no'') for a given object type, whereas in Panorama the vocabulary can grow without bound, which they call an unbounded vocabulary. \subsubsection{SMOL} \noindent SMOL \cite{kang2020jointly} is an engine for deep neural network (DNN) based visual analytics that optimizes end-to-end query time by jointly optimizing the computational cost of preprocessing and DNN execution. SMOL uses low-resolution data to reduce preprocessing and execution costs, which would normally reduce accuracy; it therefore employs a DNN training procedure with data augmentation to recover the accuracy lost by using low-resolution data. This work shows that an accurate large DNN over low-resolution data can be both more efficient and more accurate than a small DNN over high-resolution data. \noindent The evaluation shows that SMOL achieves a 2.5× throughput improvement at a fixed error level, and a comparison with BlazeIt shows that SMOL outperforms it in all settings. \subsubsection{THIA} \noindent THIA \cite{cao2021thia} is a video analytics system that overcomes the limitations of techniques for lowering the computational overhead of deep learning models. Using a specialized lightweight model to answer the query cannot provide accurate results for hard-to-detect events. 
The alternative of filtering irrelevant frames with a lightweight model and then processing the surviving frames with a heavyweight model has two limitations: 1) it cannot accelerate queries that focus on frequently occurring events, and 2) the filter cannot eliminate a significant fraction of video frames. \noindent THIA uses three techniques. First, the Early Inference technique constructs a single object detection model with multiple exit points for short-circuiting inference, offering a set of throughput-accuracy trade-offs. Second, the Fine-Grained Planning technique uses different exit points to plan and process different video chunks, working in tandem with Early Inference. Finally, the Exit Point Estimation technique is a lightweight technique that directly estimates the exit point for a chunk, reducing the optimization overhead of Fine-Grained Planning. Experiments show that THIA outperforms the Probabilistic Predicates and BlazeIt systems on a wide range of queries by up to 6.5×, and provides accurate results even for queries over hard-to-detect events. \subsection{Frame-level Optimization} \subsubsection{SVQ} \noindent SVQ \cite{xarchakos2019svq} is a system that executes declarative queries on streaming videos. SVQ accelerates the execution of queries about specific objects with spatial relations on video frames by applying approximate filters. These filters use extensible deep neural architectures and are easy to deploy and use. The results show that SVQ's filtering dramatically increases the frame processing rate of declarative queries on streaming video; depending on the query, it speeds up query processing by at least two orders of magnitude. 
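The filter-before-detect pattern that SVQ (and the lightweight-filter technique criticized by THIA) rely on can be sketched generically: cheap approximate filters discard frames that cannot satisfy the query, so the full detector only runs on survivors. This is a hedged sketch of the pattern, not SVQ's actual API; the frame representation and the toy filters below are invented for illustration.

```python
def filtered_query(frames, filters, detector):
    """Run cheap approximate filters before a full detector (sketch).

    `filters` is a list of cheap predicates; `detector` is the expensive
    authoritative predicate. Returns matching frame indices and the
    number of detector invocations actually performed."""
    hits, detector_calls = [], 0
    for i, frame in enumerate(frames):
        # Short-circuit: any failing filter rules the frame out cheaply.
        if all(f(frame) for f in filters):
            detector_calls += 1
            if detector(frame):
                hits.append(i)
    return hits, detector_calls
```

For example, with frames summarized as toy object counts, a filter requiring at least one car lets the detector skip car-free frames entirely:

```python
frames = [{"cars": 0, "people": 1},
          {"cars": 2, "people": 1},
          {"cars": 1, "people": 0}]
hits, calls = filtered_query(
    frames,
    [lambda f: f["cars"] >= 1],                        # cheap filter
    lambda f: f["cars"] >= 2 and f["people"] >= 1)     # "full" detector
# hits == [1]; the detector ran on 2 of 3 frames
```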
\subsubsection{MARLIN} \noindent MARLIN \cite{apicharttrisorn2019frugal} is a framework to manage and reduce energy consumption for object detection and tracking. It balances tracking accuracy against energy savings by triggering deep neural networks only when needed. The key idea behind MARLIN is to examine only the portions of the frame outside the currently tracked objects, checking whether new objects have appeared or previously tracked objects have changed significantly in appearance, while ignoring camera motion and visual effects. \noindent The results show that MARLIN saves up to 73\% of energy, with at most a 7\% accuracy penalty, for 75\% of the tested videos. Furthermore, in 46.3\% of the cases MARLIN both improves accuracy and reduces energy consumption compared to systems that run deep neural networks continuously. \subsubsection{ExSample} \noindent ExSample \cite{moll2020exsample} is a low-cost framework for object search in unindexed video. ExSample processes search queries quickly by adapting the number and location of sampled frames to the dataset and query being processed; frames are sampled so as to find the most distinct objects in the shortest amount of time. ExSample iteratively processes batches of frames, tuning the sampling process on each iteration according to whether new objects were found in the previously sampled frames. \noindent The evaluation shows that ExSample reduces both the number of sampled frames and the execution time needed to achieve a given recall. The authors note that ExSample outperforms surrogate-model approaches such as BlazeIt in both time and number of frames: BlazeIt selects frames for the object detector based on surrogate scores, whereas ExSample's adaptive sampling technique eliminates the need for a surrogate model. 
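The adaptive-sampling idea behind ExSample, where sampling effort shifts toward regions of the video that keep yielding new objects, can be conveyed with a simple Thompson-sampling bandit over video chunks. This is a deliberately simplified sketch under invented assumptions (ExSample's estimator is more sophisticated); here a chunk is just a list of per-frame object-ID lists.

```python
import random

def adaptive_search(chunks, budget, seed=0):
    """Bandit-style frame sampling for object search (sketch).

    Each sampled frame either reveals a new object or not; a per-chunk
    Beta(successes, failures) posterior steers future samples toward
    chunks with a higher estimated discovery rate."""
    rng = random.Random(seed)
    successes = [1] * len(chunks)   # Beta(1, 1) prior for every chunk
    failures = [1] * len(chunks)
    found = set()
    for _ in range(budget):
        # Thompson sampling: draw a discovery rate per chunk, pick the max.
        draws = [rng.betavariate(successes[c], failures[c])
                 for c in range(len(chunks))]
        c = max(range(len(chunks)), key=draws.__getitem__)
        # Sample a random frame from the chosen chunk.
        frame_objects = chunks[c][rng.randrange(len(chunks[c]))]
        new = set(frame_objects) - found
        if new:
            found |= new
            successes[c] += 1
        else:
            failures[c] += 1
    return found
```

Chunks that stop producing new objects accumulate failures, so their posterior draws shrink and the budget flows elsewhere, which mirrors the survey's description of tuning sampling by whether new objects were found.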
\subsubsection{MIRIS} \noindent MIRIS \cite{bastani2020miris} is a video query processor that integrates query processing and object tracking, selecting a variable video sampling frame rate that minimizes object detector workload while maintaining accurate query outputs. The evaluation of MIRIS shows that it speeds up object tracker execution by 9× and reduces the number of video frames that must be processed. \noindent The authors compare MIRIS with NoScope \cite{kang2017noscope}, probabilistic predicates \cite{lu2018accelerating}, BlazeIt \cite{kang2018blazeit}, and SVQ \cite{xarchakos2019svq}. These engines train lightweight specialized machine learning models to approximate the result of a predicate over individual video frames or sequences of frames. The authors point out two issues with these engines. First, an object instance that satisfies the query predicate usually appears in almost all frames, making it difficult for these engines to offer a substantial speedup. Second, object tracking queries inherently involve a video sequence, which requires running expensive object detectors on almost every video frame. \subsubsection{Reducto} \noindent Reducto \cite{li2020reducto} is a video analytics system that performs on-camera frame filtering while supporting resource-efficient real-time querying for video analytics. Reducto dynamically adapts its filtering decisions according to the time-varying correlation between video feature type, filtering threshold, query accuracy, and video content. Reducto selects low-level video features by determining the best feature for each query class, and chooses a filtering threshold using a lightweight machine learning technique that predicts the threshold for the chosen feature while maintaining query accuracy. Evaluations show that Reducto achieves significant filtering benefits while meeting the desired accuracy. 
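Reducto-style on-camera filtering boils down to a cheap per-frame feature difference compared against a threshold: only frames whose feature change exceeds the threshold are sent to the server. The sketch below is illustrative, not Reducto's code; it stands in Reducto's low-level features (pixel, edge, area differences) with a mean absolute pixel difference, and the threshold would in practice be predicted per query class as the paper describes.

```python
def reducto_filter(frames, threshold):
    """On-camera frame filtering by low-level feature difference (sketch).

    Each frame is a list of pixel intensities. Returns the indices of
    frames that would be transmitted for server-side analysis."""
    def feature_diff(a, b):
        # Cheap stand-in feature: mean absolute pixel difference.
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    sent = [0]              # always transmit the first frame
    last_sent = frames[0]
    for i in range(1, len(frames)):
        # Transmit only when the scene changed enough since the last
        # transmitted frame; otherwise the server reuses its last result.
        if feature_diff(frames[i], last_sent) > threshold:
            sent.append(i)
            last_sent = frames[i]
    return sent
```

With a static-then-changing toy sequence, only the frames where content actually changes are transmitted, which is the filtering benefit the evaluation measures.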
\subsubsection{AQuA} \noindent AQuA \cite{paul2021aqua} protects application accuracy against poor-quality frames by scoring each frame's distortion level and assigning an analytical quality score according to whether the frame is suitable for further analysis. AQuA can detect, score, and discard distorted frames either after capture at the edge or after video compression and transmission. It uses classifier opinion scores to evaluate the analytical quality of a frame, and the authors state that AQuA is the first system to improve a real-time video analysis pipeline by considering a classifier's assessment of image quality. \noindent The evaluation shows that AQuA reduces high-confidence errors for analytics applications by up to 17\% when filtering poor-quality and distorted frames at the edge, and reduces computation time and average bandwidth usage by up to 25\%. \subsection{Storage Optimization} \subsubsection{LightDB} \noindent LightDB \cite{haynes2018lightdb} is a database management system that efficiently manages virtual, augmented, and mixed reality (VAMR) video content. LightDB treats VAMR video as a six-dimensional light field, supports a rich set of operations over light fields, and automatically transforms declarative queries into executable physical plans. Experimental results show that LightDB offers up to 4× throughput improvements compared to prior work; its queries are easily expressed, and it improves query performance by up to 500× compared to other video processing frameworks. The authors also state that LightDB can process up to 8× more frames per second than Scanner. \subsubsection{VStore} \noindent VStore \cite{xu2019vstore} is a data store that manages video ingestion, storage, retrieval, and video resource usage. 
VStore supports fast and efficient analytics over large videos by controlling the video format along the data path. It follows an idea called backward derivation of configuration: the desired video quality and quantity are passed backward through the retrieval, storage, and ingestion stages. Results show that VStore runs queries as fast as 362× video real time, and the authors state that VStore is the first holistic system to manage the full video lifecycle for retrospective analytics. \subsubsection{TASM} \noindent TASM \cite{daum2020tasm} is a storage manager for video data that improves video query performance. TASM speeds up queries that retrieve objects in a video, with low storage overhead and good video quality, by splitting video frames into independent tiles and optimizing the video file layout based on its content and the query workload. TASM designs tile layouts using information about the video content together with observations of the objects targeted by queries. Evaluations show that the layouts picked by TASM accelerate individual queries by an average of 51\% and up to 94\% while maintaining good quality. Moreover, TASM can automatically adjust layouts over a small number of queries to improve performance even for unknown query workloads. \subsection{Parallel Processing} \subsubsection{Optasia} \noindent Optasia \cite{lu2016optasia} is a system that combines recent techniques from the vision and data-parallel computing communities, mainly for surveillance applications. It provides a SQL-like declarative language and a cost-based query optimizer (QO) that connects end-user queries with low-level vision modules to the benefit of both. The QO has several advantages. 
For instance, it produces good parallel execution plans, scales appropriately as data size increases, and reduces duplicated work by structuring the work of each query. The authors show that Optasia improves accuracy and performance severalfold over prior work on surveillance videos. \subsubsection{VideoStorm} \noindent VideoStorm \cite{zhang2017live} is a video analytics system that scales to thousands of queries over live video streams on large clusters; in this work it is deployed on an Azure cluster of 101 machines. Users submit video queries that may contain arbitrary vision processors. VideoStorm then generates resource-quality profiles for different configurations of the query knobs, and its scheduler allocates resources to maximize quality and minimize lag across the video queries. \noindent Results show that generating query profiles uses 3.5× fewer CPU resources than a basic greedy search. Compared to fair scheduling, the VideoStorm scheduler improves the quality of real-world queries by as much as 80\% and is 7× better in terms of lag. \subsubsection{Scanner} \noindent Scanner \cite{poms2018scanner} is a system for efficient video analysis at scale. It supports two important aspects of video analysis. First, it stores and accesses pixel data from many large videos by organizing video collections as tables in a data store whose implementation is optimized for compressed video. Second, it executes expensive pixel-level operations in parallel by organizing pixel-analysis tasks as dataflow graphs that operate on sequences of frames sampled from the tables. The evaluation of Scanner shows that video analysis tasks that would require days of processing can be completed in hours or minutes. 
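Scanner's execution model, sampling frames from a stored table and applying a dataflow of pixel-level operations to the samples in parallel, can be sketched in a few lines. This is a minimal illustration of the pattern (strided sampling plus a parallel map over an op pipeline), not Scanner's actual API, and the function and parameter names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def run_graph(table, stride, ops, workers=4):
    """Scanner-style sampled, parallel frame processing (sketch).

    `table` is a stored frame collection, `stride` selects every
    stride-th frame, and `ops` is a pipeline of per-frame operations
    applied to each sampled frame in parallel."""
    sampled = table[::stride]   # strided sampling from the table

    def apply_ops(frame):
        # The dataflow: feed each frame through the op pipeline in order.
        for op in ops:
            frame = op(frame)
        return frame

    # Parallel map over the sampled frames (Scanner itself schedules
    # across cores/GPUs; a thread pool stands in for that here).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(apply_ops, sampled))
```

For example, `run_graph(frames, 2, [brighten, threshold])` would process every other frame through the two ops concurrently while preserving output order, since `Executor.map` yields results in input order.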
\subsubsection{VideoEdge} \noindent VideoEdge \cite{hung2018videoedge} is a system that identifies the best trade-off between multiple resources and accuracy. VideoEdge narrows the search space by identifying the most promising options in a ``Pareto band'' and searching only within that band. Moreover, since video analysis queries have multiple implementations, VideoEdge decides the implementation and knobs of each query, places queries across the hierarchy, and merges queries with common processing, balancing the resource benefits of merging against its accuracy drawbacks. Results show that VideoEdge improves accuracy by 25.4× compared to a fair distribution of resources and by 5.4× compared to a recent video query planning solution. \subsubsection{CONVINCE} \noindent CONVINCE \cite{pasandi2020convince} is a cross-camera video analytics system that enables a collaborative video analytics pipeline among network-connected cameras. CONVINCE leverages spatio-temporal correlations and knowledge sharing while preserving privacy: spatio-temporal correlations are used to discard redundant frames and reduce bandwidth and processing costs, while knowledge sharing improves the accuracy of the vision models. Results show that CONVINCE achieves 91\% object-identification accuracy while transmitting only about 25\% of all recorded frames. \subsubsection{Distream} \noindent Distream \cite{zeng2020distream} is a framework for distributed live video analytics based on the smart camera-edge cluster architecture. The main advantage of Distream is its ability to adapt to real-world workload dynamics to achieve low-latency, high-throughput, and scalable deep-learning-based live video analytics. Distream balances workloads between smart cameras, partitions the workload between the cameras and the cluster, and adapts to workload dynamics. 
\noindent The authors evaluate Distream with 24 cameras and a 4-GPU edge cluster on 500 hours of distributed video streams from two real-world video datasets. The results show that Distream outperforms existing methods in throughput, latency, and service-level objective (SLO) miss rate. \subsubsection{RES} \noindent RES \cite{ali2020res} is an edge-enhanced stream analysis system that complements a cloud-based platform for video stream analytics. RES works in two phases, each consisting of three stages. The first phase is filtration, which reduces data by detecting and filtering low-value objects using user-configured rules. The second phase is identification, which applies deep learning to the objects of interest for further analysis. The three stages of each phase (basic, filter, and machine learning) analyze and partition the video analysis pipeline, and RES distributes the processing stages over available resources in the cloud to meet the user's real-time quality-of-service (QoS) requirements. An experiment on a 10K data stream shows that RES reduces processing time by 49\% and saves 99\% of bandwidth compared to a centralized cloud-based analytics approach. \subsubsection{Spatula} \noindent Spatula \cite{jain2020spatula} is a cross-camera analytics system that reduces network and computation costs by leveraging spatio-temporal cross-camera correlations. The main idea behind Spatula is to use these correlations to limit the data that is analyzed: it streams and runs cross-camera inference on only the set of cameras and frames that contain the queried object, rather than on all deployed cameras, which reduces the cost of cross-camera analytics. Evaluation on an 8-camera dataset shows that Spatula reduces the computation workload by 8.3× and improves inference precision by 39\%. 
Moreover, on two datasets with hundreds of cameras, it reduces the computation workload by 23× to 86×. \section{Conclusion} \label{sec:conclusions} \noindent Video analytics is an essential topic that deals with detecting objects in video, predicting the next steps of moving objects, and understanding object behavior by tracking them. Video analytics is used in many fields, such as self-driving cars, autonomous drones, and safety and security applications. In this survey, we reviewed the most recent video analytics techniques that focus on optimizing performance, categorizing them by the type of optimization they employ. For each technique, the survey covers its definition, how it works, how it improves performance, and the resulting improvements. \bibliographystyle{plain}
\section{Network structure} The number $n$ of residual blocks in the network (see section~\ref{sec:experiments}) may vary for different datasets. For all experiments in the paper the value of $n$ was set to $7$. Below we provide a detailed visualization of the architecture of the model that generates $128\times 128$ images and has $n=8$ residual blocks. \begin{figure*}[h!] \begin{center} \includegraphics[height=0.7\textheight]{images_supplementary/neto_1x1} \end{center} \caption{\small{Network architecture with $8$ residual blocks for $128\times 128$ images.}} \label{fig:net} \end{figure*} \FloatBarrier \newpage \section{Examples of appearance sampling in different datasets} We show more examples highlighting the ability of our model to produce diverse samples, similar to the results shown in Fig.~\ref{fig:edges2images_samples} and \ref{fig:stickman2people_samples}. In Fig.~\ref{fig:appearance_samples_00} we condition on edge images of shoes and handbags and sample the appearance from the learned prior. We also run pix2pix multiple times to compare the diversity of the produced samples. A similar experiment is shown in Fig.~\ref{fig:appearance_samples_01}, where we condition on human body joints instead of edge images. \begin{table*}[h!] 
\centering \begin{tabular}{c|c|ccccc|c} \toprule \small{GT} & \multicolumn{6}{c|}{\small{samples}} & \small{method} \\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159} \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159_edges}} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159_AB} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159_AB_01} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159_AB_02} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159_AB_03} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159_AB_04} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/159_AB_05} & pix2pix \\ & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/test_159_reconstruction} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/test_159_sample_01} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/test_159_sample_02} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/test_159_sample_03} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/test_159_sample_04} & \includegraphics[align=c,scale=0.1]{images_supplementary/appearance_samples/shoes/test_159_sample_05} & our \\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14} \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14_edges}} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14_AB} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14_AB_01} & 
\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14_AB_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14_AB_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14_AB_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/14_AB_05} & pix2pix \\ & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/test_14_reconstruction} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/test_14_sample_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/test_14_sample_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/test_14_sample_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/test_14_sample_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/shoes/test_14_sample_05} & our\\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164} \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164_edges}} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/164_AB} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/164_AB_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/164_AB_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/164_AB_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/164_AB_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/164_AB_05} & pix2pix\\ & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164_reconstruction} & 
\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164_sample} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164_sample_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164_sample_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164_sample_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_164_sample_05} & our \\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187} \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187_edges}} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/187_AB} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/187_AB_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/187_AB_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/187_AB_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/187_AB_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/187_AB_05} & pix2pix \\ & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187_reconstruction} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187_sample} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187_sample_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187_sample_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187_sample_03} & 
\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/handbags/test_187_sample_05} & our\\ \bottomrule \end{tabular} \captionof{figure}{\small{Generating images based only on the edge image as input (GT original image and corresponding edge image are held back). We compare our approach with pix2pix~\cite{pix2pix2016}. On the right: each odd row shows images synthesized by pix2pix, each even row presents samples generated by our model. Here again our first image (column $2$) is a generation with original appearance, whereby for the $5$ following images we sample appearance from the learned prior distribution. The GT images are taken from shoes~\cite{shoes} and handbags~\cite{igan}.}} \label{fig:appearance_samples_00} \end{table*} \begin{table*}[h!] \centering \begin{tabular}{c|c|ccccc|c} \toprule \small{GT} & \multicolumn{6}{c|}{\small{samples}} & \small{method} \\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155}} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_fake_B} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_fake_B_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_fake_B_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_fake_B_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_fake_B_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_fake_B_05} & pix2pix\\ & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_reconstruction} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_sample} & 
\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_sample_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_sample_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_sample_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_002155_sample_05} & our\\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783}} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_fake_B} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_fake_B_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_fake_B_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_fake_B_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_fake_B_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_fake_B_05} & pix2pix\\ & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_reconstruction} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_sample} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_sample_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_sample_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_sample_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/coco/valid_003783_sample_05} & our\\ \midrule \multirow{ 
2}{*}{\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/deepfashion/01585_7}} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_fake_B} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_fake_B_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_fake_B_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_fake_B_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_fake_B_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_fake_B_05} & pix2pix\\ & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_reconstruction} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_sample} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_sample_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_sample_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_sample_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/01585_7_sample_05} & our\\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples/deepfashion/00009_4}} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_fake_B} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_fake_B_01} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_fake_B_02} & 
\includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_fake_B_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_fake_B_04} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_fake_B_05} & pix2pix\\ & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_reconstruction} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_sample} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_sample_02} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_sample_03} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_sample_06} & \includegraphics[align=c,scale=0.15]{images_supplementary/appearance_samples//deepfashion/00009_4_sample_07} & our\\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0133_c6s1_022876_01}} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0133_c6s1_022876_01_fake_B} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0133_c6s1_022876_01_fake_B_01} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0133_c6s1_022876_01_fake_B_02} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0133_c6s1_022876_01_fake_B_03} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0133_c6s1_022876_01_fake_B_04} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0133_c6s1_022876_01_fake_B_05} & pix2pix\\ & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0133_c6s1_022876_01_reconstruction} & 
\includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0133_c6s1_022876_01_sample} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0133_c6s1_022876_01_sample_02} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0133_c6s1_022876_01_sample_03} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0133_c6s1_022876_01_sample_04} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0133_c6s1_022876_01_sample_05} & our\\ \midrule \multirow{ 2}{*}{\includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0467_c2s1_121066_01}}& \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0467_c2s1_121066_01_fake_B} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0467_c2s1_121066_01_fake_B_01} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0467_c2s1_121066_01_fake_B_02} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0467_c2s1_121066_01_fake_B_03} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0467_c2s1_121066_01_fake_B_04} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples//market/0467_c2s1_121066_01_fake_B_05} & pix2pix\\ & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0467_c2s1_121066_01_reconstruction} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0467_c2s1_121066_01_sample} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0467_c2s1_121066_01_sample_01} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0467_c2s1_121066_01_sample_02} & 
\includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0467_c2s1_121066_01_sample_03} & \includegraphics[align=c,scale=0.3]{images_supplementary/appearance_samples/market/0467_c2s1_121066_01_sample_04} & our\\ \bottomrule \end{tabular} \captionof{figure}{\small{Generating images based only on the stickman as input (GT original image and corresponding stickman are held back). We compare our approach with pix2pix~\cite{pix2pix2016}. On the right: each odd row shows images synthesized by pix2pix, each even row presents samples generated by our model. Here again, our first image (column $2$) is a generation with the original appearance, while for the $5$ following images we sample appearance from the learned prior distribution. The GT images are taken from COCO~\cite{mscoco}, DeepFashion~\cite{deepFashion1,deepFashion2} and Market-1501~\cite{market1501}.}} \label{fig:appearance_samples_01} \end{table*} \FloatBarrier \newpage \section{Transfer of shape and appearance} We show additional examples of transferring appearances to different shapes and vice versa. We emphasize again that our approach does not require labeled examples of images depicting the same appearance in different shapes. This enables us to apply it to a broad range of datasets, as summarized in Table~\ref{table:overview}. \begin{table}[h!] \centering \begin{tabular}{cccc} Figure & Shape Estimate & Appearance Source & Shape Target \\ \hline Fig.~\ref{fig:pairs2shoes} & Edges & Handbags & Shoes \\ Fig.~\ref{fig:pairs2handbags} & Edges & Shoes & Handbags \\ Fig.~\ref{fig:coco} & Body Joints & COCO & COCO \\ Fig.~\ref{fig:deepfashion} & Body Joints & DeepFashion & DeepFashion \\ Fig.~\ref{fig:market1501} & Body Joints & Market & Market \\ Fig.~\ref{fig:video} & Body Joints & COCO & Penn Action \\ \end{tabular} \caption{Overview of transfer experiments.} \label{table:overview} \end{table} \begin{figure*}[h!]
\begin{center} \includegraphics[width=0.8\textwidth]{images_supplementary/videoscreen} \end{center} \caption{\small{Examples of shape and appearance transfer in video. Appearance is inferred from COCO and target shape is estimated from Penn Action sequences. An animated version can be found at \href{https://compvis.github.io/vunet}{https://compvis.github.io/vunet}. Note that we generate the video independently frame by frame, without any temporal smoothing or other post-processing.}} \label{fig:video} \end{figure*} \newpage \begin{figure*}[h!] \begin{center} \includegraphics[width=0.9\textwidth]{images_supplementary/pairs2shoes} \end{center} \caption{\small{Examples of shape and appearance transfer between two datasets: appearance is taken from the handbags and is used to generate matching shoes based on their desired shape. \textit{On the left}: original images from the handbag dataset. \textit{On the top}: edge images of the desired shoes. \textit{Single row}: transfer of fixed appearance to different shapes. \textit{Single column}: transfer of fixed shape to different appearances.}} \label{fig:pairs2shoes} \end{figure*} \FloatBarrier \newpage \begin{figure*}[h!] \begin{center} \includegraphics[width=0.9\textwidth]{images_supplementary/pairs2handbags} \end{center} \caption{\small{Examples of shape and appearance transfer between two datasets: appearance is taken from the shoes and is used to generate matching handbags based on their desired shape. \textit{On the left}: original images from the shoe dataset. \textit{On the top}: edge images of the desired handbags. \textit{Single row}: transfer of fixed appearance to different shapes. \textit{Single column}: transfer of fixed shape to different appearances.}} \label{fig:pairs2handbags} \end{figure*} \FloatBarrier \newpage \begin{figure*}[h!] \begin{center} \includegraphics[width=0.9\textwidth]{images_supplementary/transfer_8} \end{center} \caption{\small{Examples of shape and appearance transfer on the COCO dataset.
\textit{On the left}: original images from the test split. \textit{On the top}: corresponding stickmen. \textit{Single row}: transfer of fixed appearance to different shapes. \textit{Single column}: transfer of fixed shape to different appearances.}} \label{fig:coco} \end{figure*} \FloatBarrier \newpage \begin{figure*}[h!] \begin{center} \includegraphics[width=0.9\textwidth]{images_supplementary/transfer_01166} \end{center} \caption{\small{Examples of shape and appearance transfer on DeepFashion dataset. \textit{On the left}: original images from the test split. \textit{On the top}: corresponding stickmen. \textit{Single row}: transfer of fixed appearance to different shapes. \textit{Single column}: transfer of fixed shape to different appearances.}} \label{fig:deepfashion} \end{figure*} \FloatBarrier \newpage \begin{figure*}[h!] \begin{center} \includegraphics[height=0.8\textheight]{images_supplementary/transfer_07586} \end{center} \caption{\small{Examples of shape and appearance transfer on Market-1501. \textit{On the left}: original images from the test split. \textit{On the top}: corresponding stickmen. \textit{Single row}: transfer of fixed appearance to different shapes. \textit{Single column}: transfer of fixed shape to different appearances.}} \label{fig:market1501} \end{figure*} \FloatBarrier \newpage \section{Quantitative results for the ablation study} We have included quantitative results for the ablation study (see section~\ref{seq:Ablation}) in Table~\ref{table:is}. The positive effect of the KL-regularization cannot be quantified by the Inception Score and thus we presented the qualitative results in Fig.~\ref{fig:klnokl}. \begin{table}[h!] 
\begin{center} \begin{tabular}{|l|c|c|c|c|} \hline method & \multicolumn{2}{c|}{Reconstruction} & \multicolumn{2}{c|}{Transfer}\\ \hline & \multicolumn{2}{c|}{IS} & \multicolumn{2}{c|}{IS}\\ & mean & std & mean & std \\ \hline our (no appearance) & 2.211 & 0.080 & 2.211 & 0.080 \\ our (no kl) & 3.168 & 0.296 & 3.594 & 0.199 \\ our (proposed) & 3.087 & 0.239 & 3.504 & 0.192 \\ \hline \end{tabular} \end{center} \caption{\small{Inception scores (IS) for the ablation study. The positive effect of the KL-regularization as seen in Fig.~\ref{fig:klnokl} cannot be quantified by the IS.}} \label{table:is} \end{table} \FloatBarrier \section{Limitations} The quality of the generated images depends strongly on the dataset used for training. Our method relies on appearance commonalities across the dataset that can be used to learn efficient, pose-invariant encodings. If the dataset provides sufficient support for appearance details, they are faithfully preserved by our model (e.g. hats in DeepFashion, see Fig.~8, third row). The COCO dataset shows large variance both in visual quality (e.g. lighting conditions, resolution, clutter and occlusion) and in appearance. This leads to little overlap of appearance details across different poses, and the model focuses on aspects of appearance that can be reused for a large variety of poses in the dataset. We show some failure cases of our approach in Fig.~\ref{fig:fails}. The first row of Fig.~\ref{fig:fails} shows an example of rare data: children are underrepresented in COCO~\cite{mscoco}. A similar problem occurs in Market-1501~\cite{market1501}, where most of the images represent a tight crop around a person and only some contain people from afar. This is shown in the second row, which also contains an incorrect estimate for the left leg.
Sometimes, estimated pose correlates with some other attribute of a dataset (e.g., gender as in DeepFashion~\cite{deepFashion1,deepFashion2}, where male and female models use very characteristic yet distinct sets of poses). In this case, our model merges this attribute into the target appearance, e.g. it generates a woman with distinctly male body proportions (see row $3$ in Fig.~\ref{fig:fails}). Under heavy viewpoint changes, appearance can be entirely unrelated, e.g. a front view showing a white t-shirt which is completely covered from the rear view (see fourth row of Fig.~\ref{fig:fails}). The algorithm, however, assumes that the appearance in both views is related. As the example in the last row of Fig.~\ref{fig:fails} shows, our model is confused if occluded body parts are annotated, since this is not the case for most training samples. \begin{table}[h!] \centering \begin{tabular}{c|cc|c|c} & \multicolumn{2}{c|}{\small{target shape}} & \small{target} & \\ \small{reason} & \small{original image} & \small{shape estimate} & \small{appearance} & \vtop{\hbox{\strut \small{Ours}}} \\ \midrule \small{rare data} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/child_target} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/child_target_stickman} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/child_input} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/child_gen} \\ \midrule \vtop{\hbox{\strut \small{scale/}}\hbox{\strut \small{pose estimation error}}} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/scale_target} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/scale_target_stickman} &
\includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/scale_input} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/scale_gen} \\ \midrule \vtop{\hbox{\strut \small{discriminative}}\hbox{\strut \small{pose}}} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/pose_gender_target} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/pose_gender_target_stickman} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/pose_gender_input} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/pose_gender_gen} \\ \midrule \vtop{\hbox{\strut \small{frontal/}}\hbox{\strut \small{backward view}}} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/forw_back_target} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/forw_back_target_stickman} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/forw_back_input} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/forw_back_gen} \\ \midrule \vtop{\hbox{\strut \small{labeled shape}}\hbox{\strut \small{despite occlusion}}} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/occlusions_target} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/occlusions_target_stickman} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/occlusions_input} & \includegraphics[align=c,width=0.15\textwidth,height=0.15\textwidth]{images_supplementary/fail_cases/occlusions_gen} \\ \end{tabular} 
\captionof{figure}{\small{Examples of failure cases. As most of the errors are dataset-specific, we show a collection of cases over different datasets.}} \label{fig:fails} \end{table} \FloatBarrier \section{Introduction} \begin{figure} \begin{center} \includegraphics[width=0.38\textwidth]{images/teaser} \end{center} \caption{\small{Our model learns to infer appearance from the queries on the left and can synthesize images with that appearance in different poses given in the top row. An animated version can be found at \href{https://compvis.github.io/vunet}{https://compvis.github.io/vunet}.}} \label{fig:teaser} \end{figure} Recently, there has been great interest in generative models for image synthesis \cite{photographic, pix2pix2016, peopleInClothing, PoseGuidedGeneration, igan, cyclegan, rubio:PR:2015}. Generating images of objects requires a detailed understanding of both their appearance and their spatial layout. Therefore, we have to distinguish basic object characteristics. On the one hand, there is the shape and geometrical layout of an object relative to the viewpoint of the observer (a person sitting, standing, or lying, or a folded handbag). On the other hand, there are inherent appearance properties such as those characterized by color and texture (curly long brown hair vs. buzz cut black hair or the pattern of corduroy). Evidently, objects naturally change their shape, while retaining their inherent appearance (bending a shoe does not change its style). However, the picture of the object varies dramatically in the process, e.g., due to translation or even self-occlusion. Conversely, the color or fabric of a dress can change with no impact on its shape, but this again clearly alters the image of the dress. With deep learning, there has lately been great progress in generative models, in particular generative adversarial networks (GANs) \cite{wgan, infogan, gan, dcgan,cgan}, variational autoencoders \cite{vae}, and their combination \cite{cvaegan,vaegan}.
Despite impressive results, these models still suffer from weak performance in the case of image distributions with large spatial variation: while on perfectly registered faces (e.g., the aligned CelebA dataset~\cite{celeba}) high-resolution images have been generated \cite{srgan, progressivegrowing}, synthesizing the full human body from datasets as diverse as COCO \cite{mscoco} is still an open challenge. The main reason for this is that these generative models directly synthesize the image of an object, but fail to model the intricate interplay of appearance and shape that produces the image. Therefore, they can easily add facial hair or glasses to a face, as this amounts to a recoloring of image areas. Contrast this with a person moving their arm, which would be represented as coloring the arm at the old position with background color and turning the background at the new position into an arm. What we are lacking is a generative model that can move and deform objects and not only blend their color. Therefore, we seek to model both appearance and shape, and their interplay, when generating images. For general applicability, we want to be able to learn from mere still image datasets with no need for a series of images of the same object instance showing different articulations. We propose a conditional U-Net \cite{unet} architecture for mapping from shape to the target image and condition on a latent representation of a variational autoencoder for appearance. To disentangle shape and appearance, we utilize easily available information related to shape, such as edges or automatic estimates of body joint locations. Our approach then enables conditional image generation and transfer: to synthesize different geometrical layouts or change the appearance of an object, either shape or appearance can be retained from a query image, whereas the other component can be freely altered or even imputed from other images.
Moreover, the model also allows sampling from the appearance distribution without altering the shape. \section{Related work} \label{sec:relatedworks} In the context of deep learning, three different approaches to image generation can be identified: Generative Adversarial Networks \cite{gan}, Autoregressive (AR) models \cite{pixelcnndecoder} and Variational Auto-Encoders (VAE) \cite{vae}. Our method provides control over both appearance and shape. In contrast, many previous methods can control the generative process only with respect to appearance. \cite{semisup, acgan, cgan} utilize class labels, \cite{attribute2image} attributes and \cite{stackgan, beYourOwnPrada} textual descriptions to control the appearance. Control over shape has been mainly obtained in the Image-to-Image translation framework. \cite{pix2pix2016} uses a discriminator to obtain realistic outputs, but the method is limited to the synthesis of a single, uncontrollable appearance. To obtain a larger variety of appearances, \cite{peopleInClothing} first generates a segmentation mask of fashion articles and then synthesizes an image. This leads to larger variations in appearances but does not allow changing the pose of a given appearance. \cite{photographic} uses segmentation masks to produce images in the context of street scenes as well. They do not rely on adversarial training but directly learn a multimodal distribution for each segmentation label. The number of appearances that can be produced is given by the number of combinations of modes, resulting in very coarse modeling of appearance. In contrast, our method makes no assumption that the data can be well represented by a limited number of modes, does not require segmentation masks, and includes an inference mechanism for appearance. \cite{whatandwhere} utilizes the GAN framework and \cite{msar} the autoregressive framework to provide control over shape and appearance.
However, the appearance is specified by very coarse text descriptions. Furthermore, both methods have problems producing the desired shape consistently. In contrast to our generative approach, \cite{cliqueCNN,bautistaCVPR17} have pursued unsupervised learning of human posture similarity for retrieval in still images and \cite{milbichICCV17,brattoliCVPR17} in videos. Rendering images of persons in different poses has been considered by \cite{personmultiview} for a fixed, discrete set of target poses, and by \cite{PoseGuidedGeneration} for general poses. In the latter, the authors use a two-stage model. The first stage implements pixelwise regression to a target image from a conditional image and the pose of the target image. Thus, the method is fully supervised and requires labeled examples of the same appearance in different poses. As the result of the first stage is in most cases too blurry, they use a second stage that employs adversarial training to produce more realistic images. Our method is never directly trained on the transfer task and therefore does not require such specific datasets. Instead, we carefully model the separation between shape and appearance and, as a result, obtain an explicit representation of the appearance that can be combined with new poses. \section{Approach} \label{seq:methodology} Let $x$ be an image of an object from a dataset $X$. We want to understand how images are influenced by two essential characteristics of the objects that they depict: their shape $y$ and appearance $z$. Although the precise semantics of $y$ can vary, we assume it characterizes geometrical information, particularly location, shape, and pose. $z$ then represents the intrinsic appearance characteristics. If $y$ and $z$ capture all variations of interest, the variance of a probabilistic model for images conditioned on those two variables is only due to noise.
Hence, the maximum a posteriori estimate $\arg \max_x p(x\vert y,z)$ serves as an image generator controlled by $y$ and $z$. How can we model this generator? \subsection{Variational Autoencoder based on latent shape and appearance} \label{sec:vae} If $y$ and $z$ are both latent variables, a popular way of learning the generator $p(x\vert y,z)$ is to use a VAE. To learn $p(x \vert y,z)$ we need to maximize the log-likelihood of observed data $x$ and marginalize out the latent variables $y$ and $z$. To avoid the intractable integral, one introduces an approximate posterior $q(y, z\vert x)$ to obtain the evidence lower bound (ELBO) from Jensen's inequality, \begin{align} \log p(x) &= \log \int p(x, y, z) \diff z \diff y \nonumber\\ &= \log \int \frac{p(x, y, z)}{q(y, z\vert x)} q(y, z\vert x) \diff z \diff y \nonumber\\ &\ge \mathbb{E}_{q} \log \frac{p(x\vert y, z)p(y,z)}{q(y, z\vert x)}. \label{eq:vae} \end{align} As one can see, Eq.~\ref{eq:vae} contains the prior $p(y,z)$, which is assumed to be a standard normal distribution in the VAE framework. With this joint prior we cannot guarantee that the two variables $y$ and $z$ are separated in the latent space. Thus, our overall goal of separately altering shape and appearance cannot be met. A standard normal prior can model $z$, but it is not suited to describe the spatial information contained in $y$, which is localized and easily gets lost in the bottleneck. Therefore, we need additional information to disentangle $y$ and $z$ when learning the generator $p(x\vert y, z)$. \subsection{Conditional Variational Autoencoder with appearance} \label{sec:cvae} In the previous section we have shown that a standard VAE with two latent variables is not suitable for learning disentangled representations of $y$ and $z$. Instead, we assume that we have an estimator function $e$ for the variable $y$, i.e., $\hat{y}=e(x)$.
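As an aside, the two standard ingredients used to optimize ELBOs of this form in practice, reparametrized sampling and a closed-form KL term between diagonal Gaussians, can be sketched numerically. The following is a minimal NumPy illustration under the common diagonal-Gaussian assumption; the toy parameter values are ours and this is not the implementation used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparametrize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); gradients can flow through mu and log_var
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    # Closed-form KL(q || p) between diagonal Gaussians, summed over latent dimensions
    return 0.5 * np.sum(
        log_var_p - log_var_q
        + (np.exp(log_var_q) + (mu_q - mu_p) ** 2) / np.exp(log_var_p)
        - 1.0
    )

# Toy parameters standing in for the approximate posterior and a learned prior
mu_q, log_var_q = np.array([0.5, -0.2]), np.array([-1.0, -1.0])
mu_p, log_var_p = np.zeros(2), np.zeros(2)

z = reparametrize(mu_q, log_var_q)                         # one latent sample
kl = kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p)   # KL penalty in the ELBO
```

In the conditional model introduced next, $q(z\vert x,\hat{y})$ and the learned prior $p(z\vert\hat{y})$ supply the two sets of Gaussian parameters entering this KL term.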
For example, $e$ could provide information on shape by extracting edges or automatically estimating body joint locations \cite{jointestimator,HED}. Following up on Eq.~\ref{eq:vae}, the task is now to infer the latent variable $z$ from the image and the estimate $\hat{y}= e(x)$ by maximizing their conditional log-likelihood. \begin{align} \log p(x\vert \hat{y}) & = \log \int_z p(x, z\vert \hat{y}) \diff z \ge \mathbb{E}_{q} \log \frac{p(x,z\vert \hat{y})}{q(z\vert x, \hat{y})} \nonumber\\ &= \mathbb{E}_{q} \log \frac{p(x\vert \hat{y},z)p(z\vert \hat{y})}{q(z\vert x,\hat{y})} \label{eq:cvae} \end{align} Compared to Eq.~\ref{eq:vae}, the ELBO in Eq.~\ref{eq:cvae} now depends on the (conditional) prior $p(z\vert\hat{y})$. This distribution can now be estimated from the training data and captures potential interrelations between shape and appearance. For instance, a person jumping is less likely to wear a dinner jacket than a T-shirt. Following~\cite{varappforgan}, we model $p(x\vert \hat{y}, z)$ as a parametric Laplace and $q(z\vert x, \hat{y})$ as a parametric Gaussian distribution. The parameters of these distributions are estimated by two neural networks $G_\theta$ and $F_\phi$, respectively. Using the reparametrization trick~\cite{vae}, these networks can be trained end-to-end using standard gradient descent. The loss function for training follows directly from Eq.~\ref{eq:cvae} and has the form: \begin{align} \mathcal{L}(x, \theta, \phi) = -KL (&q_\phi(z\vert x, \hat{y}) \vert\vert p_\theta(z\vert\hat{y})) \nonumber \\ & +\mathbb{E}_{q_\phi(z\vert x,\hat{y})}[\log\ p_\theta(x\vert \hat{y},z)], \label{eq:vaeloss} \end{align} where $KL$ denotes the Kullback-Leibler divergence. The next section derives the network architecture we use for modeling $G_\theta$ and $F_\phi$. \begin{figure}[t] \begin{center} \includegraphics[scale=1.0]{train_01} \end{center} \caption{\small{Our conditional U-Net combined with a variational autoencoder.
$x$: query image, $\hat{y}$: shape estimate, $z$: appearance.}} \label{fig:model} \end{figure} \subsection{Generator} \label{sec:generator} \begin{table*}[t!] \begin{center} \begin{tabular}{cc|c|c|ccccc} \toprule \multicolumn{2}{c|}{GT} & pix2pix\cite{pix2pix2016} & our (reconst.) & \multicolumn{5}{c}{our (random samples)} \\ \midrule \includegraphics[scale=0.13]{images/edges2shoes/51} & \includegraphics[scale=0.13]{images/edges2shoes/51_edges} & \includegraphics[scale=0.13]{images/edges2shoes/51_pix2pix} & \includegraphics[scale=0.13]{images/edges2shoes/test_51_reconstruction} & \includegraphics[scale=0.13]{images/edges2shoes/test_51_sample_01} & \includegraphics[scale=0.13]{images/edges2shoes/test_51_sample_02} & \includegraphics[scale=0.13]{images/edges2shoes/test_51_sample_03} & \includegraphics[scale=0.13]{images/edges2shoes/test_51_sample_04} & \includegraphics[scale=0.13]{images/edges2shoes/test_51_sample_05} \\ % \includegraphics[scale=0.13]{images/edges2shoes/101} & \includegraphics[scale=0.13]{images/edges2shoes/101_edges} & \includegraphics[scale=0.13]{images/edges2shoes/101_pix2pix} & \includegraphics[scale=0.13]{images/edges2shoes/test_101_reconstruction} & \includegraphics[scale=0.13]{images/edges2shoes/test_101_sample_01} & \includegraphics[scale=0.13]{images/edges2shoes/test_101_sample_02} & \includegraphics[scale=0.13]{images/edges2shoes/test_101_sample_03} & \includegraphics[scale=0.13]{images/edges2shoes/test_101_sample_04} & \includegraphics[scale=0.13]{images/edges2shoes/test_101_sample_05} \\ \midrule \includegraphics[scale=0.13]{images/edges2handbags/2} & \includegraphics[scale=0.13]{images/edges2handbags/2_edges} & \includegraphics[scale=0.13]{images/edges2handbags/2_pix2pix} & \includegraphics[scale=0.13]{images/edges2handbags/test_2_reconstruction} & \includegraphics[scale=0.13]{images/edges2handbags/test_2_sample_01} & \includegraphics[scale=0.13]{images/edges2handbags/test_2_sample_02} & 
\includegraphics[scale=0.13]{images/edges2handbags/test_2_sample_03} & \includegraphics[scale=0.13]{images/edges2handbags/test_2_sample_04} & \includegraphics[scale=0.13]{images/edges2handbags/test_2_sample_05} \\ % \includegraphics[scale=0.13]{images/edges2handbags/test_183} & \includegraphics[scale=0.13]{images/edges2handbags/test_183_edges} & \includegraphics[scale=0.13]{images/edges2handbags/183_pix2pix} & \includegraphics[scale=0.13]{images/edges2handbags/test_183_reconstruction} & \includegraphics[scale=0.13]{images/edges2handbags/test_183_sample_01} & \includegraphics[scale=0.13]{images/edges2handbags/test_183_sample_02} & \includegraphics[scale=0.13]{images/edges2handbags/test_183_sample_03} & \includegraphics[scale=0.13]{images/edges2handbags/test_183_sample_01} & \includegraphics[scale=0.13]{images/edges2handbags/test_183_sample_05} \\ \bottomrule \end{tabular} \end{center} \captionof{figure}{\small{Generating images with only the edge image as input (GT image (left) is held back). We compare our approach to pix2pix on the datasets of shoes~\cite{shoes} and handbags~\cite{igan}. On the right: sampling from our latent appearance distribution.}} \label{fig:edges2images_samples} \end{table*} Let us first establish a network $G_\theta$ which estimates the parameters of the distribution $p(x\vert \hat{y}, z)$. We assume further, as it is common practice~\cite{vae}, that the distribution $p(x\vert \hat{y}, z)$ has constant standard deviation and the function $G_\theta(\hat{y}, z)$ is a deterministic function in $\hat{y}$. As a consequence, the network $G_\theta(\hat{y}, z)$ can be considered as an image generator network and we can replace the second term in Eq.~\ref{eq:vaeloss} with the reconstruction loss $\mathcal{L}(x, \theta) = \|x - G_\theta(\hat{y}, z)\|_1$: \begin{align} \mathcal{L}(x, \theta, \phi) = -KL (&q_\phi(z\vert x, \hat{y}) \vert\vert p_\theta(z\vert\hat{y})) \nonumber \\ & +\|x - G_\theta(\hat{y}, z)\|_1. 
\label{eq:vaeloss_2} \end{align} It is well known that pixelwise statistics of images, such as the $L_1$-norm here, do not model perceptual quality of images well \cite{vaegan}. Instead, we adopt the perceptual loss from \cite{photographic} and formulate the final loss function as: \begin{align} \mathcal{L}(x, \theta, \phi) & = -KL (q_\phi(z\vert x, \hat{y}) \vert\vert p_\theta(z\vert\hat{y})) \nonumber \\ & +\sum_k\lambda_k\|\Phi_k(x) - \Phi_k(G_\theta(\hat{y}, z))\|_1, \label{eq:vaeloss_3} \end{align} where $\Phi$ is a network for measuring perceptual similarity (in our case VGG19~\cite{vgg}) and the $\lambda_k$ are hyper-parameters that control the contribution of the different layers of $\Phi$ to the total loss. If we forget for a moment about $z$, the task of the network $G_\theta(\hat{y})$ is to generate an image $\bar{x}$ given the estimate $\hat{y}$ of the shape information of an image $x$. Here it is crucial that we want to preserve spatial information given by $\hat{y}$ in the output image $\bar{x}$. Therefore, we represent $\hat{y}$ in the form of an image of the same size as $x$. Depending on the estimate $e:\ e(x)=\hat{y}$, this is easy to achieve. For example, estimated joints of a human body can be used to draw a stickman for this person. Given such an image representation of $\hat{y}$, we require that each keypoint of $\hat{y}$ is used to estimate $\bar{x}$. A U-Net architecture~\cite{unet} would be the most appropriate choice in this case, as its skip-connections help to propagate the information directly from input to output. In our case, however, the generator $G_\theta(\hat{y}, z)$ should learn about images by also conditioning on $z$. The appearance $z$ is sampled from the Gaussian distribution $q(z\vert x, \hat{y})$ whose parameters are estimated by the encoder network $F_\phi$. Its optimization requires balancing two terms.
It has to encode enough information about $x$ into $z$ such that $p(x \vert \hat{y}, z)$ can describe the data well, as measured by the reconstruction loss in \eqref{eq:vaeloss_2}. At the same time we penalize a deviation from the prior $p(z \vert \hat{y})$ by minimizing the Kullback-Leibler divergence between $q(z\vert x, \hat{y})$ and $p(z \vert \hat{y})$. The design of the generator $G_\theta$ as a U-Net already guarantees the preservation of spatial information in the output image. Therefore, any additional information about the shape encoded in $z$, which is not already contained in the prior, incurs a cost without improving the likelihood $p(x \vert \hat{y}, z)$. Thus, an optimal encoder $F_\phi$ must be invariant to shape. In this case it suffices to include $z$ at the bottleneck of the generator $G_\theta$. More formally, let our U-Net-like generator $G_\theta(\hat{y})$ consist of two parts: an encoder $E_\theta$ and a decoder $D_\theta$ (see Fig.~\ref{fig:model}). We concatenate the inferred appearance representation $z$ with the bottleneck representation of $G_\theta$: $\gamma=[E_\theta(\hat{y}), z]$ and let the decoder $D_\theta(\gamma)$ generate an image from it. Concatenating the shape and appearance features keeps the gradients for training the respective encoders $F_\phi$ and $E_\theta$ well separated, while the decoder $D_\theta$ can learn to combine those representations for an optimal synthesis. Together $E_\theta$ and $D_\theta$ build a U-Net-like network, which guarantees optimal transfer of spatial information from input to output images. On the other hand, $F_\phi$, when combined with $D_\theta$, forms a VAE that allows appearance inference. The prior $p(z\vert \hat{y})$ is estimated by $E_\theta$ just before it concatenates $z$ into its representation. We train all three networks jointly by optimizing the objective in Eq.~\ref{eq:vaeloss_3}. \section{Experiments} \label{sec:experiments} \begin{table*}[h!]
\begin{center} \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline method & \multicolumn{4}{c|}{Market1501} & \multicolumn{4}{c|}{DeepFashion}\\ \hline & \multicolumn{2}{c|}{IS} & \multicolumn{2}{c|}{SSIM} & \multicolumn{2}{c|}{IS} & \multicolumn{2}{c|}{SSIM}\\ & mean & std & mean & std & mean & std & mean & std \\ \hline real data & $3.678$ & $0.274$ & $1.000$ & $0.000$ & $3.415$ & $0.399$ & $1.000$ & $0.000$ \\ \hline PG$^2$ G1-poseMaskedLoss & $3.326$ & $-$ & $0.340$ & $-$ & $2.668$ & $-$ & $0.779$ & $-$ \\ PG$^2$ G1+D & \boldmath{$3.490$} & $-$ & $0.283$ & $-$ & \boldmath{$3.091$} & $-$ & $0.761$ & $-$ \\ PG$^2$ G1+G2+D & $3.460$ & $-$ & $0.253$ & $-$ & $3.090$ & $-$ & $0.762$ & $-$ \\ \hline pix2pix & $2.289$ & $0.0489$ & $0.166$ & $0.060$ & $2.640$ & $0.2171$ & $0.646$ & $0.067$ \\ \hline our & $3.214$ & $0.119$ & \boldmath{$0.353$} & $0.097$ & $3.087$ & $0.2394$ & \boldmath{$0.786$} & $0.068$ \\ \hline \end{tabular} \end{center} \caption{\small{Inception scores (IS) and structural similarities (SSIM) of reconstructed test images on the DeepFashion and Market1501 datasets. Our method outperforms both pix2pix~\cite{pix2pix2016} and PG$^2$~\cite{PoseGuidedGeneration} in terms of SSIM. In terms of IS, the proposed method performs better than pix2pix and obtains results comparable to PG$^2$.}} \label{table:visual_quality} \end{table*} We now demonstrate the advantages of the proposed method by showing results of image generation on various datasets with different shape estimators $\hat{y}$. In addition to visual comparisons with other methods, all results are supported by numerical experiments. Code and additional experiments can be found at \href{https://compvis.github.io/vunet}{https://compvis.github.io/vunet}. \begin{table*}[t] \centering \begin{tabular}{cc|c|c|ccccc} \toprule \multicolumn{2}{c|}{GT} & pix2pix\cite{pix2pix2016} & our (reconst.)
& \multicolumn{5}{c}{our (random samples)} \\ \midrule \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_stickman} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_pix2pix} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_reconstruction} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_sample} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_sample_02} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_sample_03} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_sample_04} & \includegraphics[scale=0.13]{images/deepfashion_samples/00304_2_sample_05} \\ \midrule \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_stickman} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_pix2pix} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_reconstruction} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_sample} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_sample_02} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_sample_03} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_sample_04} & \includegraphics[scale=0.3]{images/market_samples/0521_c4s5_016854_01_sample_05} \\ \bottomrule \end{tabular} \captionof{figure}{\small{Generating images based only on the stickman as input (GT image is held back). We compare our approach with pix2pix~\cite{pix2pix2016} on the DeepFashion and Market-1501 datasets.
On the right: sampling from our latent appearance distribution.}} \label{fig:stickman2people_samples} \end{table*} \textbf{Datasets} To compare with other methods, we evaluate on: shoes~\cite{shoes}, handbags~\cite{igan}, Market-1501~\cite{market1501}, DeepFashion~\cite{deepFashion1,deepFashion2} and COCO~\cite{mscoco}. As baselines for our subsequent comparisons we use the state-of-the-art pix2pix model~\cite{pix2pix2016} and PG$^2$~\cite{PoseGuidedGeneration}. To the best of our knowledge, PG$^2$ is the only other approach able to transfer one person to the pose of another. We show that we improve upon this method and do not require specific datasets for training. With regard to pix2pix, it is the most general image-to-image translation model which can work with different shape estimates. Where applicable we directly compare to the quantitative and qualitative results provided by the authors of the mentioned papers. As \cite{pix2pix2016} does not perform experiments on Market-1501, DeepFashion and COCO, we train their model on these datasets using their published code~\cite{pix2pix_page}. \textbf{Shape estimate} In the following experiments we work with two kinds of shape estimates: edge images and, in the case of humans, automatically regressed body joint positions. We utilize the edges extracted with the HED algorithm~\cite{HED} by the authors of \cite{pix2pix2016}. Following \cite{PoseGuidedGeneration}, we apply the current state-of-the-art real-time multi-person pose estimator~\cite{jointestimator} for body joint regression. \textbf{Network architecture} The generator $G_\theta$ is implemented as a U-Net architecture with $2n$ residual blocks~\cite{residual}: $n$ blocks in the encoder part $E_\theta$ and $n$ symmetric blocks in the decoder part $D_\theta$. Additional skip-connections link each block in $E_\theta$ to the corresponding block in $D_\theta$ and guarantee direct information flow from input to output.
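At the level of tensor shapes, this encoder--decoder flow, including the concatenation of the appearance code $z$ at the bottleneck, can be sketched in plain \texttt{numpy}; pooling and channel averaging stand in for the learned residual blocks, so this illustrates only the information flow, not the actual model:

```python
import numpy as np

def downsample(f):
    """Stand-in for a residual block followed by a stride-2 convolution."""
    c, h, w = f.shape
    return f.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(f):
    """Stand-in for subpixel-convolution upsampling."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def generator(y_img, z, n=3):
    """U-Net-style G(y, z): encode y, concatenate z at the bottleneck,
    decode with skip connections.  Tracks shapes only, no learned weights."""
    skips, f = [], y_img
    for _ in range(n):                                   # encoder E_theta
        skips.append(f)
        f = downsample(f)
    zmap = np.broadcast_to(z[:, None, None], (len(z),) + f.shape[1:])
    f = np.concatenate([f, zmap], axis=0)                # gamma = [E(y), z]
    for skip in reversed(skips):                         # decoder D_theta
        f = np.concatenate([upsample(f), skip], axis=0)  # skip connection
        f = f.mean(axis=0, keepdims=True)                # stand-in for a learned block
    return f
```

With $n$ downsampling steps the input height and width must be divisible by $2^n$.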
Empirically, we set the parameter $n=7$, which worked well for all considered datasets. Each residual block follows the architecture proposed in \cite{residual} without batch normalization. We use strided convolution with stride $2$ after each residual block to downsample the input until a bottleneck layer. In the decoder $D_\theta$ we utilize subpixel convolution~\cite{upsample} to perform the up-sampling between two consecutive residual blocks. All convolutional layers consist of $3\times3$ filters. The encoder $F_\phi$ follows the same architecture as the encoder $E_\theta$. We train our model separately for each dataset using the Adam~\cite{adam} optimizer with parameters $\beta_1=0.5$ and $\beta_2=0.9$ for $100K$ iterations. The initial learning rate is set to $0.001$ and linearly decreases to $0$ during training. We utilize weight normalization and data-dependent initialization of weights as described in \cite{weightnorm}. Each $\lambda_k$ is set to the reciprocal of the total number of elements in layer $k$. \textbf{In-plane normalization} For datasets with high shape variability it is difficult to perform appearance transfer from one object to another without part correspondences between them. This is especially problematic when generating human beings. To cope with it we propose to use additional in-plane normalization utilizing the information provided by the shape estimate $\hat{y}$. In our case $\hat{y}$ is given by the positions of body joints, which we use to crop out areas around body limbs. This results in $8$ image crops that we stack together and give as input to the encoder $F_\phi$ instead of $x$. If some limbs are missing (e.g. due to occlusions) we use a black image instead of the corresponding crop.
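A minimal sketch of this cropping step is given below; the joint names, the two example limbs and the crop size are illustrative assumptions (the actual model uses $8$ limb crops):

```python
import numpy as np

def limb_crops(img, joints, limbs, size=32):
    """Crop a fixed-size box around each limb midpoint; a black crop
    stands in for limbs whose joints were not detected (None).

    img: (H, W, C) array; joints: dict name -> (row, col) or None;
    limbs: list of (joint_a, joint_b) pairs."""
    h, w, c = img.shape
    crops = []
    for a, b in limbs:
        if joints.get(a) is None or joints.get(b) is None:
            crops.append(np.zeros((size, size, c), dtype=img.dtype))
            continue
        r = int((joints[a][0] + joints[b][0]) / 2)   # limb midpoint, row
        col = int((joints[a][1] + joints[b][1]) / 2) # limb midpoint, column
        r0 = min(max(r - size // 2, 0), h - size)    # clamp box to the image
        c0 = min(max(col - size // 2, 0), w - size)
        crops.append(img[r0:r0 + size, c0:c0 + size])
    return np.stack(crops)  # stacked input for the appearance encoder
```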
\begin{table}[t] \centering \begin{tabular}{ccc|ccc} \bfseries Input & \bfseries pix2pix & \bfseries Our & \bfseries Input & \bfseries pix2pix & \bfseries Our \\ \toprule \includegraphics[scale=0.12]{images/sketches2shoes/15016} & \includegraphics[scale=0.12]{images/sketches2shoes/15016_pix2pix} & \includegraphics[scale=0.12]{images/sketches2shoes/15016_sample} & \includegraphics[scale=0.12]{images/sketches2handbags/764} & \includegraphics[scale=0.12]{images/sketches2handbags/764_pix2pix} & \includegraphics[scale=0.12]{images/sketches2handbags/764_sample} \\ \includegraphics[scale=0.12]{images/sketches2shoes/14976} & \includegraphics[scale=0.12]{images/sketches2shoes/14976_pix2pix} & \includegraphics[scale=0.12]{images/sketches2shoes/14976_sample} & \includegraphics[scale=0.12]{images/sketches2handbags/13248} & \includegraphics[scale=0.12]{images/sketches2handbags/13248_pix2pix} & \includegraphics[scale=0.12]{images/sketches2handbags/13248_sample} \\ \includegraphics[scale=0.12]{images/sketches2shoes/15035} & \includegraphics[scale=0.12]{images/sketches2shoes/15035_pix2pix} & \includegraphics[scale=0.12]{images/sketches2shoes/15035_sample} & \includegraphics[scale=0.12]{images/sketches2handbags/13223} & \includegraphics[scale=0.12]{images/sketches2handbags/13223_pix2pix} & \includegraphics[scale=0.12]{images/sketches2handbags/13223_sample} \\ \bottomrule \end{tabular} \captionof{figure}{\small{Colorization of sketches: we compare generalization ability of pix2pix~\cite{pix2pix2016} and our model trained on real images. 
The task is to generate plausible appearances for human-drawn sketches of shoes and handbags~\cite{sketches}.}} \label{fig:sketches2images} \end{table} Let us now investigate the proposed model for conditional image generation based on three tasks: 1) reconstruction of an image $x$ given its shape estimate $\hat{y}$ and original appearance $z$; 2) conditional image generation based on a given shape estimate $\hat{y}$; 3) conditional image generation from arbitrary combinations of $\hat{y}$ and $z$. \subsection{Image reconstruction} \label{seq:imreconstruction} Given a query image $x$ and its shape estimate $\hat{y}$ we can use the network $F_\phi$ to infer the appearance of the image $x$. Namely, we denote the mean of the distribution $q(z\vert x, \hat{y})$ predicted by $F_\phi$ from the single image $x$ as its original appearance $z$. Using this $z$ and $\hat{y}$ we can ask our generator $G_\theta$ to reconstruct $x$ from its two components. We show examples of images reconstructed by our method in Figs.~\ref{fig:edges2images_samples} and \ref{fig:stickman2people_samples}. Additionally, we follow the experiment in~\cite{PoseGuidedGeneration} and calculate Structural Similarities (SSIM)~\cite{ssim} and Inception Scores (IS)~\cite{inception} for the reconstructions of the test images in the Market-1501 and DeepFashion datasets (see Table~\ref{table:visual_quality}). Our method outperforms both pix2pix~\cite{pix2pix2016} and PG$^2$~\cite{PoseGuidedGeneration} in terms of SSIM score. Note that SSIM compares the reconstructions directly against the original images. As our method differs from both by generating images conditioned on shape and appearance, this underlines the benefit of this conditional representation for image generation. In contrast to SSIM, inception score is measured on the set of reconstructed images independently from the original images.
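In its simplest single-window form, the SSIM statistic between a reconstruction and its original can be computed as sketched below; practical evaluations such as~\cite{ssim} apply it over a local sliding window, so this only illustrates the quantities involved:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM statistic between two images, using the usual
    stabilisation constants (k1 = 0.01, k2 = 0.03)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```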
In terms of IS we achieve comparable results to~\cite{PoseGuidedGeneration} and improve on~\cite{pix2pix2016}. \subsection{Appearance sampling} \label{seq:imsampling} An important advantage of our model compared to \cite{pix2pix2016} and \cite{PoseGuidedGeneration} is its ability to generate multiple new images conditioned only on the estimate of an object's shape $\hat{y}$. This is achieved by randomly sampling $z$ from the learned prior $p(z\vert \hat{y})$ instead of inferring it directly from an image $x$. Thus, appearance can be explored while keeping shape fixed. \textbf{Edges-to-images} We compare our method to pix2pix by generating images from edge images of shoes or handbags. The results can be seen in Fig.~\ref{fig:edges2images_samples}. As noted by the authors in \cite{pix2pix2016}, the outputs of pix2pix show only marginal diversity at test time, thus looking almost identical. To save space, we therefore present only one of them. In contrast, our model generates high-quality images with large diversity. We also observe that our model generalizes better to sketches drawn by humans~\cite{sketches} (see Fig.~\ref{fig:sketches2images}). Due to a higher abstraction level, sketches are quite different from the edges extracted from the real images in the previous experiment. In this challenging task our model shows higher coherence with the input edge image as well as fewer artifacts, e.g.\ at the carrying strap of the backpack. \textbf{Stickman-to-person} Here we evaluate our model on the task of learning plausible appearances for rendering human beings. Given a $\hat{y}$ we thus sample $z$ and infer $x$. We compare our results with the ones achieved by pix2pix on the Market-1501 and DeepFashion datasets (see Fig.~\ref{fig:stickman2people_samples}). Due to marginal diversity in the output of pix2pix we again only show one sample per row.
We observe that our model has learned a significantly more natural latent representation of the distribution of appearance. It also preserves the spatial layout of the human figure better. We support this observation by re-estimating joint positions from the test images generated by each method on all three datasets. For this we apply the same algorithm we used to estimate the positions of body joints initially, namely~\cite{jointestimator}, with parameters kept fixed. We report the mean $L_2$-error in the positions of detected joints in Table~\ref{table:pose_preserving}. Our approach shows a significantly lower re-localization error, thus demonstrating that body pose has been favorably retained. \begin{figure} \begin{center} \includegraphics[scale=0.3]{images/transfer/market1501_shorter} \end{center} \caption{\small{Appearance transfer on Market-1501. Appearance is provided by the image on the bottom left. $\hat{y}$ (middle) is automatically extracted from the image at the top and transferred to the bottom.}} \label{fig:transfer_market1501} \end{figure} \begin{table}[h!] \begin{center} \begin{tabular}{|l|ccc|} \hline \small{method} & \small{our} & \small{pix2pix} & \small{PG$^2$} \\ \hline \small{COCO} & \boldmath{$23.23$} & $59.26$ & $-$\\ \small{DeepFashion} & \boldmath{$7.34$} & $15.53$ & $19.04$ \\ \small{Market1501} & \boldmath{$54.60$} & $59.59$ & $59.95$ \\ \hline \end{tabular} \end{center} \caption{\small{Automatic body joint detection is applied to images of humans synthesized by our method, pix2pix, and PG$^2$. The $L_2$ error of joint localization is presented, indicating how well shape is preserved. The error is measured in pixels based on a resolution of $256\times 256$.}} \label{table:pose_preserving} \end{table} \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{images/deepfashion/v5/transfer_z} \end{center} \caption{\small{Stability of appearance transfer on DeepFashion.
Each row is synthesized using appearance information from the leftmost image and each column is synthesized from the pose in the first row. Notice that inferred appearance remains constant across a wide variety of viewpoints.}} \label{fig:stability} \end{figure} \subsection{Independent transfer of shape and appearance} \label{seq:transfer} We show the performance of our method for conditional image transfer in Fig.~\ref{fig:stability}. Our disentangled representation of shape and appearance can transfer a single appearance over different shapes and vice versa. The model has learned a disentangled representation of both characteristics, so that one can be freely altered without affecting the other. This ability is further demonstrated in Fig.~\ref{fig:transfer_market1501}, which shows a synthesis across a full $360^\circ$ turn. \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline \small{dataset} & \multicolumn{2}{c|}{\small{Our}} & \multicolumn{2}{c|}{\small{PG$^2$}} \\ & $\|std\|$ & \small{max pairwise} & $\|std\|$ & \small{max pairwise} \\ & & \small{dist} & & \small{dist} \\ \hline \small{market1501} & \boldmath{$55.95$} & \boldmath{$125.99$} & $67.39$ & $155.16$\\ \small{deepfashion} & \boldmath{$59.24$} & \boldmath{$135.83$} & $69.57$ & $149.66$\\ \small{deepfashion} & \boldmath{$56.24$} & \boldmath{$121.47$} & $59.73$ & $127.53$\\ \hline \end{tabular} \end{center} \caption{\small{Given an image, its appearance is transferred to different target poses.
For these synthesized images, the unwanted deviation in appearance is measured using a pairwise perceptual VGG16 loss.}} \label{table:app_preserving} \end{table} \begin{table*} \centering \begin{tabular}{cccccccc} \multicolumn{4}{c}{Market} & \multicolumn{4}{c}{DeepFashion} \\ \cmidrule(lr){1-4}\cmidrule(lr){5-8} \bfseries \small{Conditional} & \bfseries \small{Target} & \bfseries \small{Stage} & \bfseries \small{Our} & \bfseries \small{Conditional} & \bfseries \small{Target} & \bfseries \small{Stage} & \bfseries \small{Our} \\ \bfseries \small{image} & \bfseries \small{image} & \bfseries \small{II\cite{PoseGuidedGeneration}}& & \bfseries \small{image} & \bfseries \small{image} & \bfseries \small{II\cite{PoseGuidedGeneration}}& \\ \cmidrule(lr){1-4}\cmidrule(lr){5-8} % % \includegraphics[scale=0.39]{images/market1501/id_0005_conditional} & \includegraphics[scale=0.39]{images/market1501/id_0005_target} & \includegraphics[scale=0.39]{images/market1501/id_0005_PG2} & \includegraphics[scale=0.39]{images/market1501/id_0005_boxnorm3} & % \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_01166_condition} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_01166_target} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_01166_pg2} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/v5/id_01166} \\ % \includegraphics[scale=0.39]{images/market1501/id_0006_conditional} & \includegraphics[scale=0.39]{images/market1501/id_0006_target} & \includegraphics[scale=0.39]{images/market1501/id_0006_PG2} & \includegraphics[scale=0.39]{images/market1501/id_0006_boxnorm3} & % \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_00281_condition} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_00281_target} & 
\includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_00281_pg2} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/v5/id_00281} \\ % \includegraphics[scale=0.39]{images/market1501/id_0015_conditional} & \includegraphics[scale=0.39]{images/market1501/id_0015_target} & \includegraphics[scale=0.39]{images/market1501/id_0015_PG2} & \includegraphics[scale=0.39]{images/market1501/id_0015_boxnorm3} & % \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_06909_condition} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_06909_target} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_06909_pg2} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/v5/id_06909} \\ % \includegraphics[scale=0.39]{images/market1501/id_0165_conditional} & \includegraphics[scale=0.39]{images/market1501/id_0165_target} & \includegraphics[scale=0.39]{images/market1501/id_0165_PG2} & \includegraphics[scale=0.39]{images/market1501/id_0165_boxnorm3} & % \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_07607_condition} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_07607_target} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/id_07607_pg2} & \includegraphics[width=0.10\textwidth,height=0.10\textwidth]{images/deepfashion/v5/id_07607} \\ \end{tabular} \captionof{figure}{\small{Comparing image transfer against PG$^2$. Left: Results on Market. Right: Results on DeepFashion. Appearance is inferred from the conditional image, the pose is inferred from the target image. Note that our method does not require labels about person identity.}} \label{fig:transfer_pg2} \end{table*} The only other work we can compare with in this experiment is PG$^2$ from \cite{PoseGuidedGeneration}. 
In contrast to our method, PG$^2$ was trained fully supervised on the DeepFashion and Market-1501 datasets with pairs of images of the same person that share appearance (person id) but differ in shape (in this case pose). Despite the fact that we never train our model explicitly on pairs of images, we demonstrate both qualitatively and quantitatively that our method improves upon \cite{PoseGuidedGeneration}. A direct visual comparison is shown in Fig.~\ref{fig:transfer_pg2}. We further design a new metric to evaluate and compare against PG$^2$ on appearance and shape transfer. Since code for \cite{PoseGuidedGeneration} is not available, our comparison is limited to generated images provided by~\cite{PoseGuidedGeneration}. The idea behind our metric is to compare how well the appearance $z$ of a reference image $x$ is preserved when synthesizing it with a new shape estimate $\hat{y}$. For that we first fine-tune an ImageNet~\cite{ILSVRC15} pretrained VGG16~\cite{vgg} on Market-1501 on the challenging task of person re-identification. At test time this network achieves a mean average precision (mAP) of $35.62\%$ and rank-1 accuracy of $63.00\%$ on the task of single-query retrieval. These results are comparable to those reported in \cite{zheng2016discriminatively}. Due to the nature of Market-1501, which contains images of the same persons from multiple viewpoints, the features learned by the network should be pose-invariant and mostly sensitive to appearance. Therefore, we use the distance between two features extracted by this network as a measure of appearance similarity. For all results on the DeepFashion and Market-1501 datasets reported in \cite{PoseGuidedGeneration} we use our method to generate exactly the same images. Furthermore, we build groups of images sharing the same appearance and retain those groups that contain more than one element. As a result we obtain three groups of images (see Table~\ref{table:app_preserving}) which we analyze independently.
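For a group of such feature vectors, the two statistics reported in Table~\ref{table:app_preserving}, the length of the standard-deviation vector and the maximal pairwise distance, can be computed as, for example:

```python
import numpy as np

def group_statistics(feats):
    """Per-group appearance statistics: length of the per-dimension
    standard-deviation vector and maximal pairwise L2 distance.

    feats: (n, d) array of feature representations, one per generated image."""
    std_len = np.linalg.norm(feats.std(axis=0))
    diffs = feats[:, None, :] - feats[None, :, :]
    max_pair = np.sqrt((diffs ** 2).sum(-1)).max()
    return std_len, max_pair
```

Smaller values indicate that appearance is preserved more consistently across the synthesized poses.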
We denote these groups by $I_i$, $i\in\{1,2,3\}$. For each image $j$ in the group $I_i$ we find its $10$ nearest neighbors $n^i_{j_1}, n^i_{j_2}, \dots, n^i_{j_{10}}$ in the training set using the embedding of the fine-tuned VGG16. We search for the nearest neighbors in the training dataset, as the person IDs and poses were taken from the test dataset. We calculate the mean over each nearest-neighbor set and use this mean $m_j$ as the unique representation of the generated image $j$. For images $j$ in the group $I_i$ we calculate the maximal pairwise distance between the $m_j$ as well as the length of the standard deviation vector. The results over all three image groups $I_1, I_2, I_3$ are summarized in Table~\ref{table:app_preserving}. One can see that our method shows higher compactness of the feature representations $m_j$ of the images in each group. From these results we conclude that our generated images are more consistent in their appearance than the results of PG$^2$. \textbf{Generalization to different poses} Because we are not limited by the availability of labeled images showing the same appearance in different poses, we can utilize additional large-scale datasets. Results on COCO are shown in Fig.~\ref{fig:teaser}. Besides still images, we are able to synthesize videos. Examples can be found at \href{https://compvis.github.io/vunet}{https://compvis.github.io/vunet}, demonstrating the transfer of appearances from COCO to poses obtained from a video dataset \cite{pennaction}. \subsection{Ablation study} \label{seq:Ablation} Finally, we analyze the effect of individual components of our method on the quality of generated images (see Fig.~\ref{fig:klnokl}). \textbf{Absence of appearance} Without appearance information $z$ our generator $G_\theta$ is a U-Net performing a direct mapping from the shape estimate $\hat{y}$ to the image $x$. In this case, the output of the generator is the mean of $p(x \vert \hat{y})$.
Because we model it as a unimodal Laplace distribution, it is an estimate of the mean image over all possible images (of the dataset) with the given shape. As a result, the generated outputs do not show any distinctive appearance (Fig.~\ref{fig:klnokl}, second row). \textbf{Importance of KL-loss} We further show what happens if we replace the VAE in our model with a simple autoencoder. In practice that means that we ignore the KL-term in the loss function in Eq.~\ref{eq:vaeloss_3}. In this case, the network has no incentive to learn a shape-invariant representation of the appearance and just learns to copy and paste the appearance inputs to the positions provided by the shape estimate $\hat{y}$ (Fig.~\ref{fig:klnokl}, third row). \textbf{Our full model} The last row in Fig.~\ref{fig:klnokl} shows that our full model can successfully perform appearance transfer. \begin{table} \centering \begin{tabular}{cc|ccc} \vtop{\hbox{\strut \small{KL}}} & \vtop{\hbox{\strut \small{Appearance}}\hbox{\strut \small{Input}}} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/final_pose_1} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/final_pose_2} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/final_pose_9} \\ \midrule no & no & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/noapp_1} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/noapp_2} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/noapp_9} \\ \midrule no & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/nobox_style} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/nokl_1} &
\includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/nokl_2} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/nokl_9} \\ \midrule yes & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/nobox_style} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/final_1} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/final_2} & \includegraphics[align=c,width=0.09\textwidth,height=0.09\textwidth]{images/deepfashion/ablation/final_9} \\ \end{tabular} \captionof{figure}{\small{Ablation study on the task of appearance transfer. See Sec.~\ref{seq:Ablation}.}} \label{fig:klnokl} \end{table} \section{Conclusion} We have presented a variational U-Net for conditional image generation by modeling the interplay of shape and appearance. While a variational autoencoder allows sampling of appearance, the U-Net preserves object shape. Experiments on several datasets and diverse objects have demonstrated that the model significantly improves the state-of-the-art in conditional image generation and transfer. \blfootnote{This work has been supported in part by the Heidelberg Academy of Science and a hardware donation from NVIDIA.} \FloatBarrier \newpage {\small \bibliographystyle{ieee}
\section{Introduction} \label{sec-int} Heavy-quark production in $ep$ interactions in deep inelastic scattering (DIS) is dominated by the Boson Gluon Fusion (BGF) process. Heavy-quark production provides a two-fold test of perturbative quantum chromodynamics (pQCD); a study of the BGF process and the higher-order corrections to it, and an independent check of the validity of the gluon density in the proton extracted from the inclusive DIS data. Of the two heavy quarks whose production is accessible at HERA, $c$ and $b$, the latter is strongly suppressed due to its smaller electric charge and larger mass. The production of charm via the identification of $D$ and $D^{*}$ mesons in DIS has been extensively studied at HERA in the kinematic range $1 < Q^{2} < 1000{\,\text{Ge}\eVdist\text{V\/}}^{2}$, $p_{T}(D, D^{*}) > 1.5{\,\text{Ge}\eVdist\text{V\/}}$ ~\cite{pl:b407:402, np:b545:21, epj:c12:35, pl:b528:199, pr:d69:012004, Aktas:2004ka, Chekanov:2007ch, Chekanov:2008yd}, where $Q^{2}$ is the negative squared four-momentum exchange at the electron vertex and $p_{T}$ is the transverse momentum. The results are consistent with the calculations of pQCD. The fragmentation fraction $f(c\to \Lambda_{c}^{+})$ has been measured by the ZEUS collaboration in the photoproduction regime~\cite{Chekanov:2005mm}. The obtained fragmentation fraction is larger than but consistent within uncertainties with the average from $e^{+}e^{-}$ collisions~\cite{Gladilin:1999pj}. In this paper, a charm quark in the final state was identified by the presence of a charmed hadron. The production of $D^{+}$ mesons and $\Lambda_{c}^{+}$ baryons was studied using the decays\footnote{The charge conjugated modes are implied throughout this paper.} $D^{+}\to K^{0}_{S} \pi^{+}$, $\Lambda_{c}^{+}\to p K^{0}_{S}$ and $\Lambda_{c}^{+}\to \Lambda \pi^{+}$. These decay channels were chosen since the presence of a neutral strange hadron in the final state significantly reduces the combinatorial background. 
Measurements of $D^{+}$ and $\Lambda_{c}^{+}$ cross sections provide information about both $c$-quark production and its fragmentation. With respect to previous studies, in this analysis the kinematic region of the measurement is extended to very low transverse momenta of the produced charmed hadrons. No explicit cut on the transverse momenta of the reconstructed charmed hadrons was applied. This is particularly relevant at low $Q^{2}$, where charm quarks are predominantly produced with low transverse momentum. In addition, $\Lambda_{c}^{+}$ production was studied for the first time at HERA in DIS. From a comparison of the $D^{+}$ and $\Lambda_{c}^{+}$ cross sections, the fragmentation fraction $f(c\to \Lambda_{c}^{+})$ is extracted. \section{Experimental set-up} \label{sec-exp} The analysis was performed with data taken from 1996 to 2000 corresponding to a luminosity of $120.4 \pm 2.4\,\text{pb}^{-1}$. The sample consists of $38.6\,\text{pb}^{-1}$ of $e^{+}p$ data collected at a centre-of-mass energy of $300{\,\text{Ge}\eVdist\text{V\/}}$ and of $65.1\,\text{pb}^{-1}$ collected at $318{\,\text{Ge}\eVdist\text{V\/}}$, plus $16.7\,\text{pb}^{-1}$ of $e^{-}p$ data collected at $318{\,\text{Ge}\eVdist\text{V\/}}$.\footnote{Hereafter, both electrons and positrons are referred to as electrons, unless explicitly stated otherwise.} \Zdetdesc \Zctddesc\footnote{\ZcoosysB\Zpsrap} The transverse-momentum resolution for full-length tracks was $\sigma(p_T)/p_T=0.0058p_T\oplus0.0065\oplus0.0014/p_T$, with $p_T$ in ${\text{Ge}\eVdist\text{V\/}}$. To estimate the ionisation energy loss per unit length, $dE/dx$, of particles in the CTD\cite{pl:b481:213,*epj:c18:625,*thesis:bartsch:2007,*Chekanov:2008aaa}, the truncated mean of the anode-wire pulse heights was calculated, which removes the $10\%$ lowest and at least the $30\%$ highest pulses depending on the number of saturated hits. 
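As an illustration, such a truncated mean can be sketched as follows; the truncation fractions are fixed to the nominal $10\%$ and $30\%$ quoted above, and the dependence of the upper cut on the number of saturated hits is omitted:

```python
import numpy as np

def truncated_mean(pulses, low_frac=0.10, high_frac=0.30):
    """Truncated mean of anode-wire pulse heights: drop the lowest
    `low_frac` and the highest `high_frac` of the sorted pulses before
    averaging (fixed fractions; the actual analysis varies the upper
    cut with the number of saturated hits)."""
    p = np.sort(np.asarray(pulses, dtype=float))
    n = len(p)
    lo = int(np.floor(n * low_frac))
    hi = n - int(np.ceil(n * high_frac))
    return p[lo:hi].mean()
```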
The measured $dE/dx$ values were corrected by normalising to the measured average $dE/dx$ for tracks around the region of minimum ionisation for pions with momentum $p$ satisfying $0.3~<~p~<~0.4{\,\text{Ge}\eVdist\text{V\/}}$. Henceforth, $dE/dx$ is quoted in units of minimum ionising particles (mips). \Zcaldesc The position of the scattered electron at the CAL was determined by combining information from the CAL and, where available, the small-angle rear tracking detector (SRTD)~\cite{nim:a401:63} and the hadron-electron separator (HES)~\cite{nim:a277:176}. The luminosity was measured from the rate of the bremsstrahlung process $ep\to e\gamma p$, where the photon was measured in a lead--scintillator calorimeter \cite{desy-92-066,*zfp:c63:391,*acpp:b32:2025} placed in the HERA tunnel at $Z=-107\,\text{m}$. \section{Theoretical predictions} \label{sec-theo} The next-to-leading-order (NLO) QCD predictions for the $c\bar{c}$ production cross sections were obtained using the HVQDIS program~\cite{pr:d57:2806} based on the fixed-flavour-number scheme (FFNS). In this scheme, only light quarks ($u$, $d$ and $s$) and gluons are included in the proton parton density functions (PDFs) which obey the DGLAP equations~\cite{sovjnp:15:438,*sovjnp:20:94,*np:b126:298,*jetp:46:641}, and the $c\bar{c}$ pair is produced via the BGF mechanism~\cite{np:b452:109,*pl:b353:535} with NLO corrections~\cite{np:b392:162,*np:b392:229}. The presence of different large scales, $Q$, $p_{T}$ and the mass of the $c$ quark, $m_{c}$, can spoil the convergence of the perturbative series because the neglected terms of orders higher than $\alpha_{s}^{2}$ (where $\alpha_{s}$ is the strong coupling constant) contain $\log(Q^{2}/m_{c}^{2})$ factors which can become large. The FFNS variant of the ZEUS-S NLO QCD fit~\cite{pr:d67:012007,*misc:www:zeus2002} to structure function data was used as the parametrisation of the proton PDFs. 
In this fit, $\alpha_{s}(M_{Z})$ was set to $0.118$ and the mass of the charm quark was set to $1.5{\,\text{Ge}\eVdist\text{V\/}}$; the same mass was used in the HVQDIS calculation. The renormalisation and factorisation scales were set to $\mu_{R} = \mu_{F} = \sqrt{Q^{2} + 4m_{c}^{2}}$. The charm fragmentation to the $D^{+}$ meson was modelled using the Peterson function~\cite{pr:d27:105} with the Peterson parameter, $\epsilon$, set to $0.079$~\cite{Chekanov:2008ur}. For the hadronisation fraction, $f(c\to D^{+})$, the value $0.216^{+0.021}_{-0.029}$ was used~\cite{Chekanov:2007ch}. The HVQDIS predictions for the production of $D^{+}$ mesons are affected by the theoretical uncertainties listed below. The uncertainty on the total cross section is given in parentheses: \begin{itemize} \item the ZEUS PDF uncertainties were propagated from the experimental uncertainties of the fitted data ($^{+5.3\%}_{-5.2\%}$); \item the charm quark mass was changed consistently in the PDF fit and in HVQDIS by $\pm0.15{\,\text{Ge}\eVdist\text{V\/}}$($^{+15.2\%}_{-13.5\%}$); \item the renormalisation scale was varied by a factor 2 ($^{+19.7\%}_{-12.6\%}$); \item the factorisation scale was changed by a factor 2 independently of the renormalisation scale ($^{+13.1\%}_{-21.7\%}$); \item the $\epsilon$ parameter of the Peterson fragmentation function was changed to 0.01 and 0.1~\cite{Chekanov:2008ur,Aaron:2008tt}. This modification affects the shapes of the $p_{T}$, $Q^{2}$ and $x$ distributions ($^{+0.1\%}_{-0.4\%}$). \end{itemize} \section{Monte Carlo models} \label{sec-mc} The detector acceptance was modelled using the {\sc Rapgap 3.00}~\cite{cpc:86:147} Monte Carlo (MC) program, interfaced with {\sc Heracles 4.6.1}~\cite{cpc:69:155} in order to incorporate first-order electroweak corrections. The generated events were passed through a full simulation of the detector, using {\sc Geant 3.13}~\cite{tech:cern-dd-ee-84-1}, and finally processed and selected in the same way as the data. 
The MC was used to simulate events containing charm quarks produced in the BGF process. The {\sc Rapgap} generator used leading-order matrix elements with leading-logarithmic parton showers. The CTEQ5L~\cite{epj:c12:375} PDFs were used for the proton. The charm-quark mass was set to $1.5{\,\text{Ge}\eVdist\text{V\/}}$. Charm fragmentation was simulated using the Lund string model~\cite{Andersson:1983ia}. The $D^{+}$ and $\Lambda_{c}^{+}$ hadrons originating from beauty decays were accounted for by including a {\sc Rapgap} $b$-quark sample where the $b$-quark mass was set to $4.75{\,\text{Ge}\eVdist\text{V\/}}$. An additional sample where charm was produced by the process $cg\to cg$ was generated and was used to study the model dependence of the simulation. For this process, the charm quark was treated as a part of the structure of the photon. The processes $gg\to c\bar{c}$ and $q\bar{q}\to c\bar{c}$ were not included because their contribution estimated using the {\sc Rapgap} MC was found to be less than $1\%$ in the studied kinematic range. In general, the MC gives a reasonable description of the data for DIS and $D^{+}$-meson variables when compared at detector level. To improve the description further, {\sc Rapgap} was reweighted to reproduce the $p_{T}(D^{+})$ distribution observed in the data. The same weights used for $D^{+}$ mesons were also applied to $D_{s}^{+}$ and $\Lambda_{c}^{+}$ hadrons. \section{Kinematic reconstruction and event selection} \label{sec-reco} A three-level trigger system was used to select events online~\cite{zeus:1993:bluebook,uproc:chep:1992:222}. At the third level, an electron with an energy greater than $4{\,\text{Ge}\eVdist\text{V\/}}$ and a position outside a box of $24\times 12\,\text{cm}^{2}$ centred around the beampipe on the face of the rear calorimeter was required by a fully inclusive DIS trigger which had a high acceptance for $Q^{2} \gtrsim 1{\,\text{Ge}\eVdist\text{V\/}}^{2}$. 
However, this trigger was heavily prescaled and the equivalent luminosity was $17\,\text{pb}^{-1}$. Additionally, events above $Q^{2} \approx 20{\,\text{Ge}\eVdist\text{V\/}}^{2}$ were selected by a medium-$Q^{2}$ trigger. The only difference from the inclusive DIS trigger is that the position of the scattered electron on the RCAL face had to lie outside a circle of radius between $25$ and $35\,\text{cm}$ centred around the beampipe, depending on the running period. The fraction of the electron energy transferred to the proton in its rest frame, $y$, as well as the kinematic variables $Q^{2}$ and Bjorken $x$, were reconstructed offline using the electron method~\cite{proc:hera:1991:23,*hoeger} (denoted by the subscript $e$), which uses the energy and angle of the scattered electron. The inelasticity $y$ was also obtained using the Jacquet-Blondel (JB) method~\cite{proc:epfacility:1979:391}. The double-angle (DA) method~\cite{proc:hera:1991:23,*hoeger}, which relies on the angles of the scattered electron and the hadronic-energy flow, was used as a systematic check. The following requirements were imposed offline: \begin{itemize} \item $38 \, < \, \delta \, < \, 65{\,\text{Ge}\eVdist\text{V\/}}$, where $\delta = \sum E_i(1-\cos\theta_i)$ and $E_{i}$ and $\theta_{i}$ are the energy and the polar angle of the $i^{th}$ energy-flow object (EFO)~\cite{thesis:briskin:1998} reconstructed from charged tracks, as measured in the CTD, and energy clusters measured in the CAL. The sum runs over all EFOs~\cite{pl:b303:183}; \item $E_{e}^{'} > 10{\,\text{Ge}\eVdist\text{V\/}}$, where $E_{e}^{'}$ is the energy of the scattered electron identified using a neural-network algorithm~\cite{nim:a365:508,nim:a391:360}; \item $E_{{\rm cone}} < 5{\,\text{Ge}\eVdist\text{V\/}}$, where $E_{{\rm cone}}$ is the calorimeter energy measured in a cone around the electron position that was not assigned to the electron cluster. 
The cone was defined by $R_{{\rm cone}} < 0.8$ with $R_{{\rm cone}} = \sqrt{(\Delta \phi)^{2} + (\Delta \eta)^{2}}$; \item a match between the tracking and the calorimeter information for electrons well within the CTD acceptance, $17^{\circ} < \theta_{e} < 149^{\circ}$. For $\theta_{e}$ outside this region, the cut $\delta > 44{\,\text{Ge}\eVdist\text{V\/}}$ was imposed; \item for events with the scattered electron reconstructed within the SRTD acceptance, the impact position of the electron on the face of the RCAL had to be outside the region $26 \times 14\,\text{cm}^{2}$ centred on $X = Y = 0$. If the electron position was reconstructed without using SRTD information, a box cut of $26 \times 20\,\text{cm}^{2}$ was imposed; \item $1.5 < Q^{2}_{e} < 1000{\,\text{Ge}\eVdist\text{V\/}}^{2}$; \item $y_{{\rm JB}} > 0.02$ and $y_{e} < 0.7$; \item a primary vertex position in the range $|Z_{\textnormal{vertex}}| < 50\,\text{cm}$. \end{itemize} This analysis used charged tracks measured in the CTD that were assigned either to the primary or to a secondary vertex. The tracks were required to have transverse momenta $p_{T} > 0.15{\,\text{Ge}\eVdist\text{V\/}}$ and pseudorapidity in the laboratory frame $|\eta| < 1.75$, restricting the study to a region where the CTD track acceptance and resolution were high. Candidates for long-lived neutral strange hadrons decaying to two charged particles were identified by selecting pairs of oppositely charged tracks, fitted to a displaced secondary vertex. The events were required to have at least one such candidate. \section{Strange-particle reconstruction} \label{sec-strange-reco} The $K^{0}_{S}$ mesons were identified by their charged decay mode, $K^{0}_{S}\to \pi^{+} \pi^{-}$. Both tracks were assigned the mass of the charged pion and the invariant mass, $M(\pi^{+} \pi^{-})$, of each track pair was calculated. 
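The invariant mass of a track pair follows from the measured 3-momenta and the assigned masses, $M^{2} = (E_{1}+E_{2})^{2} - |\vec{p}_{1}+\vec{p}_{2}|^{2}$; a minimal numerical sketch (illustrative only, not the ZEUS reconstruction code) is:

```python
import math

PION_MASS = 0.13957  # GeV, charged-pion mass

def invariant_mass(p1, p2, m1=PION_MASS, m2=PION_MASS):
    """Two-body invariant mass (GeV) from lab-frame 3-momenta p1, p2 (GeV)
    and the masses assigned to the two tracks."""
    e1 = math.sqrt(m1**2 + sum(c * c for c in p1))
    e2 = math.sqrt(m2**2 + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    # guard against tiny negative arguments from rounding
    return math.sqrt(max((e1 + e2)**2 - (px**2 + py**2 + pz**2), 0.0))
```

Two back-to-back pions with $p = 0.2{\,\text{Ge}\eVdist\text{V\/}}$ each give $M \approx 0.488{\,\text{Ge}\eVdist\text{V\/}}$, i.e. close to the $K^{0}_{S}$ mass.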
Additional requirements to select $K^{0}_{S}$ were imposed: \begin{itemize} \item $M(e^{+}e^{-}) > 50{\,\text{Me}\eVdist\text{V\/}}$, where the electron mass was assigned to each track, to eliminate tracks from photon conversions; \item $M(p\pi) > 1121{\,\text{Me}\eVdist\text{V\/}}$, where the proton mass was assigned to the track with higher momentum, to eliminate $\Lambda$ contamination in the $K^{0}_{S}$ signal; \item $\cos\theta_{XY} > 0.98$, where $\theta_{XY}$ is defined as the angle between the momentum vector of the $K^{0}_{S}$ candidate and the vector defined by the primary interaction vertex and the $K^{0}_{S}$ decay vertex in the $X$-$Y$ plane; \item $483 < M(\pi^{+} \pi^{-}) < 513{\,\text{Me}\eVdist\text{V\/}}$; \item $|\eta(K^{0}_{S})| < 1.6$. \end{itemize} The $\Lambda$ candidates were reconstructed by their charged decay mode to $p\pi^{-}$. The track with the larger momentum was assigned the mass of the proton, while the other was assigned the mass of the charged pion, as the decay proton always has a larger momentum than the pion, provided the $\Lambda$ momentum is greater than $0.3{\,\text{Ge}\eVdist\text{V\/}}$. Additional requirements to select $\Lambda$ were imposed: \begin{itemize} \item $M(e^{+}e^{-}) > 50{\,\text{Me}\eVdist\text{V\/}}$; \item $M(\pi^{+}\pi^{-}) < 483{\,\text{Me}\eVdist\text{V\/}}$, where the charged pion mass was assigned to both tracks, to remove $K^{0}_{S}$ contamination in the $\Lambda$ signal; \item $\cos\theta_{XY} > 0.98$; \item $1112 < M(p\pi) < 1121{\,\text{Me}\eVdist\text{V\/}}$; \item $|\eta(\Lambda)| < 1.6$. \end{itemize} Figure~\ref{fig:peaks_v0} shows the invariant-mass spectra of $K^{0}_{S}$, $\Lambda$ and $\bar{\Lambda}$ candidates. Distributions of the reconstructed proper lifetime for these particles based on the same data sample as analysed in this paper were found to be satisfactory~\cite{Chekanov:2006wz}. 
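The collinearity requirement $\cos\theta_{XY} > 0.98$ used in both selections compares the candidate momentum with the displacement from the primary to the decay vertex in the $X$-$Y$ plane; a minimal sketch (with illustrative argument names, not the ZEUS code):

```python
import math

def cos_theta_xy(p_xy, vtx_primary, vtx_decay):
    """Cosine of the 2D angle between the candidate transverse momentum
    and the primary-to-decay vertex displacement, both in the X-Y plane."""
    dx = vtx_decay[0] - vtx_primary[0]
    dy = vtx_decay[1] - vtx_primary[1]
    dot = p_xy[0] * dx + p_xy[1] * dy
    norm = math.hypot(*p_xy) * math.hypot(dx, dy)
    return dot / norm
```

A genuine $K^{0}_{S}$ or $\Lambda$ flies along the line joining the two vertices, so its candidate momentum gives a value close to $+1$.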
\section{Reconstruction of charmed hadrons} \label{sec-charm-reco} The production of $D^{+}$ and $\Lambda_{c}^{+}$ hadrons was measured in the range of transverse momentum $0 < p_{T}(D^{+},\Lambda_{c}^{+}) < 10{\,\text{Ge}\eVdist\text{V\/}}$ and pseudorapidity $|\eta(D^{+},\Lambda_{c}^{+})| < 1.6$. Strange-hadron candidates were combined with a further track measured in the CTD which was assigned to the primary interaction vertex. The combinatorial background was significantly reduced by requiring $p_{T}(D^{+})/E_{T}^{\theta>10^{\circ}} > 0.1$ and $p_{T}(\Lambda_{c}^{+})/E_{T}^{\theta>10^{\circ}} > 0.12$, where the transverse energy $E_{T}^{\theta>10^{\circ}}$ was evaluated as $E_{T}^{\theta>10^{\circ}} = \sum_{i,\theta_{i} > 10^{\circ}} (E_{i}\sin{\theta_{i}})$. The sum runs over all energy deposits in the CAL with a polar angle $\theta$ above $10^{\circ}$. The details of the reconstruction of the three different decay channels are given in the next subsections. \boldmath \subsection{Reconstruction of the decay $D^{+}\to K^{0}_{S} \pi^{+}$} \unboldmath The $D^{+}$ mesons were reconstructed from the decay channel $D^{+}\to K^{0}_{S} \pi^{+}$. In each event, $D^{+}$ candidates were formed from combinations of $K^{0}_{S}$ candidates reconstructed as described in Section~\ref{sec-strange-reco} with further tracks assumed to be pions. The pion candidates were required to have $p_{T}(\pi^{+})/E_{T}^{\theta>10^{\circ}} > 0.04$. Only pion candidates with $dE/dx < 1.5$~mips were considered. Further reduction of the combinatorial background was achieved by cutting on the angle between the pion in the $D^{+}$ rest frame and the $D^{+}$ flight direction, $\theta^{*}(\pi^{+})$. 
Different cuts depending on $p_{T}(D^{+})$ were used to ensure optimal background suppression: \begin{itemize} \item $\cos\theta^{*}(\pi^{+}) < 0.9$ \;\;\; for \;\;\; $0.0 < p_{T}(D^{+}) < 1.5{\,\text{Ge}\eVdist\text{V\/}};$ \item $\cos\theta^{*}(\pi^{+}) < 0.8$ \;\;\; for \;\;\; $1.5 < p_{T}(D^{+}) < 3.0{\,\text{Ge}\eVdist\text{V\/}};$ \item $\cos\theta^{*}(\pi^{+}) < 0.6$ \;\;\; for \;\;\; $3.0 < p_{T}(D^{+}) < 10.0{\,\text{Ge}\eVdist\text{V\/}}.$ \end{itemize} The $K^{0}_{S}\pi^{+}$ invariant-mass distribution was fitted with the sum of contributions from the signal, the non-resonant background and a reflection caused by $D_{s}^{+}\to K^{0}_{S}K^{+}$ decays. The signal was described by a Gaussian function defined as: \begin{equation} g(\sigma,M_{0};m) = \frac{1}{\sqrt{2\pi}\sigma} \exp{\frac{-(m-M_{0})^{2}}{2\sigma^{2}}}, \end{equation} where $M_{0}$ and $\sigma$ are the resonance mass and width, respectively. For the background a sum of Chebyshev polynomials up to the second order was used: \begin{equation} b(A,B,C;y(m)) = A \cdot (1 + B \cdot y + C \cdot (2y^{2} - 1)), \label{background-cheby} \end{equation} where $y(m) = (2m - m_{{\rm max}} - m_{{\rm min}}) \, / \, (m_{{\rm max}} - m_{{\rm min}})$ and $m_{{\rm max}} (m_{{\rm min}}) = 2.1 (1.6){\,\text{Ge}\eVdist\text{V\/}}$ is the upper (lower) limit of the fitted range. The mass distribution of the reflection $r(m)$ caused by the decay $D_{s}^{+}\to K^{0}_{S}K^{+}\to \pi^{+}\pi^{-}K^{+}$ was obtained from $D_{s}^{+}$ combinations in the Monte Carlo at detector level matched to the same decay at generator level. The normalisation of the reflection with respect to the Gaussian signal assumed for $D^{+}\to K^{0}_{S}\pi^{+}$ decays is based on previously measured fragmentation fractions $f$~\cite{Chekanov:2007ch} and branching ratios $\mathcal{B}$~\cite{Amsler:2008zzb} (see also Table~\ref{tab-branching-ratios}) and the detector acceptances for both decay channels. 
For this purpose, the invariant mass distribution of the reflection was normalised to unity and then multiplied by the expected ratio of $D_{s}^{+}$ to $D^{+}$ mesons: \begin{equation} R = \frac{f(c\to D_{s}^{+}) \cdot \mathcal{B}(D_{s}^{+}\to K^{0}_{S}K^{+}\to \pi^{+}\pi^{-}K^{+})}{f(c\to D^{+}) \cdot \mathcal{B}(D^{+}\to K^{0}_{S}\pi^{+}\to \pi^{+}\pi^{-}\pi^{+})} \cdot \frac{\mathcal{A}(D_{s}^{+})}{\mathcal{A}(D^{+})} = 0.44 \pm 0.10, \label{ratio_reflection} \end{equation} where $\mathcal{A}(D_{s}^{+})$ and $\mathcal{A}(D^{+})$ are the reconstruction acceptances for $D_{s}^{+}$ and $D^{+}$ mesons, respectively, as obtained from the Monte Carlo. The resulting fitting function is given by: \begin{equation} F(A,B,C,D,\sigma,M_{0};m) = b(A,B,C;y(m)) + D \cdot [r(m) + g(\sigma,M_{0};m)], \end{equation} where the parameters $A$, $B$, $C$, $D$, $\sigma$ and $M_{0}$ were determined by the fit. Figure~\ref{fig:peak_dpm} shows the invariant mass spectrum for the $D^{+}$ candidates after the reflection was subtracted using the fit, resulting in a 20\% reduction in the number of $D^{+}$ mesons. A clear signal is visible. The fit yielded a $D^{+}$ mass of $1872 \pm 4{\,\text{Me}\eVdist\text{V\/}}$, in agreement with the PDG value~\cite{Amsler:2008zzb}. The width of the signal was $19.0 \pm 3.1{\,\text{Me}\eVdist\text{V\/}}$, reflecting the detector resolution. The number of $D^{+}$ mesons yielded by the fit was $N(D^{+}) = 691 \pm 107$. In order to extract the $D^{+}$-meson yields in bins of $p_{T}^{2}(D^{+})$, $\eta(D^{+})$, $Q^{2}$ and $x$, the signals in all analysis bins of a given quantity were fitted simultaneously, fixing the ratios of the widths in the bins to the Monte Carlo prediction. All other parameters including the masses were left free for all bins in the simultaneous fit. The signal in the region $0 < p_{T}(D^{+}) < 1.5{\,\text{Ge}\eVdist\text{V\/}}$ that was not accessible in previous measurements is shown in Fig.~\ref{fig:peak_dpm_onlyfirstbin}. 
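The fit model above can be written compactly as follows (an illustrative sketch; the reflection template $r(m)$ is represented by a generic callable, whereas in the analysis it is a normalised detector-level MC distribution, and no minimiser is shown):

```python
import math

def gaussian(m, sigma, m0):
    """Normalised Gaussian g(sigma, M0; m)."""
    return math.exp(-(m - m0)**2 / (2 * sigma**2)) / (math.sqrt(2 * math.pi) * sigma)

def chebyshev_background(m, a, b, c, m_min=1.6, m_max=2.1):
    """Background b(A, B, C; y(m)) with Chebyshev polynomials up to second order."""
    y = (2 * m - m_max - m_min) / (m_max - m_min)
    return a * (1 + b * y + c * (2 * y**2 - 1))

def fit_model(m, a, b, c, d, sigma, m0, reflection):
    """F = b + D * [r(m) + g]; `reflection` is a callable m -> r(m)."""
    return chebyshev_background(m, a, b, c) + d * (reflection(m) + gaussian(m, sigma, m0))
```

In a fit, the parameters $A$, $B$, $C$, $D$, $\sigma$ and $M_{0}$ would be varied by a minimiser, with the reflection normalisation tied to the Gaussian through $R$.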
\boldmath \subsection{Reconstruction of the decay $\Lambda_{c}^{+}\to pK^{0}_{S}$} \unboldmath The $\Lambda_{c}^{+}$ baryons were reconstructed from the decay channel $\Lambda_{c}^{+}\to pK^{0}_{S}$. In each event, $\Lambda_{c}^{+}$ candidates were formed from combinations of $K^{0}_{S}$ candidates reconstructed as described in Section~\ref{sec-strange-reco} with proton candidates. The proton-candidate selection used the energy-loss measurement in the CTD. Tracks fitted to the primary vertex with more than 40 hits were considered. The proton band was parametrised separately for positive and negative tracks from an examination of $dE/dx$ as a function of the momentum~\cite{thesis:roloff:2007}. The proton selection was checked by studying proton-candidate tracks from $\Lambda$ decays. To remove the region where the proton band completely overlaps the pion band, the proton momentum was required to be less than $1.5{\,\text{Ge}\eVdist\text{V\/}}$ and a cut on $dE/dx > 1.2$~mips was applied. Due to the proton selection described above, reflections from $D^{+}\to K^{0}_{S}\pi^{+}$ and $D_{s}^{+}\to K^{0}_{S}K^{+}$ decays are suppressed. As a result of the cut on the proton momentum, there is no acceptance for $\Lambda_{c}^{+}$ baryons at very high $p_{T}(\Lambda_{c}^{+})$. Hence the measurement of the cross section for this decay channel was restricted to the region $0 < p_{T}(\Lambda_{c}^{+}) < 6{\,\text{Ge}\eVdist\text{V\/}}$. Figure~\ref{fig:peak_k0p} shows the $M(pK^{0}_{S})$ distribution for the $\Lambda_{c}^{+}$ candidates. A clear signal is seen at the nominal value of the $\Lambda_{c}^{+}$ mass~\cite{Amsler:2008zzb}. The mass distribution was fitted to the sum of a Gaussian function describing the signal and the function defined in Eq.~(\ref{background-cheby}) to describe the non-resonant background. The number of reconstructed $\Lambda_{c}^{+}$ baryons yielded by the fit was $N(\Lambda_{c}^{+}) = 79 \pm 25$. 
\boldmath \subsection{Reconstruction of the decay $\Lambda_{c}^{+}\to \Lambda\pi^{+}$} \unboldmath The $\Lambda_{c}^{+}$ baryons were also reconstructed from the decay channel $\Lambda_{c}^{+}\to \Lambda\pi^{+}$. In each event, $\Lambda_{c}^{+}$ candidates were formed from combinations of $\Lambda$ candidates as described in Section~\ref{sec-strange-reco}, with further tracks assumed to be pions. The pion candidates were required to have $p_{T}(\pi^{+})/E_{T}^{\theta>10^{\circ}} > 0.05$. Only pion candidates with $dE/dx < 1.5$~mips were considered. To suppress combinatorial background further, the cut $\cos{\theta^{*}(\pi^{+})} < 0.8$ was imposed, where $\theta^{*}(\pi^{+})$ is the angle between the pion in the $\Lambda_{c}^{+}$ rest frame and the $\Lambda_{c}^{+}$ flight direction. Figure~\ref{fig:peak_lambdapi} shows the $M(\Lambda\pi)$ distribution for the $\Lambda_{c}^{+}$ candidates. Wrong-charge combinations in the data sample, normalised to the right-charge combinations in the region outside the peak, are also shown. For wrong-charge combinations, the sum of the charges of the proton from the $\Lambda$ candidate and the further track is equal to zero. The data were fitted to the sum of a Gaussian function describing the signal and the background function defined in Eq.~(\ref{background-cheby}). The number of reconstructed $\Lambda_{c}^{+}$ baryons obtained from the fit was $N(\Lambda_{c}^{+}) = 84 \pm 34$. The signal-to-background ratio for both studied $\Lambda_{c}^{+}$ decay channels is similar. Figure~\ref{fig:peak_lambdac_combined} shows the invariant-mass spectrum containing both $\Lambda_{c}^{+}\to pK^{0}_{S}$ and $\Lambda_{c}^{+}\to \Lambda\pi^{+}$ candidates. The fit yielded $N(\Lambda_{c}^{+}) = 146 \pm 33$ candidates. This combined peak was not used to extract any cross sections or fragmentation fractions. 
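The decay angle $\theta^{*}(\pi^{+})$ used in both the $D^{+}$ and $\Lambda_{c}^{+}$ selections is obtained by boosting the pion into the rest frame of its parent; a minimal sketch of the boost along the parent flight direction (illustrative only):

```python
import math

M_PI = 0.13957  # GeV, charged-pion mass

def cos_theta_star(p_pi, p_parent, m_parent):
    """Cosine of the angle between the pion, boosted into the parent rest
    frame, and the parent flight direction (lab-frame 3-momenta in GeV)."""
    p_par_mag = math.sqrt(sum(c * c for c in p_parent))
    e_parent = math.sqrt(m_parent**2 + p_par_mag**2)
    beta = p_par_mag / e_parent
    gamma = e_parent / m_parent
    n = [c / p_par_mag for c in p_parent]            # flight direction
    p_long = sum(a * b for a, b in zip(p_pi, n))     # longitudinal component
    p_trans2 = sum(c * c for c in p_pi) - p_long**2  # transverse part, boost-invariant
    e_pi = math.sqrt(M_PI**2 + sum(c * c for c in p_pi))
    p_long_star = gamma * (p_long - beta * e_pi)     # boosted longitudinal momentum
    return p_long_star / math.sqrt(p_long_star**2 + p_trans2)
```

Combinatorial background peaks near $\cos\theta^{*}(\pi^{+}) = +1$, where a soft lab-frame track is aligned with the candidate flight direction, which is why an upper cut is effective.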
\section{Cross sections and acceptance corrections} \label{sec-accep} For a given observable, $Y$, the differential cross section in a bin $i$ was determined using \begin{equation} \frac {d\sigma_{i}}{dY} = \frac {N_{i}(D^{+}) } {\mathcal {A}_{i} \cdot \mathcal {L} \cdot \mathcal {B} \cdot \Delta Y_{i}}, \nonumber \end{equation} where $N_{i}(D^{+})$ is the number of reconstructed $D^{+}$ mesons in bin $i$ having size $\Delta Y_{i}$. The reconstruction acceptance, $\mathcal {A}_{i}$, takes into account migrations, efficiencies and QED radiative effects for the $i^{th}$ bin, $\mathcal {L}$ is the integrated luminosity and $\mathcal {B}$ is the branching ratio~\cite{Amsler:2008zzb} for the decay channel used in the reconstruction (see Table~\ref{tab-branching-ratios}). The total visible production cross sections were determined using \begin{equation} \sigma = \frac {N(D^{+},\Lambda_{c}^{+}) } {\mathcal {A} \cdot \mathcal {L} \cdot \mathcal {B}}, \nonumber \end{equation} where $N(D^{+},\Lambda_{c}^{+})$ and $\mathcal {A}$ were determined for the whole kinematic range of the measurement. All acceptances were obtained from the Monte Carlo. The $b$-quark contribution, predicted by the MC simulation, was subtracted from all measured cross sections. The {\sc Rapgap} prediction for beauty production was multiplied by two, in agreement with a previous ZEUS measurement of beauty production in DIS~\cite{pl:b599:173}. The subtraction of the $b$-quark contribution reduced the measured cross sections by $2-3\%$ for the $D^{+}$ and about $1\%$ for the $\Lambda_{c}^{+}$. There is no sizeable acceptance for charmed hadrons in the transverse-momentum range $0 < p_{T}(D^{+}, \Lambda_{c}^{+}) < 0.5{\,\text{Ge}\eVdist\text{V\/}}$. Hence an extrapolation using the reference Monte Carlo was performed when the cross sections were extracted. 
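The two formulas above translate directly into code; a trivial sketch (illustrative; the cross section comes out in the inverse units of the luminosity $\mathcal{L}$):

```python
def total_cross_section(n_signal, acceptance, luminosity, branching_ratio):
    """Total visible cross section: N / (A * L * B)."""
    return n_signal / (acceptance * luminosity * branching_ratio)

def differential_cross_section(n_signal, acceptance, luminosity,
                               branching_ratio, bin_width):
    """Differential cross section in a bin of width Delta Y: N / (A * L * B * Delta Y)."""
    return total_cross_section(n_signal, acceptance, luminosity, branching_ratio) / bin_width
```

Here the per-bin acceptance already folds in migrations, efficiencies and QED radiative effects, so no further correction factors appear.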
For example, the extrapolation accounts for $6\%$ of the $D^{+}$ production in the full kinematic range of the measurement and for $11\%$ of the $D^{+}$ production in the restricted range $0 < p_{T}(D^{+}) < 1.5{\,\text{Ge}\eVdist\text{V\/}}$. \section{Systematic uncertainties} \label{sec-syst} The systematic uncertainties of the measured cross sections and fragmentation fractions were determined by changing the analysis procedure and repeating all calculations. In the measurement of the differential and total cross sections, the following groups of systematic uncertainty sources were considered. The effects on the total cross sections are shown in parentheses ($D^{+}$; $\Lambda_{c}^{+}\to pK^{0}_{S}$; $\Lambda_{c}^{+}\to \Lambda\pi^{+}$): \begin{itemize} \item[$\bullet$] {\{$\delta_{1}$\} event and DIS selection ($^{+4\%}_{-3\%}$; $^{+1\%}_{-2\%}$; $^{+8\%}_{-4\%}$). The following cut variations were applied to data and MC simultaneously:} \begin{itemize} \item the cut on $y_{{\rm JB}}$ was changed to $y_{{\rm JB}} > 0.03$; \item the cut on the scattered electron energy $E_{e}^{'}$ was changed to $E_{e}^{'} > 11{\,\text{Ge}\eVdist\text{V\/}}$; \item the cuts on $\delta$ were changed by $+2{\,\text{Ge}\eVdist\text{V\/}}$; \item the cut on $|Z_{\textnormal{vertex}}|$ was changed to $|Z_{\textnormal{vertex}}| < 45\,\text{cm}$; \item additionally, a box cut of $26 \times 14\,\text{cm}^{2}$ was used for all electron candidates without an SRTD requirement; \end{itemize} \item[$\bullet$] {\{$\delta_{2}$\} $Q^{2}$ and $x$ reconstruction ($<\!\!1\%$; $-3\%$; $-6\%$). The DA method was used for the reconstruction of $Q^{2}$ and $x$ instead of the electron method;} \item[$\bullet$] {\{$\delta_{3}$\} energy scale ($\pm 2\%$; $^{+3\%}_{-4\%}$; $^{+2\%}_{-4\%}$). To account for the uncertainty of the absolute CAL energy scale, the energy of the scattered electron was raised and lowered by $1\%$ and $E_{T}^{\theta>10^{\circ}}$ was raised and lowered by $2\%$. 
These variations were only applied to the MC;} \item[$\bullet$] {\{$\delta_{4}$\} model dependence of the acceptance corrections:} \begin{itemize} \item the process $cg\to cg$ was included in the {\sc Rapgap} MC sample ($+5\%$; $+3\%$; $+9\%$); \item the MC samples were not reweighted in $p_{T}(D^{+}, D_{s}^{+}, \Lambda_{c}^{+})$ ($-17\%$; $-6\%$; $-21\%$); \end{itemize} \item[$\bullet$] {\{$\delta_{5}$\} uncertainty of the beauty subtraction ($^{+1\%}_{-3\%}$; $\pm1\%$; $<\!\! 1\%$). This was determined by varying the subtracted $b$-quark contributions by a factor 2;} \item[$\bullet$] {\{$\delta_{6}$\} uncertainty of the signal extraction procedure ($^{+12\%}_{-9\%}$; $^{+14\%}_{-5\%}$; $^{+24\%}_{-8\%}$):} \begin{itemize} \item the fit was repeated changing the invariant mass window of $1.6 - 2.1{\,\text{Ge}\eVdist\text{V\/}}$ by $\pm 50{\,\text{Me}\eVdist\text{V\/}}$ on both sides for $D^{+}\to K^{0}_{S}\pi^{+}$ decays. Similarly, the considered invariant mass region of $2.0 - 2.5{\,\text{Ge}\eVdist\text{V\/}}$ was changed by $\pm 50{\,\text{Me}\eVdist\text{V\/}}$ for $\Lambda_{c}^{+}\to pK^{0}_{S}$ decays and by $\pm 30{\,\text{Me}\eVdist\text{V\/}}$ for the channel $\Lambda_{c}^{+}\to \Lambda\pi^{+}$; \item the choice of the background function was assigned an uncertainty of $\pm 5\%$. This value was estimated by comparing the fit results obtained using different choices for the background function, such as polynomials of different orders or exponential functions; \item for differential cross sections, the assumed Gaussian width ratios were varied by $\pm 10\%$; \end{itemize} \item[$\bullet$] {\{$\delta_{7}$\} uncertainty in the luminosity measurement of $\pm 2.0\%$.} \end{itemize} The following uncertainty was considered only for the decays $D^{+}\to K^{0}_{S}\pi^{+}$ and $\Lambda_{c}^{+}\to pK^{0}_{S}$: \begin{itemize} \item[$\bullet$] {\{$\delta_{8}$\} $K^{0}_{S}$ reconstruction ($+2\%$; $+1\%$; $-$). 
Since the MC signal had a narrower width than observed in the data, the invariant-mass window for the $K^{0}_{S}$ candidate selection was reduced to $0.486 < M(\pi^{+}\pi^{-}) < 0.510{\,\text{Ge}\eVdist\text{V\/}}$ in the MC only.} \end{itemize} The following source of uncertainty was considered only for the decay $D^{+}\to K^{0}_{S}\pi^{+}$: \begin{itemize} \item[$\bullet$] {\{$\delta_{9}$\} uncertainty of the reflection subtraction ($\pm5\%$; $-$; $-$). The normalisation of the $D_{s}^{+}$ reflection was changed by the uncertainty of $R$ (see Eq.~(\ref{ratio_reflection})) due to the uncertainties of the fragmentation fractions and branching ratios used in the calculation.} \end{itemize} The following source of uncertainty was considered only for the decay $\Lambda_{c}^{+}\to pK^{0}_{S}$: \begin{itemize} \item[$\bullet$] {\{$\delta_{10}$\} proton reconstruction ($-$; $-14\%$; $-$). The following checks were performed:} \begin{itemize} \item{the number of hits required for the proton candidates was lowered to 32;} \item the uncertainty of the $dE/dx$ simulation for low-momentum protons was evaluated by changing the parametrisation of the proton band~\cite{thesis:roloff:2007}; \item the cut on the energy loss was lowered to $dE/dx > 1.15$~mips. \end{itemize} \end{itemize} The following source of uncertainty was considered only for the decay $\Lambda_{c}^{+}\to \Lambda\pi^{+}$: \begin{itemize} \item[$\bullet$] {\{$\delta_{11}$\} $\Lambda$ reconstruction ($-$; $-$; $+4\%$). Since the MC signals had a narrower width than observed in the data, the invariant-mass window for the $\Lambda$ candidate selection was reduced to $1.113 < M(p\pi) < 1.120{\,\text{Ge}\eVdist\text{V\/}}$ in the MC only.} \end{itemize} Contributions from the different systematic uncertainties were calculated and added in quadrature separately for positive and negative variations. These estimates were made in each bin in which the differential cross sections were measured. 
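The quadrature combination, performed separately for the positive and negative variations, can be sketched as follows (illustrative; shifts given as signed relative changes in per cent):

```python
import math

def combine_systematics(deltas):
    """Combine signed relative systematic shifts (in %) in quadrature,
    separately for upward (positive) and downward (negative) variations.
    Returns (total_up, total_down)."""
    pos = math.sqrt(sum(d * d for d in deltas if d > 0))
    neg = -math.sqrt(sum(d * d for d in deltas if d < 0))
    return pos, neg
```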
The uncertainties on the luminosity measurement and the branching ratios were included only in the measured $D^{+}$ and $\Lambda_{c}^{+}$ total cross sections. For differential cross sections, these uncertainties are not included. As an additional check, the $dE/dx$ efficiency for pions and protons was verified directly in the data using $K^{0}_{S}$ and $\Lambda$ decays. For the $D^{+}\to K^{0}_{S}\pi^{+}$ decay channel, the effect of the $dE/dx$ cut on the pion candidate tracks was very small and the result changed only marginally when the cut was released. The average cross sections obtained from the two different running periods ($\sqrt{s}=300$ and $318{\,\text{Ge}\eVdist\text{V\/}}$) are expressed in terms of cross sections at $\sqrt{s}=318{\,\text{Ge}\eVdist\text{V\/}}$. This involves a typical correction of $+1\%$ determined using HVQDIS. \section{Results} \label{sec-results} Charmed-hadron cross sections were measured using the reconstructed $D^{+}$ and $\Lambda_{c}^{+}$ signals (see Section~\ref{sec-charm-reco}) in the kinematic range $0 < p_{T}(D^{+}, \Lambda_{c}^{+}) < 10{\,\text{Ge}\eVdist\text{V\/}}$, $|\eta(D^{+},\Lambda_{c}^{+})| < 1.6$, $1.5 < Q^{2} < 1000{\,\text{Ge}\eVdist\text{V\/}}^{2}$ and $0.02 < y < 0.7$. In addition to the statistical and systematic uncertainties, a third set of uncertainties is quoted for the measured cross sections and charm fragmentation fractions, due to the propagation of the relevant branching-ratio uncertainties (Table~\ref{tab-branching-ratios}). \boldmath \subsection{$D^{+}$ cross sections} \unboldmath The following total visible cross section for $D^{+}$ mesons was measured: \begin{equation} \sigma(D^{+}) = 25.7 \pm 4.1 ~(\rm stat.) ~^{+3.8}_{-5.2} ~(\rm syst.) \pm 0.8 ~(\rm br.) \rm ~nb. \nonumber \end{equation} The corresponding prediction from HVQDIS is $\sigma(D^{+}) = 12.7 ~^{+3.8}_{-4.1} \rm ~nb$. The measured and predicted cross sections agree to within two standard deviations. 
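As a back-of-envelope check of this level of agreement, the quoted uncertainties can be added in quadrature (an independence assumption made here for illustration; the downward experimental and upward theoretical variations are taken, since the measurement lies above the prediction):

```python
import math

# sigma(D+) in nb: measured value and HVQDIS prediction quoted in the text
measured, predicted = 25.7, 12.7
# stat., syst. (downward), br., theory (upward) uncertainties, in nb
unc = [4.1, 5.2, 0.8, 3.8]
n_sigma = (measured - predicted) / math.sqrt(sum(u * u for u in unc))
```

The resulting significance is below two, consistent with the statement in the text.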
To allow a direct comparison to a recent measurement of $D^{+}$ production by the ZEUS collaboration using a lifetime tag~\cite{Chekanov:2008yd}, the cross section was extracted for the kinematic region defined by $1.5 < p_{T}(D^{+}) < 15{\,\text{Ge}\eVdist\text{V\/}}$, $|\eta(D^{+})| < 1.6$, $5.0 < Q^{2} < 1000{\,\text{Ge}\eVdist\text{V\/}}^{2}$ and $0.02 < y < 0.7$. The measurements using different decay channels and different techniques were found to be consistent. The differential cross sections as functions of $p_{T}^{2}(D^{+})$, $\eta(D^{+})$, $x$ and $Q^{2}$ are shown in Fig.~\ref{fig:xsections_dpm} and given in Table~\ref{tab:dplus_xsections_pt2_eta_q2_x}. The cross sections in $Q^{2}$ and $x$ fall by about three orders of magnitude, while the cross section in $p_{T}^{2}(D^{+})$ falls by about two orders of magnitude in the measured region. There is no significant dependence of the cross section on $\eta(D^{+})$. The HVQDIS predictions describe the shape of all measured differential cross sections reasonably well. The differential cross section in $p_{T}^{2}(D^{+})$ is compared to a previous ZEUS result~\cite{Chekanov:2007ch} for $p_{T}^{2}(D^{+}) > 9{\,\text{Ge}\eVdist\text{V\/}}^{2}$. The two measurements are in good agreement. \boldmath \subsection{$\Lambda_{c}^{+}$ cross sections and fragmentation fractions} \unboldmath The following $\Lambda_{c}^{+}$ cross sections were measured: \begin{itemize} \item{using the decay channel $\Lambda_{c}^{+}\to pK^{0}_{S}$ in the restricted range $0 < p_{T}(\Lambda_{c}^{+}) < 6{\,\text{Ge}\eVdist\text{V\/}}$:} \begin{equation} \sigma(\Lambda_{c}^{+}) = 14.9 \pm 4.9 ~(\rm stat.) ~^{+2.2}_{-2.6} ~(\rm syst.) \pm 3.9 ~(\rm br.) \rm ~nb; \nonumber \end{equation} \item{using the decay channel $\Lambda_{c}^{+}\to \Lambda\pi^{+}$:} \begin{equation} \sigma(\Lambda_{c}^{+}) = 14.0 \pm 5.8 ~(\rm stat.) ~^{+3.8}_{-3.3} ~(\rm syst.) \pm 3.7 ~(\rm br.) \rm ~nb. 
\nonumber \end{equation} \end{itemize} To compare and combine both measurements, the value obtained for the decay channel $\Lambda_{c}^{+}\to pK^{0}_{S}$ was multiplied by $1.01 \pm 0.01$ to extrapolate to the full kinematic region considered in this paper. The cross sections obtained using different decay channels are in good agreement. To extract the $\Lambda_{c}^{+}$ fragmentation fraction, the measurements were combined taking into account all systematic uncertainties and their correlations: \begin{equation} \sigma_{\rm combined}(\Lambda_{c}^{+}) = 14.7 \pm 3.8 ~(\rm stat.) ~^{+2.1}_{-2.2} ~(\rm syst.) \pm 3.9 ~(\rm br.) \rm ~nb. \nonumber \end{equation} The uncertainty of the branching ratio was treated as partially correlated since both branching ratios, $\mathcal{B}(\Lambda_{c}^{+}\to pK^{0}_{S})$ and $\mathcal{B}(\Lambda_{c}^{+}\to \Lambda\pi^{+})$, were measured relative to the decay mode $\Lambda_{c}^{+}\to pK^{-}\pi^{+}$~\cite{Amsler:2008zzb}. The fragmentation fraction $f(c\to \Lambda_{c}^{+})$ can be calculated using the $D^{+}$ cross section: \begin{equation} f(c\to \Lambda_{c}^{+}) = \frac{\sigma(\Lambda_{c}^{+})}{\sigma(D^{+})} \cdot f(c\to D^{+}). \label{f_lambdac} \end{equation} In a previous ZEUS publication~\cite{Chekanov:2007ch} $f(c\to D^{+})$ was defined as: \begin{equation} f(c\to D^{+}) = \frac{\sigma^{0}(D^{+})}{\sigma^{0}(D^{+}) + \sigma^{0}(D^{0}) + \sigma^{0}(D_{s}^{+})} \cdot \left[ 1 - 1.14 \cdot f(c\to \Lambda_{c}^{+}) \right], \label{f_dplus} \end{equation} where $\sigma^{0}(D^{+})$, $\sigma^{0}(D^{0})$ and $\sigma^{0}(D_{s}^{+})$ are the cross sections for $p_{T}(D) > 3{\,\text{Ge}\eVdist\text{V\/}}$. The factor $1.14$ takes into account the production of charm-strange baryons~\cite{Chekanov:2007ch}. For $D^{+}$ and $D^{0}$ mesons the equivalent cross sections (as described elsewhere~\cite{Chekanov:2005mm}) were used. 
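Equations~(\ref{f_lambdac}) and~(\ref{f_dplus}) determine $f(c\to \Lambda_{c}^{+})$ only implicitly, since each quantity enters the other relation. As an illustrative numerical sketch (the visible cross sections are the measured values quoted above, but the $\sigma^{0}$ inputs below are hypothetical placeholders, not the values of the earlier publication):

```python
def f_lambda_c(sig_lc, sig_dp, s0_dp, s0_d0, s0_ds):
    """Joint solution of Eqs. (f_lambdac) and (f_dplus) for f(c -> Lambda_c+)."""
    return (sig_lc * s0_dp) / (sig_dp * (s0_dp + s0_d0 + s0_ds)
                               + 1.14 * sig_lc * s0_dp)

# sig_lc, sig_dp: measured visible cross sections (nb) from this analysis;
# s0_*: hypothetical sigma^0 (p_T > 3 GeV) cross sections, for illustration only.
f = f_lambda_c(sig_lc=14.7, sig_dp=25.7, s0_dp=2.2, s0_d0=5.3, s0_ds=1.2)
```

The returned value satisfies both relations simultaneously; substituting it back into Eq.~(\ref{f_dplus}) reproduces the corresponding $f(c\to D^{+})$.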
Combining Eqs.~(\ref{f_lambdac}) and~(\ref{f_dplus}) yields: \begin{equation} f(c\to \Lambda_{c}^{+}) = \frac{\sigma(\Lambda_{c}^{+}) \cdot \sigma^{0}(D^{+})}{\sigma(D^{+}) \cdot (\sigma^{0}(D^{+}) + \sigma^{0}(D^{0}) + \sigma^{0}(D_{s}^{+})) + 1.14 \; \sigma(\Lambda_{c}^{+}) \cdot \sigma^{0}(D^{+})} \nonumber \end{equation} Since the cross sections $\sigma(D^{+})$ and $\sigma(\Lambda_{c}^{+})$ were measured down to $p_{T}(D^{+},\Lambda_{c}^{+}) = 0{\,\text{Ge}\eVdist\text{V\/}}$, no treatment of the different transverse momentum distributions for $D^{+}$ and $\Lambda_{c}^{+}$ hadrons was necessary. The measured value: \begin{equation} f(c\to \Lambda_{c}^{+}) = 0.117 \pm 0.033 ~(\rm stat.) ~^{+0.026}_{-0.022} ~(\rm syst.) \pm 0.027 ~(\rm br.), \nonumber \end{equation} is compared to previous measurements in Table~\ref{tab:fragmentation_fraction}. The result is consistent with a previous ZEUS measurement in the photoproduction regime~\cite{Chekanov:2005mm} and with the $e^{+}e^{-}$ average value. \section{Conclusions} \label{sec-concl} Open-charm production in $ep$ collisions at HERA has been measured in deep inelastic scattering using three decay channels. The presence of a neutral strange hadron in the final state allowed the measurement to be extended to very low transverse momenta of the reconstructed charmed hadrons. The total visible and differential cross sections for $D^{+}$ production are in reasonable agreement with NLO QCD predictions. The measured $D^{+}$ cross sections are consistent with previous ZEUS results. The fragmentation fraction $f(c\to \Lambda_{c}^{+})$ has been measured for the first time at HERA in deep inelastic scattering. The result obtained from a combination of two decay channels is consistent with a previous measurement performed in the photoproduction regime and with the average $e^{+}e^{-}$ value. 
\section{Acknowledgements} \label{sec-acknow} We appreciate the contributions to the construction and maintenance of the ZEUS detector of many people who are not listed as authors. The HERA machine group and the DESY computing staff are especially acknowledged for their success in providing excellent operation of the collider and the data-analysis environment. We thank the DESY directorate for their strong support and encouragement. \vfill\eject
\section{Introduction} \label{sect:Intro} In many biological systems the small number of participating molecules makes the chemical reactions inherently stochastic. The system state is described by probability densities of the numbers of molecules of the different species. The evolution of these probabilities in time is described by the chemical master equation (CME) \cite{Gillespie_1977}. Gillespie proposed the Stochastic Simulation Algorithm (SSA), a Monte Carlo approach that samples from the CME \cite{Gillespie_1977}. SSA became the standard method for solving well-stirred chemically reacting systems. However, SSA simulates one reaction at a time and is inefficient for most realistic problems. This motivated the quest for approximate sampling techniques with enhanced efficiency. The first approximate acceleration technique is the tau-leaping method \cite{Gillespie_2001}, which simulates multiple chemical reaction firings within a pre-selected time step of length $\tau$. The tau-leap method is accurate if $\tau$ is small enough to satisfy the leap condition, meaning that the propensity functions remain nearly constant during a time step. The number of reactions firing in a time step is approximated by a Poisson random variable \cite{Kurz_1972_SSA}. The explicit tau-leap method is numerically unstable for stiff systems \cite{Cao_2004_stability}. Stiff systems have well-separated ``fast'' and ``slow'' time scales, and the ``fast'' modes are stable. The implicit tau-leap method \cite{Rathinam_2003} overcomes the stability issue, but it has a damping effect on the computed variances. More accurate variations of the implicit tau-leap method have been proposed to alleviate this damping \cite{Gillespie_2003,Gillespie_2001,Sandu_2013_SSA,Cao_2005,Cao_2004,Rathinam_2005}. Simulation efficiency has also been increased via parallelization \cite{Sandu_2012_parallel}. 
Direct solutions of the CME are computationally important, especially for estimating moments of the distributions of the chemical species \cite{Burrage_2006_multiscale}. Various approaches to solve the CME are discussed in \cite{Engblom_2006_thesis}. Sandu has explained the explicit tau-leap method as an exact sampling procedure from an approximate solution of the CME \cite{Sandu_2013_CME}. This paper extends that study and proposes new approximations to the CME solution based on various approximations of matrix exponentials. Accelerated stochastic simulation algorithms are then built by performing exact sampling of these approximate probability densities. The paper is organized as follows. Section \ref{sect:StochasticChem} reviews the stochastic simulation of chemical kinetics. Section \ref{sect:ApproxExponential} develops the new approximation methods. Numerical experiments to illustrate the proposed schemes are carried out in Section \ref{sect:NumericalExperim}. Conclusions are drawn in Section \ref{sect:Conclusion}. \section{Simulation of stochastic chemical kinetics} \label{sect:StochasticChem} Consider a chemical system in a constant volume container. The system is well-stirred and in thermal equilibrium at some constant temperature. There are $N$ different chemical species $S^1,\, \ldots\,,S^N$. Let $X^i(t)$ denote the number of molecules of species $S^i$ at time $t$. The state vector $x(t)=[X^1(t),\, \ldots\,,X^N(t)]$ defines the numbers of molecules of each species present at time $t$. The chemical network consists of $M$ reaction channels $R_1,\, \ldots\,,R_M$. Each individual reaction destroys a number of molecules of reactant species, and produces a number of molecules of the products. Let $\nu_j^i$ be the change in the number of $S^i$ molecules caused by a single reaction $R_j$. The state change vector $\nu_j = [\nu_j^{1}, \ldots , \nu_j^{N}]$ describes the change in the entire state following $R_j$. 
A propensity function $a_j(x)$ is associated with each reaction channel $R_j$. The probability that one $R_j$ reaction will occur in the next infinitesimal time interval $[t, t+dt)$ is $a_j(x(t))\cdot dt$. The purpose of a stochastic chemical simulation is to trace the time evolution of the system state $x(t)$ given that at the initial time $\bar{t}$ the system is in the initial state $x\left(\bar{t}\right)$. \subsection{Chemical Master Equation} \label{sect:ChemicalMaster} The Chemical Master Equation (CME) \cite{Gillespie_1977} completely characterizes the time evolution of the probability of the system's state \begin{equation}\label{eq_CME} \frac {\partial\mathcal{P}\left(x,t\right)}{\partial t}=\sum_{r=1}^M a_{r}\left(x-v_{r}\right)% \mathcal{P}\left(x-v_{r},t\right)-a_0\left(x\right)\mathcal{P}\left(x,t\right)\,. \end{equation} Let $Q^i$ be the total possible number of molecules of species $S^i$. The total number of all possible states of the system is: \[ \label{eq_Q} Q=\prod_{i=1}^{N}\left(Q^i+1\right). \] We denote by $\mathcal{I}(x)$ the state-space index of state $x=[X^1,\, \ldots\,,X^N]$ \[ \begin{array}{l} \mathcal{I}(x) = \left(Q^{N-1}+1\right)\cdots \left(Q^1+1\right)\cdot X^N+\cdots \\ +\left(Q^2+1\right)\left(Q^1+1\right)\cdot X^3+ \left(Q^1+1\right)\cdot X^2+X^1+1 \end{array}% \] One firing of reaction $R_{r}$ brings the system into state $x$ from the state $\bar {x}=x-v_{r}$. The corresponding change in state space index is: \[ \label{eq_d} \begin{array}{l} \mathcal{I}(x)-\mathcal{I}\left(x-v_{r}\right)=d_{r}, \\ d_{r}=\left(Q^{N-1}+1\right)\cdots\left(Q^1+1\right)\cdot v_{r}^N+\cdots\\ \qquad +\left(Q^2+1\right)\left(Q^1+1\right)\cdot v_{r}^3+\left(Q^1+1\right)\cdot v_{r}^2+v_{r}^1. \end{array} \] The discrete solutions of the CME \eqref{eq_CME} are vectors in the discrete state space, $\mathcal{P}\left(t\right) \in \mathbb{R}^{Q}$. 
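As a concrete illustration (our own sketch, not part of the formulation above), the index $\mathcal{I}(x)$ and the offset $d_{r}$ are mixed-radix computations, with $X^1$ as the least-significant digit:

```python
def state_index(x, Q):
    """One-based index I(x), with x = [X^1, ..., X^N] and Q = [Q^1, ..., Q^N];
    X^1 is the least-significant mixed-radix digit."""
    idx, stride = 1, 1
    for xi, qi in zip(x, Q):
        idx += stride * xi
        stride *= qi + 1
    return idx

def index_shift(v, Q):
    """Offset d_r = I(x) - I(x - v_r); independent of x (Toeplitz structure)."""
    idx, stride = 0, 1
    for vi, qi in zip(v, Q):
        idx += stride * vi
        stride *= qi + 1
    return idx
```

Because the map is affine in $x$, the offset $d_{r}$ depends only on $v_{r}$, which is what gives the matrices $A_{r}$ below their Toeplitz structure.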
Consider the diagonal matrix $A_{0}\in \mathbb{R}^{Q \times Q} $ and the Toeplitz matrices $A_{1},\cdots,A_{M}\in \mathbb{R}^{Q \times Q} $ \cite{Sandu_2013_CME} \[ \label{eq_sumofexponentexact} ({A_{0}})_{i,j}=\left\{ \begin{array}{rl} -a_{0}\left(x_j\right) & \mbox{if $i=j$} ,\\ 0 & \mbox{if $i \not =j$} , \end{array} \right. \,, \quad ({A_{r}})_{i,j}=\left\{ \begin{array}{rl} a_{r}(x_j) & \mbox{if $i-j=d_{r}$}, \\ 0 & \mbox{if $i-j \not =d_{r}$}, \end{array} \right. \] as well as their sum $A \in \mathbb{R}^{Q \times Q}$ with entries \begin{equation} \label{eq_exact} A = A_{0} + A_{1} + \dots + A_{M}\,, \quad A_{i,j}=\left\{ \begin{array}{rl} -a_{0}(x_j) & \mbox{if }i=j\,, \\ a_{r}(x_j) & \mbox{if }i-j=d_{r},~ r=1,\cdots,M\,, \\ 0 & \mbox{otherwise} \,, \end{array} \right. \end{equation} where $x_j$ denotes the unique state with state space index $j=\mathcal{I}(x_j)$. The matrix $A$ is a square $Q \times Q$ matrix that collects the propensity values for every possible state of the reaction system, i.e., for all combinations of molecule numbers $0 \le X^i \le Q^i$, $i=1,\dots,N$. The CME \eqref{eq_CME} is a linear ODE on the discrete state space \begin{equation} \label{eq_cme_mat} \mathcal{P}' = A \cdot \mathcal{P}\,, \quad \mathcal{P}(\bar{t}) = \delta_{\mathcal{I}(\bar{x})}\,, \quad t \ge \bar{t}\,, \end{equation} where the system is initially in the known state $x(\bar{t})=\bar{x}$ and therefore the initial probability distribution vector $\mathcal{P}(\bar{t}) \in \mathbb{R}^{Q}$ is equal to one at $\mathcal{I}(\bar{x})$ and is zero everywhere else. The exact solution of the linear ODE \eqref{eq_cme_mat} is as follows: \begin{equation} \label{eq_exact2} \mathcal{P}\left(\bar{t}+T\right)=\exp\left(T\, A\right)\cdot \mathcal{P}\left(\bar{t}\right) = \exp\left(T\, \sum_{r=0}^M A_r\right)\cdot \mathcal{P}\left(\bar{t}\right)\,. 
\end{equation} \subsection{Approximation to Chemical Master Equation} \label{sect:ApproxChemicalMaster} Although the CME \eqref{eq_CME} fully describes the evolution of probabilities, it is difficult to solve in practice due to the large state space. Sandu \cite{Sandu_2013_CME} considers the following approximation of the CME: \begin{equation} \label{eq_approx_CME} \frac {\partial\mathcal{P}\left(x,t\right)}{\partial t}=\sum_{r=1}^M a_{r}\left(\bar{x}\right)% \mathcal{P}\left(x-v_{r},t\right)- a_0\left(\bar{x}\right)\mathcal{P}\left(x,t\right) \end{equation} where the arguments of all propensity functions have been changed from $x$ or $x-v_{j} $ to $\bar{x} $. In order to obtain an exponential solution to \eqref{eq_approx_CME} in probability space we consider the diagonal matrix $\bar{A_{0}}\in \mathbb{R}^{Q \times Q} $ and the Toeplitz matrices $\bar{A_{1}},...,\bar{A_{M}}\in \mathbb{R}^{Q \times Q} $ \cite{Sandu_2013_CME}. The matrices $\bar{A_{r}}$ are square $Q \times Q$ matrices built using only the propensities evaluated at the current state $\bar{x}$ of the reaction system, in contrast with the matrices $A_{r}$, whose entries cover all possible states of the reaction system. \begin{equation} \label{eq_sumofexponent} (\bar{A_{0}})_{i,j}=\left\{ \begin{array}{rl} -a_{0}\left(\bar{x}\right) & \mbox{if $i=j$} ,\\ 0 & \mbox{if $i \not =j$} , \end{array} \right. \,, \quad (\bar{A_{r}})_{i,j}=\left\{ \begin{array}{rl} a_{r}(\bar{x}) & \mbox{if $i-j=d_{r}$}, \\ 0 & \mbox{if $i-j \not =d_{r}$}, \end{array} \right. \end{equation} together with their sum $\bar{A} = \bar{A_{0}} + \dots + \bar{A_{M}}$. 
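For systems small enough to enumerate, the exact solution \eqref{eq_exact2} can be evaluated directly. The following sketch is our own illustration: it builds the dense matrix $A$ of \eqref{eq_exact} with zero-based indexing, and a sub-stepped truncated Taylor series stands in for a production matrix-exponential routine such as \texttt{scipy.linalg.expm}.

```python
import numpy as np

def build_generator(Q, nu, props):
    """Dense CME matrix A of eq. (eq_exact); column j holds the propensities
    of state x_j. Zero-based indexing (the paper's index is this plus one)."""
    dims = [q + 1 for q in Q]
    n = int(np.prod(dims))
    strides = np.cumprod([1] + dims[:-1])
    A = np.zeros((n, n))
    for j in range(n):
        x = [int(j // s) % d for s, d in zip(strides, dims)]  # decode state x_j
        for v, a_r in zip(nu, props):
            a = a_r(x)
            A[j, j] -= a                                      # -a_0(x_j) on diagonal
            xt = [xi + vi for xi, vi in zip(x, v)]            # post-firing state
            if all(0 <= xi < d for xi, d in zip(xt, dims)):
                i = sum(s * xi for s, xi in zip(strides, xt))
                A[i, j] += a                                  # a_r(x_j) at offset d_r
    return A

def expm_times(A, p, t, n_sub=200, n_terms=25):
    """P(t) = exp(t A) p via a sub-stepped truncated Taylor series."""
    h = t / n_sub
    for _ in range(n_sub):
        term, acc = p.copy(), p.copy()
        for k in range(1, n_terms):
            term = h * (A @ term) / k
            acc = acc + term
        p = acc
    return p
```

Note that transitions leaving the truncated box $0 \le X^i \le Q^i$ leak probability, the standard artifact of state-space truncation; for the examples used later the conserved quantities keep all reachable states inside the box.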
The approximate CME \eqref{eq_approx_CME} can be written as the linear ODE \[ \label{eq_sumofexponent2} \mathcal{P}' = \bar{A} \cdot \mathcal{P}\,, \quad \mathcal{P}(\bar{t}) = \delta_{\mathcal{I}(\bar{x})}\,, \quad t \ge \bar{t}\,, \] and has an exact solution \begin{equation} \label{eq_sumofexponent3} \mathcal{P}\left(\bar{t}+T\right)=\exp\left(T\, \bar{A}\right)\cdot \mathcal{P}\left(\bar{t}\right) = \exp\left(T\, \sum_{r=0}^M \bar{A}_r\right)\cdot \mathcal{P}\left(\bar{t}\right)\,. \end{equation} \subsection{Tau-leaping method} \label{sect:TauLeap} In the tau-leap method the number of times reaction $R_{r}$ fires in a step is a random variable from a Poisson distribution with parameter $a_{r}\left( \bar{x}\right)\tau$. Since each reaction fires independently, the probability that each reaction $ R_{r} $ fires exactly $k_{r} $ times, $r=1, 2,\cdots, M $, is the product of $ M $ Poisson probabilities. \[ \mathcal{P}\left(K_{1}=k_{1},\cdots,K_{M}=k_{M}\right)= \prod_{r=1}^{M}e^{-a_{r}\left(\bar{x} \right)\tau}\cdot\frac{\left(a_{r}\left(\bar{x}\right)\tau\right)^{k_{r}}}{k_{r}!}= e^{-a_{0}\left(\bar{x} \right)\tau}\cdot \prod_{r=1}^{M}\frac{\left(a_{r}\left(\bar{x}\right)\tau\right)^{k_{r}}}{k_{r}!} \] Then the state vector after these reactions will change as follows: \begin{equation} \label{eq_tauleap} X\left(\bar{t}+\tau\right)=\bar{x}+\sum_{r=1}^{M}K_{r}v_{r} \end{equation} The probability to go from state $\bar{x}$ at time $\bar{t}$ to state $x$ at time $ \bar{t}+\tau$, $\mathcal{P}\left(X\left(\bar{t}+\tau\right)=x\right)$, is the sum over all combinations of reaction firings that realize this change of state: \[ \label{eq_tauleap1} \mathcal{P}\left(x,\bar{t}+\tau\right)=e^{-a_{0}\left(\bar{x}\right)\tau}\cdot\sum_{k \in \mathcal{K}\left(x - \bar{x}\right)} ~ \prod_{r=1}^{M}\frac{\left(a_{r}\left(\bar{x}\right)\tau\right)^{k_{r}}}{k_{r}!} \] where $\mathcal{K}\left(x-\bar{x}\right)$ is the set of firing count vectors $k=[k_{1},\cdots,k_{M}]$ with $\sum_{r=1}^{M}k_{r}v_{r}=x-\bar{x}$. Equation \eqref{eq_sumofexponent3} can be approximated by the product of the individual matrix exponentials: \begin{equation} \label{eq__productofexponents} 
\mathcal{P}\left(\bar{t}+T\right)=\exp\left(T\bar{A_{0}}\right)\cdot \exp\left(T\bar{A_{1}} \right)\cdots \exp\left(T\bar{A_{M}}\right) \cdot \mathcal{P}\left(\bar{t}\right). \end{equation} It has been shown in \cite{Sandu_2013_CME} that the probability given by the tau-leaping method is exactly the probability evolved by the approximate solution \eqref{eq__productofexponents}. \section{Approximations to the exponential solution} \label{sect:ApproxExponential} \subsection{Strang splitting} \label{sect:StrangSplitting} In order to improve the approximation of the matrix exponential in \eqref{eq__productofexponents} we consider the symmetric Strang splitting \cite{Strang_1968}. For $T=n\tau$, Strang splitting applied to each interval of length $\tau$ leads to the approximation \begin{equation} \label{eq_strang} \mathcal{P}\left(\bar{t}+i \tau\right)=e^{\tau/2\, \bar{A}_{M}}\cdots e^{\tau/2\, \bar{A}_{1}% }\, e^{\tau\, \bar{A}_{0}} \cdot e^{\tau/2\, \bar{A}_{1}} \cdots e^{\tau/2\, \bar{A}_{M}}\cdot \mathcal{P}\left(\bar{t} + (i-1)\tau\right) \end{equation} where the matrices $\bar{A_{r}}$ are defined in \eqref{eq_sumofexponent}. \subsection{Column based splitting} \label{sect:ColumnBased} In column based splitting the matrix $A$ \eqref{eq_exact} is decomposed into a sum of columns \[ \label{eq_colbased} A=\sum_{j=1}^Q A_{j}\,, \quad A_{j}=c_{j}e_{j}^T\,. \] Each matrix $A_{j}$ has the same $j$-th column as the matrix $A$, and is zero everywhere else. Here $c_{j}$ is the $j$-th column of the matrix $A$ and $e_{j}$ is the canonical basis vector, which is zero everywhere except for a one in the $j$-th component. The exponential of $\tau A_{j}$ is: \begin{equation} \label{eq_colbased2} e^{\tau A_{j}}=\sum_{k \ge 0} \frac {\tau^k \left(A_{j}\right)^k}{k!}\,. 
\end{equation} Since $e_{j}^Tc_{j}$ is equal to the $j$-th diagonal entry of matrix A: \[ \label{eq_colbased4} e_{j}^T\,c_{j}=-a_{0}\left(x_{j}\right) \] the matrix power $A_{j}^k$ reads \[ \label{eq_colbased3} A_{j}^k=c_{j}e_{j}^T \, c_{j}e_{j}^T\, \cdots \, c_{j}e_{j}^T = \left(-a_{0}\left(x_{j}\right)\right)^{k-1} c_{j}e_{j}^T = \left(-a_{0}\left(x_{j}\right)\right)^{k-1}A_{j}\,. \] Consequently the matrix exponential \eqref{eq_colbased2} becomes \[ \label{eq_colbased5} e^{\tau A_{j}}=I+ \sum_{k \geq 1} \frac {\left(-\tau a_{0}\left(x_{j}\right)\right)^{k-1}}{k!}\left(\tau A_{j}\right) = I+ S_{j}\, \tau A_{j}\,, \quad S_{j}=\sum_{k \geq 1} \frac {\left(-\tau a_{0}\left(x_{j}\right)\right)^{k-1}}{k!}\,. \] We have \[ \label{eq_colbased7} e^{\tau A}=e^{\tau \sum_{j=1}^Q A_{j}}\approx \prod_{j=1}^Q e^{\tau A_{j}} \approx\prod_{j=1}^Q \left(I+S_{j}\tau A_{j}\right) \] and the approximation to the CME solution reads \[ \mathcal{P}\left(\bar{t}+i \tau\right)\approx \prod_{j=1}^Q \left(I+S_{j}\tau A_{j}\right)\cdot P\left(\bar{t} + (i-1)\tau\right)\,. \] \subsection{Accelerated tau-leaping} \label{sect:AccelTauleap} In this approximation method we build the matrices \[ (B_{r})_{i,j}=\left\{ \begin{array}{rl} -a_{r}(x_{j}) & \mbox{if $i=j$}, \\ a_{r}(x_{j}) & \mbox{if $i-j =d_{r}$}, \\ 0 & \textnormal{otherwise} \end{array} \right. \] where $a_{r}(x)$ are the propensity functions. The matrix $A$ in \eqref{eq_exact} can be written as \[ A=\sum_{r=1}^M B_{r}\,. \] The solution of the linear CME \eqref{eq_exact2} can be approximated by \begin{equation} \label{eq_accelerated1} \mathcal{P}\left(\bar{t}+\tau\right)=e^{\tau A}\cdot \mathcal{P}\left(\bar{t}\right) \approx e^{\tau B_{1}} e^{\tau B_{2}} \cdots e^{\tau B_{M}} \cdot P\left(\bar{t}\right)\,. \end{equation} Note that the evolution of state probability by $e^{\tau B_{j}}\cdot P\left(\bar{t}\right)$ describes the change in probability when only reaction $j$ fires in the time interval $\tau$. 
The corresponding evolution of the number of molecules that samples the evolved probability is \[ x\left(\bar{t}+\tau\right)=x\left(\bar{t}\right)+V_{j}\, K\left(a_j\left(x\left(\bar{t}\right)\right) \tau\right) \] where $K\left(a_j\left(x\left(\bar{t}\right)\right) \tau\right)$ is a random number drawn from a Poisson distribution with parameter $a_j\left(x\left(\bar{t}\right)\right) \tau$, and $V_{j}$ is the $j$-th column of the stoichiometry matrix. The approximate solution \eqref{eq_accelerated1} accounts for the change in probability due to a sequential firing of reactions $M$, $M-1$, down to $1$. Sampling from the resulting probability density can be done by changing the system state sequentially, consistent with the firing of each reaction. This results in the following accelerated tau-leaping algorithm: \begin{equation}\label{eq_accelerated2} \begin{array}{l} \hat{X}_{M}= x\left(\bar{t}\right) \\ \textnormal{for } i=M,M-1,\cdots,1 \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{i}\right)\tau\right) \\ x(\bar{t}+\tau)=\hat{X}_{0}. 
\end{array} \end{equation} Moreover, \eqref{eq_accelerated1} can also be written as: \begin{eqnarray} \label{eq_accelerated3} \mathcal{P}\left(\bar{t}+\tau\right) &\approx& e^{\tau B_{1}} e^{\tau B_{2}} \cdots e^{\tau B_{M}} \cdot \mathcal{P}\left(\bar{t}\right) \nonumber \\ &\approx& \left( e^{\tau B_{1}} e^{\tau B_{2}} \cdots e^{\tau B_{\frac{M}{2}-1}} \right) \cdot \left(e^{\tau B_{\frac{M}{2}}} e^{\tau B_{\frac{M}{2}+1}} \cdots e^{\tau B_{M}} \cdot \mathcal{P}\left(\bar{t}\right) \right). \end{eqnarray} Then, \eqref{eq_accelerated2} can be written as: \begin{equation} \label{eq_accelerated4} \begin{array}{l} \hat{X}_{M}= x\left(\bar{t}\right) \\ \textnormal{for } i=M,M-1,\cdots,\frac{M}{2} \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{M}\right)\tau\right) \\ \textnormal{for } i=\frac{M}{2}-1,\cdots,1 \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{\frac{M}{2}-1}\right)\tau\right) \\ x(\bar{t}+\tau)=\hat{X}_{0}. \end{array} \end{equation} \subsection{Symmetric accelerated tau-leaping} \label{sect:SymmetricTauleap} A more accurate version of accelerated tau-leaping can be constructed by using the symmetric Strang splitting \eqref{eq_strang} to approximate the matrix exponential in \eqref{eq_accelerated1}. Following the procedure used to derive \eqref{eq_accelerated2} leads to the following sampling algorithm: \begin{equation}\label{eq_symmetric} \begin{array}{l} \hat{X}_{M}= x\left(\bar{t}\right) \\ \textnormal{for } i=M,M-1,\cdots,1 \\ \qquad \hat{X}_{i-1}=\hat{X}_{i}+V_{i}\, K\left(a_{i}\left(\hat{X}_{i}\right)\tau/2\right) \\ \textnormal{for } i=1,2,\cdots,M \\ \qquad \hat{X}_{i}=\hat{X}_{i-1}+V_{i}\, K\left(a_{i}\left(\hat{X}_{i-1}\right)\tau/2\right) \\ x(\bar{t}+\tau)=\hat{X}_{M}. \end{array} \end{equation} \section{Numerical experiments} \label{sect:NumericalExperim} The above approximation techniques are used to solve two test systems, the reversible isomer and the Schlogl reactions \cite{Cao_2004_stability}. 
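The accelerated and symmetric accelerated tau-leap steps above can be sketched as follows (our own illustration; zero-based reaction indices, with $V$ the stoichiometry matrix whose columns are the $v_{r}$):

```python
import numpy as np

rng = np.random.default_rng(0)

def accelerated_tau_leap(x, V, props, tau):
    """One step of eq. (eq_accelerated2): reactions fire sequentially from
    R_M down to R_1, each Poisson draw seeing the already-updated state."""
    x = np.asarray(x).copy()
    for i in reversed(range(len(props))):           # i = M-1, ..., 0 (zero-based)
        x = x + V[:, i] * rng.poisson(props[i](x) * tau)
    return x

def symmetric_tau_leap(x, V, props, tau):
    """One step of eq. (eq_symmetric): a half-step sweep R_M..R_1 followed
    by a half-step sweep R_1..R_M."""
    x = np.asarray(x).copy()
    for i in reversed(range(len(props))):
        x = x + V[:, i] * rng.poisson(props[i](x) * tau / 2)
    for i in range(len(props)):
        x = x + V[:, i] * rng.poisson(props[i](x) * tau / 2)
    return x
```

Since each update adds an integer multiple of a stoichiometry column, any conservation law of the reaction network (e.g. the total molecule count in the reversible isomerization) is preserved exactly by both samplers.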
The experimental results are presented in the following sections. \subsection{Isomer reaction} \label{sect:Isomer} The reversible isomer reaction system is \cite{Cao_2004_stability} \begin{equation} \label{eqn:isomer} \ce{ x_1 <=>[\ce{c_1}][\ce{c_2}] x_2. } \end{equation} The stoichiometry matrix and the propensity functions are: \[ V= \left[\begin{array}{rr} -1 & 1 \\ 1 & -1 \end{array}\right]\,, \qquad \begin{array}{l} a_{1}(x)= c_{1}x_{1} \,, ~~\\ a_{2}(x) = c_{2}x_{2} \,. \end{array} \] The reaction rate values are $ c_{1}=10$, $c_{2}=10$ (units), the time interval is $[0,T]$ with $T=10$ (time units), the initial conditions are $x_{1}(0)=40$, $x_{2}(0)=40$ molecules, and the maximum values of the species are $Q^1=80$ and $Q^2=80$ molecules. The exact exponential solution of the CME obtained from \eqref{eq_exact2} is a joint probability distribution vector for the two species at the final time. Figure \ref{fig:exact_isomer} shows that the histogram of 10,000 SSA solutions is very close to the exact exponential solution. The approximate solution using the sum of exponentials \eqref{eq_sumofexponent3} is illustrated in Figure \ref{fig:approx_isomer}. This approximation is not very accurate since it uses only the current state of the system. The other approximations, based on the product of exponentials \eqref{eq__productofexponents} and on Strang splitting \eqref{eq_strang}, also do not reproduce the exact solution well; hence their results are not reported. 
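For reference, the SSA baseline used in these comparisons is Gillespie's direct method, which can be sketched as follows (our own illustration):

```python
import numpy as np

def ssa(x0, V, props, T, rng):
    """Gillespie's direct method: exact trajectory sampling of the CME.
    Returns the state at time T."""
    x, t = np.asarray(x0).copy(), 0.0
    while True:
        a = np.array([p(x) for p in props], dtype=float)
        a0 = a.sum()
        if a0 == 0.0:
            return x                           # no reaction can fire any more
        t += rng.exponential(1.0 / a0)         # waiting time to the next firing
        if t > T:
            return x
        r = rng.choice(len(props), p=a / a0)   # select the firing channel
        x = x + V[:, r]
```

Each call samples one exact trajectory; histograms such as those in Figure \ref{fig:exact_isomer} are built from many independent calls.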
\begin{figure}[tb] \begin{centering} \subfigure[10,000 SSA runs versus the exact solution \eqref{eq_exact2}]{ \includegraphics[width=0.47\textwidth,height=0.3\textwidth]{isomer_exact.jpg} \label{fig:exact_isomer} } \subfigure[Exact solution \eqref{eq_exact2} versus the approximation to exact solution using sum of exponentials \eqref{eq_sumofexponent3} ]{ \includegraphics[width=0.47\textwidth,height=0.3\textwidth]{isomer_approx_sum.jpg} \label{fig:approx_isomer} } \caption{Histograms of the isomer system \eqref{eqn:isomer} results at the final time T=10.} \label{fig:isomer} \end{centering} \end{figure} The results reported in Figure \ref{fig:accelerated_isomer} indicate that for small time steps $\tau$ the accelerated tau-leap \eqref{eq_accelerated2} solution is very close to the results provided by the traditional explicit tau-leap. The symmetric accelerated tau-leap method \eqref{eq_symmetric} yields even better results, as shown in Figure \ref{fig:symmetric_isomer_1ten}. For small time steps the traditional and symmetric accelerated methods give similar results; however, for large time steps, the results of the symmetric accelerated method are considerably more stable. \begin{figure}[tb] \begin{centering} \includegraphics[width=0.475\textwidth,height=0.3\textwidth]{isomer_accelerated.jpg} \caption{Isomer system \eqref{eqn:isomer} solutions provided by the traditional tau-leap \eqref{eq_tauleap} and by accelerated tau-leap \eqref{eq_accelerated2} methods at the final time T=10 (units). A small time step of $\tau=0.01$ (units) is used. 
The number of samples for both methods is 10,000.} \label{fig:accelerated_isomer} \end{centering} \end{figure} \begin{figure}[tb] \begin{centering} \subfigure[$\tau = 0.01$ (units)]{ \includegraphics[width=0.475\textwidth,height=0.3\textwidth]{isomer_accelerated_1en.jpg} \label{fig:symmetric_isomer_1per} } \subfigure[$\tau = 0.1$ (units)]{ \includegraphics[width=0.475\textwidth,height=0.3\textwidth]{isomer_accelerated_1ten.jpg} \label{fig:symmetric_isomer_1en} } \caption{Histograms of isomer system \eqref{eqn:isomer} solutions obtained with SSA, traditional tau-leap \eqref{eq_tauleap}, and symmetric accelerated tau-leap \eqref{eq_symmetric} methods at the final time T=10. The number of samples is 10,000 for all methods.} \label{fig:symmetric_isomer_1ten} \end{centering} \end{figure} \subsection{Schlogl reaction} \label{sect:Schlogl} We next consider the Schlogl reaction system \cite{Cao_2004_stability} \begin{equation} \label{eqn:schlogl} \begin{array}{r} \ce{ B_{1} + 2x <=>[\ce{c_1}][\ce{c_2}] 3x }\\ \ce{ B_{2} <=>[\ce{c_3}][\ce{c_4}] x } \end{array} \end{equation} whose solution has a bi-stable distribution. Let $N_1$, $N_2$ be the numbers of molecules of species $B_1$ and $B_2$, respectively. The reaction stoichiometry matrix and the propensity functions are: \[ \begin{array}{l} V= \begin{bmatrix} 1 & -1 & 1 & -1 \end{bmatrix} \\ \begin{array}{l} a_{1}(x)= \frac{c_{1}}{2}N_{1}x(x-1), \\ a_{2}(x) = \frac{c_{2}}{6}x(x-1)(x-2), \\ a_{3}(x) = c_{3}N_{2}, \\ a_{4}(x) = c_{4}x. \end{array}% \end{array} \] The following parameter values (each in appropriate units) are used: \begin{small} \[ \begin{array}{lll} c_{1}=3 \times 10^{-7}, &c_{2}=10^{-4}, &c_{3}=10^{-3}, \\ c_{4}=3.5, &N_{1}=1 \times 10^5, &N_{2}=2 \times10^5. \end{array}% \] \end{small} with the final time $T=4$ (units), the initial condition $x(0)=250$ molecules, and the maximum values of species $Q^1=900$ molecules. 
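The Schlogl propensities and the resulting net drift can be sketched as follows (our own illustration; $a_{2}$ is taken in the standard form for the reverse reaction $\ce{3x -> B_{1} + 2x}$, whose reactants involve only $x$, and the probe points used to exhibit the sign change of the drift are our own choice):

```python
# Schlogl model propensities with the parameter values listed above.
c1, c2, c3, c4 = 3e-7, 1e-4, 1e-3, 3.5
N1, N2 = 1e5, 2e5

def propensities(x):
    a1 = c1 / 2.0 * N1 * x * (x - 1)          # B1 + 2x -> 3x
    a2 = c2 / 6.0 * x * (x - 1) * (x - 2)     # 3x -> B1 + 2x
    a3 = c3 * N2                              # B2 -> x
    a4 = c4 * x                               # x -> B2
    return a1, a2, a3, a4

def drift(x):
    """Net deterministic production rate of x: propensities weighted by
    the stoichiometry V = [1, -1, 1, -1]."""
    a1, a2, a3, a4 = propensities(x)
    return a1 - a2 + a3 - a4
```

The drift is positive at small $x$ and negative at large $x$, with two stable roots in between; this sign structure is what produces the bi-stable distribution seen in the histograms.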
Figure \ref{fig:exact_schlogl} illustrates the result of the exact exponential solution \eqref{eq_exact2} versus SSA. Figure \ref{fig:approx_exact_schlogl} reports the result of the sum of exponentials \eqref{eq_sumofexponent3}, which is not a very good approximation. The results of the product of exponentials \eqref{eq__productofexponents} and of Strang splitting \eqref{eq_strang} are not reported here since they are also poor approximations. \begin{figure}[tb] \begin{centering} \subfigure[10,000 SSA runs versus the exact solution \eqref{eq_exact2}]{ \includegraphics[width=0.475\textwidth,height=0.3\textwidth]{schlogl.jpg} \label{fig:exact_schlogl} } \subfigure[Exact solution \eqref{eq_exact2} versus the approximation to exact solution using sum of exponentials \eqref{eq_sumofexponent3}]{ \includegraphics[width=0.475\textwidth,height=0.3\textwidth]{schlogl_approx1.jpg} \label{fig:approx_exact_schlogl}} \caption{Histograms of Schlogl system \eqref{eqn:schlogl} results at final time T=4 (units).} \end{centering} \end{figure} Figures \ref{fig:accelerated_schlogl} and \ref{fig:symmetric_schlogl} present the results obtained with the accelerated tau-leap and the symmetric accelerated tau-leap, respectively. For small time steps the results are very accurate. However, for large step sizes, the results quickly become less accurate. This loss of accuracy may be more pronounced for systems with more reactions. The accuracy can be improved to some extent using the strategies described in \eqref{eq_accelerated3} and \eqref{eq_accelerated4}. 
\begin{figure}[tb] \begin{centering} \subfigure[Traditional tau-leap \eqref{eq_tauleap} and accelerated tau-leap \eqref{eq_accelerated2}]{ \includegraphics[width=0.475\textwidth,height=0.3\textwidth]{accelerated.jpg} \label{fig:accelerated_schlogl} } \subfigure[Traditional tau-leap \eqref{eq_tauleap} and symmetric accelerated tau-leap \eqref{eq_symmetric}]{ \includegraphics[width=0.475\textwidth,height=0.3\textwidth]{symmetric.jpg} \label{fig:symmetric_schlogl}} \caption{Histograms of Schlogl system \eqref{eqn:schlogl} solutions with $\tau=0.0001$ (units), final time T=4 (units), and 10,000 samples.} \end{centering} \end{figure} \section{Conclusions} \label{sect:Conclusion} This study proposes new numerical solvers for stochastic simulations of chemical kinetics. The proposed approach exploits the linearity of the CME and the exponential form of its exact solution. The matrix exponential appearing in the CME solution is approximated as a product of simpler matrix exponentials. This leads to an approximate (``numerical'') solution of the probability density evolved to a future time. The solution algorithms sample exactly this approximate probability density and provide extensions of the traditional tau-leap approach. Different approximations of the matrix exponential lead to different numerical algorithms: Strang splitting, column splitting, accelerated tau-leap, and symmetric accelerated tau-leap. Current work by the authors focuses on improving the accuracy of these novel approximation techniques for stochastic chemical kinetics. \bibliographystyle{plain}
\section{Introduction} Modern Deep Neural Networks (DNN) have pushed the state-of-the-art in many computer vision tasks including image recognition, object detection, etc. Underlying the success of DNNs are the millions of constituent parameters and their non-linear combinations that allow DNNs to accurately model a wide variety of functions over their input features. However, running inference over these massive models imposes exorbitant memory and computational costs that make deploying them at scale in the real world, with its more stringent latency, compute and energy constraints, a challenging problem. This problem is exacerbated if later models build on existing models, which are themselves very large. Often, the intermediate outputs of a pretrained image recognition model are used to encode visual information for another downstream task, such as visual question answering, image retrieval, etc. Since these intermediate outputs can be very high-dimensional, using them as features can introduce a large number of parameters to downstream models. Therefore, methods for compressing neural models can have far reaching effects vis-a-vis their deployability and scalability. As deep learning models make their way from research labs to real world environments, the task of making them more resource efficient has received a great deal of attention. This has given rise to a large body of work focused on compressing and/or accelerating DNNs. One of the most common techniques for compressing neural models is parameter pruning, i.e. pruning away model parameters or neurons based on some metric. Such methods have many limitations; one of them is that the heuristics used attempt to identify ``weak'' elements (with small magnitude or derivative, for instance), but parameters can be unnecessary in more ways than being small. 
Consider two neurons, taken individually, that both have large amplitude but happen to yield identical outputs; one of them can be pruned without loss of information, but most current pruning methods could not detect this. In this paper we propose a novel neural model compression technique that exploits the dependencies in the non-linear activations of the units in each layer to identify redundant units and prune them. Our technique is based on the observation that model optimization can converge to a point at which the outputs of several units in one layer become highly correlated with each other, even linearly dependent; thus, by removing one of them and adjusting the outgoing weights of the other units, we can obtain a smaller model with identical outputs. We identify redundant units by measuring the degree to which they can be predicted as a linear combination of other units in the same layer. Specifically, we learn a transformation matrix, $A$, that best approximates an identity mapping for the activations of the units in this layer, while constraining the diagonal of $A$ to be zero. We select the units with the lowest prediction error to remove, and adjust the outgoing weights of the remaining units using the values in the corresponding columns of $A$ such that the input to the next layer remains the same (or is minimally perturbed). Once we have removed all the predictable units, removing any additional units will cause a reduction in model performance. We then fine-tune the model to recover the lost accuracy. In order to facilitate tuning, we use distillation \cite{hinton2015distilling} to bring the compressed model's output distribution close to the uncompressed model's output distribution at a high temperature. We demonstrate the efficacy of our technique on two popular image recognition models, AlexNet\cite{alexnet} and VGG\cite{vgg}, and two popular benchmark datasets, CIFAR-10\cite{cifar10} and Caltech-256\cite{caltech256}.
We demonstrate, theoretically and empirically, that under our proposed weight readjustment scheme, the inputs to the subsequent layer are only minimally perturbed while redundant units are present. Our technique can reduce the parameters of VGG and AlexNet by more than 99\% on CIFAR10 and by more than 80\% on Caltech-256. Finally, we inspect the intermediate representations of the compressed models and show that the data remains cleanly separable post-compression, which suggests that the intermediate representations continue to capture rich information about the data which may be useful for transfer learning tasks. \section{Related Work} Most existing techniques for reducing the size of neural models can be grouped into three high-level categories, namely low-rank factorization, knowledge distillation, and parameter pruning. We argue that all of them have significant shortcomings. There are also methods that reduce computation while not really affecting the number of parameters in the network, such as quantization \cite{Han2016}\cite{oland2015reducing}; however, these are somewhat orthogonal to the scope of this paper. \subsection{Pruning} A common pruning approach to model compression attempts to eliminate individual parameters in the network \cite{touretzky1996advances,xing2020probabilistic,LIU2020Dynamic}. Many of them do so by enforcing a sparsity constraint, such as $L1$ regularization, to push some parameters to $0$, or $L2$ regularization to simply keep weights small and then prune the small ones \cite{han15}. These methods can achieve reasonable performance (up to 35x compression in \cite{Han2016}). One of these methods' main limitations, however, is that their outputs take the form of sparse weight matrices, and benefiting from sparsity in terms of computation time is not always easy in practice.
A different family of methods overcomes that shortcoming by pruning out entire neurons and/or convolution filters \cite{Li17,liu2017learning,zhuang18,mussay2020dataindependent,peng2019collaborative}. These methods identify weak neurons using heuristics such as activation value \cite{Hu16} or absolute sum of weights \cite{Li17}. Fine-tuning may be used afterwards \cite{liu2017learning}. Both of these approaches treat each unit independently, in that they prune a unit only if it has little effect on the downstream computations. However, these techniques would not prune a unit whose output significantly impacts downstream computation but can be largely predicted as a linear combination of the outputs of the other units in the same layer. Such units can be pruned, and their weights redistributed, to achieve lossless compression. \subsection{Matrix Factorization} Factorization-based approaches \cite{Denton14,lu2017fully,yang2015deep} factor the weight matrices of the neural network into multiple low-rank components which can then be used to approximate the output of the original weight matrix. These techniques are seemingly more similar to our approach: low-rank matrices eliminate some redundancies by projecting onto a subspace of smaller dimension. The key difference is that those methods work at the weight matrix level, while we find redundancies within post-activation representations. The non-linearity of those activations is likely to create new redundancies that escape factorization methods but that our approach can capture. However, the case of a linear activation is similar to a factorization process, and we will use it to some extent in \ref{sec:math} to better understand the process of weight readjustment and how it affects the error surface.
\subsection{Distillation} Model compression using knowledge distillation \cite{hinton2015distilling} involves training a more compact student model to predict the outputs of a larger teacher model, instead of the original targets \cite{luo2016face,belagiannis2018adversarial}. Distillation, or annealing, provides an interesting angle on the compression problem, but in itself suffers from several shortcomings. First, it requires the student model to be predefined, which is a rather tedious task. By default this model would need to be trained from scratch. Finally, this process relies solely on the extra supervision of the teacher model to overcome the challenges of training a less overparametrized model, with complex error surfaces; this seems sub-optimal. On the contrary, distillation can become very useful as a complementary compression tool. Assuming a compression method induces some drop in performance in the smaller model, a short amount of fine-tuning may boost its performance, and using knowledge distillation from the original model at that step can speed up that process. We make use of distillation in such a manner, in an iterative fashion, and discuss it in \ref{sec : annealing}. \section{Lossless redundancy elimination: formalism and justification} \label{sec:math} \subsection{Notations and task definition} Throughout the paper, we consider a feed-forward neural network $F = \phi_N\circ L_N\circ\phi_{N-1}\circ L_{N-1}\circ ...\circ\phi_1\circ L_1$ where $L_k$ is the $k$-th dense or convolutional (which is dense with shared parameters) layer and $\phi_k$ is the following activation, in the largest sense. For example $\phi_k$ may involve a pooling, or a softmax operator in the case of $\phi_N$. The weight matrix of $L_k$ is $W_k$, of size $(n_k^i,n_{k-1}^o)$.
Depending on the nature of $\phi_k$, $n_k^i$ and $n_k^o$ may be different; however, to alleviate notations later on, we will consider that $n_k^i=n_k^o =n_k$, which does not induce a loss of generality in our algorithms. For a given input vector $X$, sampled from data distribution $\mathcal{D}$, we define the intermediate representations $Z_k$ (activations) and $Y_k$ (pre-activations), such that $$Z_0=X\;\; ; \;\;Y_k = W_k.Z_{k-1}\;\; ; \;\;Z_k = \phi_k(Y_k)$$ Our goal is to eliminate redundancies within the activations of a given layer. To do so, we consider for each activation $Z_k[i]$ the task of predicting it as a linear combination of the neighbouring activations $Z_k[j],j\neq i$. Solving that task, evaluated with the $L_2$ norm, amounts to solving the following problem: $$\min_{A_k\in \mathcal{M}_n(\mathbf{R})} \mathbf{E}_{x\sim \mathcal{D}}[\norm{Z_k-A_kZ_k}_2^2] \;\;\;s.t.\; diag(A_k)=0$$ \subsection{Expression of the $A_k$ matrix} Let us find the expression of the redundancy matrix $A_k$. We start by simplifying the problem's formulation. Rewriting the objective as $\min_{A_k\in \mathcal{M}_n(\mathbf{R})} \sum_{i=1}^{n_k}\mathbf{E}_{x\sim \mathcal{D}}[(Z_k[i]-A_k[i].Z_k)^2]$ ($M[i]$ being the $i$th row vector in matrix $M$), elements of $A_k$ in different rows are clearly uncoupled, both in the objective and in the constraint. We can therefore solve the problem row-wise. Writing $U = I_{n_k}-A_k$, we must solve $n_k$ problems $P_l$ of the form $$\min_{u\in\mathbf{R}^{n_k}}\mathbf{E}_{x\sim \mathcal{D}}[(u^TZ_k)^2]\;\;\;s.t.\; u_l=1$$ Define $$g(u)=\mathbf{E}_{x\sim \mathcal{D}}[(u^TZ_k)^2] = \mathbf{E}_{x\sim \mathcal{D}}[(u^TZ_k).(u^TZ_k)^T] = \mathbf{E}_{x\sim \mathcal{D}}[u^TZ_kZ_k^Tu] = u^TSu$$ where $S = \mathbf{E}_{x\sim \mathcal{D}}[Z_kZ_k^T]$ is $Z_k$'s correlation matrix, which is positive semidefinite. Let us consider only the non-degenerate case where it is positive definite. $g$ is then convex (of Hessian $2S$), so the zeros of its gradient indicate exactly its minima.
Specifically, within the hyperplane of admissible points $H=\{u\in \mathbf{R}^{n_k},u_l=1\}$, $g$ is minimal if and only if $\nabla_ug = 2Su \in H^\bot = \mathbf{R}e_l$, where $e_l$ is the $l$th vector of the canonical base. In other words, $u$ is a minimum iff it is a multiple of $S^{-1}e_l = S^{-1}[l]^T$, the $l$th column of $S^{-1}$. Therefore the minimum is $u^* = \frac{1}{S^{-1}[l][l]}S^{-1}[l]^T$. In full matrix form, we conclude that the solution to the problem is $A_k = I_{n_k} - DS^{-1}$, with $S = \mathbf{E}_{x\sim \mathcal{D}}[Z_kZ_k^T]$ and $D = diag(S^{-1})^{-1}$. In practice, due to the presence of a matrix inversion, it is simpler and faster to obtain $A_k$ using gradient descent, given inputs sampled in the training set. \subsection{Weight readjustment} \label{sec : readjust} The redundancy matrix $A_k$ and the residual errors provide information regarding which activation is most predictable and should be removed. We then wish to adjust the weight matrix of the following layer to account for this redundancy elimination. Assume only one activation $l$ was removed. We consider the compressed vectors $Z_k^l$ (of size $n_k-1$) where activation $l$ was removed. We can infer a transformation matrix $T_k^l$ from $Z_k^l$ to $Z_k$, with minimal error, using the $l$th row of $A_k$: $Z_k \approx T_k^lZ_k^l$, where: $$\forall{i<l}, T_k^l[i][j] = \delta_{ij}\;\;\forall{i>l}, T_k^l[i][j] = \delta_{i(j+1)}$$ $$\forall{j<l}, T_k^l[l][j] = A_k[l][j]\;\;\forall{j\geq l}, T_k^l[l][j] = A_k[l][j+1]$$ i.e. $T_k^l$ is the identity matrix $I_{n_k-1}$ where $A_k[l]$ has been inserted as the $l$th row, minus the $0$ coefficient $A_k[l][l]$. Therefore, a natural adjustment for the weight matrix is $W_{k+1}^l=W_{k+1}T_k^l$, such that $W_{k+1}^lZ_k^l \approx W_{k+1}Z_k$. If we remove more than one coefficient at once, the expression of the transformation matrix is less straightforward.
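Before turning to the multi-unit case, the closed-form solution for $A_k$ and the single-unit readjustment can be sketched in NumPy on a toy layer (an illustration under our own setup, not the authors' implementation; the helper names are ours):

```python
import numpy as np

def redundancy_matrix(Z):
    """Closed-form minimizer of E||Z - A Z||^2 s.t. diag(A) = 0,
    following the row-wise solution above: A = I - D S^{-1}, with
    S = E[Z Z^T] and D = diag(S^{-1})^{-1}.
    Z is an (n_units, n_samples) activation matrix."""
    n, m = Z.shape
    S = Z @ Z.T / m                        # empirical correlation matrix
    S_inv = np.linalg.inv(S)               # assumes S is positive definite
    return np.eye(n) - np.diag(1.0 / np.diag(S_inv)) @ S_inv

def remove_and_adjust(W_next, A, l):
    """Drop unit l and fold its predicted contribution into W_{k+1},
    building T_k^l from the l-th row of A as described above."""
    n = A.shape[0]
    keep = [i for i in range(n) if i != l]
    T = np.zeros((n, n - 1))
    T[keep, range(n - 1)] = 1.0            # identity rows for kept units
    T[l] = A[l, keep]                      # prediction row for removed unit l
    return W_next @ T

# toy layer: unit 2 is an (almost) exact combination of units 0 and 1
rng = np.random.default_rng(0)
Z = rng.standard_normal((2, 500))
Z = np.vstack([Z, 0.5 * Z[0] - 2.0 * Z[1] + 1e-3 * rng.standard_normal(500)])
A = redundancy_matrix(Z)
residual = np.linalg.norm(Z - A @ Z, axis=1) / np.linalg.norm(Z, axis=1)

W_next = rng.standard_normal((4, 3))       # weights of layer k+1
W_adj = remove_and_adjust(W_next, A, l=2)  # adjusted (4, 2) weights
Z_keep = Z[[0, 1]]
# W_adj @ Z_keep closely approximates W_next @ Z: the next layer's
# inputs are only minimally perturbed by the removal of unit 2
```

Here `residual` makes the selection criterion explicit: the most predictable unit is the one with the smallest relative reconstruction error.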
Assume for example that activations $l$ and $j$ are removed; obtaining the approximate expression of $Z_k[l]$ from $A_k$, using only the remaining activations, leads to the following derivations: $$Z_k[l] = \sum_{\substack{i=1\\i\neq l}}^{n_k}A_k[l,i]Z_k[i] = \sum_{\substack{i=1\\i\neq l,j}}^{n_k}A_k[l,i]Z_k[i] + A_k[l,j]Z_k[j]$$ $$Z_k[l] =A_k[l,j]A_k[j,l]Z_k[l] + \sum_{\substack{i=1\\i\neq l,j}}^{n_k}\left(A_k[l,i]+ A_k[l,j]A_k[j,i]\right)Z_k[i]$$ $$Z_k[l] =\frac{\sum_{\substack{i=1\\i\neq l,j}}^{n_k}\left(A_k[l,i]+ A_k[l,j]A_k[j,i]\right)Z_k[i]}{1-A_k[l,j]A_k[j,l]}$$ More generally, assume a set $J = \{j_1<...<j_m\} \subset [1..n_k]$ of activations is eliminated. We note its complementary set $H=[1..n_k]\setminus J = \{h_1<...<h_{n_k-m}\}$. We define $A_k^{J+}$, the $(m,m)$ matrix defined by $A_k^{J+}[i][p]=A_k[j_i][j_p]$, and $A_k^{J-}$, the $(m,n_k-m)$ matrix such that $A_k^{J-}[i][p]=A_k[j_i][h_p]$. Then we can write, where $Z_k^{J}$ contains only the activations in $J$ and $Z_k^{H}$ only those not in $J$: $$Z_k^{J} = A_k^{J+}Z_k^{J}+A_k^{J-}Z_k^{H}$$ i.e. $Z_k^{J} = (I_m-A_k^{J+})^{-1}A_k^{J-}Z_k^{H} = U_k^JZ_k^{H}$, provided that $(I_m-A_k^{J+})$ is invertible. From there we can easily obtain the equivalent of the transformation matrix in the single activation case: $Z_k = T_k^JZ_k^{H}$, where $T_k^J$ is obtained by interleaving the rows of $U_k^J$ (for the removed activations) with those of the identity $I_{n_k-m}$ (for the kept activations). \subsection{Stability results on compression} In practice, we wish to compress a network at some point during training, followed by further fine-tuning. We may expect that if we are close enough to a good local minimum in a convex region of the error surface, we may capture that minimum's redundancies and eliminate them. One question however arises: after compression, will we still be close enough to the minimum that the fine-tuning will converge, or may it diverge? The following stability result gives partial answers to that question.
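Before stating the stability result, the set-$J$ construction can be sketched in NumPy (an illustrative toy under our own naming; the matrix $A$ below is hand-built so that the removed units are exact combinations of the kept ones):

```python
import numpy as np

def transformation_matrix(A, J):
    """Build T such that Z ~ T @ Z_H when the units in J are removed,
    via Z_J = (I - A_J+)^{-1} A_J- Z_H from the derivation above.

    A : (n, n) redundancy matrix with zero diagonal.
    J : sorted list of removed indices; assumes (I - A_J+) invertible."""
    n = A.shape[0]
    H = [i for i in range(n) if i not in J]
    A_plus = A[np.ix_(J, J)]                 # couplings among removed units
    A_minus = A[np.ix_(J, H)]                # couplings to the kept units
    U = np.linalg.solve(np.eye(len(J)) - A_plus, A_minus)
    T = np.zeros((n, len(H)))
    T[H, range(len(H))] = 1.0                # kept units map to themselves
    T[J] = U                                 # removed units are reconstructed
    return T

# example: units 1 and 3 of a 4-unit layer are exact combinations of 0 and 2
rng = np.random.default_rng(2)
B = rng.standard_normal((2, 300))
Z = np.vstack([B[0], B[0] + B[1], B[1], 2.0 * B[0] - B[1]])
A = np.zeros((4, 4))
A[1, 0], A[1, 2] = 1.0, 1.0                  # unit 1 = unit 0 + unit 2
A[3, 0], A[3, 2] = 2.0, -1.0                 # unit 3 = 2 * unit 0 - unit 2
T = transformation_matrix(A, J=[1, 3])
# T reconstructs the full activations from the kept units 0 and 2,
# so W_{k+1} @ T is the adjusted weight matrix of the next layer
```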
\theoremstyle{definition} \begin{theorem} Let the $k$th activation in the neural network be linear, so that we can write $F(X) = f_\theta(W_{k+1}.W_k.g_\phi(X))$ for $X$ in the training set $\mathcal{X}$. Let $p^* = (\theta^*,\phi^*,W_{k+1}^*,W_k^*)$ be a local minimum of the loss function $L$ in the parameter space $\mathcal{P}$, and assume an exact redundancy of the final activation of the $k$th layer: $W_{k+1}^{*}.W_k^{*}=W_{k+1}^{*\prime}.W_k^{*\prime}$, where $W_k^{*\prime}$ is $W_k^{*}$ minus its final row, and $W_{k+1}^{*\prime}$ is the contracted weight matrix as computed in \ref{sec : readjust} with matrix $A_k$. The compression-readjustment operation projects $p^*$ onto $p^{*\prime} = (\theta^*,\phi^*,W_{k+1}^{*\prime},W_k^{*\prime})$. Assume there is an ${L_2}$-ball $B\subset \mathcal{P}$ of radius $R$ centered at $p^*$, on which $L$ is convex. Then there is an ellipsoid $E$ centered at $p^*$, of equation $$\norm{\theta-\theta^*}_2^2+\norm{\phi-\phi^*}_2^2+\norm{W_{k+1}-W_{k+1}^*}_2^2+\sum_{i=1}^{n_k-1}\sum_{p=1}^{n_k}(1+A_k[n_k][i]^2)V[i][p]^2 \leq R$$ that is mapped onto a convex region of the compressed parameter space. \end{theorem} We delay the full proof to the appendix. We can however discuss the implications of the theorem. We note first that the theorem by itself assumes linear activations. Assume our training brought us near a (global or local) minimum displaying some redundancies that we manage to eliminate. We would like regularities of the error surface in that region, such as convexity, to be preserved after compression. That the point before compression was in a convex region around the minimum is not enough to have convexity in the compressed space; however, our theorem shows that there is a slightly different region, determined by the radius of the convex region around the minimum, that does ensure post-compression convexity. That region is obtained by flattening the ball onto the subspace of $W_{k+1}$.
Besides, the subspace corresponding to the compressed coefficient can be ignored. While the above theorem applies only to the linear activation case, we argue that the result extends naturally to locally linear or nearly linear activations. Consider for example a ReLU activation: around any parameter point, there is a ball on which the network is identical to one with linear activations; we can apply Theorem 1 on that restricted area of the space. \subsection{An empirical justification for readjustment} \begin{figure}[t!] \centering \subfloat[]{\includegraphics[scale=.35]{images/y_test_plot/alexnet_cifar10_layer16_adjustW_Ys_scatter.png}} \subfloat[]{\includegraphics[scale=.35]{images/y_test_plot/alexnet_cifar10_layer16_Ys_scatter.png}} \caption{\label{fig : norm}\small The norm of the change in the inputs to layer 17 of AlexNet after shrinking layer 16 down to almost 1\% of its original size (a) with and (b) without weight readjustment. Every step on the x-axis represents a shrinking step in which the layer is shrunk by 25\%.} \vspace{-10pt} \end{figure} The proposed pruning approach is based on the hypothesis that elimination of linearly dependent (or almost dependent) neurons (or filters) within any layer, with appropriate weight adjustment, will result in minimal or even zero changes to the representations seen by {\em subsequent} layers in the network. In other words, compressing the $k$th layer of a network as proposed should not significantly change the pre-activation values observed by layer $k+1$. We evaluate this hypothesis in this section. In Fig. \ref{fig : norm}, we plot pre-activation norm differences on an AlexNet network for an intermediate convolutional layer, computed on a random input sample, after compressing the previous layer. The norm difference is computed as $\frac{\norm{Z_{17}^{(i)} - Z_{17}^{(0)}}_2}{\norm{Z_{17}^{(0)}}_2}$, where $Z_{17}^{(i)}$ represents the activations after the $i^{th}$ shrinking step.
We compare the results obtained from just trimming dependent neurons without subsequent adjustment of weights to those obtained after weight readjustment. As expected, trimming neurons modifies $Z$, but subsequent weight readjustment largely eliminates the changes from trimming -- after 15 compression steps we observe only a $2\%$ norm change, confirming the intuition behind our method. \section{When Activations Are Not Dependent} The above analysis shows that if there is perfect linear dependence in the neuron activations, i.e. $\norm{Z_k-A_kZ_k}_2^2=0$, then we can achieve lossless compression; however, in many cases this condition may not hold. In such situations, the parameters of the pruned model, even after readjustment, may end up in a suboptimal region of the loss surface. This is because the readjustment weights in $A$ are imperfect and error prone, and therefore will move the model parameters to a different, potentially suboptimal, point on the error surface. Since reducing the size of the model makes the error surface less smooth \cite{allen2018convergence}, even if the operating point of the smaller model is close to the operating point of the larger model, it may have a much higher loss. To keep the model parameters from deviating too far from the optima during compression, we employ a modified version of Annealed Model Contraction (AMC) \cite{shah2019annealing}, which attempts to keep the model in the optimal region by cycling between pruning and fine-tuning phases. Below we provide a description of AMC and our modifications to it. \subsection{Annealed Model Contraction}\label{sec : annealing} AMC is an iterative method that greedily prunes the model layer-by-layer. As formulated in \cite{shah2019annealing}, AMC starts from the first (or the last) layer of the model and proceeds to maximally shrink the current layer before moving on to the next. While compressing a layer, AMC alternates between pruning and fine-tuning.
Pruning is performed by reinitializing the layer with $\gamma\%$ fewer neurons/filters, and the whole network is then fine-tuned end-to-end. During fine-tuning, knowledge distillation \cite{hinton2015distilling} is used to facilitate training and the following loss is minimized \begin{equation} \label{eq:amc-loss} \mathcal{L} = (1-\lambda)H\left(\text{softmax}\left(\frac{\mathbf{z}}{T}\right),\text{softmax}\left(\frac{\mathbf{v}}{T}\right)\right) + \lambda H\left(\mathbf{y_{true}}, \text{softmax}\left(\mathbf{v}\right)\right) \end{equation} where $\mathbf{z}$ and $\mathbf{v}$ are the logits returned by the teacher and student models, respectively, $T$ is a hyperparameter referred to as the temperature of the distribution, and $\lambda$ controls the contribution of the loss against the target label to the overall loss. AMC continues to prune a layer as long as the pruned model's accuracy remains within a threshold, $\epsilon$, of the uncompressed model's accuracy. Once the current layer cannot be pruned any further, AMC proceeds to shrink the next layer in the model. AMC can be applied to both dense and convolutional layers. In the case of the former, it prunes neurons, while in the latter it prunes convolutional filters.
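For concreteness, the distillation objective of Eq. \ref{eq:amc-loss} can be sketched as follows (a NumPy illustration under our own naming, not the paper's PyTorch implementation):

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def amc_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.75):
    """(1 - lam) * H(teacher at T, student at T) + lam * H(y_true, student),
    where H is cross-entropy and T softens both logit distributions."""
    p_teacher = softmax(teacher_logits / T)
    log_p_student = np.log(softmax(student_logits / T))
    soft = -(p_teacher * log_p_student).sum(axis=1).mean()
    idx = np.arange(len(labels))
    hard = -np.log(softmax(student_logits)[idx, labels]).mean()
    return (1.0 - lam) * soft + lam * hard

rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 10))   # teacher logits for a batch of 8
student = rng.standard_normal((8, 10))   # student logits
labels = rng.integers(0, 10, size=8)
loss = amc_loss(student, teacher, labels)
```

Note that the soft term is a cross-entropy against the temperature-softened teacher distribution, so it is minimized (down to the teacher's entropy) when the two softened distributions coincide.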
\subsection{Annealed Model Contraction with Lossless Redundancy Elimination} \begin{algorithm} \SetAlgoLined $\text{RemoveAndAdjust}(A, W, j)$: adjust the weight matrix after the removal of the $j^{th}$ neuron from the previous layer using the method in \ref{sec : readjust}\\ \SetKwFunction{FLREShrink}{LREShrink} \SetKwProg{Fn}{Function}{:}{} \Fn{\FLREShrink{$F$, $l$, $\gamma$}}{ $Z \leftarrow F_{1:l}(\mathcal{X})$\tcp{compute the activations of the $l^{th}$ layer.} $A \leftarrow \min_A \norm{ZA - Z}^2 \text{s.t } \text{diag}(A)=\mathbf{0}$\\ $\mathcal{E} \leftarrow \arg\text{sort}(\norm{ZA - Z}^2)[:\lfloor\gamma * \text{sizeof}(F[l])\rfloor]$\\ $\Bar{W}^{(l+1)}\leftarrow [W^{(l+1)}; b^{(l+1)}]$\tcp{Concatenate the weights and bias.} \For{$j\in \mathcal{E}$}{ $W^{(l)} \leftarrow W^{(l)}_{-j.}$ \tcp{drop the $j^{th}$ row of $W^{(l)}$} $W^{(l+1)} \leftarrow \text{RemoveAndAdjust}(A, \Bar{W}^{(l+1)}, j)$\\ } } $Acc\leftarrow \text{evaluate}(F_t)$\\ $F_s'[i_B]\leftarrow\text{LREShrink}(F_s, i_B, \gamma)$\\ $Acc'\leftarrow \text{evaluate}(F_s')$\\ \While{$Acc-Acc'\leq\epsilon$}{ $F_s\leftarrow F_s'$\\ $F_s'[i_B]\leftarrow\text{LREShrink}(F_s',i_B, \gamma)$\\ $Acc'\leftarrow \text{evaluate}(F_s')$\\ \If{$Acc-Acc'>\epsilon$ }{ $F_s'\leftarrow \text{distill}(F_s')$\\ $Acc'\leftarrow \text{evaluate}(F_s')$\\ } } \caption{\label{alg:LRE-AMC}LRE-AMC Algorithm} \end{algorithm} While effective, AMC has the shortcoming that it takes an ad-hoc approach to parameter pruning. AMC removes neurons from a layer by \textit{reinitializing} the layer with fewer neurons. The new initialization is random and therefore can land the model arbitrarily far away from the optimal point. On the other hand, the Lossless Redundancy Elimination (LRE) formalism presented in Section \ref{sec:math} provides a method of setting the parameters of the pruned layer that guarantees (under some constraints) that the model remains near the optimal operating point. 
However, LRE only considers the activations and weights between two layers, and thus does not account for the effects of pruning on the operating point of the whole model. Therefore, we propose to combine LRE and AMC in a novel model compression algorithm, which we call LRE-AMC, that compensates for the inadequacies of both LRE and AMC. LRE-AMC (Algorithm \ref{alg:LRE-AMC}) differs from vanilla AMC in two significant ways. \textit{First}, instead of pruning neurons by reinitializing the layer, LRE-AMC uses the LRE formalism to select neurons/filters (in the following we will use the term \textit{units} to refer to neurons and filters) to prune away, based on the degree of linear dependency between their activations. Thus, LRE-AMC retains units that have linearly independent activations and thus have learned to encode unique aspects of the data, whereas these units would have to be relearned under AMC. \textit{Second}, LRE-AMC breaks the pruning process into two phases. In the first phase, LRE is used to remove the selected units one-by-one and adjust the weight matrix such that the outputs of the layer are minimally perturbed. After each pruning stage we measure the performance of the model on a held-out set and continue pruning without fine-tuning as long as the performance of the model remains within a threshold, $\epsilon$, of the original. When the performance drops below the threshold $\epsilon$, we start phase two, in which we use distillation to fine-tune the model and bring its performance to within $\epsilon$ of the pre-compression performance. \section{Evaluation} \subsection{Datasets} \label{sec:dataset} We evaluate our proposed method on two datasets of real-world images, namely CIFAR-10 and Caltech-256. CIFAR10 contains 50,000 training images and 10,000 testing images. Each image has a size of $32\times32$ pixels and is assigned one out of 10 class labels. Caltech-256 contains 30,607 real-world images, of different sizes, spanning 257 classes.
Following the protocol from \cite{vgg}, we construct a balanced training set for Caltech-256 with 60 images per class. For both Caltech256 and CIFAR10, we use 20\% of the training images for validation during training. We apply data augmentation to increase the size of the training data and improve generalization. Specifically, we augment CIFAR10 with random affine transformations, horizontal flips and grayscaling. Meanwhile, we augment Caltech-256 by taking a random $256\times256$ crop at a random scale between 0.8 and 1.0, and applying rotation, color jitter and horizontal flipping before resizing the image to $224\times224$. The pixel values of images from both datasets are normalized by mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225]. \subsection{Experimental Setup} We implemented AMC and LRE-AMC using Pytorch and Python 3. We use AlexNet and VGG16 as our base models, which we then compress. Since the receptive field of the first convolutional layer in AlexNet is too large for $32\times32$ images, we reduced it to $3\times3$ when training on CIFAR-10. When training on Caltech256, we initialized the models with weights from models pretrained on ImageNet and tuned only the final classification layer. The accuracy and number of parameters of the base models are presented in Table \ref{tab:results_final_layer}.
\begin{table}[] \centering \begin{tabular}{c|c|c|c|c|c} Model & Dataset & Total ($\times 10^7$) & Dense ($\times 10^7$) & Conv ($\times 10^7$) & Acc \% \\ \hline \multirow{2}{*}{AlexNet} & CIFAR10 & 5.68 & 5.46 & 0.23 & 79.2\\ & Caltech256 & 5.74 & 5.50 & 0.25 & 64.8\\\hline \multirow{2}{*}{VGG16} & CIFAR10 & 13.4 & 11.95 & 1.47 & 89.8\\ & Caltech256 & 13.5 & 11.96 & 1.47 & 77.5 \end{tabular} \caption{\small The accuracy of the baseline models on CIFAR10 and Caltech256 and the number of parameters that they contain.} \label{tab:results_final_layer} \vspace{-10pt} \end{table} Since AMC does not define an order in which the layers must be shrunk, we must define one ourselves. We experiment with two orderings, namely top-down (TD) and round robin (RR). In TD, we start from the penultimate layer, maximally shrink it, and move down the network. In round robin (RR), we again start from the penultimate layer, but instead of maximally shrinking it we shrink each layer by at most a factor $\gamma$ and then move to the next layer. We also introduce additional constraints in \texttt{LREShrink} (Algorithm \ref{alg:LRE-AMC}) to prevent the removal of units with independent activations and to stop removing units when $A$ becomes too error prone. Specifically, we do not apply the update if the average norm of the rows in the update is larger than the average norm of the rows in the weight matrix, i.e. ${\frac{1}{n^o_k}\sum_i\norm{\hat{W}_{k+1}[i] - W_{k+1}[i]}_2 > \frac{1}{n^o_k}\sum_i\norm{W_{k+1}[i]}_2}$, or if $\mathbbm{E}\left[|A_{.j}|\right]\mathbbm{E}\left[Z^l_{.j}\right] > 1$. To measure the effect of adjusting the network parameters using LRE, we run experiments in which we do not adjust the network parameters using the LRE formalism presented in \ref{sec : readjust}. Instead, we prune the neurons with linearly dependent activations by simply dropping the corresponding columns from the weight matrix, and keeping the other columns as is.
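The two safeguards can be expressed as a small check (a NumPy sketch under our own naming, with `update_is_safe` a hypothetical helper; we assume both weight matrices are given with the pruned column already dropped, so their shapes align):

```python
import numpy as np

def update_is_safe(W_dropped, W_adjusted, A, Z, j):
    """Return False when the LRE update should be skipped, per the two
    constraints above: (1) the average row norm of the update exceeds
    the average row norm of the weights; (2) the pruned unit j carries
    too much predicted signal, E[|A[:, j]|] * E[Z[j]] > 1.

    W_dropped  : next-layer weights with column j simply removed
    W_adjusted : the same matrix after LRE readjustment (same shape)
    A          : redundancy matrix; Z : (n_units, n_samples) activations."""
    row_change = np.linalg.norm(W_adjusted - W_dropped, axis=1).mean()
    row_scale = np.linalg.norm(W_dropped, axis=1).mean()
    if row_change > row_scale:
        return False
    if np.abs(A[:, j]).mean() * Z[j].mean() > 1.0:
        return False
    return True

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 5))
A = np.zeros((6, 6))
Z = np.zeros((6, 100))
safe = update_is_safe(W, W + 0.01, A, Z, j=2)      # tiny update: accepted
unsafe = update_is_safe(W, W + 100.0, A, Z, j=2)   # huge update: rejected
```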
Unless otherwise specified, we use the following hyperparameter settings. For experiments with AlexNet we use a learning rate of $10^{-4}$ and set $T=4$ in equation \ref{eq:amc-loss}. For experiments with VGG16 we use a learning rate of $5\times10^{-5}$ and set $T=5$. For both models we set $\lambda=0.75$ in equation \ref{eq:amc-loss} and $\gamma=0.75$. During the fine-tuning phase, we tune the model for up to 50 epochs. We stop when the accuracy comes within $\epsilon=0.05$ of the precompression accuracy. If the accuracy on the held-out set does not improve for $3$ epochs, we reduce the learning rate by 50\%. We stop tuning if the learning rate drops below $10^{-6}$. \subsection{Results} We present the percentage reduction in the number of model parameters, and the consequent loss in accuracy, in Table \ref{tab:main-results}. The ``wAdj'' and ``noAdj'' settings correspond to the settings in which LRE is and is not used, respectively. Under both settings we demonstrate that our technique is able to decimate the number of parameters of AlexNet and VGG16, pruning as much as 99\% of the model parameters. \subsubsection{Top Down Shrinking} When we shrink the layers in top-down order, we find that adjusting the model weights with LRE results in a significant reduction in model parameters. Adjusting the weights of AlexNet using LRE allows us to remove almost 30\% more parameters on CIFAR10 and 47\% more parameters on Caltech256, compared to when we do not adjust the weights. Furthermore, we observe that adjusting the weights allows us to prune additional neurons/filters from both the dense and the convolution layers. This is an impressive result, not only because LRE-AMC is able to reduce the number of parameters in the network drastically, but also because it yields better compression on the more difficult dataset.
When we ran the same experiment with VGG16, we found that adjusting the weights using LRE results in slightly lower compression on CIFAR10; on Caltech256, however, LRE is able to prune an additional 20\% of the model parameters, most of which are pruned from the dense layers. \begin{table}[] \centering \input{results_table} \caption{\small The percentage reduction in the number of total parameters ($-\Delta_A$), dense layer parameters ($-\Delta_D$), convolutional layer parameters ($-\Delta_C$), and classification accuracy ($-\Delta_{Acc}$). } \label{tab:main-results} \vspace{-40pt} \end{table} \subsubsection{Round Robin Shrinking} When we shrink the layers in a round robin fashion, we find that we can achieve greater compression of the convolutional layers. Since the convolution layers scan the input, computing their activations involves a lot of floating-point operations (FLOPs). Reducing the number of convolutional filters greatly reduces the FLOPs of the model. Interestingly, performing round robin shrinking has a more significant impact on the total number of model parameters in AlexNet when the weights are not adjusted using LRE. In fact, under round robin shrinking, not adjusting the weights yields \textit{slightly} better compression, both in terms of reduction in the number of model parameters and in terms of accuracy degradation. We also observe that under round robin shrinking we achieve lower compression in terms of dense layer parameters on Caltech256, but we are able to prune away many more parameters from the convolutional layers. This seems to suggest that round robin shrinking would be ideal when minimizing FLOPs is more important than reducing memory consumption, while top-down shrinking should be preferred when memory consumption is to be optimized. \subsection{Analysis} \subsubsection{Accuracy Error Tradeoff} In this section we present experimental results that describe the compression-performance trade-off of our approach.
As mentioned, we have used a tolerance of $\epsilon=0.05$ to limit the deterioration of accuracy during and after compression. In Figure \ref{fig:tolPlot} we plot the decrease in accuracy against the percentage of parameters pruned, for top-down and round robin shrinking of AlexNet on Caltech-256, at different values of $\epsilon$. Figure \ref{fig:td-tolPlot} exhibits the expected trend, in that, as we decrease $\epsilon$, both the decrease in accuracy and the fraction of removed parameters decrease. We see that the parameter reduction falls \textit{much} faster as we decrease $\epsilon$, indicating that under the top-down shrinking scheme additional accuracy comes at a steep cost in compression performance. On the other hand, Figure \ref{fig:rr_tolPlot} exhibits a very different trend. As we decrease $\epsilon$ from 0.05 to 0.03 the compression improves; however, it deteriorates when $\epsilon=0.01$ and improves again when $\epsilon=0.0$. Even though compression suffers when we set $\epsilon=0.0$, the deterioration is modest compared to top-down shrinking. We do not have a reliable explanation for this phenomenon, because the repetitive nature of the round-robin shrinking approach makes its analysis complicated. \begin{figure} \vspace{-10pt} \setlength{\abovecaptionskip}{-1.5pt} \setlength{\belowcaptionskip}{-14pt} \centering \subfloat[]{\includegraphics[width=0.3\textwidth]{images/alexnet_caltech256_adjustW_tolerance_plot.png} \label{fig:td-tolPlot}} \subfloat[]{\includegraphics[width=0.3\textwidth]{images/alexnet_roundRobin_caltech256_adjustW_tolerance_plot.png} \label{fig:rr_tolPlot}} \caption{\label{fig:tolPlot}\small The change in the compression percentage of AlexNet as the accuracy tolerance is reduced from 5\% to 0\% (a) under top-down shrinking and (b) round robin shrinking on Caltech-256.
In both settings the weights are adjusted using LRE.} \end{figure} It is entirely possible that removing neurons/filters in a certain order can lead to greater compression than removing neurons in some other order. The complexity arises if the optimal order spans across layers, something which the LRE framework does not account for. Though we do not prove it conclusively, the round robin shrinking approach seems to maintain compression even under very stringent accuracy constraints, and therefore shows promise as an effective model compression approach that could benefit from further study. \subsubsection{Representation learning} \begin{figure}[t!] \centering \begin{minipage}{0.32\textwidth} \includegraphics[width=1.\textwidth]{images/vgg16_cifar10_layer44_tsne_embeddings.png} \includegraphics[width=1.\textwidth]{images/vgg16_cifar10_layer44_pca_embeddings.png} \end{minipage} \begin{minipage}{0.32\textwidth} \includegraphics[width=1.\textwidth]{images/vgg16_cifar10_allLayers_reverse_roundRobin_predictive_pruning_postActivation_distilled_layer44_tsne_embeddings.png} \includegraphics[width=1.\textwidth]{images/vgg16_cifar10_allLayers_reverse_roundRobin_predictive_pruning_postActivation_distilled_layer44_pca_embeddings.png} \end{minipage} \begin{minipage}{0.32\textwidth} \includegraphics[width=1.\textwidth]{images/vgg16_cifar10_allLayers_reverse_roundRobin_predictive_pruning_adjustingW_postActivation_distilled_layer44_tsne_embeddings.png} \includegraphics[width=1.\textwidth]{images/vgg16_cifar10_allLayers_reverse_roundRobin_predictive_pruning_adjustingW_postActivation_distilled_layer44_pca_embeddings.png} \end{minipage} \caption{\label{fig : tsne}\small Low-dimensional representations of the final convolutional layer outputs of the VGG16 network trained on CIFAR-10. Points are test images colored by label. The first row shows t-SNE embeddings, the second row PCA projections.
The left column corresponds to the original network; the middle one to the compressed network without weight readjustment; the right one to the compressed network with readjustment.} \vspace{-15pt} \end{figure} As we discussed earlier, one of the main uses of a compressed representation on a task such as image recognition should be to provide useful pretrained representations for potential downstream tasks. When using performance-guided pruning, it is possible to degrade the learned representations while relying on the final layers to exploit artifacts to maintain good performance on the recognition task. To make sure that our method does not fall into this trap, we provide some insight into the final convolutional layer of our VGG16 network trained on CIFAR-10 through low-dimensional visualizations. In Figure \ref{fig : tsne} we plot t-SNE visualizations to observe separability. We observe that in the compressed representations the images are almost as separable as they are in the original network. Since t-SNE distorts the point clusters, we also plot PCA representations to assess the shape of the clusters. We can see that weight readjustment plays a significant role here: without it, the clusters are more scattered and less convex, while with it they are more similar to those of the original network. This vouches for redundancy elimination and weight readjustment as compression methods that respect the semantics of the data and are arguably compatible with transfer learning between vision tasks. \section{Conclusion} We have presented a novel neural model compression technique called LRE-AMC, which eliminates units (neurons or convolutional filters) whose activations are linearly dependent on the activations of other units in the same layer. Since entire units are pruned away, and not just individual parameters, the weight matrices in the compressed model are smaller, which reduces the model's memory requirements and makes computation more efficient.
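The linear-dependence criterion can be illustrated with a toy stand-in (this is not the exact LRE procedure, and the tolerance and data are of our choosing): flag a unit when its activation vector over a batch is, up to numerical tolerance, a linear combination of the other units' activations.

```python
import numpy as np

def redundant_units(acts, tol=1e-8):
    """acts: (n_samples, n_units) activation matrix. Returns a boolean list
    marking units whose activation column lies (numerically) in the span of
    the remaining columns, found via a least-squares fit."""
    flags = []
    for j in range(acts.shape[1]):
        others = np.delete(acts, j, axis=1)
        coef, *_ = np.linalg.lstsq(others, acts[:, j], rcond=None)
        resid = np.linalg.norm(others @ coef - acts[:, j])
        flags.append(bool(resid <= tol * max(1.0, np.linalg.norm(acts[:, j]))))
    return flags

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
# Unit 3 is an exact linear combination of units 0 and 1.
acts = np.column_stack([base, base[:, 0] + 2.0 * base[:, 1]])
flags = redundant_units(acts)
```

Here units 0, 1 and 3 form a mutually dependent group (any one of them is expressible through the others), while unit 2 is not; a pruning pass would remove one unit from the dependent group and fold its contribution into the next layer's weights.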
We demonstrate the efficacy of LRE-AMC by applying it to AlexNet and VGG16 and show that we can remove more than 99\% of the units in both these models while suffering only a 5-6\% loss in accuracy on CIFAR-10. We have also applied LRE-AMC to the more difficult Caltech-256 dataset and achieved more than 80\% compression. Furthermore, we show that after compression the data remains separable in the model's intermediate layers, suggesting that the intermediate representation could carry sufficient information for transfer learning tasks. For future work we will explore methods of incorporating information from the derivatives of the subsequent layers to better estimate the effect of removing a unit on the overall output of the network, and prune neurons that minimally impact the output. We expect that this modification will result in smaller models and greater accuracy. \bibliographystyle{splncs04}
\section{Introduction} An $n\times n$ complex matrix $A$ is called \emph{coninvolutory} if $\bar AA=I_n$ and \emph{skew-coninvolutory} if $\bar AA=-I_n$ (and so $n$ is even since $\det(\bar AA)\ge 0$). We prove that each matrix of size $n\times n$ with $n\ge 2$ is a sum of 5 coninvolutory matrices and each matrix of size $2m\times 2m$ is a sum of 5 skew-coninvolutory matrices. These results are somewhat unexpected since the set of matrices that are sums of involutory matrices is very restricted. Indeed, if $A^2=I_n$ and $J$ is the Jordan form of $A$, then $J^2=I_n$, $J=\diag(1,\dots,1,-1,\dots,-1)$, and so $\trace(A)=\trace(J)$ is an integer. Thus, if a matrix is a sum of involutory matrices, then its trace is an integer. Wu \cite[Corollary 3]{wu} and Spiegel \cite[Theorem 5]{Spiegel} prove that an $n\times n$ matrix can be decomposed into a sum of involutory matrices if and only if its trace is an integer, which must be even if $n$ is even. We also prove that each square complex matrix is a sum of a coninvolutory matrix and a condiagonalizable matrix. A matrix is \emph{condiagonalizable} if it can be written in the form $\bar S^{-1}DS$ in which $S$ is nonsingular and $D$ is diagonal; the set of condiagonalizable matrices is described in \cite[Theorem 4.6.11]{H-J}. Similar problems are discussed in Wu's survey \cite{wu1}. Wu \cite{wu1} shows that each matrix is a sum of unitary matrices and discusses the number of summands (see also \cite{mer}). Wu \cite{wu} establishes that $M$ is a sum of idempotent matrices if and only if $\trace(M)$ is an integer and $\trace(M)\ge \rank(M)$. Rabanovich \cite{rab} proves that every square complex matrix is a linear combination of three idempotent matrices. Abara, Merino, and Paras \cite{aba} study coninvolutory and skew-coninvolutory matrices.
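The two definitions are easy to check numerically; a small sketch (the example matrices below are ours, chosen for illustration):

```python
import numpy as np

def is_coninvolutory(a, sign=1):
    """Check bar(A) A = sign * I (sign=+1: coninvolutory, sign=-1: skew)."""
    return np.allclose(np.conj(a) @ a, sign * np.eye(a.shape[0]))

# Any real involutory matrix is coninvolutory, since conjugation does nothing.
swap = np.array([[0.0, 1.0], [1.0, 0.0]])
# A genuinely complex example: a diagonal of unimodular entries, since
# conj(e^{it}) e^{it} = 1 for every real t.
phases = np.diag(np.exp(1j * np.array([0.7, -1.3])))
# The real 90-degree rotation squares to -I, so it is skew-coninvolutory.
rot = np.array([[0.0, -1.0], [1.0, 0.0]])

ok = (is_coninvolutory(swap), is_coninvolutory(phases), is_coninvolutory(rot, sign=-1))
```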
\section{Each matrix is a sum of a coninvolutory matrix and a condiagonalizable matrix} Two matrices $A$ and $B$ over a field $\F$ are \emph{similar} (or, more accurately, \emph{$\F$-similar}) if there exists a nonsingular matrix $S$ over $\F$ such that $S^{-1}AS=B$. A matrix $A$ is \emph{diagonalizable} if it is similar to a diagonal matrix. Two complex matrices $A$ and $B$ are \emph{consimilar} if there exists a nonsingular matrix $S$ such that $\bar S^{-1}AS=B$; a canonical form under consimilarity is given in \cite[Theorem 4.6.12]{H-J}. A complex matrix $A$ is \emph{real-condiagonalizable} if it is consimilar to a diagonal real matrix. By the statement (b) of the following theorem, each square complex matrix is a sum of two condiagonalizable matrices, one of which may be taken to be coninvolutory. \begin{theorem}\label{t1} \begin{itemize} \item[\rm(a)] Each square matrix over an infinite field is a sum of an involutory matrix and a diagonalizable matrix. \item[\rm(b)] Each square complex matrix is a sum of a coninvolutory matrix and a real-condiagonalizable matrix. \item[\rm(c)] Each square complex matrix is consimilar to $I_n+D$, in which $D$ is a real-condiagonalizable matrix. \item[\rm(d)] Each square complex matrix is consimilar to $C+D$, in which $C$ is coninvolutory and $D$ is a diagonal real matrix. \end{itemize} \end{theorem} \begin{proof} The theorem is trivial for $1\times 1$ matrices. Let $\F$ be any field. The \emph{companion matrix of a polynomial} \[ f(x)=x^m-a_1x^{m-1}-\dots-a_m\in\F[x] \] is the matrix \begin{equation}\label{kjw} F(f):= \begin{bmatrix} 0&&0&a_m\\1&\ddots&&\vdots\\ &\ddots&0&a_{2}\\0&&1&a_1 \end{bmatrix}\in\F^{m\times m}; \end{equation} its characteristic polynomial is $f(x)$. 
By \cite[Section 12.5]{van}, \begin{equation}\label{mlr1} \parbox[c]{0.7\textwidth}{each $A\in\F^{n\times n}$ is $\F$-similar to a direct sum of companion matrices whose characteristic polynomials are powers of prime polynomials; this direct sum is uniquely determined by $A$, up to permutations of summands.} \end{equation} Moreover, \begin{equation}\label{mlr} \parbox[c]{0.7\textwidth}{if $f,g\in\F[x]$ are relatively prime, \\then $F(f)\oplus F(g)$ is $\F$-similar to $F(fg)$.} \end{equation} \medskip (a) Let $A$ be a matrix of size $n\times n$ with $n\ge 1$ over an infinite field $\F$. It is similar to a direct sum of companion matrices: \[ SAS^{-1}=B=F_1\oplus\dots\oplus F_t,\qquad S\text{ is nonsingular}. \] If $B=C+D$ is the sum of an involutory matrix $C$ and a diagonalizable matrix $D$, then $A=S^{-1}CS+S^{-1}DS$ is also the sum of an involutory matrix and a diagonalizable matrix. Thus, it suffices to prove the statement (a) for $B$. Moreover, it suffices to prove it for an arbitrary companion matrix \eqref{kjw}. Each matrix \[ G=\begin{bmatrix} 1&&0&b_m\\&\ddots&&\vdots\\ &&1&b_{2}\\0&&&-1 \end{bmatrix}\in\F^{m\times m} \] is involutory. Changing $b_2,\dots,b_{m}$, we get \[ F(f)-G+I_m= \begin{bmatrix} 0&&0&c_m\\1&\ddots&&\vdots\\ &\ddots&0&c_{2}\\0&&1&a_1+2 \end{bmatrix} \] with arbitrary $c_2,\dots,c_{m}\in\mathbb F$. For each pairwise unequal $\lambda _1,\dots,\lambda _m\in\mathbb F$ such that $\lambda _1+\dots+\lambda _m=a_{1}+2=\text{trace}(F(f)-G+I_m)$, we can take $G$ such that the characteristic polynomial of $F(f)-G+I_m$ is equal to \[ x^m-(a_1+2)x^{m-1}-c_{2}x^{m-2}-\dots-c_m =(x-\lambda_1)\cdots(x-\lambda_m). \] Thus, \begin{equation}\label{mry} \parbox[c]{0.7\textwidth}{$F(f)-G+I_m$ is $\F$-similar to $\diag(\lambda _1,\dots,\lambda _m)$,} \end{equation} and so the matrix $F(f)-G$ is diagonalizable. \medskip (b) Let us prove the statement (b) for $A\in \C^{n\times n}$ with $n>1$. 
By \cite[Corollary 4.6.15]{H-J}, \begin{equation}\label{jdr} \text{each square complex matrix is consimilar to a real matrix,} \end{equation} hence $A=\bar S^{-1}BS$ for some $B\in\R^{n\times n}$ and nonsingular $S\in\C^{n\times n}$. By the statement (a), $B=C+D$, in which $C\in\R^{n\times n}$ is involutory and $D\in\R^{n\times n}$ is real-diagonalizable. Then $D=R^{-1}ER$, in which $R\in\R^{n\times n}$ is nonsingular and $E\in\R^{n\times n}$ is diagonal. Thus, $A=\bar S^{-1}CS+\overline{(RS)}^{-1}E(RS)$ is a sum of a coninvolutory matrix and a real-condiagonalizable matrix. (c) Let $A\in \C^{n\times n}$ with $n>1$. By (b), $A=C+D$, in which $C$ is coninvolutory and $D$ is real-condiagonalizable. By \cite[Lemma 4.6.9]{H-J}, $C$ is coninvolutory if and only if there exists a nonsingular $S$ such that $C=\bar S^{-1}S$ (that is, $C$ is consimilar to the identity). Then $\bar SAS^{-1}=I_n+\bar SDS^{-1}$, in which $\bar SDS^{-1}$ is real-condiagonalizable. (d) This statement follows from (b). \end{proof} \begin{corollary}\label{rem1} Each $m\times m$ companion matrix \eqref{kjw} with $m\ge 2$ is $\F$-similar to $G+\diag(\mu _1,\dots,\mu_m)$, in which $G$ is involutory and $\mu _1,\dots,\mu_m\in\F$ are arbitrary pairwise unequal numbers such that $\mu _1+\dots+\mu_m= a_1+2-m$. \end{corollary} We get this corollary from \eqref{mry} by taking $\diag(\mu_1,\dots,\mu_m ):=\diag(\lambda _1,\dots,\lambda _m)-I$. \section{Each $n\times n$ matrix with $n>1$ is a sum of 5 coninvolutory matrices} \begin{theorem}\label{t2} Each $n\times n$ complex matrix with $n\ge 2$ is a sum of 4 coninvolutory matrices if $n=2$ and 5 coninvolutory matrices if $n\ge 2$. \end{theorem} \begin{proof} Let us prove the theorem for $M\in\C^{n\times n}$. By \eqref{jdr}, $M=\bar S^{-1}AS$ for some $A\in\R^{n\times n}$ and a nonsingular $S$. If $A=C_1+\dots+C_k$ is a sum of coninvolutory matrices, then $M=\bar S^{-1}C_1S+\dots+\bar S^{-1}C_kS$ is also a sum of coninvolutory matrices. 
Thus, it suffices to prove Theorem \ref{t2} for $A\in\R^{n\times n}$. \medskip \emph{Case 1: $n=2$.} By \cite[Theorem 3.4.1.5]{H-J}, each $2\times 2$ real matrix is $\R$-similar to one of the matrices \begin{equation}\label{ndj} \begin{bmatrix} a & 0 \\ 0 & b \\ \end{bmatrix},\quad \begin{bmatrix} a & 1 \\ 0 & a \\ \end{bmatrix},\quad \begin{bmatrix} a & b \\ -b & a \\ \end{bmatrix}\ (b>0),\qquad a,b\in\R. \end{equation} (i) The first matrix is a sum of 4 coninvolutory matrices since it is represented in the form \[ \begin{bmatrix} a & 0 \\ 0 & b \\ \end{bmatrix} =\begin{bmatrix} (a-b)/2 & 0 \\ 0 & -(a-b)/2 \\ \end{bmatrix}+\begin{bmatrix} (a+b)/2 & 0 \\ 0 & (a+b)/2 \\ \end{bmatrix} \] and each summand is a sum of two coninvolutory matrices because \[ \begin{bmatrix} 2c & 0 \\ 0 & -2c \\ \end{bmatrix} =\begin{bmatrix} c & 1 \\ (1-c^2) & -c \\ \end{bmatrix}+\begin{bmatrix} c & -1 \\ -(1-c^2) & -c \\ \end{bmatrix} \] and \begin{equation}\label{ssd2} \begin{bmatrix} 2c & 0 \\ 0 & 2c \\ \end{bmatrix} =\begin{bmatrix} c & i \\ (1-c^2)i & c \\ \end{bmatrix}+\begin{bmatrix} c & -i \\ -(1-c^2)i & c \\ \end{bmatrix} \end{equation} are sums of two coninvolutory matrices for all $c\in\R$. (ii) The second matrix is a sum of 4 coninvolutory matrices since \[ \begin{bmatrix} a & 1 \\ 0 & a \\ \end{bmatrix} =\begin{bmatrix} a & 0 \\ 0 & a \\ \end{bmatrix}+\begin{bmatrix} 0 & 1 \\ 0 & 0 \\ \end{bmatrix} \] and each summand is a sum of two coninvolutory matrices: the first due to \eqref{ssd2} and the second due to \[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ \end{bmatrix} =\begin{bmatrix} 1 & 1 \\ 0 & -1 \\ \end{bmatrix}+\begin{bmatrix} -1 & 0 \\ 0 & 1 \\ \end{bmatrix}. 
\] (iii) The third matrix is a sum of 4 coninvolutory matrices since \[ \begin{bmatrix} a & b \\ -b & a \\ \end{bmatrix} =\begin{bmatrix} a & 0 \\ 0 & a \\ \end{bmatrix}+\begin{bmatrix} 0 & b \\ -b & 0 \\ \end{bmatrix} \] and each summand is a sum of two coninvolutory matrices due to \eqref{ssd2} and \[ \begin{bmatrix} 0 & b \\ -b & 0 \\ \end{bmatrix} =\begin{bmatrix} 1 & b \\ 0 & -1 \\ \end{bmatrix}+\begin{bmatrix} -1 & 0 \\ -b & 1 \\ \end{bmatrix}. \] Thus, each $2\times 2$ matrix $A$ is a sum of 4 coninvolutory matrices. Applying this statement to $A-I_2$, we get that $A=I_2+(A-I_2)$ is also a sum of 5 coninvolutory matrices. \medskip \emph{Case 2: $n$ is even}. By Theorem \ref{t1}(d), $A$ is consimilar to $C+D$, where $C$ is coninvolutory and $D$ is a diagonal real matrix, which proves Theorem \ref{t2} in this case due to Case 1 since $D$ is a direct sum of $2\times 2$ matrices. \medskip \emph{Case 3: $n$ is odd}. By \eqref{mlr1}, $A$ is $\R$-similar to a direct sum \begin{equation}\label{feo} B=F(f_1)\oplus\dots\oplus F(f_t),\qquad f_i(x)=x^{m_i}-a_{i1}x^{m_i-1} -\dots-a_{im}\in\R[x]. \end{equation} We can suppose that $m_1>1$. Indeed, if $m_i>1$ for some $i$, then we interchange $F(f_1)$ and $F(f_i)$. Let $m_1=\dots=m_t=1$ and let $a_{11} \ne 0$ (if $B=0$, then $B=I+(-I)$ is the sum of involutory matrices). If $a_{11}=a_{21}$, then we replace $a_{11}$ by $-a_{11}$ using the consimilarity of $[a_{11}]$ and $[-a_{11}]$. By \eqref{mlr}, $F(f_1)\oplus F(f_2)=[a_{11}]\oplus[a_{21}]$ is $\R$-similar to $F((x-a_{11})(x-a_{21}))$. We obtain $B$ of the form $F(f_1)\oplus C$ with $m_1>1$. By Corollary \ref{rem1}, $F(f_1)$ is $\R$-similar to $G+\diag(\mu _1,\dots,\mu_{m_1})$, in which $G$ is a real involutory matrix and $\mu _1,\dots,\mu_{m_1}\in\R$ are arbitrary pairwise unequal numbers such that $\mu _1+\dots+\mu_{m_1}= a_{11}+2-{m_1}$. We take $\mu _1=2$ (and then $\mu_2=-2$) if $f_1(x)=x^2-a_{12}$. We take $\mu _1=0$ if $f_1(x)\ne x^2-a_{12}$. 
Applying Theorem \ref{t1}(d) to the other direct summands $F(f_2),\dots, F(f_t)$, we find that $B$ is $\R$-similar to \[ \begin{bmatrix} G & 0\\ 0& C\\ \end{bmatrix}+ \begin{bmatrix} \mu_1 & 0\\ 0& D\\ \end{bmatrix}, \] in which the first summand is coninvolutory and the second is a diagonal real matrix. By Case 1, \[ D=C_1+C_2+C_3+C_4, \] in which $C_1,C_2,C_3,C_4$ are coninvolutory matrices. Then \[ \begin{bmatrix} \mu_1 & 0 \\ 0 & D \\ \end{bmatrix}= \begin{bmatrix} 1 & 0 \\ 0 & C_1 \\ \end{bmatrix} + \begin{bmatrix} \mu_1-1 & 0 \\ 0 & C_2 \\ \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & C_3 \\ \end{bmatrix} +\begin{bmatrix} -1 & 0 \\ 0 & C_4 \\ \end{bmatrix} \] is a sum of 4 coninvolutory matrices. \end{proof} \section{Each $2m\times 2m$ matrix is a sum of 5 skew-coninvolutory matrices} We recall that an $n\times n$ complex matrix $A$ is called \emph{skew-coninvolutory} if $\bar AA=-I_n$ (and so $n$ is even since $\det(\bar AA)\ge 0$). \begin{theorem}\label{ts} Each $2m\times 2m$ complex matrix is a sum of at most 5 skew-coninvolutory matrices. \end{theorem} \begin{proof} Let us prove the theorem for $A\in\C^{2m\times 2m}$. If $A=\bar S^{-1}BS$ and $B=C_1+\dots+C_k$ is a sum of skew-coninvolutory matrices, then $A=\bar S^{-1}C_1S+\dots+\bar S^{-1}C_kS$ is a sum of skew-coninvolutory matrices too. Thus, it suffices to prove the theorem for any matrix that is consimilar to $A$. By \cite[Theorem 4.6.12]{H-J}, each square complex matrix is consimilar to a direct sum, uniquely determined up to permutation of summands, of matrices of the following two types: \begin{equation}\label{mmu} J_n(\lambda ):=\begin{bmatrix} \lambda & 1&&0 \\ &\lambda & \ddots \\ &&\ddots&1\\ 0&&&\lambda \\ \end{bmatrix}\quad (n\text{-by-}n,\ \lambda \in\R,\ \lambda \ge 0) \end{equation} and \begin{equation}\label{ccd} H_{2m}(\mu):=\begin{bmatrix} 0& I_n \\ J_n(\mu )&0\\ \end{bmatrix}\quad (\mu \in\C,\ \mu<0\text{ if }\mu \in\R). 
\end{equation} Thus, we suppose that $A$ is a direct sum of matrices of these types. \medskip \emph{Case 1: $A$ is diagonal.} Then $A$ is a sum of 4 skew-coninvolutory matrices since $A$ is a direct sum of $m$ real diagonal 2-by-2 matrices and each real diagonal 2-by-2 matrix is represented in the form \[ \begin{bmatrix} a & 0 \\ 0 & b \\ \end{bmatrix} =\begin{bmatrix} (a-b)/2 & 0 \\ 0 & -(a-b)/2 \\ \end{bmatrix}+\begin{bmatrix} (a+b)/2 & 0 \\ 0 & (a+b)/2 \\ \end{bmatrix} \] in which each summand is a sum of two skew-coninvolutory matrices because \[ \begin{bmatrix} 2c & 0 \\ 0 & -2c \\ \end{bmatrix} =\begin{bmatrix} c & -1 \\ (1+c^2) & -c \\ \end{bmatrix}+\begin{bmatrix} c & 1 \\ -(1+c^2) & -c \\ \end{bmatrix} \] and \begin{equation}\label{bfl} \begin{bmatrix} 2c & 0 \\ 0 & 2c \\ \end{bmatrix} =\begin{bmatrix} c & -i \\ (1+c^2)i & c \\ \end{bmatrix}+\begin{bmatrix} c & i \\ -(1+c^2)i & c \\ \end{bmatrix} \end{equation} are sums of two skew-coninvolutory matrices for all $c\in\R$. \medskip \emph{Case 2: $A$ is a direct sum of matrices of type \eqref{mmu}.} Then it has the form \begin{equation*}\label{saj} A=\begin{bmatrix} \lambda_1 & \varepsilon _1&&0 \\ &\lambda_2 & \ddots \\ &&\ddots&\varepsilon _{2m-1}\\ 0&&&\lambda_{2m} \\ \end{bmatrix} \end{equation*} in which all $\lambda _i\ge 0$ and all $\varepsilon_i\in\{0,1\}$. Represent $A$ in the form $A=C+D$, in which \[ C:=\begin{bmatrix} c_1 & 1 \\ -1+c_1^2 & -c_1 \\ \end{bmatrix}\oplus\dots\oplus \begin{bmatrix} c_m & 1 \\ -1+c_m^2 &- c_m \\ \end{bmatrix},\quad \text{all }c_i\in\R, \] is a skew-coninvolutory matrix. Let us show that $c_1,\dots,c_m$ can be chosen such that all eigenvalues of $D$ are distinct real numbers. The matrix $D$ is upper block-triangular with the diagonal blocks \[ D_1:=\begin{bmatrix} \lambda _1-c_1 & \varepsilon _1-1 \\ 1-c_1^2 & \lambda _2+c_1 \\ \end{bmatrix},\ \dots, \ D_m:= \begin{bmatrix} \lambda _{2m-1}-c_m &\varepsilon _{2m-1}- 1 \\ 1-c_m^2 &\lambda _{2m}+ c_m \\ \end{bmatrix}. 
\] Hence, the set of eigenvalues of $D$ is the union of the sets of eigenvalues of $D_1,\dots ,D_m$. Suppose that $c_1,\dots,c_{k-1}$ have been chosen such that the eigenvalues of $D_1,\dots,D_{k-1}$ are distinct real numbers $\nu _1,\dots,\nu _{2k-2}$. Depending on $\varepsilon _{2k-1}\in\{0,1\}$, the matrix $D_k$ is \begin{equation}\label{gfe} \begin{bmatrix} \lambda _{2k-1}-c_k & -1 \\ 1-c_k^2 & \lambda _{2k}+c_k \\ \end{bmatrix} \quad\text{or}\quad \begin{bmatrix} \lambda _{2k-1}-c_k & 0 \\ 1-c_k^2 & \lambda _{2k}+c_k \\ \end{bmatrix}. \end{equation} \begin{itemize} \item Let $D_k$ be the first matrix in \eqref{gfe}. Its characteristic polynomial is \begin{align*} \chi _k(x)&=x^2-\trace(D_k)x+\det(D_k)\\&= x^2-(\lambda _{2k-1}+\lambda _{2k})x+( \lambda _{2k-1}-c_k)(\lambda _{2k}+c_k)+1-c_k^2. \end{align*} Its discriminant is \begin{align*} \Delta _k=&(\lambda _{2k-1}+\lambda _{2k})^2-4[\lambda _{2k-1}\lambda _{2k}+(\lambda _{2k-1}-\lambda _{2k})c_k-2c_k^2+1]\\ =&(\lambda _{2k-1}-\lambda _{2k})^2+4 (-\lambda _{2k-1}+\lambda _{2k})c_k+8c_k^2-4. \end{align*} For a sufficiently large $c_k$, $\Delta _k>0$ and so the roots of $\chi _k(x)$ are some distinct real numbers $\nu _{2k-1}$ and $\nu _{2k}$. Since \[\nu _{2k-1}+\nu _{2k}=\trace(D_k)=\lambda _{2k-1}+\lambda _{2k},\] we have \begin{align*} \det(D_k)&=\nu _{2k-1}\nu _{2k}=\nu _{2k-1} (\lambda _{2k-1}+\lambda _{2k}-\nu _{2k-1})\\&= (\lambda _{2k-1}+\lambda _{2k}-\nu _{2k})\nu _{2k}. \end{align*} Taking $c_k$ such that \[ \det(D_k)\ne \nu _{i} (\lambda _{2k-1}+\lambda _{2k}-\nu _{i})\quad\text{for all }i=1,\dots,2k-2, \] we get $\nu _{2k-1}$ and $\nu _{2k}$ that are not equal to $\nu _{1},\dots,\nu _{2k-2}$. \item Let $D_k$ be the second matrix in \eqref{gfe}. Then its eigenvalues are $\lambda _{2k-1}-c_k$ and $\lambda _{2k}+c_k$. We choose a real $c_k$ such that these eigenvalues are distinct and not equal to $\nu _{1},\dots,\nu _{2k-2}$.
\end{itemize} We have constructed the real skew-coninvolutory matrix $C$ such that $A=C+D$, in which $D$ is a real matrix with distinct eigenvalues $\nu _{1},\dots,\nu _{2m}\in \R$. Since $D$ is $\R$-similar to a diagonal matrix and by Case 1, $D$ is a sum of 4 skew-coninvolutory matrices. \medskip \emph{Case 3: $A$ is a direct sum of matrices of types \eqref{mmu} and \eqref{ccd}.} Due to Case 2, it suffices to prove that each matrix $H_{2m}(\mu)$ is a sum of 5 skew-coninvolutory matrices. Write \[ \begin{bmatrix} 0& I_n \\ J_n(\mu )&0\\ \end{bmatrix}=\begin{bmatrix} 0& I_n \\ -I_n&0\\ \end{bmatrix}+\begin{bmatrix} 0& 0 \\ J_n(\mu )+I_n&0\\ \end{bmatrix}. \] The first summand is a skew-coninvolutory matrix, and so we need to prove that the second summand is a sum of 4 skew-coninvolutory matrices. By \eqref{jdr}, there exists a nonsingular $S$ such that $B:=\bar S^{-1}(J_n(\mu )+I_n)S$ is a real matrix. Then the second summand is consimilar to a real matrix: \[ \begin{bmatrix} \bar S^{-1}&0 \\ 0&\bar S^{-1}\\ \end{bmatrix} \begin{bmatrix} 0& 0 \\ J_n(\mu )+I_n&0\\ \end{bmatrix} \begin{bmatrix} S&0 \\ 0&S\\ \end{bmatrix}= \begin{bmatrix} 0&0 \\ B&0\\ \end{bmatrix}, \] which is the sum of two coninvolutory matrices: \begin{equation}\label{nur} \begin{bmatrix} 0&0 \\ B&0\\ \end{bmatrix}= \begin{bmatrix} I_n&0 \\ B&-I_n\\ \end{bmatrix}+ \begin{bmatrix} -I_n&0 \\ 0&I_n\\ \end{bmatrix}. \end{equation} By \cite[Lemma 4.6.9]{H-J}, each coninvolutory matrix is consimilar to the identity matrix. Hence, each summand in \eqref{nur} is consimilar to $I_{2n}$, which is a sum of two skew-coninvolutory matrices due to \eqref{bfl}. Thus, the matrix \eqref{nur} is a sum of 4 skew-coninvolutory matrices. \end{proof} \section*{Acknowledgments} The work of V.V. Sergeichuk was done during his visit to the University of S\~ao Paulo supported by FAPESP, grant 2015/05864-9.
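The explicit $2\times 2$ summands used in the proofs above (the coninvolutory pair in \eqref{ssd2} and the skew-coninvolutory pair in \eqref{bfl}) are easy to verify numerically; a minimal sketch, with helper name and test values of our choosing:

```python
import numpy as np

def split_scalar_matrix(c, sign=1):
    """Write diag(2c, 2c) as C1 + C2 with conj(Ci) Ci = sign * I, following
    the explicit formulas in the proofs (sign=+1: the coninvolutory pair,
    sign=-1: the skew-coninvolutory pair)."""
    if sign == 1:
        c1 = np.array([[c, 1j], [(1 - c**2) * 1j, c]])
    else:
        c1 = np.array([[c, -1j], [(1 + c**2) * 1j, c]])
    c2 = np.diag([2.0 * c, 2.0 * c]) - c1
    return c1, c2

checks = []
for sign in (1, -1):
    for c in (0.0, 1.5, -2.3):
        c1, c2 = split_scalar_matrix(c, sign)
        checks.append(np.allclose(np.conj(c1) @ c1, sign * np.eye(2))
                      and np.allclose(np.conj(c2) @ c2, sign * np.eye(2))
                      and np.allclose(c1 + c2, np.diag([2.0 * c, 2.0 * c])))
```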
\section{Introduction} The top quark was discovered in 1995 at Fermilab in the pair production mode ($t\bar{t}$ events) through strong interactions[1]. Several properties of the top quark, such as its mass, charge, lifetime, production cross-section and rare decays through flavour changing neutral currents (FCNC), have been explored at Fermilab, but most of these studies are limited by low statistics. Due to the high event rate at the LHC (one $t\bar{t}$ event per second at a luminosity of $10^{33}$ cm$^{-2}$s$^{-1}$), these top quark properties can be studied extensively at ATLAS, offering the possibility to discover physics beyond the Standard Model. Due to its very short lifetime, the top quark decays before it has time to hadronise, but its spin properties are not washed out by hadronisation; rather, the top quark spin information propagates to its decay products. This unique feature allows direct top quark spin studies. The top quark spin can be reconstructed by measuring the angular distributions of its decay products in the top quark rest frame. Measurements of the $W$-boson polarization complement top quark spin studies and can help disentangle the origin of new physics. In the Standard Model, flavour changing neutral currents (FCNC) are strongly suppressed at tree level due to the Glashow-Iliopoulos-Maiani mechanism. At the one-loop level, small FCNC contributions are expected due to the CKM mixing matrix. The existence of $q\bar{q}$ bound states (mesons) of all other quarks encourages us to look for $t\bar{t}$ bound states. New resonances and gauge bosons strongly coupled to the top quark are expected in several theoretical models; they can decay into $t\bar{t}$ pairs, leading to deviations from the Standard Model $t\bar{t}$ production cross-section and top quark kinematics[2]. These new particles can reveal themselves in the $t\bar{t}$ invariant mass distribution.
\section{Basic Event Selection} We have used semileptonic ($t\bar{t}\rightarrow W W b \bar{b} \rightarrow l\nu j_1 j_2 b \bar{b}$ with $l=e,\mu$) and dileptonic ($t\bar{t}\rightarrow W W b \bar{b} \rightarrow l \nu l^{'} \nu^{'} b \bar{b}$ with $l,l^{'}=e,\mu$) decays of $t\bar{t}$ events for the top quark charge reconstruction. We have used only the semileptonic decay channel for the $Wtb$ anomalous couplings, the $t\bar{t}$ spin and spin correlation, and the $t\bar{t}$ resonance studies. For the semileptonic topology, we require exactly one isolated electron (muon) with $|\eta|<2.5$ and $p_{\mathrm{T}} > 25 \GeVc$ ($p_{\mathrm{T}} > 20 \GeVc$), at least 4 jets with $|\eta|<2.5$ and $p_{\mathrm{T}} > 30 \GeVc$, at least 2 jets tagged as $b$-jets and missing transverse energy above 20 \GeVc[3]. For the dileptonic topology, we require exactly two isolated electrons (muons) with $|\eta|<2.5$ and $p_{\mathrm{T}} > 25 \GeVc$ ($p_{\mathrm{T}} > 20 \GeVc$), at least 2 jets with $|\eta|<2.5$ and $p_{\mathrm{T}} > 30 \GeVc$, at least 2 jets tagged as $b$-jets and missing transverse energy above 20 \GeVc[3]. Since the final state topologies for the rare top quark decays via FCNC are different from the semileptonic and dileptonic topologies, we have used different selection criteria, which are described in section 2.3. \subsection{Top quark charge measurement} We have presented the measurement of the top quark charge based on the reconstruction of the charge of the top quark decay products. The $W$ boson charge can be measured directly using its leptonic decay modes. Due to quark confinement inside hadrons, we cannot measure the $b$ quark charge directly. We have used $b$-jet charge weighting (weighted sum of all the tracks in the jet) and semileptonic $b$-decay approaches to measure the $b$ quark charge.
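The $b$-jet charge weighting is only sketched above; a common form of such a weight is the $p_{\mathrm{T}}$-weighted track charge, illustrated below with a made-up track list (the exponent $\kappa=0.5$ is a typical choice in the literature, not necessarily the value used in this analysis):

```python
def jet_charge(tracks, kappa=0.5):
    """Weighted jet charge Q = sum_i q_i pT_i^kappa / sum_i pT_i^kappa,
    where tracks is a list of (charge, pT) pairs."""
    den = sum(pt**kappa for _, pt in tracks)
    if den == 0.0:
        return 0.0
    return sum(q * pt**kappa for q, pt in tracks) / den

# Toy b-jet: track charges in units of e, pT in GeV (illustrative values).
tracks = [(-1, 12.0), (+1, 3.0), (-1, 5.0), (+1, 1.5)]
q_jet = jet_charge(tracks)
```

Combined with the charge of the lepton from the $W$ decay, the sign of such a weighted jet charge is what allows the Standard Model and exotic scenarios to be separated statistically.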
By using the weighting technique, it is possible to distinguish between the $b$-jet charges associated with leptons of opposite charges with a $5\sigma$ significance with only $0.1$~fb$^{-1}$ of data ($1$~fb$^{-1}$ for the semileptonic $b$-decay approach), which allows the Standard Model (${t\rightarrow W^{+}b}$) and exotic (${t'\rightarrow W^{-}b}$) scenarios to be distinguished. Reconstruction of the magnitude of the top quark charge seems to be possible with $\simeq 1$~fb$^{-1}$ using the weighting technique, but it is necessary to check the performance of the method with real data. The reconstructed $b$ quark and top quark charges are shown in Figure 1 with $1$~fb$^{-1}$ of simulated data. The resulting top quark charge is $Q_{t}= 0.67 \pm 0.06\;(stat) \pm 0.08\;(syst)$. \begin{figure}[!h] \centering \epsfig{file=charge.eps, height=3cm, width=9cm, clip=} \caption{Left: the $b$-jet charge ($Q_{\mathrm{b}}$) distribution; right: the reconstructed top quark charge ($Q_{\mathrm{t}}$).} \label{bkgd_signal} \end{figure} \subsection{Top quark spin and spin correlations and \emph{Wtb} anomalous couplings} In the Standard Model, the top quarks are produced unpolarised in $t\bar{t}$ events, but their spins are correlated[4]. The production asymmetries ($A$ and $A_{\mathrm{D}}$) can be obtained from the angular distribution of the top quark decay products. In addition to the $t\bar{t}$ spin correlation, we can measure the $W$ polarization. The $W$-boson can be produced with right ($F_{\mathrm{R}}$), left ($F_{\mathrm{L}}$) or longitudinal ($F_{\mathrm{0}}$) polarization, with $F_{\mathrm{0}} + F_{\mathrm{L}} + F_{\mathrm{R}} = 1$. The expected measurement results, using 1~fb$^{-1}$ of simulated data, are shown in Table 1. It is also possible to parameterise new physics in the $Wtb$ vertex using anomalous coupling parameters $V_{\mathrm{L}}$, $V_{\mathrm{R}}$, $g_{\mathrm{L}}$ and $g_{\mathrm{R}}$. Figure 2 shows the expected 68\% C.L.
allowed regions on the $Wtb$ anomalous couplings for 1~fb$^{-1}$. \begin{table}[htbp] \caption{$W$-boson polarization and top quark spin correlation parameters with statistical and systematic errors. } \begin{center} \begin{tabular}{|c|c|c|c|} \hline $W$-boson polarization & $F_{\mathrm{L}}$ & $F_{\mathrm{0}}$ & $F_{\mathrm{R}}$ \\ & 0.29 $\pm$0.02 $\pm$0.03 & 0.70 $\pm$0.04 $\pm$0.02 & 0.01 $\pm$0.02 $\pm$0.02 \\ \hline\hline $t\bar{t}$ spin correlation & $A$ & $A_{\mathrm{D}}$ & \\ & 0.67 $\pm$0.17$\pm$0.18 & -0.40 $\pm$0.11 $\pm$0.09 & \\ \hline \end{tabular} \label{tab:pola_result} \end{center} \end{table} \begin{figure}[!] \hspace{0.022\textwidth} \begin{minipage}[t]{.3\textwidth} \centering \includegraphics[width=1.05\textwidth,angle=0]{fcnc.eps} \caption{The expected 68\% C.L. allowed regions on the $Wtb$ anomalous couplings for $L=1$~fb$^{-1}$.} \label{resolutionMtt} \end{minipage} \hfill \begin{minipage}[t]{.25\linewidth} \centering \includegraphics[width=1.05\textwidth,angle=0]{wtb.eps} \caption{95\% C.L. expected limits on the $BR(t \to q\gamma)$ vs $BR(t \to qZ)$} \label{resolutionMtt} \end{minipage} \hfill \hspace{0.022\textwidth} \begin{minipage}[t]{.3\textwidth} \centering \includegraphics[width=1.05\textwidth,angle=0]{reso.eps} \caption{5$\sigma$ discovery potential of a generic narrow $t\bar{t}$ resonance as a function of the integrated luminosity.} \label{DiscovPot} \end{minipage} \end{figure} \subsection{ATLAS sensitivity to FCNC top quark decays} We have studied the rare top quark decays via FCNC ($t\rightarrow qX$, $X=\gamma, Z, g$) using $t\bar{t}$ events in $1$~fb$^{-1}$ of simulated LHC data. One of the top quarks is assumed to decay through its dominant decay mode ($t\to bW$), while the other top quark decays via one of the FCNC modes ($t\to qZ$, $t\to q\gamma$, $t\to qg$). Due to the large QCD background, it is very difficult to search for the FCNC signal using modes where $W$ or $Z$ decay hadronically.
For this reason, only leptonic decays of both $W$ and $Z$ were taken into account. For signal events, we have used $t\bar{t} \to b\ell\nu qX$, where $X=\gamma,Z\to\ell\ell,g$ and $\ell=\mathrm{e},\mu$, and taken into account the expected Standard Model backgrounds. For $t\bar{t} \to bWq\gamma$, we require exactly one lepton with $p_{\mathrm{T}}>25$~GeV, at least two jets with $p_{\mathrm{T}}>20$~GeV, one $\gamma$ with $p_{\mathrm{T}}>25$~GeV and $\not\!p_{\mathrm{T}}>20$~GeV. For $t\bar{t}\to bWqg$, we require exactly one lepton with $p_{\mathrm{T}}>25$~GeV, exactly three jets with $p_{\mathrm{T}}>40, 20, 20$~GeV and $\not\!p_{\mathrm{T}}>20$~GeV. For $t\bar{t}\to bWqZ$, we require exactly three leptons with $p_{\mathrm{T}}>25, 15, 15$~GeV, at least two jets with $p_{\mathrm{T}}>30, 20$~GeV and $\not\!p_{\mathrm{T}}>20$~GeV. The neutrino four-momentum was estimated using a kinematic fit[3]. The expected 95\%~C.L. upper limits on the branching ratios for $t\to qZ$, $t\to q\gamma$ and $t\to qg$ are 10$^{-3}$, 10$^{-3}$ and 10$^{-2}$, respectively, using $1$~fb$^{-1}$ of simulated data. Figure 3 shows the expected 95\%~C.L. limits for the first $1$~fb$^{-1}$ in the absence of signal for the $t\to q\gamma$ and $t \to qZ$ channels. \subsection{\textbf{$t\bar{t}$} resonances} The discovery potential for generic $t\bar{t}$ resonances with the ATLAS detector has been explored as a function of the resonance mass for the semileptonic $t\bar{t}$ channel[5]. $t\bar{t}$ resonances were produced with \textsc{Pythia} for the $Z' \to t\bar{t}$ channel. The common selection criteria have been applied for event reconstruction. The main source of background for $t\bar{t}$ resonances is Standard Model $t\bar{t}$ production (other backgrounds such as $W$+jets are negligible). It is possible to discover a 700 GeV $Z'$ resonance produced with $\sigma \times Br(Z' \to t\bar{t}) = 11$~pb at the $5\sigma$ level with 1~fb$^{-1}$ of data (Figure 4).
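The scale of such a resonance signal can be illustrated with a naive counting estimate; the selection efficiency and background yield below are invented placeholders for illustration, not ATLAS numbers:

```python
import math

def counting_significance(sigma_br_pb, lumi_ifb, eff, n_bkg):
    """Naive S/sqrt(B): expected signal events are sigma x BR (in pb) times
    the integrated luminosity (in fb^-1, so 1 pb gives 1000 events per fb^-1)
    times an assumed overall efficiency; n_bkg is the expected background in
    the resonance mass window."""
    n_sig = sigma_br_pb * 1000.0 * lumi_ifb * eff
    return n_sig / math.sqrt(n_bkg)

# sigma x BR = 11 pb at L = 1 fb^-1, with placeholder efficiency and background.
z = counting_significance(11.0, 1.0, 0.05, 5000.0)
```

Since the signal grows linearly with luminosity while the background fluctuation grows as its square root, this estimate scales like $\sqrt{L}$, consistent with the luminosity dependence shown in Figure 4.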
Using a model-independent approach, ATLAS can exclude Kaluza--Klein gluon resonances up to 1.5~TeV with only 1~fb$^{-1}$ of data [3]. \section{References}

\noindent [1] F. Abe \textit{et al.} (CDF Collaboration), Phys. Rev. Lett. \textbf{74}, 2626 (1995); S. Abachi \textit{et al.} (D0 Collaboration), \textit{ibid.} \textbf{74}, 2632 (1995).

\noindent [2] B. Lillie, L. Randall and L.-T. Wang, hep-ph/0701166 (2007).

\noindent [3] ATLAS Collaboration, CERN-OPEN-2008-020, Geneva (2008), to appear.

\noindent [4] W. Bernreuther, Nucl. Phys. B \textbf{690}, 81 (2004).

\noindent [5] E. Cogneras and D. Pallin, ATL-PHYS-PUB-2006-033 (2006).

\section{Acknowledgements} The author would like to thank the organizers of the ICHEP08 conference for creating a fruitful collaborative environment. My sincere thanks to Antonio Onofre, Patrick Skubic and Martine Bosman for valuable suggestions. \end{document}
\section{Setup details} The diamond sample containing the NV center is cooled to $\approx 4$~K. The optical setup is schematically depicted in Fig.~2a. Laser light at $637$~nm is used to apply the optical $\pi$-pulses. In the photon detection path, the emitted 637~nm photons are separated from reflected excitation light using a cross-polarization configuration and time filtering. The off-resonant phonon side band emission is separated by dichroic filtering and sent to a detector (D1) for spin readout. The 637~nm photons are combined with a strong pump laser (emission wavelength of $1064$~nm) and directed into the PPLN crystal for the DFG process. Afterwards, the remaining pump laser light is filtered out by a prism, a long-pass dielectric filter and a narrow-band fiber Bragg grating. The total conversion efficiency of the DFG setup is $\eta_c\approx 17\%$ \cite{Dreau2018}. To ensure the frequency and phase stability of the converted photons, both the NV excitation laser and the pump laser are locked to an external reference cavity (Stable Laser Systems). Figure 2b shows the experimental sequence used in the experiments. Our protocol starts with checking whether the NV center is in the desired charge state and on resonance with the control lasers \cite{Robledo2010}. Once this test is passed, the spin-photon entangled state is generated. If a photon is detected, we read out the spin state in the appropriate basis and re-start the protocol. In case no photon is detected, we reinitialize the spin and again generate an entangled state. After 250 failed attempts to detect a photon, we re-start the protocol. \begin{figure*}[t] \centering \includegraphics[width = 0.8\textwidth]{fig2.png} \caption{(a) Experimental setup for the spin-telecom photon entangled state generation. Emitted 637~nm photons are combined with the pump laser (1064~nm) in the difference frequency generation setup (DFG1). The two lasers are frequency-locked to an external reference cavity. 
Tomography in the Z-basis: the frequency converted photons are detected using a superconducting nanowire detector (D2) discriminating the early and late time bins. (b) Experimental protocol for generating and detecting spin-telecom photon entangled states (see main text). (c) Results for correlations measured in the Z~basis both for the red and for the frequency-converted photons at telecom wavelength. } \end{figure*} We first measure spin-photon correlations in the ZZ basis. To measure the photon in the Z basis, we send the frequency-converted photons directly to a superconducting nanowire detector (D2) that projects the photonic qubit in the time-bin basis, and, upon photon detection, we read out the spin qubit in the corresponding Z basis. Figure 2c shows the observed correlation data. The probability to measure the spin in $\ket{0}$ is plotted for photon detection events in the early and late time-bins. We have performed this measurement for both the 637 nm photons (red) and for the frequency-converted photons at 1588~nm (purple). For the unconverted photons we measure correlations that are perfect within measurement uncertainty (contrast of $E_{Z} = |P_E\left( \ket{0}\right) - P_L\left( \ket{0}\right)| = 0.997 \pm 0.018$). For the frequency converted photons we measure $P_E\left( \ket{0}\right) = 0.09 \pm 0.05$ for the early time bin and $P_L\left( \ket{0}\right) = 0.95 \pm 0.05$ for the late time bin, yielding a contrast of $E_{Z} = 0.86 \pm 0.07$. All data in this work are corrected for spin readout infidelity and dark counts of the detectors, both of which are determined independently. The contrast for the telecom photons is lowered by noise coming from spontaneous parametric down converted (SPDC) photons and Raman scattering induced by the strong pump laser~\cite{Dreau2018,Fejer2010}. We characterize this noise contribution separately by blocking the incoming 637~nm path and find an expected signal to noise ratio (SNR) between $4.8$ and $7.7$. 
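These numbers can be cross-checked with a few lines of arithmetic. The relation $E_{\mathrm{max}} = \mathrm{SNR}/(\mathrm{SNR}+1)$ used below rests on the simplifying assumption, made here for illustration only, that the noise photons are uncorrelated with the spin state:

```python
# Cross-check of the quoted contrast numbers.
# Assumption (ours): noise uncorrelated with the spin state, so the
# maximum observable contrast is SNR / (SNR + 1).
P_early, P_late = 0.09, 0.95
E_Z = abs(P_early - P_late)              # measured ZZ contrast: 0.86

snr_lo, snr_hi = 4.8, 7.7
E_max_lo = snr_lo / (snr_lo + 1.0)       # lower end of the bound
E_max_hi = snr_hi / (snr_hi + 1.0)       # upper end of the bound
E_max_mid = 0.5 * (E_max_lo + E_max_hi)  # ~0.86, consistent with 0.85 +/- 0.03
```

The midpoint of the SNR-limited contrast range indeed reproduces the bound quoted in the text.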
This SNR bounds the maximum observable contrast for the ZZ correlations to $0.85 \pm 0.03$, and thus fully explains our data. We use this SNR later to determine the different noise contributions for the correlation data in the other bases. Additionally, we conclude from the relative number of detection events in the early and late time bin (659 vs 642 events) that the amplitudes of the two parts of the spin-photon entangled state are well balanced. To verify the spin-photon entanglement, we measure spin-photon correlations in two other spin-photon bases by sending the frequency-converted photons into the imbalanced fiber interferometer (see Fig.~3a). The fiber arm length difference is $\approx 40$~m, which corresponds to a photon travel time difference of $190$~ns between the two arms. In this way the early time bin taking the long arm overlaps at the second beam splitter with the late time bin taking the short arm, thus allowing us to access the phase relation between the two. To access a specific photon qubit basis, we introduce a tunable phase difference $\Delta \phi$ between the long and short arms of the interferometer. In particular, detection of a photon by the detector D3 projects the spin into the state \begin{equation} \ket{\text{NV}}_{D3} = \frac{1}{\sqrt{2}}\left( \ket{0} + e^{i\left(\Delta\phi-\frac{\pi}{4} \right)}\ket{1}\right). \end{equation} We use two orthogonal set points, labelled X and Y, with $\Delta\phi = \pi/4$ and $\Delta\phi = 3\pi/4$, respectively, as indicated in Fig.~3b. A key requirement for this experiment is that the interferometer is stable with respect to the frequency of the down-converted photons; any instabilities in the interferometer will reduce the interference contrast and prevent us from accessing the true spin-photon correlations. For this reason the interferometer is thermally and vibrationally isolated. 
Furthermore, we split the experiment into cycles of 1 second (see Fig.~4a), of which the first 100 ms is used to actively stabilize the phase setpoint of the interferometer. Within this 100 ms, we feed metrology light into the interferometer in the reverse direction via shutter S and a circulator. This metrology light is generated by a second DFG setup, using input from the excitation and pump lasers, thus ensuring a fixed frequency relation between the metrology light and the frequency-converted photons. By comparing the light intensities on detectors PD2 and PD3 with the values corresponding to the desired $\Delta\phi$ setpoint as determined from a visibility fringe (calibrated every $100$~s), an error signal is computed and feedback is applied to the fiber piezo stretcher (FPS). After this adjustment the light intensities are measured again. A histogram of the measured phases during the experiments relative to the setpoints is plotted in Fig.~4b. We note that one could also measure the spin-photon correlations at the second output of the interferometer, which for symmetric states such as Eq.~(1) would yield the same correlations but with opposite sign; however, in the current experiment the slow ($\approx 1$~s) recovery of the detector after being blinded due to metrology light leakage through this output port prevented us from using the second output. \begin{figure} \centering \includegraphics[width = 0.4\textwidth]{fig3.png} \caption{ (a) Polarization-maintaining fiber-based imbalanced interferometer used for the photon state readout in X and Y bases. The frequency-converted single photons are directed into the interferometer. One output port of the interferometer is connected to a superconducting nanowire detector (detector D3). Every second the phase of the interferometer is stabilized. Classical frequency-converted light created by a second DFG setup (DFG2) is sent into the interferometer via a shutter S and a circulator. 
Light intensities measured by photodiodes PD2 and PD3 are used to generate a feedback signal to the fiber piezo stretcher (FPS) to maintain the target phase $\Delta \phi$. (b) Bloch sphere with the selected photon qubit readout bases indicated on it, and the corresponding phase set points of the imbalanced interferometer.} \end{figure} In the remaining 900 ms of each cycle, spin-photon correlations are measured using the same protocol as for the ZZ basis (see Fig. 2b). To read out the NV spin state in the appropriate rotated basis, the eigenstates $\ket{\text{X}}$ ($\ket{\text{Y}}$) and $\ket{\text{-X}}$ ($\ket{\text{-Y}}$) are mapped onto the $\ket{0}$ and $\ket{1}$ states, respectively, by applying an appropriate MW pulse before optical readout. Figure 4c shows the measured spin-photon correlations in the X and Y basis (bottom), along with the expected correlations for the ideal state (top). The letters indicate the spin and photon bases, respectively; for example, -XX indicates that the NV spin is measured along the -X axis on the Bloch sphere, while the photon is projected on +X. The measured contrast between the correlations and anti-correlations is $E_{X} = 0.52 \pm 0.07$ in the X basis and $E_{Y} = 0.69 \pm 0.07$ in the Y basis. All data show clear (anti-)correlation between the NV spin qubit and the telecom photonic qubit. With the contrast data from all three orthogonal photon readout bases, we calculate the fidelity $\mathcal{F}$ of our produced state (conditioned on photon detection) to the maximally entangled state of Eq.~1 as \begin{equation} \mathcal{F}= \frac{1}{4}\left(1+ E_{X} + E_{Y} + E_{Z}\right), \end{equation} yielding a fidelity of $\mathcal{F} = 0.77 \pm 0.03$. This value exceeds the classical boundary of $0.5$ by more than eight standard deviations, proving the generation of entanglement between the NV spin qubit and the frequency-converted photonic qubit. 
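As a quick arithmetic check, plugging the measured contrasts into the fidelity formula above reproduces the quoted value:

```python
# Arithmetic check of the fidelity formula using the measured contrasts.
E_X, E_Y, E_Z = 0.52, 0.69, 0.86
F = 0.25 * (1 + E_X + E_Y + E_Z)   # = 0.7675, i.e. 0.77 after rounding
```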
For comparison, reported fidelities for unconverted NV spin-photon entangled states range from $\approx 0.7$~\cite{Togan2010, Trupke2018} to more than $0.9$ (estimated from an observed spin-spin entangled state fidelity of $\approx 0.9$~\cite{Hensen.Bell.test}). The observed fidelity is reduced compared to the ideal value of 1 due to several factors. First, the initial spin-photon entangled state has imperfections, for instance due to photon emission and re-excitation of the NV center during the optical $\pi$-pulse~\cite{Humphreys2018} and small frequency shifts due to spectral diffusion. In addition, the remaining frequency variations of the two locked lasers ($\sim$200~kHz) lead to phase uncertainty between the two terms in Eq.~(1). All these effects reduce the contrast of the XX and YY correlations, but not that of the ZZ correlations. Second, SPDC and Raman scattered photons, produced during the frequency conversion process, add noise to the state as described above and reduce correlations in all bases. Based on these factors, we expect a state fidelity in the range $0.82-0.87$. \begin{figure}[tb] \centering \includegraphics[width = 0.4\textwidth]{fig4.png} \caption{(a) Experimental protocol for measurements in the photon X and Y basis. (b) Measured phase difference $\Delta\phi$ just before stabilization (orange, with 900 ms free evolution time) and directly after stabilization (blue) for the two setpoints $\Delta\phi_X = \pi/4$ and $\Delta\phi_Y = 3\pi/4$. From the standard deviations in these data, we estimate a residual phase drift of $0.05$ and $0.01$ rad/s for the X and Y photon qubit readout bases, respectively. (c) Results for the correlations in the X and Y basis in purple. The top panel shows ideal correlations. 
In total we have measured 1595 photon detection events.} \end{figure} The slight difference between the expected and measured state fidelity could be due to inaccuracies and fluctuations in setting the interferometer phase setpoint. Imperfect interferometer settings result in measurement bases that deviate slightly from the intended X and Y bases, reducing the maximally observable correlations. Therefore, the obtained $\mathcal{F} = 0.77 \pm 0.03$ sets a lower bound on the true entangled-state fidelity. In conclusion, we demonstrated entanglement between an NV center spin qubit and a time-bin encoded photonic qubit at telecom wavelength, which is an essential step towards long-distance quantum networks based on remote entanglement between NV center nodes. In future experiments the observed state fidelity can be increased further in several ways. Narrower-band frequency filtering after the DFG1 setup would reduce the noise added in the frequency conversion, as the current narrow-band filter has a linewidth $\sim 10$ times larger than that of the NV-emitted resonant ZPL photons. The signal could be increased by improving the conversion efficiency. Finally, the emission rate of resonant photons and the collection efficiency can be increased by placing the NV center in an optical cavity \cite{Faraon2012, Johnson2015, Hunger2016, Riedel2017, Bogdanovic2017,Englund2018}. {\it Acknowledgements.} We thank L. Schriek, E. Nieuwkoop, J. Lugtenburg and W. Peterse for experimental assistance, and M.~J.~A.~de~Dood and C. Osorio Tamayo for useful discussions. We acknowledge financial support from the Netherlands Organisation for Scientific Research (NWO) through a VICI grant and the Zwaartekracht Grant Quantum Software Consortium and the European Research Council through an ERC Consolidator Grant.\\
\section{Introduction}% \label{sec:Introduction} It is now accepted, within the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model, that galaxy mergers play a fundamental role in the formation and evolution of galaxies \citep{White1978}. These events change the nature of galaxies in a number of ways: post-mergers exhibit a diminished central metallicity \citep[e.g.][]{Kewley2006,Scudder2012}; while merging galaxies not only have both enhanced morphological disturbances \citep[e.g.][]{Conselice2003,Lotz2008,Casteels2014} and star formation rates \citep[e.g.][]{Patton2013,Thorp2019} in comparison to non-mergers, but also tend to exhibit active galactic nuclei \citep[e.g.][]{Ellison2011,Ellison2019}. Modern cosmological simulations, such as those from the EAGLE \citep{Schaye2015,Crain2015}, Illustris \citep{Vogelsberger2014,Vogelsberger2014a,Genel2014,Sijacki2015}, IllustrisTNG \citep{Marinacci2018,Naiman2018,Nelson2018,Pillepich2018, Springel2018,Nelson2019} and Horizon-AGN \citep{Dubois2014} projects, have been able not only to corroborate most observational findings about mergers, but also to provide additional insights that can be used to inform and interpret observations. For example, one of the most interesting and studied topics is the determination of the galaxy merger rate, namely, the number of mergers per unit time. A proper determination of this quantity is important to fully understand the role of interactions in galactic structure and star formation rates, as well as to test hierarchical galaxy formation models. Theoretically, the merger rate is estimated via semi-empirical (e.g. \citealt{Stewart2009,Hopkins2010}) and semi-analytic (e.g. \citealt{Guo2008}) models as well as using hydrodynamic cosmological simulations (e.g. \citealt{Rodriguez-Gomez2015}). 
In particular, \citet{Rodriguez-Gomez2015} used the Illustris simulation to quantify the galaxy-galaxy merger rate as a function of stellar mass, merger mass ratio, and redshift, finding that it increases steadily with stellar mass and redshift, while being in good agreement with observational constraints for both intermediate-sized and massive galaxies. On the observational side, it is important to note that the merger rate cannot be estimated directly. Instead, the merger fraction, i.e. the fraction of galaxies observed to be undergoing a merger, must be computed first, and then translated into a rate by dividing by an appropriate \textit{observability} time-scale reflecting the period over which a merger is detectable. The merger fraction is typically computed by using observations of close pairs or morphologically disturbed galaxies. Close-pair merger candidates have been identified by several authors (e.g. \citealt{Lin2004,Propris2005,Kartaltepe2007,Besla2018,Duncan2019}) as galaxies with a neighbour within a small projected angular separation and with a small line-of-sight relative radial velocity. Alternatively, since there is a close connection between galactic structure and merging processes, some types of morphologically disturbed galaxies are also considered to be ideal merger candidates. In particular, non-parametric morphological diagnostics, such as the concentration--asymmetry--smoothness (CAS, \citealt{Conselice2003}), Gini--$M_{20}$ \citep{Lotz2004}, and multimode--intensity--deviation \citep[MID,][]{Freeman2013} statistics, have been successfully used to identify galaxy mergers and to study, among other things, the evolution of the observational merger rate \citep[e.g.][]{Lotz2011}, its dependence on galaxy stellar mass \citep[e.g.][]{Casteels2014}, as well as to obtain the galaxy merger rate and merging time-scales from hydrodynamical simulations, allowing direct comparisons with observational estimates \citep[e.g.][]{Bignone2016,Whitney2021}. 
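Schematically, the conversion from merger fraction to merger rate described above can be written as (notation ours, for illustration)
\begin{equation*}
  \mathcal{R}_{\mathrm{merg}} \simeq \frac{f_{\mathrm{merg}}}{\langle T_{\mathrm{obs}} \rangle},
\end{equation*}
where $f_{\mathrm{merg}}$ is the observed merger fraction and $\langle T_{\mathrm{obs}} \rangle$ is the average observability time-scale of the adopted merger indicator.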
More recently, machine learning and deep learning methods have been adopted, for the same purpose, as an alternative to more standard methods. For instance, \citet{Snyder2019} considered a high-mass galaxy sample from the original Illustris simulation to create, based on image-based morphological calculations and merger statistics, a training data set that was fed to a random forest classifier. The resulting model was then used to perform observational and theoretical estimates of the merger rate as a function of redshift. This method was also recently used to perform merger classifications on JWST-like simulated images \citep{2022arXiv220811164R}. Similarly, \citet{2019ApJ...872...76N} combined non-parametric morphological statistics with a linear discriminant analysis (LDA) classifier to characterize simulated mergers and found that the LDA classifier outperformed the individual metrics. Furthermore, convolutional neural networks have been adopted to study galaxy morphology in different contexts. To list a few, they have been applied to differentiate disk galaxies from bulge-dominated systems in both observations and cosmological simulations \citep{HuertasCompany2015,Huertas-Company2019}, to identify merging galaxies in cosmological simulations \citep{Ciprijanovic2020}, to determine the effect of merging events on the star formation rates of galaxies \citep{Pearson2019}, to predict the stage of interaction on synthetic galaxy images and to assess the degree of realism and post-processing needed on these synthetic images to perform adequate deep learning estimations \citep{Bottrell2019}, and to identify high-mass major merger events in observations and simulations and subsequently estimate the merger fraction \citep{2020A&A...644A..87W}. Despite the well-established link between mergers and morphology in massive galaxies, the incidence and effects of mergers in dwarf galaxies $(M_\ast < 10^{9.5}\,\mathrm{M}_\odot)$ are more uncertain. 
This is an important topic to explore, since galaxy mergers can be transformative events and dwarfs represent the majority of the galaxy population. For example, \citet{Casteels2014} studied a local galaxy sample at $z<0.2$ and $10^{8}\,\mathrm{M}_\odot<M_\ast<10^{11.5}\,\mathrm{M}_\odot$ to infer the mass-dependent galaxy merger fraction and merger rate by measuring the asymmetry parameter from galaxy images. Their estimated major merger fraction is a decreasing function of stellar mass, falling from $4\%$ at $M_\ast\sim10^{9}\,\mathrm{M}_\odot$ to $2\%$ at $M_\ast\sim10^{11}\,\mathrm{M}_\odot$. This finding suggests that galaxy interactions might become increasingly important for lower-mass galaxies. Similarly, \citet{Besla2018} computed the frequency of companions for a low-redshift dwarf galaxy ($0.013<z<0.0252$; $2\times10^{8}\,\text{M}_\odot<M_\ast<5\times10^{9}\,\text{M}_\odot$) sample from the Sloan Digital Sky Survey (SDSS), comparing it to a mock galaxy sample from the original Illustris simulation. One of the goals of their study was to estimate the major pair fraction as a function of stellar mass, finding that this quantity increases slowly with stellar mass, but does not follow the decreasing trend reported by \citet{Casteels2014}. Motivated by such opposing results, in this paper we revisit the topic of the mass-dependent merger fraction, with an emphasis on the regime of dwarf galaxies. In this work, we use the TNG$50$ simulation from the IllustrisTNG project to investigate the galaxy merger fraction at the low-mass end ($8.5\leqslant\log(M_\ast/\text{M}_\odot)\leqslant11$), and whether it can be inferred statistically from morphological disturbances in large samples of galaxies. For this purpose, we generate a large set of synthetic images of TNG$50$ galaxies, including the effects of dust attenuation and scattering, designed to match observations from the Kilo-Degree Survey (KiDS). 
We then calculate several image-based morphological statistics for both the real and simulated galaxy samples, and explore the connection between morphology and mergers in the simulation. Instead of solely using the asymmetry statistic as a merger indicator, we follow a similar approach to that of \citet{Snyder2019} and train a random forest classifier using several non-parametric morphological indicators as the model features and merger statistics from the simulation merger trees as the ground truth. The trained models are then applied to the observational sample in order to estimate the local merger fraction as a function of galaxy mass in the real Universe. This paper is structured as follows. In \cref{sub:The IllustrisTNG simulations} we briefly review the IllustrisTNG simulations, and in \cref{sub:Observational sample,sub:Simulated sample} we describe the observational and simulated galaxy samples used in our analysis. In \cref{sub:Synthetic image generation,sub:Source segmentation} we describe the design and generation of synthetic images from the simulated sample, as well as the source segmentation and deblending procedures. Subsequently, in \cref{sub:Morphological_measurements} we present the morphological diagnostics measured on both galaxy samples. In \cref{sub:Merger identification} we define our merging and non-merging simulated samples, and in \cref{sub:Random forest classification} we describe the training and calibration of our random forest classifier. 
Our main results are given in \cref{sec:results}, where we examine the morphological differences between our observational and synthetic galaxy samples (\cref{sub:The morphologies of TNG50 galaxies}) as well as between merging and non-merging simulated galaxies (\cref{sub:The morphologies of intrinsic mergers}), evaluate the performance of our random forest classifier (\cref{sub:Random forest classification performance}), and present the resulting mass-dependent merger fraction (\cref{sub:The merger incidence of GAMA-KiDS observations}). Finally, we discuss our results in \cref{sec:Discussion} and present our conclusions in \cref{sec:Summary}. \section{Methodology}% \label{sec:Methodology} \subsection{The IllustrisTNG simulations}% \label{sub:The IllustrisTNG simulations} The IllustrisTNG project is a suite of $N$-body magneto-hydrodynamical cosmological simulations that model dark and baryonic matter assuming a $\Lambda$CDM framework \citep{Marinacci2018,Naiman2018,Nelson2018,Pillepich2018,Springel2018,Nelson2019}. The simulation suite consists of three cubic volumes with periodic boundary conditions: TNG50, TNG100, and TNG300, which measure 51.7, 110.7, and 302.6 Mpc on a side, respectively. In this work we use the highest resolution version of the TNG$50$ simulation \citep{2019MNRAS.490.3196P,2019MNRAS.490.3234N}, which has a volume of $\quantity(\SI{51.7}{\mega\pc})^3$ at a baryonic (dark) mass resolution of $8.5\cdot10^4\,\mathrm{M}_\odot$ ($4.5\cdot10^5\,\mathrm{M}_\odot$) and a spatial resolution (effectively set by the gravitational softening length of stellar and DM particles) of ${\sim}\SI{300}{\pc}$ at $z=0$. The simulation starts at redshift $z=127$ and is evolved down to $z=0$. The assumed cosmological parameters, obtained from \citet{Ade2016}, are $\Omega_{\Lambda,\,0}=0.6911$, $\Omega_{m,\,0}=0.3089$, $\Omega_{b,\,0}=0.0486$, $\sigma_8=0.8159$, $n_s=0.9667$ and $h=0.6774$. 
The galaxy formation model in IllustrisTNG includes prescriptions for gas radiative cooling, star formation and evolution, supernova feedback, metal enrichment, and feedback from supermassive black holes \citep[see][for a full description]{Weinberger2017a,2018MNRAS.473.4077P}. This model was tuned to approximately match several observational properties, such as the star formation rate density at $z=0-8$, the galaxy mass function and sizes at $z=0$, the stellar-to-halo and BH-to-halo mass relations at $z=0$ and the gas mass fraction within galaxy clusters \citep{2018MNRAS.473.4077P}. Since the simulations were not adjusted to match galaxy morphology, it is noteworthy that there is a reasonable level of morphological consistency between TNG$100$ galaxies and a comparable observational sample \citep{Rodriguez-Gomez2019}, as well as other properties of galaxies such as their star formation activity \citep{2019MNRAS.485.4817D}, resolved star formation \citep{10.1093/mnras/stab2131}, and metallicities \citep{2019MNRAS.484.5587T}. In order to identify DM haloes in the simulation, friends-of-friends groups are constructed using the percolation algorithm by \citet{Davis1985}, linking dark matter particles based on their inter-particle separation. The linking length used in the simulations is $b=0.2$ (in units of the mean interparticle distance). Furthermore, subhaloes are identified with the \textsc{\textsf{subfind}} algorithm \citep{Springel2001a,Dolag2009}. \begin{figure} \begin{center} \includegraphics[scale=0.6] {./figures/cropped_ra_dec.pdf-1.png} \end{center} \caption[Position of GAMA galaxies with respect to KiDS-N tiles.] {Position of GAMA galaxies with respect to KiDS-N tiles. This figure demonstrates that the GAMA sample is fully contained within KiDS. 
Galaxies shown here constitute our final GAMA-KiDS sample with $z<0.05$ and $8.5\leqslant\log \quantity( M_\ast/\mathrm{M}_\odot)\leqslant11$.} \label{fig:gamakidstiles} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.45]{./figures/z_vs_mass_obs_v999x.pdf-1.png} \end{center} \caption[Distribution of stellar masses and redshifts of GAMA galaxies.]{Distribution of stellar masses and redshifts of GAMA galaxies. The rectangle encloses the examined ($z<0.05;\, 8.5\leqslant\log \quantity( M_\ast / \mathrm{M}_\odot)\leqslant11$) GAMA-KiDS sample.} \label{fig:z_vs_mass} \end{figure} \subsection{Observational sample}% \label{sub:Observational sample} The Kilo-Degree Survey (KiDS; \citealt{deJong2013}) is an ongoing optical wide-field imaging survey operating with the OmegaCAM camera at the Very Large Telescope (VLT) Survey Telescope, whose main goal is to map the Universe's large-scale matter distribution using weak lensing shear and photometric redshift measurements. KiDS is divided into two patches, one in the north (KiDS-N) and the other in the south (KiDS-S). The first of these is found near the equator in the Northern Galactic Cap, while the second is found around the South Galactic Pole; in combination, they cover $\SI{\sim1350}{\degg\squared}$ of the sky. The analysis presented in this work was conducted using products from the fourth data release of KiDS \citep{Kuijken2019}. The survey has a typical seeing of \ang[angle-symbol-over-decimal=true]{;;0.7} with $5\sigma$ depths of 24.2, 25.1, 25.0 and 23.7 mag for the filters \emph{u}, \emph{g}, \emph{r} and \emph{i}, respectively. We only considered $r$-band stacked images (pixel scale of \SI{0.2}{\arcsec\per\pixel}) from KiDS-N. Our galaxy sample was constructed using data from the Galaxy and Mass Assembly (GAMA; \citealt{Baldry2018}) survey, a large catalogue of galaxies with reliable redshifts obtained from spectroscopic and multi-wavelength observations. 
The Anglo-Australian Telescope's (AAT) AAOmega multi-object spectrograph was used to conduct the survey, which covered three equatorial (G$09$, G$12$ and G$15$) and two southern regions (G$02$ and G$23$). The first three, chosen for this study, each encompassed \ang{\sim5} in declination and \ang{\sim12} in right ascension, and were centred at roughly \SI{9}{\ahour} (G$09$), \SI{12}{\ahour} (G$12$) and \SI{15}{\ahour} (G$15$). \cref{fig:gamakidstiles} shows the locations of galaxies from regions G$09$, G$12$ and G$15$ in relation to tiles from KiDS-N, revealing that practically all of these galaxies are included within the KiDS footprint. This makes it straightforward to match GAMA galaxies to KiDS tiles and create cutouts centred on any individual galaxy. Our \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{./figures/tng-50_comp_examples_v1.2.2.png} \end{center} \caption{Composite $g,r,i$ idealized synthetic images from the simulated TNG50 sample, as obtained with the process described in \cref{sub:Synthetic image generation}. Realistic $r$-band counterparts are shown in the first three rows of \cref{fig:sim_vs_obs_synth_images}. Labels denote the stellar mass of each galaxy.} \label{fig:comp_images} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{./figures/tng-50_examples_v1.2.2.png}\\ \includegraphics[width=\textwidth]{./figures/gama-kids_examples_v999x_new.png} \end{center} \caption{First three rows: $r$-band synthetic images of TNG50 galaxies after applying realism (convolution with a PSF and addition of shot and background noise) to the underlying idealized images, as described in \cref{sub:Synthetic image generation}, with the labels indicating the corresponding stellar masses. 
Last three rows: GAMA-KiDS $r$-band galaxy images with upper (lower) labels indicating their redshift (stellar mass).} \label{fig:sim_vs_obs_synth_images} \end{figure*} \noindent position matching approach yields a sample of $104~993$ objects with KiDS imaging from an initial sample of $105~474$ GAMA galaxies. From GAMA's third data release \citep{Baldry2018} we use the \textsf{StellarMasses} \citep{Taylor2011} and \textsf{SpecCat} \citep{Liske2015} products to obtain the stellar mass, redshift and location (right ascension and declination) for the examined galaxies. The distribution of stellar masses and redshifts of GAMA galaxies can be seen in \cref{fig:z_vs_mass}. For this study, we selected galaxies in the mass range $8.5\leqslant\log \quantity( M_\ast/\mathrm{M}_\odot)\leqslant11$ with redshift $z<0.05$ in order to perform a direct comparison to a single snapshot from the TNG$50$ simulation. Our final GAMA-KiDS sample consisted of $1238$ objects, and is illustrated in both \cref{fig:gamakidstiles,fig:z_vs_mass}. Finally, having defined our galaxy sample based on the GAMA catalogues, KiDS cutouts for individual galaxies were created using utilities from the \textsf{astropy} library \citep{Robitaille2013,Price-Whelan2018}. Specifically, the function \texttt{match\_coordinates\_sky} was used to locate the nearest KiDS tile that contained a given GAMA galaxy. Then, the function \texttt{Cutout2D} was employed to create individual images with a fixed size of $240\times240$ pixels. Overall, $104~993$ cutouts were obtained from this matching procedure. \subsection{Simulated sample}% \label{sub:Simulated sample} We consider galaxies from the TNG50 simulation with stellar masses in the range $8.5\leqslant\log(M_\ast/\text{M}_\odot)\leqslant11$, which is consistent with our GAMA-KiDS sample, from a single simulation snapshot at $z=0.034$ (snapshot $96$) that is close to the median redshift of the galaxies in our observational sample. 
The resulting mock galaxy catalogue consists of $5561$ galaxies. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{./figures/deblending_example_v2_low_vx_noid.png} \end{center} \caption{Segmentation procedure. Leftmost panel: real KiDS $r$-band image showing multiple sources; second from left: segmentation map obtained from the previous image; second from right: regularised segmentation map for the object of interest (central galaxy); rightmost panel: regularised mask.} \label{fig:deblending} \end{figure*} \subsection{Synthetic image generation}% \label{sub:Synthetic image generation} Galaxy images were constructed from the light distributions of stellar populations, including dust effects such as scattering and attenuation. Following \citet{Rodriguez-Gomez2019}, we use different image generation pipelines based on the value of the star-forming gas fraction ($f_{\mathrm{ gas,\,sf }}$) in simulated galaxies: synthetic images for galaxies with $f_{\mathrm{ gas,\,sf }} < 0.01$ were generated using the \textsc{\textsf{galaxev}} stellar population synthesis code \citep{Bruzual2003}, while for galaxies with $f_{\mathrm{ gas,\,sf }} \geqslant 0.01$ young stellar populations (age $<$ 10 Myr) were additionally modelled with the \textsc{\textsf{mappings-iii}} libraries \citep{Groves2008} and dust radiative transfer was included using the \textsc{\textsf{skirt}} code \citep{Baes2011,Camps2015}. We note that we avoid processing low-gas galaxies with \textsc{\textsf{skirt}} simply for performance reasons, and that both pipelines would produce essentially indistinguishable images for such objects (further details can be consulted in \citealt{Rodriguez-Gomez2019}). For both pipelines, the light contribution from each stellar particle was smoothed using a smoothed particle hydrodynamics spline kernel \citep{Hernquist1989,Springel2001} with an adaptive smoothing scale given, for each simulation particle, as the three-dimensional distance to the $32$nd nearest neighbour.
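The adaptive smoothing scale can be computed efficiently with a k-d tree. The following minimal sketch (our own helper, not part of the published pipeline) returns, for each particle, the three-dimensional distance to its $n$-th nearest neighbour:

```python
# Minimal sketch of the adaptive smoothing scale used for the stellar light:
# each particle's scale is its 3D distance to the n-th nearest neighbour.
# `smoothing_lengths` is our own name, not a routine of the imaging pipeline.
import numpy as np
from scipy.spatial import cKDTree

def smoothing_lengths(positions, n_neighbour=32):
    """Distance from each particle to its n-th nearest neighbour.

    positions : (N, 3) array of particle coordinates.
    """
    tree = cKDTree(positions)
    # k = n_neighbour + 1 because the closest "neighbour" returned by the
    # query is the particle itself (at zero distance).
    dists, _ = tree.query(positions, k=n_neighbour + 1)
    return dists[:, -1]
```

For the images described above one would call \texttt{smoothing\_lengths} with $n=32$ on the stellar particle coordinates of each galaxy.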
Furthermore, the synthetic images were created with the pixel scale of KiDS (\SI{0.2}{\arcsec\per\pixel}) and were mock-observed from the Cartesian projections \emph{xy, yz} and \emph{zx}, also setting the field of view of each object to $240\times240$ pixels, and taking cosmological effects into account (such as surface brightness dimming) assuming they are located at $z = 0.034$. This procedure yielded idealised images in four broadband filters corresponding to the $g,r,i,z$ bands, of which we exclusively used the $r$-band for our analysis. We point out that galaxies from the $xy$ projection were used in the discussion presented in \cref{sub:The morphologies of TNG50 galaxies,sub:The morphologies of intrinsic mergers}, while galaxies from all three projections were employed from \cref{sub:Random forest classification performance} onwards. The units of the synthetic images are analog-to-digital units (ADU) per second, consistent with real KiDS science images. \cref{fig:comp_images} shows composite images of randomly selected galaxies from the simulated sample, presented in order of increasing stellar mass, using the KiDS $g,r,i$ filters. Finally, realism was added to the idealised images via convolution with a point spread function (PSF) and addition of shot and uniform background noise. We convolved each image with a 2D Gaussian PSF with full width at half maximum (FWHM) equal to \ang[angle-symbol-over-decimal=true]{;;0.7}, which corresponds to the median PSF from KiDS $r$-band images. Shot noise was included by assuming an effective gain of $3\times10^{13}$ electrons per data unit, also consistent with KiDS data products, while background noise was modelled as a Gaussian random variable with uniform standard deviation $\sigma_{\mathrm{ bkg }}=2\times10^{-12}\,\mathrm{ADU\,s^{-1}}$ across each simulated image. 
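The realism step can be sketched as follows, using the FWHM, gain and background-noise values quoted above; \texttt{scipy}'s \texttt{gaussian\_filter} stands in for an explicit PSF-kernel convolution, and the function name is ours:

```python
# Hedged sketch of the realism step: Gaussian PSF convolution plus shot and
# uniform background noise, with the FWHM, gain and sigma_bkg values quoted
# in the text. `add_realism` is an illustrative helper, not pipeline code.
import numpy as np
from scipy.ndimage import gaussian_filter

PIXEL_SCALE = 0.2    # arcsec / pixel (KiDS)
FWHM_ARCSEC = 0.7    # median KiDS r-band PSF
GAIN = 3e13          # electrons per data unit (ADU / s)
SIGMA_BKG = 2e-12    # background noise level, ADU / s

def add_realism(image, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Convert FWHM to a Gaussian sigma in pixels: sigma = FWHM / 2.3548
    sigma_pix = (FWHM_ARCSEC / PIXEL_SCALE) / 2.3548
    blurred = gaussian_filter(image, sigma_pix)
    # Shot noise: Poisson statistics in electrons, converted back to ADU/s
    electrons = np.clip(blurred, 0, None) * GAIN
    noisy = rng.poisson(electrons) / GAIN
    # Uniform Gaussian background noise across the image
    noisy += rng.normal(0.0, SIGMA_BKG, size=image.shape)
    return noisy
```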
The first three rows from \cref{fig:sim_vs_obs_synth_images} show the same synthetic images of \cref{fig:comp_images} after adding realism, which are compared in the next three rows to randomly selected galaxies from observations. \begin{table} \caption{Galaxy deblending and segmentation parameters used by \textsc{\textsf{sep}}.} \centering \bgroup \def\arraystretch{1.2} \begin{tabular}{ccc} \toprule \text{Parameter} & \text{Description} & \text{Value} \\ \midrule \texttt{\textbf{minarea}} & \text{Minimum galaxy area} & 10 pixels \\ \texttt{\textbf{deblend\_nthresh}} & \text{Number of deblending levels} & 16 \\ \texttt{\textbf{deblend\_cont}} & \text{Minimum deblending contrast} & 0.0001 \\ \texttt{\textbf{thresh}} & \text{Minimum detection threshold (in $\sigma$)} & 0.75 \\ \bottomrule \end{tabular} \egroup \label{tab:sepparams} \end{table} \subsection{Source segmentation and deblending}% \label{sub:Source segmentation} Segmentation is the process in which distinct sources within an astronomical image are labelled with different integer values, reserving zero for the background. The resulting array, with the same shape as the original image, is called a segmentation map or segmentation image. Proper segmentation is a crucial step for the morphological measurements presented in \cref{sub:Morphological_measurements}, since it defines the region that corresponds to the galaxy of interest while removing contaminant objects. The most challenging stage of this procedure is deblending, i.e. the separation of two or more overlapping sources. Since we are interested in morphology-based merger identifications on individual galaxies (instead of, for example, counting close pairs) and because it is almost impossible, based on imaging alone, to distinguish between true companions and contaminants (such as chance projections along the line of sight), deblending was applied to all examined sources.
Thus, we have created segmentation maps for both the observational and mock samples using \textsc{\textsf{sep}} \citep{Barbary2016}, a \textsf{Python} library that implements the core functionality of \textsf{SE}\textsc{\textsf{xtractor}} \citep{Bertin1996}. We have controlled source detection and deblending using the following \textsc{\textsf{sep}} input parameters: \texttt{\textbf{thresh}}, the detection threshold in standard deviations; \texttt{\textbf{minarea}}, the minimum number of pixels in an object; \texttt{\textbf{deblend\_nthresh}}, the number of deblending levels, and \texttt{\textbf{deblend\_cont}}, the minimum contrast ratio for source deblending. The values of these input parameters are given in \cref{tab:sepparams}. Based on segmentation maps, we produced a mask for each galaxy. This was accomplished by identifying the object that coincided with the galaxy of interest and labelling it as the main source; the remainder of the segments, excluding the background, constituted the mask. It is worth mentioning that the main segment and the mask were \emph{regularised} in the sense that they were smoothed by a uniform filter with a size of $10\times10$ pixels. \cref{fig:deblending} shows a schematic of this procedure. \subsection{Morphological measurements}% \label{sub:Morphological_measurements} Morphology calculations were done using \textsf{statmorph} \citep{Rodriguez-Gomez2019}, a \textsf{Python} package for computing non-parametric morphological diagnostics of galaxy images, as well as fitting 2D Sérsic profiles. In order to run the code we used the science images, their segmentation maps and their associated masks, as well as the \texttt{\textbf{gain}} factor, a scalar that converts the image units into $e^{-}\,\text{pixel}^{-1}$, using the same value as in \cref{sub:Synthetic image generation}. 
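A sketch of this segmentation-and-measurement pipeline is given below; the \textsc{\textsf{sep}} call mirrors the parameters of \cref{tab:sepparams}, while the wrapper names (\texttt{segment}, \texttt{regularise}, \texttt{main\_segment\_and\_mask}, \texttt{measure}) are ours and the heavy dependencies are imported lazily:

```python
# Sketch of segmentation with sep, the 10x10 uniform-filter regularisation
# of the main segment and mask, and the statmorph call. Wrapper names are
# ours; sep and statmorph are imported lazily so the regularisation logic
# can be used on its own.
import numpy as np
from scipy.ndimage import uniform_filter

def segment(data):
    import sep  # lazy import: only needed for the actual extraction
    bkg = sep.Background(data)
    objects, segmap = sep.extract(
        data - bkg, thresh=0.75, err=bkg.globalrms, minarea=10,
        deblend_nthresh=16, deblend_cont=0.0001, segmentation_map=True)
    return objects, segmap

def regularise(binary_map, size=10):
    """Smooth a binary segment or mask with a uniform filter, then re-threshold."""
    return uniform_filter(binary_map.astype(float), size=size) > 0.5

def main_segment_and_mask(segmap, label):
    """Regularised main segment (galaxy of interest) and contaminant mask."""
    main = regularise(segmap == label)
    mask = regularise((segmap != label) & (segmap != 0))
    return main, mask

def measure(image, segmap, mask, gain=3e13):
    import statmorph  # lazy import, as above
    return statmorph.source_morphology(image, segmap, mask=mask, gain=gain)
```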
In this work we consider the following parameters: concentration, asymmetry and smoothness (CAS; \citealp{Conselice2003}), Gini-$M_{20}$ statistics \citep{Lotz2004,Snyder2015a,Snyder2015b}, multimode, intensity and deviation (MID; \citealp{Freeman2013}) and variations of the asymmetry parameter, such as the outer ($A_O$; \citealp{Wen2014}) and shape ($A_S$; \citealp{Pawlik2016}) asymmetries. Below we briefly describe each of them. The concentration parameter ($C$; \citealt{Bershady2000,Conselice2003}) is a measure of the quantity of light at a galaxy's centre in comparison to its outskirts and is given by $5 \log(r_{80}/r_{20})$, where $r_{20}$ and $r_{80}$ are the radii of circular apertures containing 20\% and 80\% of the galaxy's total flux, respectively. Elliptical galaxies exhibit high concentration values ($\sim4$), whereas spiral galaxies have smaller ones ($\sim3$). The asymmetry index ($A$; \citealt{Abraham1996x,Conselice2000,Conselice2003}) is calculated by subtracting a galaxy image from its $ { 180 }^{ \circ } $-rotated counterpart, and is used as a measure of what fraction of light is due to non-symmetric components. The equation for computing this parameter is given by \begin{equation} A = \frac{ \sum_{i,\,j}\abs{I_{ij} - I_{ij}^{180}} }{ \sum_{i,\,j} \abs{I_{ij}}} - A_{ \textsc{bkg} }, \label{eq:asymmetry} \end{equation} where $ I_{ij} $ and $ { I }_{ij}^{ 180 } $ are, respectively, the pixel flux values of the original and rotated distributions, and $ A_ \textsc{bkg} $ is the average asymmetry of the background. High asymmetry values are often used to identify possible recent interactions and galaxy mergers.
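The operation in \cref{eq:asymmetry} reduces to a few lines of \textsf{NumPy}; this toy version rotates about the image centre and omits both the background term and the centre optimisation that \textsf{statmorph} performs:

```python
# Toy illustration of the asymmetry index: rotate the image by 180 degrees,
# sum the absolute residuals and normalise by the total absolute flux.
# The background term A_bkg and the rotation-centre optimisation are omitted.
import numpy as np

def asymmetry(image):
    rotated = np.rot90(image, 2)  # 180-degree rotation about the image centre
    return np.abs(image - rotated).sum() / np.abs(image).sum()
```

A perfectly symmetric image gives $A=0$, while a single off-centre bright pixel is maximally penalised.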
\begin{table*} \centering \bgroup \def\arraystretch{1.1} \begin{tabular}{ccccccccccc} \toprule \multirow{2}{*}{Combination} & \multicolumn{10}{c}{Parameter} \\ & $C$ & $ A $ & $ S $ & $F\quantity(G,\,M_{20})$ & $S\quantity(G,\,M_{20})$ & $M$ & $I$ & $D$ & $A_O$ & $A_S$ \\ \midrule 1 &\textbf{Yes} & \textbf{Yes} &\textbf{Yes} &\textbf{Yes}&\textbf{Yes} &No&No&No&No&No \\ 2 &\textbf{Yes} & \textbf{Yes} &\textbf{Yes} &\textbf{Yes}&\textbf{Yes} &\textbf{Yes} &\textbf{Yes} &\textbf{Yes} &No&No \\ 3 &No & \textbf{Yes} &\textbf{Yes} &No&No &No &No &No &\textbf{Yes} &\textbf{Yes} \\ \bottomrule \end{tabular} \egroup \caption[Feature combinations considered during training.]{Feature combinations considered during hyper-parameter tuning and cross-validation.} \label{tab:comb_features} \end{table*} The smoothness parameter ($S$; \citealt{Conselice2003}) is estimated in a similar way to the asymmetry, by subtracting a galaxy distribution from a counterpart that has been smoothed by a boxcar filter of width $ \sigma $, and it indicates the fraction of light that is contained in \emph{clumpy} regions (e.g. high frequency disturbances). Following \citet{Lotz2004}, we set the value of $\sigma$ to $25\%$ of the Petrosian radius. The Gini index ($G$; \citealt{Abraham2003,Lotz2004}) quantifies the degree of inequality of the brightness distribution in a set of pixels. The Gini coefficient is equal to $1$ when all the galaxy light is concentrated in one pixel; conversely, it is equal to zero when the light distribution is homogeneous across all pixels. Early-type galaxies, as well as galaxies with one or more bright nuclei, exhibit high Gini values. The $ M_{\mathrm{ 20 }} $ coefficient is the normalised second moment of a galaxy’s brightest regions, containing $20\%$ of the total flux. Mergers and star-forming disk galaxies tend to have high $M_{20}$ values. The bulge parameter ($F(G,\,M_{20})$; \citealt{Snyder2015b}) is a linear combination of $G$ and $M_{\mathrm{ 20 }}$.
In the $G$-$M_{\mathrm{ 20 }}$ space, early-type, late-type and merging galaxies are found by the position they occupy relative to two intersecting lines \citep{Lotz2008}. The bulge parameter, $F(G,\,M_{20})$, is then defined as the position along the line with origin at the intersection $ \quantity( G_0=0.565,M_{20,\,0}=-1.679 ) $ that is perpendicular to the line that separates early-type and late-type galaxies, scaled by a factor of $5$, \begin{equation} F \quantity( G,\,M_{20} ) = -0.693M_{20}+4.95G-3.96. \label{eq:bulge_stat} \end{equation} The merger parameter ($S(G,\,M_{20})$, \citealt{Snyder2015a}) has a similar definition to $F \quantity( G,\,M_{20} )$. It is given as the position along a line with origin at $ \quantity( G_0,M_{20,\,0} ) $ that is perpendicular to the line that separates mergers from non-mergers, \begin{equation} S \quantity( G,\,M_{20} ) = 0.139M_{20}+0.990G-0.327. \label{eq:merger_stat} \end{equation} The multimode ($M$; \citealt{Freeman2013}) parameter is the pixel ratio of the two brightest regions of a galaxy, which are identified with a threshold method: pixels above a threshold value form candidate regions, and the process is repeated for different thresholds until the area ratio of the two brightest regions is maximised. Double-nuclei systems tend to have values close to one. The intensity ($I$; \citealt{Freeman2013}) parameter is the flux ratio between the two brightest subregions of a galaxy. For its computation the watershed algorithm is used, i.e. the galaxy image is divided into groups such that each subregion consists of all pixels whose maximum gradient paths lead to the same local maximum. Clumpy systems often exhibit high intensity values. The deviation ($D$; \citealt{Freeman2013}) parameter is given as the normalised distance between the image centroid and the centre of the brightest region found during the computation of the $I$-statistic, and is used to quantify the offset between bright regions of a galaxy and the centroid.
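The bulge and merger statistics defined above can be transcribed directly; by construction both vanish (up to rounding of the published coefficients) at the intersection point $\quantity(G_0=0.565,\,M_{20,\,0}=-1.679)$:

```python
# Direct transcription of the Gini-M20 bulge and merger statistics.
# Both are signed distances from the dividing lines in G-M20 space.

def bulge_statistic(g, m20):
    """F(G, M20): position along the early/late-type axis."""
    return -0.693 * m20 + 4.95 * g - 3.96

def merger_statistic(g, m20):
    """S(G, M20): positive values indicate the merger side of the line."""
    return 0.139 * m20 + 0.990 * g - 0.327
```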
The outer asymmetry ($A_O$; \citealt{Wen2014}) parameter is defined in the same way as the conventional asymmetry (see \cref{eq:asymmetry}), with the exception that pixels from the inner elliptical aperture that contains $50\%$ of the galaxy’s light are not included in the computation. Pixels outside this area build up the outer half-flux region, for which $A_O$ is estimated. Lastly, the shape asymmetry ($A_S$; \citealt{Pawlik2016}) parameter is also calculated in the same way as the standard asymmetry, with the difference that the measurement is done over a binary segmentation map rather than the galaxy brightness distribution. Finally, \texttt{\textbf{statmorph}} provides a \texttt{\textbf{flag}} quality parameter to distinguish between reliable and unsuccessful measurements, which are labelled with \texttt{\textbf{flag} == 0} and \texttt{\textbf{flag} == 1}, respectively. We find that the overall fraction of flagged galaxies is low, representing ${\lesssim}10\%$ for both our observational and simulated samples. From this point onwards we only consider galaxies with reliable morphological measurements, also imposing, for each object, a minimum signal-to-noise ratio $\expval{S/N}\geqslant2.5$. \subsection{Merger identification}% \label{sub:Merger identification} Galaxy mergers in the TNG50 simulation can be identified using merger trees created using the \textsc{\textsf{sublink}} code \citep{Rodriguez-Gomez2015}. The idea behind the merger trees is to associate a given subhalo with its progenitors and descendants from adjacent snapshots, in such a way that a merger event occurs when a subhalo has two or more different progenitors. From the merging history catalogues of the TNG50 simulation, we have determined which galaxies from our synthetic sample have experienced a merger within a given period. 
The merger mass ratio is defined as $\mu = M_2 / M_1$, where $M_1$ and $M_2$ are the stellar masses of the primary and secondary progenitors, respectively, measured at the moment when the secondary progenitor reaches its maximum stellar mass \citep{Rodriguez-Gomez2015}. Traditionally, major and minor mergers are defined as those with $\mu > 1/4$ and $1/10 < \mu < 1/4$, respectively. Throughout this paper we consider a combined sample of major + minor mergers, with mass ratios $\mu > 1/10$, for all our computations. The main reason for this choice is to have a larger training sample for our classifier, but it also has the advantage of potentially detecting merger signatures that are more subtle or long-lasting than those produced by major mergers. As discussed in \citet{Lotz2011}, both the Gini--$M_{20}$ statistics and the asymmetry parameter are sensitive to minor mergers as well as major ones. In this context, our intrinsic merger sample is composed of mergers that occurred within ${\pm}0.5$ Gyr relative to the reference redshift ($z=0.034$; snapshot 96). For definiteness, we note that this time window includes mergers that were recorded in the snapshots 94 to 99 (since for a merger event recorded at snapshot $k$, the merger must have actually taken place at some time between snapshots $k-1$ and $k$), and represents an observability time-scale of $\approx 1$ Gyr. From a sample of 15~463 galaxy images with successful morphological measurements, we identified 833 major + minor mergers within the specified time window, representing an overall merger fraction of about 5\%. \subsection{Random forest classification}% \label{sub:Random forest classification} The random forest (RF) algorithm \citep{Breiman2001} is an ensemble method based on independent decision trees, each of them learning from subsamples of the input data. Predictions on unseen data are then given as a majority vote among all uncorrelated models given by the trees in the forest. 
The goal of the algorithm is to learn a rule from the galaxy inputs $\vb{x}$ (morphological measurements) to the labels $y$ (merger statistics), and then generalise over novel inputs. In this study, feature space is defined by the CAS, Gini-$M_{20}$, MID, $A_O$ and $A_S$ parameters, representing a total of 10 features, with inputs given as values of these attributes for the galaxies. In other words, inputs can be seen as vectors $\vb{x}_i$ having different morphological values in each entry. Likewise, the target values of the algorithm are given by the merger label $y_i$ of each galaxy, with $y_i=1$ if the given input is a true intrinsic merger (as defined in \cref{sub:Merger identification}), and $y_i=0$ otherwise. The \textsf{scikit-learn} module \citep{Pedregosa2011} was used to construct random forests. The library’s internal implementation of the classifier is based on the algorithm of \citet{Breiman2001}, which incorporates bootstrapping of the training set and randomised feature selection. Utilities from this and the \textsf{imbalanced-learn} library \citep{Lemaitre2017} were also used. The receiver operating characteristic (ROC) curve was considered to assess the performance of each model. In this sense, positive predictive value (PPV, also known as purity or precision) is the ratio between the true positives (TP; true mergers selected by the classifier) and the sum of all objects selected, which are the false positives (FP; non-mergers selected by the classification) and true positives, that is, \( \text{PPV} = \text{TP}/(\text{TP}+\text{FP}). \) Similarly, true positive rate (TPR, also known as completeness or recall) is the ratio between TP and the total number of intrinsic mergers, which is the sum of true positives and false negatives (FN; true mergers rejected by the classifier), i.e. 
\( \text{TPR} = \text{TP}/(\text{TP}+\text{FN}) \). The ROC curve is a plot of the TPR against the false positive rate (FPR) at various threshold values, where the latter is computed as the ratio of FP and the total number of intrinsic non-mergers, which in turn are given as the sum of FP and true negatives (TN; non-mergers rejected by the algorithm): \( \text{FPR} = \text{FP}/(\text{FP}+\text{TN}) \). The ROC curve of a random classifier (with no predictive power) is a diagonal line from the origin to the point $(1, 1)$, whereas a perfect classifier is described by two lines, the first going from the origin to $(0, 1)$ and the second from $(0,1)$ to $(1, 1)$, so that it has $\text{TPR} = 1$ and $\text{FPR} = 0$. Our full learning dataset consists of 15~463 inputs corresponding to successful morphological measurements performed on synthetic galaxy images in three orientations along the axes of the simulation volume, each projection yielding its own set of measurements, along with their corresponding merger labels (ground truth). We split our full dataset into training and test sets, assigning $70\%$ of the inputs to the first set, and the remaining $30\%$ to the second set. This division was done in a stratified manner, which keeps the original proportion of distinct classes (merger or non-merger) in both samples. We point out that in the full dataset there are ${\sim}18$ non-mergers for each merger. Thus, class imbalance was offset by randomly under-sampling (RUS) the majority class (non-mergers) to bring it to the same size as the merger set. This method is implemented in the \textsf{imbalanced-learn} library, and was applied before fitting the random forest. We also tried over-sampling the minority class (mergers) using the Synthetic Minority Over-sampling Technique (SMOTE, \citealt{Chawla2002}), which resulted in approximately the same performance. Therefore, from this point onwards we will only show results obtained with the RUS technique.
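Written out explicitly, the three rates defined above are:

```python
# The classification rates used in this section, written out explicitly
# from the confusion-matrix counts (TP, FP, TN, FN).

def ppv(tp, fp):
    """Positive predictive value (purity / precision)."""
    return tp / (tp + fp)

def tpr(tp, fn):
    """True positive rate (completeness / recall)."""
    return tp / (tp + fn)

def fpr(fp, tn):
    """False positive rate."""
    return fp / (fp + tn)
```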
Similarly, the tuning of hyper-parameters (i.e. parameters used to control the training process, instead of those derived from it) was done by means of cross-validation on the training set, using a stratified 5-fold scheme. This entails randomly dividing the training set into five folds, retaining the proportion of mergers and non-mergers, and repeatedly training the classifier with a combination of four of these samples while validating (testing) on the remaining one. Cycling over all splits, the random forest classifier is trained and validated on five different subsets of the original training set. Simultaneously, the hyper-parameters of each forest can be optimised in a grid search fashion. In this step the main optimised parameters were the number of trees in the forest, the depth of each tree, the maximum number of leaf (terminal) nodes, and the balancing of the sample. Accordingly, we used the \texttt{\textbf{RandomForestClassifier}} class from {\textsf{scikit-learn}}, tuning these via the \texttt{\textbf{n\_estimators}}, \texttt{\textbf{max\_depth}}, \texttt{\textbf{max\_leaf\_nodes}} and \texttt{\textbf{class\_weight}} parameters, respectively, while keeping the remaining hyper-parameters at their default values. Furthermore, during training we have explored several combinations and sub-samples from all morphological attributes in order to test how well they would perform on their own. \cref{tab:comb_features} shows all such combinations considered in the RF models. Lastly, the entire procedure was carried out with the \texttt{\textbf{pipeline}} and \texttt{\textbf{GridSearchCV}} functions from the \textsf{scikit-learn} and \textsf{imbalanced-learn} libraries, which allow for cross-validation and exhaustive hyper-parameter tuning at the same time. The end result of the process is a trained model and a combination of hyper-parameters that give, according to a particular metric, the ``best'' classification of the data.
Additionally, the classifier provides, for each galaxy considered, a probability score that is compared against a threshold to assign the label (merger or non-merger). This threshold can be varied in order to reach a compromise between successful and unsuccessful classifications, usually by maximising certain metrics, such as the $F_1$-score or the Matthews correlation coefficient\footnote{The $F_1$-score is defined as the harmonic mean of precision and recall, taking values between 0 (worst) and 1 (best). The Matthews correlation coefficient (MCC) correlates the ground truth with predictions in a binary classification and is given by \[ \text{MCC} = \frac{\text{TP}\times \text{TN}-\text{FP}\times\text{FN}}{\sqrt{(\text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+\text{FN})}}, \] being equal to $+1$ for a perfect prediction, to $0$ for random classifications, and to $-1$ for wrong predictions.}, or by computing the balance point, namely the point for which $\mathrm{TPR = 1-FPR}$. In this work we use the balance point as the default probability threshold; we also compared this value to the thresholds obtained by maximising the $F_1$-score and the Matthews coefficient, finding similar values and hence no significant differences in the performance metrics that we present in \cref{sub:Random forest classification performance}. \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{./figures/pairplots_tng50_vs_gama-kids_v1.2x.pdf-1.png} \end{center} \caption{Pairwise plots of $r$-band morphological parameters from the observational GAMA-KiDS (blue) and simulated TNG50 (red) samples, with univariate distributions shown on the diagonal. The bi-dimensional kernel density estimates are shown with contours at $\{0.1, 0.2, 0.5, 0.8, 0.95\}$.
This plot indicates that there is good overall agreement between the morphologies of these samples.} \label{fig:pairs_obs_Sim_comb} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.88\textwidth]{./figures/medians_tng50_vs_gama-kids_v1.2x.pdf-1.png} \end{center} \caption{Median trends as a function of stellar mass for several $r$-band morphological parameters. The solid red and blue lines indicate the simulated (SKIRT pipeline) and observational results, respectively; the dashed line corresponds to synthetic images generated without the effects of a dust distribution (GALAXEV pipeline). This figure again shows that there is good agreement between theory and observations, with the median values (at fixed stellar mass) of all morphological parameters in TNG50 lying within 1$\sigma$ of the observational trends; it also shows that the simulated galaxies are more concentrated and asymmetric than their observational counterparts.} \label{fig:medians} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{./figures/pairplots_merg_vs_non-merg_v1.2x.pdf-1.png} \end{center} \caption{Pairwise distributions of $r$-band morphological parameters for the non-merging population (black) against the distributions of mergers (green), with contours located at $\{0.05,0.1,0.2,0.5,0.8,0.95\}$. This figure demonstrates that simulated merging and non-merging systems exhibit a high degree of overlap in their morphologies, occupying similar regions in parameter space; it also indicates that the distributions of the merging population tend to be shifted toward higher values, most noticeably for the asymmetry parameter.} \label{fig:pairs_merg_nonmerg} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.88\textwidth]{./figures/medians_merg_vs_non-merg_v1.2x.pdf-1.png} \end{center} \caption{Median trends as a function of stellar mass for several $r$-band morphological parameters.
The solid black and green lines indicate the low-mass non-merging and merging TNG50 galaxy populations, respectively. Although merging systems tend to be more asymmetric at all stellar masses, this plot shows that the morphologies of these two samples are highly comparable, particularly at the low mass end.} \label{fig:medians_merg_nonmerg} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.97\textwidth]{./figures/roc_feature_importance_v1.2x.pdf-1.png} \end{center} \caption{Top row: ROC curves for random forest models trained on the full dataset using the combinations listed in \cref{tab:comb_features}; asymmetry and merger statistic lines indicate merger selections using only one of these parameters. The hexagons, triangles and circles indicate default threshold values for the classifier (balance point), asymmetry parameter ($0.25$) and merger statistic ($0.0$) selections. Bottom row: feature importance for each model corresponding to the random forest ROC curves above. The vertical axis indicates the name of the corresponding attributes, while the horizontal axis denotes their percentage contribution to classification decisions. In all cases the asymmetry or outer asymmetry parameters had the highest weight on decisions, with the merger statistic, smoothness or asymmetry parameter (combination 3) ranking second in their respective combination.} \label{fig:roc_fimp} \end{figure*} \section{Results}% \label{sec:results} \subsection{The morphologies of TNG50 and GAMA-KiDS galaxies}% \label{sub:The morphologies of TNG50 galaxies} \cref{fig:pairs_obs_Sim_comb} shows pairwise plots of various morphological parameters measured by \textsf{statmorph} for our simulated (TNG50, red shaded regions) and observational (KiDS, blue contours) galaxy samples. In the former case, we only show measurements for a single projection (onto the $xy$-plane).
The plots in the lower triangle of \cref{fig:pairs_obs_Sim_comb} show the joint distributions of the various morphological parameters, while the plots on the diagonal show their univariate distributions. As can be seen, there is reasonable agreement between the morphologies of simulated galaxies and those of the real observational sample. However, \cref{fig:pairs_obs_Sim_comb} also reveals some differences between the two samples. For instance, the concentrations and Gini coefficients of TNG50 galaxies peak at slightly higher values than their observational counterparts, while the distribution of the $M_{20}$ parameter reaches a maximum at a slightly lower value. These trends indicate that TNG50 galaxies tend to be slightly more concentrated objects than real galaxies of similar stellar mass. On the other hand, the asymmetry distribution reaches a peak very close to zero for both samples, but displays a tail toward higher values for TNG50 galaxies, indicating that these objects tend to be slightly more asymmetric than their observational counterparts. \cref{fig:medians} shows median trends as a function of stellar mass for all morphological parameters considered. The red solid line corresponds to galaxies modelled following our full radiative transfer pipeline, while the dashed one was obtained using simpler models without including the effects of a dust distribution; similarly, the blue solid line indicates the median trend for our GAMA-KiDS sample. The blue and red shaded regions denote the corresponding 16th to 84th percentile range, at a fixed stellar mass, for the observational and simulated (dust effects included) samples, respectively. These figures confirm the overall morphological agreement between observations and simulations: all median trends from TNG50 lie within the $1\sigma$ scatter of the observational measurements. 
However, closer inspection of \cref{fig:medians} shows again that TNG50 galaxies tend to be slightly more concentrated and asymmetric than their observational counterparts. Finally, \cref{fig:medians} corroborates that simulated galaxies modelled with dust effects have morphologies better aligned with observations, since dust attenuation tends to reduce the brightness of the central regions, where higher concentrations of gas and dust are typically encountered in the simulation. This effect, however, is not enough to bring the concentrations of low-mass galaxies into full agreement with observations. \subsection{The morphologies of true mergers and non-mergers}% \label{sub:The morphologies of intrinsic mergers} \cref{fig:pairs_merg_nonmerg} shows morphology distributions for the merging sample (green shaded regions) defined in \cref{sub:Merger identification} as well as for the non-merging TNG50 population (black contours). As can be seen, the morphologies of these two groups do not differ significantly from each other, i.e. they occupy similar regions in parameter space, as previously pointed out for the case of galaxies at higher redshifts \citep{Snyder2019}. The morphological similarity between mergers and non-mergers can also be seen in \cref{fig:medians_merg_nonmerg}, which shows median trends (solid lines) and $1\sigma$ scatter regions (shaded zones) for the relevant morphological parameters as a function of stellar mass. Nevertheless, the asymmetry and outer asymmetry parameters reveal that, although there is significant morphological overlap between mergers and non-mergers, the former are more asymmetric than the latter. In the next sections, we exploit these differences in order to train an image-based galaxy merger classifier. 
\subsection{Classifying mergers with random forests}% \label{sub:Random forest classification performance} \subsubsection{Performance and feature importance} In this section we present results about the merger classification of the mock-catalog as detailed in \cref{sub:Random forest classification}. The hyper-parameter tuning and cross-validation procedures performed on the training sets result in the best trained models for each of the feature combinations listed in \cref{tab:comb_features}. Each of these models was then applied to the corresponding test set, which had ${\approx}4389$ non-merging objects and ${\approx}250$ merging galaxies, with stellar masses $8.5\leqslant\log \quantity( M_\ast / \mathrm{M}_\odot)\leqslant11$. We found that all models yield similar classifications independently of the feature combination used: ${\approx}179$ and $2425$ objects were correctly classified as mergers and non-mergers, respectively; $1964$ non-merging galaxies were misclassified as mergers, and $71$ mergers were misclassified as non-mergers. These findings are translated, for the merger class, into an average purity and completeness of ${\approx}8.4\%$ and ${\approx}72\%$, respectively.\footnote{A random classifier (without predictive power) would have a purity of 5.4\%, equal to the overall merger fraction.} The upper panels of \cref{fig:roc_fimp} show the ROC curve (see \cref{sub:Random forest classification}) for the trained models using combinations 1--3 from \cref{tab:comb_features}. For reference, we also show the ROC curves that would be obtained by using only the asymmetry parameter or the Gini--$M_{20}$ merger statistic to select merging galaxies. A perfect classifier lies at the upper-left corner and has $\text{TPR}=1$ and $\text{FPR}=0$, while a model that goes in diagonal from $(0,0)$ to $(1,1)$ has no predictive power. 
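The purity, completeness, TPR and FPR quoted in this section all follow from the same confusion-matrix counts; a minimal sketch using the rounded test-set numbers from the text (TP = 179, TN = 2425, FP = 1964, FN = 71):

```python
def classifier_metrics(tp, tn, fp, fn):
    """Purity (PPV), completeness (TPR) and false positive rate
    from confusion-matrix counts."""
    purity = tp / (tp + fp)        # PPV
    completeness = tp / (tp + fn)  # TPR
    fpr = fp / (fp + tn)
    return purity, completeness, fpr

# average counts quoted in the text for the merger class
purity, completeness, fpr = classifier_metrics(tp=179, tn=2425, fp=1964, fn=71)
```

With these counts, `purity` is about 0.084 and `completeness` about 0.72, matching the values quoted above; the FPR is the quantity on the horizontal axis of the ROC curves.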
Our models have loci between these two regions, with an area under the ROC curve of ${\approx}0.7$, indicating that they are moderately adequate classifiers. The lower panels of \cref{fig:roc_fimp} show, again for model combinations 1--3, the input features and their relative importance to the classification decisions. In this context, feature importance is defined as the mean decrease in impurity achieved by each variable at all relevant nodes in the random forest. In all cases the asymmetry or outer asymmetry parameters had the highest weight ($\sim$30--50\%) on the classifier decisions, followed in second place by the smoothness parameter ($15\%$) or the merger statistic ($12\%$). None of the other parameters had a feature importance above 15\%. \subsubsection{Default random forest model}% \label{sub:Default_rf_model} The models presented above have similar performance regarding purity and completeness values. These results are consistent with the work by \citet{Snyder2019}, who designed random forest models for high-mass galaxies at different redshifts from the original Illustris simulation. For their lowest redshift sample at $z=0.5$, they obtained purity values of up to $10\%$, with completeness at roughly $70\%$. Similarly, the metrics produced by our models are consistent with those found by \citet{Bignone2016} for the Illustris simulation. Specifically, they studied the morphologies of a galaxy sample at $z=0$ with $M_\ast > 10^{10}\,\text{M}_\odot$ and subsequently used the Gini--$M_{20}$ criterion from \citet{Lotz2004} to identify galaxy mergers. Their results, as a function of the time $t$ elapsed since the last merger, show that for $t\sim1$ Gyr the purity metric is around $5\%$ for cases with $\mu > 1/10$ and equal to $9\%$ for $\mu > 1/4$. Throughout the rest of this paper we set the RUS+RF model trained with combination one in \cref{tab:comb_features} as our default random forest model.
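Mean-decrease-in-impurity importances of this kind can be read off directly from a scikit-learn random forest; a schematic sketch on synthetic data (the real training set, features and hyper-parameters differ, and the "asymmetry" column here is a synthetic stand-in):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
# synthetic stand-ins for five morphology features; only column 1
# (the "asymmetry" stand-in) actually carries the merger signal here
X = rng.normal(size=(n, 5))
y = (X[:, 1] + 0.5 * rng.normal(size=n) > 1.5).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X, y)
importances = rf.feature_importances_  # mean decrease in impurity
```

In this toy setup the classifier assigns the highest importance to the informative column, mirroring the dominant role of the asymmetry statistic in our trained models.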
This decision is mainly based on the robustness of classification performance for different sets of features, but also because the morphological parameters included in that combination are widely used in the literature (e.g. \citealt{Lotz2011}, and references therein) to identify both major and minor mergers. In \cref{sub:The merger incidence of GAMA-KiDS observations} we use this model to estimate the galaxy merger fraction in the TNG50 simulation, which we compare to the intrinsic merger fraction (i.e. computed with the merger trees), as well as to estimate the galaxy merger fraction in the real Universe by applying our classifier to GAMA-KiDS observations. \begin{figure*} \begin{center} \includegraphics[width=0.49\textwidth]{./figures/random_tp_v1.2.2xxx.png} \includegraphics[width=0.49\textwidth]{./figures/random_tn_v1.2.2xxx.png} \includegraphics[width=0.49\textwidth]{./figures/random_fp_v1.2.2xxx.png} \includegraphics[width=0.49\textwidth]{./figures/random_fn_v1.2.2xxx.png} \end{center} \caption{Examples of classification results from the random forest model. The upper-left block of figures shows true positives (true mergers selected) while the upper-right block shows true negatives (non-mergers rejected). These objects typically exhibit the expected attributes: true positives (mergers) are perturbed and often have companions, whereas true negatives (non-mergers) tend to be isolated and unperturbed. Similarly, the lower-left block of figures shows false positives (non-mergers selected) while the lower-right block shows false negatives (true mergers rejected). False positives might arise from mergers taking place outside the detection window or from non-mergers that are morphologically similar to true mergers; false negatives could emerge from minor mergers failing to trigger morphological disturbances at the time of detection.
In all cases the upper text labels indicate the probability value assigned to that object by the random forest, as well as its corresponding asymmetry value.} \label{fig:tp_tn_fp_fn} \end{figure*} \subsubsection{Classification result examples}% \label{sub:Classification result examples} In \cref{fig:tp_tn_fp_fn} we present examples of simulated galaxies that have been classified by our default random forest model and were categorised as true positives and true negatives as well as false positives and false negatives. Note that these objects are sorted in ascending order according to stellar mass. As can be seen in the upper row of \cref{fig:tp_tn_fp_fn}, most true mergers exhibit clear signs of interaction, such as asymmetric structures and neighbouring or overlapping companions. Likewise, most non-mergers do not have significantly perturbed morphologies and look relatively isolated. This is particularly noticeable for low-mass objects. Similarly, the lower row in \cref{fig:tp_tn_fp_fn} shows examples of the failure modes of the classifier. We found that some false positives had relatively asymmetric structures but were not labelled as mergers in our merger tree-based selection (see \cref{sub:Merger identification}). Thus, such cases might arise from merging events taking place outside our detection window or from isolated galaxies that are morphologically similar to mergers (see \cref{fig:pairs_merg_nonmerg,fig:medians_merg_nonmerg}). Conversely, some false negatives appear unperturbed, so they are probably the result of minor mergers that did not trigger perceptible morphological signatures. These cases are reminiscent of the findings by \citet{2021MNRAS.500.4937M}, who found that mergers induce limited morphological changes in dwarf galaxies. Thus, the lower part of \cref{fig:tp_tn_fp_fn} illustrates the most common challenges faced by the models.
On the one hand, merging events are rare, which reduces the number of class examples and statistics for training; on the other hand, there is a significant degree of similarity between mergers and non-mergers, which explains most of the false positives. \begin{figure*} \begin{center} \subfloat[\label{fig:merger_frac_rf}]{\includegraphics[width=0.45\textwidth]{./figures/merger_fraction_int_rus+rf_run_new_v1.2.pdf-1.png}} \subfloat[\label{fig:merger_frac_asym}]{\includegraphics[width=0.45\textwidth]{./figures/merger_fraction_int_asymetry_run_new_v1.2.pdf-1.png}} \end{center} \caption{(a) Galaxy merger fraction as a function of stellar mass as predicted by our random forest classifier for our simulated and observational galaxy samples, as indicated by the legend. (b) Mass-dependent merger fraction for the observational and simulated galaxy samples as predicted by an asymmetry-based criterion. In both panels, the solid blue line represents the intrinsic merger fraction measured in TNG50 using the merger trees. The error bars represent Poisson uncertainty from the number of mergers in each mass bin. This figure shows that the merger fraction increases steadily as a function of stellar mass for all the approaches considered.} \label{fig:merger_frac_ab} \end{figure*} \subsection{The merger incidence of GAMA-KiDS observations}% \label{sub:The merger incidence of GAMA-KiDS observations} We used our default random forest model to estimate the merger fraction in our observational $(z<0.05; 8.5\leqslant\log \quantity( M_\ast /\mathrm{M}_\odot)\leqslant11)$ galaxy sample. The input entries for this model consist of the morphological measurements carried out on the GAMA-KiDS sample (see \cref{sub:Morphological_measurements}), corresponding to combination one in \cref{tab:comb_features}. 
Following \citet{Snyder2019}, we estimate the merger fraction $f_{\mathrm{ merger }}$ as \begin{equation} f_{\mathrm{ merger }}=\frac{ N_{\mathrm{ RF }} }{ N_{\mathrm{T}} }\frac{\mathrm{ PPV } }{ \mathrm{ TPR } }\expval{M/N}, \label{eq:merger_frac_rf} \end{equation} where $N_{\mathrm{ RF }}$ and $N_{\mathrm{T}}$ are the number of galaxies selected by the forest and the total number of galaxies, respectively; the factor $\expval{M/N}$ is the average total number of simulated mergers divided by the number of galaxies with at least one such merger, which in our case is approximately equal to one; the term $\mathrm{ PPV }/\mathrm{ TPR }$ is a corrective factor that takes into account the completeness and purity of the default model, so that when used on novel data the best-guess merger fraction is obtained \citep{Snyder2019}. For comparison, we have determined the intrinsic merger fraction of simulated galaxies by estimating, for each mass bin, the quantity $N_{\mathrm{ merger }}/N_{\mathrm{T}}$, where $N_{\mathrm{merger}}$ is the number of intrinsic mergers, obtained from the merger trees as described in \cref{sub:Merger identification}. \cref{fig:merger_frac_rf} shows the estimated merger fraction, as a function of stellar mass, for the simulated and observational samples. As can be seen, both follow a qualitatively similar trend to the intrinsic merger fraction, but show differences within a factor of ${\sim}2$, as discussed below. The error bars in the estimated merger fractions are given by Poisson statistics from the number of mergers in each bin. 
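A minimal sketch of \cref{eq:merger_frac_rf} and its error bar (function names and the per-bin counts are illustrative; the $\sqrt{N}$ propagation is our reading of the Poisson statistics described above):

```python
import math

def merger_fraction(n_selected, n_total, ppv, tpr, m_over_n=1.0):
    """Best-guess merger fraction:
    f = (N_RF / N_T) * (PPV / TPR) * <M/N>."""
    return (n_selected / n_total) * (ppv / tpr) * m_over_n

def poisson_error(n_selected, n_total, ppv, tpr, m_over_n=1.0):
    """Assumed sqrt(N) counting error on the selected sample,
    propagated through the same correction factors."""
    f = merger_fraction(n_selected, n_total, ppv, tpr, m_over_n)
    return f / math.sqrt(n_selected)

# toy mass bin: 120 of 1500 galaxies flagged by the forest, with the
# test-set PPV ~ 0.084 and TPR ~ 0.72 quoted earlier
f = merger_fraction(120, 1500, ppv=0.084, tpr=0.72)
err = poisson_error(120, 1500, ppv=0.084, tpr=0.72)
```

Note that because TPR exceeds PPV for our default model, the raw selected fraction is revised downwards by the corrective factor.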
The fact that the random forest classifier predicts a lower merger fraction for observed galaxies than for simulated ones is perhaps not surprising, given that the most important feature for the classifications is the asymmetry parameter (\cref{sub:Random forest classification performance}), and that observed galaxies have somewhat lower asymmetry values than simulated ones (\cref{sub:The morphologies of TNG50 galaxies}). A more robust result is that the galaxy merger fraction is an increasing function of stellar mass for all the cases considered. These findings are in contrast with the major merger fraction estimate by \citet{Casteels2014}, who found a decreasing merger fraction within the stellar mass range $9.0\leq\log \quantity( M_\ast / \mathrm{M}_\odot)\leq9.5$, and a roughly constant trend above that mass range, while all of our estimates indicate that the merger fraction increases steadily as a function of stellar mass. This qualitative difference is puzzling, considering that the estimate by \citet{Casteels2014} was performed on a similar observational galaxy sample, using the asymmetry parameter to estimate the fraction of asymmetric galaxies and subsequently the major merger fraction as a function of stellar mass. For comparison, we have applied an asymmetry-based criterion to our samples to identify highly asymmetric galaxies. These objects were selected as those for which $A>0.25$. We then computed the PPV and TPR for these predictions and applied a modified version of \cref{eq:merger_frac_rf} to estimate the merger fraction derived from the asymmetry criterion. We note that $\text{PPV}\approx8.5\%$ and $\text{TPR}\approx7.5\%$ for this classification, which represents a purity of the same order as that of our RF models but with a completeness that is considerably smaller.
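The asymmetry-based estimate works the same way: flag galaxies with $A>0.25$ and apply the same PPV/TPR correction. A minimal sketch (the cut is the one given above; the function name and toy asymmetries are ours, with the PPV and TPR values quoted for this classification):

```python
def asymmetry_merger_fraction(asymmetries, ppv=0.085, tpr=0.075, a_cut=0.25):
    """Select A > a_cut candidates, then correct the raw selected
    fraction by PPV/TPR, as for the random forest estimate."""
    n_sel = sum(1 for a in asymmetries if a > a_cut)
    raw = n_sel / len(asymmetries)
    return raw * ppv / tpr

# toy sample: 3 of 10 galaxies lie above the A = 0.25 cut
f_A = asymmetry_merger_fraction([0.02, 0.31, 0.10, 0.27, 0.05,
                                 0.26, 0.01, 0.12, 0.08, 0.19])
```

Since PPV and TPR are of the same order for this criterion, the corrective factor is close to unity and the estimate is driven almost entirely by the raw fraction of asymmetric galaxies.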
\cref{fig:merger_frac_asym} shows a comparison between the intrinsic merger fraction from the simulation and the asymmetry-based merger fraction, $f_{\mathrm{m},\,A}$, for both simulation and observations. As can be seen, such estimates qualitatively follow the trend of the intrinsic merger fraction: both are increasing functions of stellar mass. However, $f_{\mathrm{m},\,A}(\mathrm{obs})$ is smaller than the other two fractions by a factor of ${\sim}2$, which again reflects the fact that the asymmetry parameter tends to be lower for our observational sample than for our simulated one. \section{Discussion}% \label{sec:Discussion} Using the state-of-the-art TNG50 cosmological simulation and KiDS observations, we have studied the optical morphologies of galaxies at low redshift ($z < 0.05$) over a wide range of stellar masses ($8.5 < \log_{10}(M_\ast/\mathrm{M}_\odot) < 11$). The goal of this analysis has been threefold: (i) to carry out an `apples-to-apples' comparison between the optical morphologies of TNG50 and KiDS galaxies, allowing us to identify possible weaknesses in the IllustrisTNG galaxy formation model at unprecedentedly high mass resolution (16 times better than TNG100); (ii) combining morphological measurements of the simulated galaxies with information from the merger trees, to train and evaluate the performance of an algorithm for identifying merging galaxies based on morphological diagnostics alone; and (iii) to apply this simulation-trained algorithm to observations in order to estimate the galaxy merger fraction in the real Universe. The first step for carrying out this work was to prepare the observational data set, shown in \cref{fig:gamakidstiles,fig:z_vs_mass}, which consisted in selecting galaxies from the GAMA catalogues satisfying $8.5 \leqslant \log_{10}(M_\ast/\mathrm{M}_\odot) \leqslant 11$ and $z < 0.05$, and extracting their corresponding `cutouts' from KiDS mosaic images. 
Similarly, we prepared a simulation data set by selecting TNG50 galaxies from snapshot 96 (corresponding to $z = 0.034$, close to the median redshift of the observational sample) also satisfying $8.5 \leqslant \log_{10}(M_\ast/\mathrm{M}_\odot) \leqslant 11$, and then generating synthetic images for all simulated galaxies (including the effects of dust attenuation and scattering, and for three different projections) designed to match the KiDS data set. \cref{fig:comp_images} shows idealized, composite ($g,r,i$ bands) images for some of our simulated galaxies, while \cref{fig:sim_vs_obs_synth_images} shows the corresponding $r$-band images after including realism (convolution with a PSF and noise modelling), along with some example galaxies from the observational sample. After preparing the observational and simulated data sets, we performed source segmentation and deblending on each image in order to isolate the galaxy of interest and remove unwanted or contaminating sources, as illustrated in \cref{fig:deblending}. We then measured various morphological diagnostics in the $r$-band for galaxies from both data sets using the same code (\textsf{statmorph}), which represents a robust, quantitative comparison between theory and observations. This comparison showed good overall agreement between TNG50 and KiDS galaxies, with the median trend as a function of stellar mass for TNG50 galaxies lying within $\sim$1$\sigma$ of the observational distribution for every morphological parameter considered (\cref{fig:pairs_obs_Sim_comb,fig:medians}). However, TNG50 galaxies tend to be slightly more concentrated and asymmetric than their observational counterparts, and show wider distributions for most parameters. Interestingly, using the TNG100 simulation, \citet{Rodriguez-Gomez2019} also found that some IllustrisTNG galaxies are more concentrated compared to their observational counterparts from the Pan-STARRS $3\pi$ Survey \citep{Chambers2016}. 
However, this discrepancy was observed at higher masses, $M_{\ast} \sim 10^{11} \, {\rm M}_{\odot}$, and was attributed to the implementation details of the active galactic nuclei (AGN) feedback -- specifically, it was argued that the spherical region over which energy and momentum are injected by the AGN might be too large, and therefore ineffective at small radii. In the present work we reach much lower masses than those achievable in TNG100 (by a factor of 16) and find that the discrepancy pointed out by \citet{Rodriguez-Gomez2019} reappears at $M_{\ast} \sim 10^{9} \, {\rm M}_{\odot}$. Despite the different stellar masses, it is possible that the reason for the higher concentrations of TNG50 galaxies at $M_{\ast} \sim 10^{9} \, {\rm M}_{\odot}$ relative to observations is essentially the same as that for TNG100 galaxies at $M_{\ast} \sim 10^{11} \, {\rm M}_{\odot}$, namely, inefficient AGN feedback at the smallest radii. While the AGN feedback implementation operates in very different modes in such different mass ranges (`thermal' versus `kinetic'; \citealt{Weinberger2017a}), the size of the `injection region' is determined by the same prescription in both feedback modes (a sphere enclosing an approximately fixed number of gas cells). It will be interesting and important to explore in future galaxy formation models whether reducing the size of the injection region for AGN feedback produces galaxies with concentrations in better agreement with observations. We note, however, that TNG50 produces deficits in the star formation density on small scales that agree well with observations \citep{10.1093/mnras/stab2131}. On the other hand, the slightly higher asymmetries of TNG50 galaxies compared to observations could simply be a matter of resolution.
Young stellar populations, in particular, are undersampled in hydrodynamic cosmological simulations, and manifest as bright clumps that become more noticeable in synthetic images produced with `bluer' broadband filters \citep{Torrey2015}. In principle, this issue could be mitigated by resampling the young stellar populations at a higher resolution \citep{Trayford2017}. However, such procedures would introduce additional complexity to our modelling and, importantly for our goal of characterising galaxy morphology, it is unclear what would be an appropriate spatial distribution for the resampled stellar populations. Therefore, we have adopted the simpler approach of smoothing the light contribution from every stellar particle using the same SPH-like kernel, regardless of the age of the stellar population. A related issue is that the outskirts of simulated galaxies are subject to particle noise, which could further contribute to overestimating the asymmetry parameter. It seems plausible that the asymmetries of simulated galaxies will automatically become more realistic with improved resolution, without the need to make substantial changes to the galaxy formation model. Having compared the optical morphologies of TNG50 galaxies to those from KiDS observations, we proceeded to compare the morphologies of merging and non-merging galaxies in the simulation. In order to do this, we first defined a merger sample composed of simulated galaxies that experienced a major or minor merger (i.e. those with stellar mass ratio $\mu > 1/10$) within a time window of approximately $\pm 0.5$ Gyr. We found that the morphology distributions of our merging and non-merging samples show a large degree of overlap, with the exception of the asymmetry-based statistics, as shown in \cref{fig:pairs_merg_nonmerg,fig:medians_merg_nonmerg}. 
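The merger labelling used for this sample amounts to a simple window test on the merger-tree events; a minimal sketch (an illustrative function, not the actual tree-walking code, with the $\mu > 1/10$ and $\pm 0.5$ Gyr cuts from the text):

```python
def is_merger(merger_events, t_snap, mu_min=0.1, window=0.5):
    """Label a galaxy as a merger if any merger-tree event with
    stellar mass ratio mu > mu_min falls within +/- window Gyr
    of the snapshot time t_snap."""
    return any(mu > mu_min and abs(t - t_snap) <= window
               for t, mu in merger_events)

# toy events as (time in Gyr, stellar mass ratio): one major merger
# inside the window, one micro-merger outside it
flag = is_merger([(13.5, 0.3), (12.0, 0.02)], t_snap=13.4)
```

Galaxies failing this test for all of their tree events form the non-merging sample.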
However, despite such visually similar morphological distributions of our merging and non-merging samples, it is in principle possible that a combination of various morphological parameters would encode information about the merger histories of the galaxies that is unavailable when using the morphological parameters individually. To this end, we trained RFs using several combinations of morphological parameters of TNG50 galaxies as the model features, along with the merger label (0 or 1 for our non-merging and merging samples, respectively) as the ground truth, finding in all cases that the most important feature for identifying mergers is the asymmetry statistic. Therefore, the performance of our RF algorithm, usually quantified by the so-called ROC curve, is comparable to that of the more traditional method of selecting highly asymmetric galaxies, but is superior to a direct application of the Gini--$M_{20}$ merger statistic (\cref{fig:roc_fimp}). \cref{fig:tp_tn_fp_fn} shows some examples of the galaxy merger classifications (both successful and unsuccessful) returned by our RF algorithm. The high importance of the asymmetry parameter might appear to be in tension with \citet{Snyder2019}, where the bulge indicators ($F(G,M_{20})$, concentration) had similar or greater importance than the asymmetry statistic, and the RFs clearly outperformed asymmetry alone. We attribute these differences to the distinct nature of the galaxies considered: massive galaxies ($M_\ast > 10^{10}\,\text{M}_\odot$) at high redshifts in the case of \citet{Snyder2019}, and dwarf galaxies (mostly $M_\ast \lesssim 10^{10}\,\text{M}_\odot$) at low redshifts in the present work. In fact, the RF classification models by \citet{2022arXiv220811164R} indicate that the asymmetry parameter is more significant for identifying mergers at low redshift than indicators of bulge strength (such as the concentration and Gini statistics), while the latter have a higher importance for high-redshift events.
These findings help to reconcile our results with those of \citet{Snyder2019}. Another possible factor is the choice of broadband filters. The varying importance of different image-based merger diagnostics in different redshift and stellar mass ranges, as well as for different broadband filters, will be explored in upcoming work. Finally, we applied our RFs to a test sample from the TNG50 simulation and to the observational sample, in order to estimate the galaxy merger fraction as a function of stellar mass in both simulations and observations (\cref{fig:merger_frac_rf}). In the case of the simulation, our RF was able to recover the `intrinsic' merger fraction (obtained directly from the merger trees) reasonably well (within a factor of $\sim$2). When applied to KiDS observations, our RF returned a galaxy merger fraction that increases steadily with stellar mass, just like the intrinsic merger fraction in TNG50, although with a systematic offset of a factor of $\sim$2. For comparison, we repeated this experiment using the asymmetry statistic alone, separating mergers from non-mergers using a `standard' cut at $A = 0.25$ (\cref{fig:merger_frac_asym}). This yielded a steadily rising merger fraction in both simulations and observations, but again with a persistent offset between the two data sets, with the observational merger fraction lying a factor of $\sim$2--3 below the simulation trend. This offset probably reflects the fact that our simulated galaxies are slightly more asymmetric than their observational counterparts. The results shown in \cref{fig:merger_frac_ab} imply that the merger fraction increases steadily with stellar mass, both when using the RF or the asymmetry parameter alone. 
These findings are qualitatively consistent with those of \citet{Besla2018}, who considered a low-redshift dwarf galaxy ($0.013<z<0.0252$; $2\times10^{8}\,\text{M}_\odot<M_\ast<5\times10^{9}\,\text{M}_\odot$) sample from SDSS to compute and compare the major pair fraction (the fraction of primary dwarf galaxies that have a secondary with a stellar mass ratio $\mu > 1/4$) with estimations from the original Illustris simulation, also finding an increasing trend (their fig. 14). However, our results are in stark contrast with those of \citet{Casteels2014}, who found a decreasing merger fraction over a comparable stellar mass range (their fig. 13), also using the asymmetry statistic as a merger indicator. \section{Summary and outlook} \label{sec:Summary} We have carried out an `apples-to-apples' comparison between the optical morphologies of galaxies from the high-resolution, state-of-the-art TNG50 simulation and those of a comparable galaxy sample from KiDS observations. Overall, we have found good agreement between the simulated and observed data sets, which is remarkable considering that the IllustrisTNG galaxy formation model was not tuned to match morphological observations. The TNG50 galaxies, however, are somewhat more concentrated and asymmetric than their observational counterparts. Using additional information from the simulation that is not available in observations -- namely, the merger trees -- we have trained a random forest algorithm to classify merging galaxies using image-based morphological diagnostics, and applied the random forest to observations in order to estimate the merger fraction in the real Universe. 
We found that the asymmetry statistic is the single most useful parameter for identifying galaxy mergers, at least in the mass and redshift regime we considered ($8.5\leqslant\log(M_\ast/\text{M}_\odot)\leqslant11$; $z<0.05$), and that the merger fraction is a steadily increasing function of stellar mass for both the simulated and observational samples. Currently, it is still challenging to precisely determine the merger fraction in observations, especially using galaxy morphology alone. However, we are approaching an era in which galaxy formation models will become so realistic that it will be possible to exploit subtle trends in morphological measurements -- such as the ones studied in this paper -- to infer properties of galaxies that are not directly observable in the real Universe, such as their merging histories or even the assembly histories of their host DM haloes. At the same time, our work highlights the importance of developing sophisticated tools to carry out robust comparisons between theory and observations, which will become indispensable in the upcoming years as both computational capacity and astronomical instruments continue to evolve. \section*{Acknowledgements} We thank Gurtina Besla for useful comments and discussions. VRG acknowledges support from UC MEXUS-CONACyT grant CN-19-154. This work used the Extreme Science and Engineering Discovery Environment \citep[XSEDE;][]{Towns2014}, which is supported by NSF grant ACI-1548562. The XSEDE allocation TG-AST160043 utilized the Comet and Data Oasis resources provided by the San Diego Supercomputer Center. The IllustrisTNG flagship simulations were run on the HazelHen Cray XC40 supercomputer at the High Performance Computing Center Stuttgart (HLRS) as part of project GCS-ILLU of the Gauss Centre for Supercomputing (GCS). 
Ancillary and test runs of the project were also run on the compute cluster operated by HITS, on the Stampede supercomputer at TACC/XSEDE (allocation AST140063), at the Hydra and Draco supercomputers at the Max Planck Computing and Data Facility, and on the MIT/Harvard computing facilities supported by FAS and MIT MKI. This research is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 177.A-3016, 177.A-3017, 177.A-3018 and 179.A-2004, and on data products produced by the KiDS consortium. The KiDS production team acknowledges support from: Deutsche Forschungsgemeinschaft, ERC, NOVA and NWO-M grants; Target; the University of Padova, and the University Federico II (Naples). GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. \section*{Data availability} The data from the IllustrisTNG simulations used in this work are publicly available at the website \href{https://www.tng-project.org}{https://www.tng-project.org} \citep{Nelson2019}. The KiDS and GAMA data are available at the websites \href{http://kids.strw.leidenuniv.nl/}{http://kids.strw.leidenuniv.nl/} and \href{http://www.gama-survey.org/}{http://www.gama-survey.org/}. \bibliographystyle{mnras}
\section{Introduction} The determination of the dilaton ($S$) and moduli VEVs is an important problem in string phenomenology because it is directly related to the predictions of the string models \cite{wdr}. The dilaton and moduli are flat directions to all orders in string perturbation theory, and it is hoped that some non-perturbative effects will lift these flat directions and determine these VEVs dynamically. Non-perturbative gaugino condensation seems to be a ready candidate for doing this, but so far much effort has failed to achieve convincing success. A major reason is that the determination of dilaton VEV through gaugino condensation suffers from the dilaton runaway problem \cite{wit2}, {\em i.e.} the potential energy is minimized at either $S \rightarrow 0$ or $S \rightarrow \infty$. There are several proposals to solve the problem at tree-level in the observable sector fields. One is c-number solution \cite{wit2,subr2,rulin}, another is multiple gaugino condensation \cite{kay,dixon,tom3,cas}. In ref.~\cite{mary2} it is pointed out that another necessary condition for preventing the dilaton from running away at tree-level is that the potential energy is positive semi-definite. This is because the K\"ahler potential for the dilaton is usually assumed to be $K(S,\bar{S}) = -\log(S + \bar{S})$ and the potential energy is proportional to $1 / (S+\bar{S})$. If the potential energy is negative, it would always have a minimum at $S=0$ and the dilaton still runs away. The no-scale structure, which was proposed to naturally suppress the cosmological constant \cite{no}, is so far the most natural way to yield the positive semi-definite potential energy. We therefore focus on the models with the no-scale structure in this paper. In no-scale models, the dilaton corresponds to a flat direction at tree level; one-loop effects will lift this degeneracy. But for many string models, the dilaton VEV determined by the one-loop potential energy is rather small. 
The dilaton VEV consistent with the weak scale measurement is $\langle S \rangle \sim 2$, which corresponds to a (model-dependent) hierarchy of at least an order of magnitude between the Planck scale and the gaugino condensation. It appears difficult in the present formulation to dynamically generate such a hierarchy since the only scale in the theory is the Planck scale or the string scale. Because of this, in many models it is found that the gaugino condensation scale as determined by the dilaton VEV is again on the Planck scale, i.e. $\langle e^{-S/2b_0}\rangle \sim 1$. These features are illustrated in the simplest c-number gaugino condensation model \begin{eqnarray} K &=& -\log(S + \bar{S}) - 3 \log(T+\bar{T}-|\phi|^2), \\ W &=& c_0 + a e^{-3S / 2 b_0}. \end{eqnarray} At tree level, dilaton potential is flat and the one-loop corrections give a potential which is minimized at $\langle e^{-S / 2b_0} \rangle \sim 1$. In this model, although one avoid the dilaton runaway problem at the tree-level, but it ``runs too little'' at the one-loop level. In this sense, the dilaton runaway appears to be a generic problem in many models. In this work we examine the one-loop structure of the formulation for the dynamics of gaugino condensation coupled to the dilaton first proposed in ref.~\cite{mary1}, in which a chiral field $H$ corresponding to the gaugino bilinear is introduced. The new ingredient of our analysis is that we take into account the one-loop corrections to the dilaton K\"ahler potential. The inclusion of these effects leads to conclusions that are quite different from the usual scenario of gaugino condensation. We find that the dilaton mass as well as that of the $H$ field are on the same order as the gaugino condensation scale. This indicates that the usual approach to determining the dilaton VEV which treats the dilaton as a light field below the gaugino condensation scale may be incorrect. 
We also find that supersymmetry in the Yang-Mills sector is broken by gaugino condensation, unlike in the pure Yang-Mills theory first discussed in \cite{vani}. These results may shed some light on the dilaton runaway problem. The computations also show that the loop correction to the dilaton K\"ahler potential may lead to a dilaton VEV consistent with weak-scale measurements. This paper is organized as follows: In section 2, we derive in more detail the no-scale formulation of gaugino condensation proposed in ref.~\cite{mary1}. We show that the inclusion of the one-loop correction to the dilaton K\"ahler potential and the introduction of a chiral field related to the gaugino bilinear are crucial for making the model of the no-scale type. In section 3, we give the necessary formulas for the analysis of the one-loop effective theory, which is developed in \cite{mary3}. In sections 4 and 5, we analyze the one-loop vacuum structure of our model in two different ways. In section 4, we treat the gaugino-bilinear chiral field as a heavy field and integrate it out by solving its equation of motion. In section 5, we treat it as a dynamical field with mass comparable to the dilaton. We find that the second analysis yields a minimum at a finite dilaton VEV, giving a new scenario for gaugino condensation. Section 6 contains our conclusions. \section{No-Scale Formulation} In this section, we discuss the no-scale formulation of gaugino condensation for string models in which the gauge coupling constant does not receive string threshold corrections.
We are particularly interested in models with the no-scale structure because they naturally suppress the cosmological constant \cite{no}, prevent the dilaton from running away at tree level \cite{mary2}, and also because they might provide a natural framework for generating the large mass hierarchy between $M_{\sl GUT}= 10^{16}$ GeV and $M_{\sl SUSY} = 1$ TeV \cite{rara,rulin}. In \cite{rulin} it is shown that in a modified c-number model, the induced constant term in the superpotential is not quantized; furthermore, this supersymmetry breaking scheme makes it possible to construct affine level one $SU(5)$ or $SO(10)$ string models with the intermediate gauge symmetry breaking scale $M_{\sl GUT} = 10^{16}$ GeV. We therefore restrict attention to the c-number supersymmetry breaking schemes. The modular-invariant formulation of the effective action with gaugino condensation coupled to the dilaton field has been discussed in \cite{modu1,mivr1,mivr2} for string models in which the gauge coupling constant receives string threshold corrections. The no-scale formulation for this type of string model is given in \cite{mary2}, and the one-loop analysis of these models is given in \cite{mary3}. In \cite{mary1}, the no-scale formulation of the supersymmetry breaking dynamics is given for string models in which the gauge coupling constant does not receive string threshold corrections. The one-loop analysis of these models has not been carried out so far; this is the subject of this work. In the following, we give a more detailed derivation of the no-scale formulation of gaugino condensation for string models that do not receive string threshold corrections. We will show that the one-loop correction to the dilaton K\"ahler potential and the introduction of the $H$ field play a crucial role in the no-scale structure, and in preventing the dilaton from running away at tree level.
It has been shown in ref.~\cite{sfer} that the one-loop contribution to the gauge kinetic terms should be viewed as a field-dependent wave-function renormalization of the dilaton field rather than as a renormalization of the gauge coupling function $f$. For example, to cancel the modular anomaly the K\"ahler potential is modified to \begin{eqnarray} K & = & -\log\left(S+\bar{S}-\frac{2b_0}{3}k\right) + k, \\ k & = & -3\log(T+\bar{T}-|\phi|^2), \end{eqnarray} while the gauge coupling function remains unchanged: \begin{equation} f = S. \end{equation} This is usually called the Green--Schwarz mechanism \cite{green,sfer,lop,tom1}. The above K\"ahler potential is obviously not of the no-scale type. One now takes into account the full one-loop contribution to the dilaton K\"ahler potential, which has the form \begin{eqnarray} K = -\log \left(S+\bar{S}+2b_0\log\Lambda^2-\frac{2b_0}{3}k\right) + k, \end{eqnarray} where $\Lambda$ is the renormalization scale. The gaugino condensation scale corresponds to $\Lambda^2=e^{k/3}|H|^2$, where $H$ is the chiral field related to the gaugino bilinear $\langle \lambda \bar{\lambda} \rangle$ (the factor $e^{k/3}$ is included to make $\Lambda$ modular invariant). We then obtain \begin{eqnarray} K &=& -\log \left(S+\bar{S}+2b_0\log|H|^2\right) + k. \end{eqnarray} This model is of no-scale type provided that the superpotential is independent of the moduli fields. The superpotential for the $H$ field can be obtained from symmetry arguments \cite{vani,local,glo,lo}: \begin{equation} W = d\left[\frac{1}{4}SH^3+\frac{b_0}{2}H^3\log(\eta H)\right]+c_0+W_0. \end{equation} Here $c_0$ and $W_0$ are the contributions from the charged background VEVs and matter fields. The parameters $d$ and $\eta$ are not fixed by symmetry requirements, and are specified by the underlying gaugino condensation dynamics.
Under modular transformations, \begin{eqnarray} T &\mapsto& T' = \frac{aT-ib}{icT+d}, \\ \Phi^i &\mapsto& \Phi^{i\prime} = \frac{\Phi^i}{icT+d}, \\ S &\mapsto& S' = S + 2b_0\log(icT+d), \\ H &\mapsto& H'=\frac{H}{icT+d}, \\ c_0 &\mapsto& c_0'= \frac{c_0}{(icT+d)^3}, \qquad ad-bc=1, \end{eqnarray} so that \begin{eqnarray} K & \mapsto & K' = K + F + \bar{F}, \\ W & \mapsto & W' = We^{-F}, \\ F & = & 3\log(icT+d). \end{eqnarray} Since the supergravity Lagrangian depends on the K\"ahler potential and the superpotential only through the combination $G=K+\log|W|^2$, the above theory is invariant under modular transformations and also has the desired no-scale structure. Although the modification of the dilaton kinetic energy by loop effects has been known for several years, its consequences for gaugino condensation have not been fully explored. We see from the above derivation that the inclusion of the one-loop corrections to the dilaton K\"ahler potential is crucial for the no-scale formulation of gaugino condensation. In the following analysis, we find that it has nontrivial effects on the one-loop vacuum structure and on the dynamical determination of the dilaton VEV. \section{Formalism} In this section, we write down the necessary formulas for our analysis. Our calculation largely follows ref.~\cite{mary3}, although (as we explain below) some aspects of the analysis are quite different. At tree level, the potential energy can be written as \cite{mary3} \begin{equation} V=e^K\left(K_{a\bar{b}}^{-1}\tilde{W}^{a} \bar{\tilde{W}}^{\bar{b}}\right), \end{equation} where $z_{a} = (S, \phi,H)$ and \begin{equation} \tilde{W}_{a} \equiv \frac{\partial W}{\partial z_{a}} + K_{a} W - 3W \frac{K_{a \bar{T}}}{K_{\bar{T}}}, \end{equation} which manifestly has the no-scale structure. The tree-level vacuum conditions are \begin{equation} \langle \tilde{W}_{a} \rangle =0. \end{equation} To calculate the one-loop effective potential, one must first calculate the mass matrices of the chiral fields.
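Before doing so, it is worth making the positivity mentioned in the Introduction explicit. Since the K\"ahler metric $K_{a\bar{b}}$ is positive definite, the tree-level potential can be written, schematically, as a sum of squares,
\begin{equation}
V = e^K \sum_{a} \left| \left( K^{-1/2} \right)_{a}^{\;\,\bar{b}} \, \bar{\tilde{W}}_{\bar{b}} \right|^2 \; \geq \; 0 ,
\end{equation}
so it is positive semi-definite and vanishes exactly at the tree-level vacua $\langle \tilde{W}_a \rangle = 0$; this is precisely the property required to prevent the dilaton from running away to $S=0$ at tree level.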
The scalar squared mass matrix is given by \begin{eqnarray} M_S^2&=&\pmatrix{v_{a\bar{b}} & v_{ac} \cr v_{\bar{d}\bar{b}} & v_{\bar{d}b} \cr}, \\ v_{a\bar{b}}&\equiv&\frac{\partial^2V}{\partial z_a \partial \bar{z}_b}, \\ v_{ab}&\equiv&\frac{\partial^2V}{\partial z_a \partial z_b}- G_{ab\bar{e}}(G^{-1})^{\bar{e}f}V_f. \end{eqnarray} The normalized scalar masses are \begin{eqnarray} M_S^{r2}\equiv\pmatrix{G^{-1/2} & 0 \cr 0 & (G^{-1/2})^T \cr} \pmatrix{v_{a\bar{b}} & v_{ac} \cr v_{\bar{d}\bar{b}} & v_{\bar{d}b} \cr} \pmatrix{G^{-1/2} & 0 \cr 0 & (G^{-1/2})^T \cr}, \end{eqnarray} which has the same eigenvalues as \begin{eqnarray} \tilde{M}_s^{r2}=\pmatrix{v_{a\bar{b}} & v_{ac} \cr v_{\bar{d}\bar{b}} & v_{\bar{d}b}\cr} \pmatrix{G^{-1} & 0 \cr 0 & (G^{-1})^T \cr}. \end{eqnarray} Under the tree-level vacuum condition above, one obtains \begin{eqnarray} v_{a\bar{b}}&=&e^K \left[ \tilde{W}_{ac}(G^{-1})^{c\bar{d}}\bar{\tilde{W}}_{\bar{d}b}+ \bar{\tilde{W}}_{a\bar{c}}(G^{-1})^{\bar{c}d}\tilde{W}_{d\bar{b}}\right], \\ v_{ab}&=&e^K\left[ \tilde{W}_{ac}(G^{-1})^{c\bar{d}}\bar{\tilde{W}}_{\bar{d}b}+ \bar{\tilde{W}}_{a\bar{c}}(G^{-1})^{\bar{c}d}\tilde{W}_{db} \right]. \end{eqnarray} The gaugino mass parameter is given by \begin{equation} (M_{1/2})_{\alpha\beta}=\frac{1}{2}e^{G/2}(G^{-1})^{a\bar{b}}G_{\bar{b}} f_{\alpha\beta,a}, \end{equation} while the normalized gaugino mass-squared is \begin{equation} (M_{1/2}M^{+}_{1/2})_{\alpha\beta}\equiv (M_{1/2})_{\alpha\gamma} [(Re f)^{-1}]^{\gamma\delta}(M^{+}_{1/2})_{\delta\beta}. \end{equation} The fermion masses are given by \begin{eqnarray} \mu_{ab}=e^{G/2}\left[G_{ab}+G_aG_b- \frac{1}{3}G_c(G^{-1})^{c\bar{d}}G_{\bar{d}ab}\right]. \end{eqnarray} The normalized fermion mass matrix is \begin{eqnarray} m_{ab}=(G^{-\frac{1}{2}})^c_a\mu_{cd}(G^{-\frac{1}{2}})^d_b, \end{eqnarray} which has the same eigenvalues as \begin{equation} \tilde{m}_{ab}\equiv m_{ac}(G^{-1})^c_b.
\end{equation} In the above, \begin{eqnarray} G_a&=&K_a+\frac{W_a}{W}, \\ G_{ab}&=&K_{ab}+\frac{W_{ab}}{W}-\frac{W_aW_b}{W^2}, \\ G_{a\bar{b}}&=&K_{a\bar{b}}, \\ (G^{-1})^{a\bar{c}}G_{\bar{c}b}&=&\delta^a_b. \end{eqnarray} The one-loop potential is \begin{equation} V^{1-loop} = \frac{1}{4(4\pi)^2}[2 {\rm Str}(M^2\Lambda^2)+ {\rm Str}(M^4\log(M^2/\Lambda^2))], \end{equation} where Str is the supertrace. In our one-loop analysis, we use the tree-level vacuum conditions to calculate the one-loop potential energy, with which we determine the rest of the VEVs of the scalar fields. This approximation corresponds to determining the VEVs to order $\hbar$. \section{Gaugino Bilinear as a Heavy Field} In this section, we will carry out the one-loop analysis for the models formulated in Section 2. Here we treat the gaugino bilinear as a heavy field (as is usually assumed), {\em i.e.} we integrate out the $H$ field using its equation of motion. {}From the tree-level vacuum condition, we get the classical equation of motion for the $H$ field, \begin{equation} \tilde{W}_H \equiv W_H - \frac{2b_0}{H(S+\bar{S}+2b_0\log|H|^2)} W =0. \end{equation} To solve this equation, we write \begin{equation} H=h(S)\, e^{-S/2b_0}. \end{equation} The function $h$ is determined by \begin{equation} d h^3[(3L-2b_0)\log(h\eta)+L]=4c_0e^{3S/2b_0}, \end{equation} where \begin{equation} L=S+\bar{S}+2b_0\log|H|^2=2b_0\log|h|^2. \end{equation} It is easy to see that gaugino condensation occurs ($h \ne 0$) if and only if the constant part of the superpotential $c_0$ is nonzero. This is consistent with our assumption that $c_0$ is induced by gaugino condensation. After solving for $H$, we obtain the effective theory below the gaugino condensation scale: \begin{eqnarray} K&=&-\log L+k, \qquad k=-3 \log\left[T+\bar{T}-|h|^2e^{-(S+\bar{S})/2b_0}\right], \\ W&=&d \frac{b_0}{2} e^{-3S/2b_0}h^3 \log\eta h + c_0 + W_0.
\end{eqnarray} The tree level potential energy is \begin{equation} V=e^K \left(|\tilde{W}_s|^2+|W_i|^2\right), \end{equation} which is minimized at \begin{eqnarray} \label{vacc} \langle W_i \rangle &=&0, \\ \label{vaccc} \langle \tilde{W}_S \rangle &\equiv& \langle W_S+K_SW-\frac{3K_{S\bar{T}}}{K_{\bar{T}}}W \rangle =\langle W_S-\frac{L_S}{L}W \rangle=0. \end{eqnarray} Note that in the tree-level potential energy, the $A$ term and scalar masses of the matter sector remain zero although local supersymmetry is broken by $\langle W \rangle \ne 0$. We find that at the tree-level minimum \begin{equation} G^S = (G^{-1})^{S\bar{a}}G_{\bar{a}}=0, \qquad G^T = (G^{-1})^{T\bar{a}}G_{\bar{a}}= -e^{k/3}, \end{equation} so the gauginos in the matter sector remain massless if the $f$ function does not depend on the moduli; these are the models in which the gauge coupling constant does not receive moduli-dependent string threshold corrections. We see that at tree-level, global supersymmetry is broken in the dilaton sector but not in the matter sector for the models we are considering. In this approximation, all the moduli masses vanish. We now solve the tree-level vacuum conditions eqs.~(\ref{vacc}) and (\ref{vaccc}). Eq.~(\ref{vacc}) is automatically satisfied if the VEVs of the matter fields are zero for the tri-linear superpotential. Eq.~(\ref{vaccc}) imposes one relation between $c_0$ and $S$; the dilaton and moduli fields correspond to flat directions which might be lifted by loop effects. Solving eq.~(\ref{vaccc}), we obtain \begin{equation} \langle h \rangle=\eta^{-1}. \end{equation} The parameter $\eta$ is determined by the gaugino condensation dynamics and is expected to be of order 1. Now we can also determine $c_0$ in terms of $S$, \begin{equation} \langle W \rangle = c_0 = \frac{1}{4} \langle d L h^3 e^{-3S/2b_0} \rangle. 
\end{equation} We obtain the dilaton scalar squared masses \begin{eqnarray} M_S^2 &=& m_{3/2}^2 \left[ (G^{-1})^{S\bar{S}}\frac{1\pm 6L/(2b_0)}{4L^2} \right]^2 \nonumber\\ &=& m_{3/2}^2\, \frac{(1\pm 6L/(2b_0))^2}{(1+3L^2x/(4b_0^2))^4}, \end{eqnarray} and the dilaton fermion mass \begin{eqnarray} M_f &=& m_{3/2} (G^{-1})^{S\bar{S}}\frac{3}{4 b_0 L} \\ &=& m_{3/2} \frac{3L/b_0}{(1+3L^2x/4b_0^2)^2}, \end{eqnarray} with \begin{eqnarray} (G^{-1})^{S\bar{S}}&=&\frac{4L^2}{(1+ 3L^2 x / 4b_0^2)^2}, \\ m^2_{3/2}&=&e^K|W|^2=\frac{1}{16}d^2Lx^3,\\ x &=& \frac{|H|^2}{T+\bar{T}-|H|^2} = \frac{|h|^2e^{-(S+\bar{S})/2b_0}}{T+\bar{T}-|H|^2}. \end{eqnarray} The cutoff is \begin{eqnarray} \Lambda^2 &=& M_S^2|H|^2e^{K/3} \nonumber\\ &=&\frac{2}{S+\bar{S}+2b_0\log(T+\bar{T})}|H|^2e^{K/3} \nonumber\\ &=&\frac{2x}{L+2b_0 \log(x^{-1}+1)}, \end{eqnarray} where \begin{equation} M_S^2=\frac{2M^2_p}{S+\bar{S}+2b_0\log(T+\bar{T})} \end{equation} is the string scale, which is also the scale at which the string gauge coupling constants ``unify'' \cite{kap,mary4}. We can now obtain the one-loop potential energy, which depends only on the modular-invariant combination~$x$ of the dilaton and moduli: \begin{eqnarray} V&=&2\left[-4+\frac{2}{(1+3L^2x/4b_0^2)^2}\right]\frac{Ld^2}{16}x^4 \\ \nonumber & + &\left(\frac{Ld^2}{16}\right)^2x^6 \left\{\left[-4+\frac{2+12(6L/2b_0)^2}{(1+3L^2x/4b_0^2)^4}\right]\log \left[ \frac{d^2L}{32}\left(L+2b_0 \log(x^{-1}+1)\right)x^2\right] + g\right\} \\ g &=& \frac{1}{(1+3L^2x/4b_0^2)^4}[(1+6L/2b_0)^4\log(1+6L/2b_0)^2 \\ \nonumber &+&(1-6L/2b_0)^4\log(1-6L/2b_0)^2-2(6L/2b_0)^4\log(6L/2b_0)^2]. \end{eqnarray} We find that the minimum of this one-loop potential energy is reached at either $x\rightarrow \infty$ or $x=0$, depending on how the potential is minimized. In either case, the model suffers from the dilaton runaway problem. In the next section, we consider the possibility that the dilaton mass is of the same order as the gaugino condensation scale, and find by an explicit calculation that this is a real possibility.
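For orientation, the transcendental equation determining $h$ in this section can be solved numerically. The following sketch (with purely illustrative parameter values $d=1$, $\eta=1/2$, $b_0=1/2$, and $c_0$ tuned so that the exact root sits at $h=1/\eta=2$; these numbers are assumptions, not fixed by the model) finds the root by bisection:

```python
import math

# Illustrative parameters; these values are assumptions, not fixed by the model.
d, eta, b0 = 1.0, 0.5, 0.5

def L(h):
    # L = 2 b0 log|h|^2 for real h > 0 (the value of S + Sbar + 2 b0 log|H|^2)
    return 2.0 * b0 * math.log(h * h)

def lhs(h):
    # d h^3 [ (3L - 2 b0) log(h eta) + L ]
    return d * h**3 * ((3.0 * L(h) - 2.0 * b0) * math.log(h * eta) + L(h))

# Choose the right-hand side 4 c0 e^{3S/2b0} so that h = 1/eta
# (where log(h eta) = 0) solves the equation exactly:
h_star = 1.0 / eta
rhs = d * L(h_star) * h_star**3

def bisect(a, b, tol=1e-12):
    """Bisection for lhs(h) = rhs on a bracketing interval [a, b]."""
    fa = lhs(a) - rhs
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = lhs(m) - rhs
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

h_root = bisect(1.5, 3.0)   # recovers h = 2, i.e. <h> = 1/eta
```

Consistently with the text, the root lies at $h=\eta^{-1}$, and a nonvanishing solution requires a nonzero constant term $c_0$.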
\section {Gaugino Bilinear As A Dynamical Field} In this Section, we study the one-loop vacuum structure of our model treating the $H$ field as a dynamical field. As already pointed out, this model cannot be viewed as anything more than a model of the dynamics at the gaugino condensation scale, similar in spirit to the linear sigma model as a model of QCD. Still, it is a reasonable model, and the qualitative conclusions may be correct. In this case, the dilaton field and the $H$ field mix, and the inverse of the K\"ahler metric is \begin{equation} (G^{-1})^{i\bar{j}}=\pmatrix{L^2 + 4b_0^2C / (3|H|^2) & -2b_0C / (3H) & -\frac{2}{3} b_0C \cr -2b_0C/(3\bar{H}) & \frac{1}{3} {C} & \frac{1}{3} {HC} \cr -\frac{2}{3} {b_0C}& \frac{1}{3}{\bar{H}C}& \frac{1}{3}C(|H|^2+C) \cr}, \end{equation} where \begin{equation} L=S+\bar{S}+2b_0\log|H|^2, \qquad C=T+\bar{T}-|H|^2. \end{equation} Solving the tree-level vacuum conditions \begin{eqnarray} \label{tvacc} \langle \tilde{W}_H \rangle \equiv \langle W_H-\frac{L_H}{L}W\rangle=0, \\ \label{tvaccc} \langle \tilde{W}_S \rangle \equiv\langle W_S+K_SW\rangle =0, \end{eqnarray} we get \begin{eqnarray} H &=& \frac{1}{\eta} e^{-S / (2b_0)} = h e^{-S /(2b_0)}, \\ \langle W \rangle &=& c_0 = \frac{1}{4} \langle dLH^3 \rangle. \end{eqnarray} The orders of magnitude of the solutions are \begin{eqnarray} H &\sim& e^{-S / (2b_0)}, \\ \langle W \rangle &\sim& e^{-3S / (2b_0)}, \end{eqnarray} so $h\sim 1$ and $d_0\equiv \frac{1}{4}dL=\frac{b_0d}{2}\log|h|^2 \sim 1$. We take the point of view that $h$ and $d_0$ are constants to be determined from a better understanding of the gaugino condensation dynamics. This is different from the point of view of ref.~\cite{mary3}, where these parameters are taken to be dynamical variables to be determined by minimization of the effective potential.
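As a numerical cross-check of this inverse K\"ahler metric, one can evaluate the metric obtained from $K=-\log L - 3\log C$ at real field values and verify that its product with the matrix above is the identity. A minimal sketch (the field values are illustrative assumptions, chosen only so that $L>0$ and $C>0$, and the second derivatives are entered analytically):

```python
import math

# Illustrative real field values (assumptions, not from the paper)
S, T, H, b0 = 1.2, 2.5, 0.3, 0.1

L = 2.0 * S + 2.0 * b0 * math.log(H * H)   # S + Sbar + 2 b0 log|H|^2
C = 2.0 * T - H * H                        # T + Tbar - |H|^2

# Kähler metric K_{i jbar} for K = -log L - 3 log C, field order (S, H, T)
G = [
    [1.0 / L**2, 2.0 * b0 / (H * L**2), 0.0],
    [2.0 * b0 / (H * L**2),
     4.0 * b0**2 / (H**2 * L**2) + 3.0 / C + 3.0 * H**2 / C**2,
     -3.0 * H / C**2],
    [0.0, -3.0 * H / C**2, 3.0 / C**2],
]

# Inverse metric as quoted in the text (with b identified with b0)
Ginv = [
    [L**2 + 4.0 * b0**2 * C / (3.0 * H**2), -2.0 * b0 * C / (3.0 * H), -2.0 * b0 * C / 3.0],
    [-2.0 * b0 * C / (3.0 * H), C / 3.0, H * C / 3.0],
    [-2.0 * b0 * C / 3.0, H * C / 3.0, C * (H**2 + C) / 3.0],
]

prod = [[sum(G[i][k] * Ginv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
err = max(abs(prod[i][j] - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3))
```

The product reproduces the identity matrix to machine precision, confirming the quoted inverse.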
{}From the vacuum conditions eqs.~(\ref{tvacc}) and (\ref{tvaccc}) we obtain \begin{eqnarray} \langle G^i \rangle = \langle G^{i\bar{k}}G_{\bar{k}} \rangle = \pmatrix{0&0&-C\cr}, \end{eqnarray} so the tree-level gaugino mass is zero in the models we are discussing. With the above relations, we obtain the fermion mass matrix \begin{eqnarray} (\mu_{1/2})_{IJ}=e^{{G}/{2}} \pmatrix{0 & {3}/(LH)&0\cr {3}/(LH)& {12b_0} / ({H^2L}) &0\cr 0&0&0\cr}. \end{eqnarray} The normalized fermion mass matrix is \begin{eqnarray} && (m_{1/2})_{IJ} = (\mu_{1/2})_{IJ}(G^{-1})^{J\bar{K}} \nonumber\\ &=& e^{{G}/{2}} \pmatrix{-{2b_0C}/({L|H|^2})&{C}/({LH})&{C}/{L}\cr {3L}/{H} - {4b_0^2C}/ ({|H|^2HL}) & {2b_0C}/({H^2L})& {2b_0C}/({HL})\cr 0&0&0\cr}. \end{eqnarray} Assuming that $S=\bar{S},H=\bar{H}$ ({\em i.e.}\ $CP$ violation is highly suppressed) we obtain the fermion mass eigenvalues \begin{eqnarray} m_{1}^{1/2}=m_{2}^{1/2}=e^{{G}/{2}}\frac{\sqrt{3C}}{HL}, \qquad m^{1/2}_3=0. \end{eqnarray} The scalar masses are computed from \begin{eqnarray} \tilde{W}_{i\bar{k}}&=&\pmatrix{1&{2b_0}/({\bar{H}})&0\cr {2b_0}/{H}&{4b_0^2}/{|H|^2}&0\cr 0&0&0 \cr} {W}/{L^2}, \\ \tilde{W}_{ik}&=&\frac{3}{4}H^2 \pmatrix{0&1&0\cr 1&{4b_0}/{H}&0 \cr 0&0&0 \cr}, \\ v_{a\bar{b}}&=&e^G\pmatrix{\frac{3C+|H|^2}{|H|^2L^2} & \frac{2b_0(3C+\bar{H}^2)}{|H|^2\bar{H}L^2}&0\cr \frac{2b_0(3C+H^2)}{|H|^2HL^2}&\frac{4b_0^2(3C+|H|^2)+ 9|H|^2L^2}{|H|^4L^2}&0\cr 0&0&0\cr}, \\ v_{ab}&=&e^G\pmatrix{0&\frac{3}{HL}&0\cr \frac{3}{HL}& \frac{12b_0}{H^2L}&0\cr 0&0&0\cr}. \end{eqnarray} Again assuming $S=\bar{S}$, $H=\bar{H}$, the normalized mass matrix has the same eigenvalues as \begin{equation} \tilde{M}_s^{r2}\equiv\frac{1}{2}e^G\pmatrix{1&-1\cr 1&1} \pmatrix{v_{a\bar{b}}G^{-1}&v_{ac}G^{-1}\cr v_{\bar{d}\bar{b}}G^{-1}&v_{\bar{d}b}G^{-1}\cr} \pmatrix{1&1\cr -1&1}.
\end{equation} With this trick, the scalar masses can be easily obtained: \begin{eqnarray} M^{s2}_{1,3}&=&e^G\frac{|H|^2(6C+|H|^2)+H^3\sqrt{12C+|H|^2}}{2|H|^4}, \\ M^{s2}_{2,4}&=&e^G\frac{|H|^2(6C+|H|^2)-H^3\sqrt{12C+|H|^2}}{2|H|^4}, \\ M^{s2}_{5,6}&=& 0. \end{eqnarray} Here the gravitino mass is \begin{equation} m_{3/2}^2=e^{G}=e^K|W|^2=\frac{d_0^2 |H|^6}{L C^3}. \end{equation} The moduli scalar masses are also zero as expected. The above result indicates that the inclusion of one-loop correction to the dilaton K\"ahler potential yields several interesting features. The dilaton mass is equal to the $H$ mass (to be identified with the scale of gaugino condensation) {\em independently of the value of the dilaton VEV}. Also, supersymmetry is broken in the hidden sector. We will discuss the possibility of obtaining a hierarchy between the gaugino condensation scale and the Planck (or string) scale below. We also calculate the $H$ field and dilaton masses in a more general class of models which have the same K\"ahler potential but different superpotential: \begin{equation} W = d e^{-3S/2b_0}Y^n \log(\eta Y), \qquad Y = e^{S/2b_0} H. \end{equation} (The case $n=3$ corresponds to the model discussed above.) In this class of models, we find the fermion masses are \begin{eqnarray} m^f = e^{G/2}\left\{ \frac{(n-3)L}{2b_0}\pm \sqrt{\left[\frac{(n-3)L}{2b_0}\right]^2+3z} \right\} \end{eqnarray} and the scalar masses are \begin{eqnarray} M^{s2}_{1,3} &=& e^G \Biggl\{ 3z+\frac{1}{2}\left[1+(3-n)\frac{L}{b_0}\right]^2 \nonumber\\ && \quad \pm \frac{1}{2} \left[1+(3-n)\frac{L}{b_0}\right] \sqrt{12z+\left[1+(3-n)\frac{L}{b_0}\right]^2} \,\Biggr\}, \\ M^{s2}_{2,4} &=& e^G \Biggl\{ 3z+\frac{1}{2}\left[1-(3-n)\frac{L}{b_0}\right]^2 \nonumber\\ && \quad \pm \frac{1}{2} \left[1-(3-n)\frac{L}{b_0}\right] \sqrt{12z+\left[1-(3-n)\frac{L}{b_0}\right]^2} \,\Biggr\}, \\ M^{s2}_{5,6} &=& 0, \end{eqnarray} here $z={C}/{|H|^2}$. 
For $n\neq 3$, the dilaton and the $H$ field masses are no longer equal, but they are still of the same order of magnitude for any value of the dilaton VEV. Furthermore, supersymmetry is still broken in the Yang-Mills sector. With the above mass matrices calculated, we can easily write down the one-loop vacuum potential energy. The potential energy depends only on the modular invariant function $z={C}/{|H|^2}$, so the moduli and dilaton VEVs are not uniquely determined in this model. We take the cut-off scale to be \begin{eqnarray} \Lambda^2 &=& M_S^2|H|^2e^{K/3} \nonumber\\ &=&\frac{2}{S+\bar{S}+2b_0\log(T+\bar{T})}|H|^2e^{K/3} \nonumber\\ &=&\frac{2z^{-1}}{L+2b_0 \log(z+1)}, \end{eqnarray} where \begin{equation} M_S^2=\frac{2M^2_p}{S+\bar{S}+2b_0\log(T+\bar{T})} \end{equation} is the string scale, which is also the scale at which the string gauge coupling constants ``unify'' \cite{kap,mary4}. The one-loop effective potential is \begin{eqnarray} v_1 &=& 64\pi^2V^{\rm 1-loop} \nonumber\\ &=&-\frac{8d_0^2L^{-1}}{L+2b_0 \log(z+1)}z^{-4} \nonumber\\ && +\, d_0^4z^{-6}L^{-2} \left\{ (-2+24z)\log\left[d_0z^{-2} \left(\frac{1}{2}+b_0/L\log(z+1)\right)\right] + g_0 \right\} \nonumber \\ g_0 &=& -4(3z)^2\log(3z)+2 \left(\sqrt{3z+\frac{1}{4}}+\frac{1}{2}\right)^4 \log\left(\sqrt{3z+\frac{1}{4}} + \frac{1}{2}\right)^2 \\ \nonumber &+&2\left(\sqrt{3z+\frac{1}{4}} -\frac{1}{2}\right)^4 \log\left(\sqrt{3z+\frac{1}{4}}-\frac{1}{2}\right)^2. \end{eqnarray} The minimization condition of the potential energy determines $z$ as a function of $d_0$ and $h$. The result is that for fixed $d_0 \sim 1$, $z$ is a smooth function of $h$ which diverges at $h = 1$ as \begin{equation} \frac{z}{ \log z} \propto \frac{ b_0 d_0}{L} = \frac{d_0}{4 \log h}. \end{equation} We recall that $L$ is related to the one-loop gauge coupling constant $g^2$ at the gaugino condensation scale through $2/L = g^2$.
One can see that a large $g^2$ may yield a large $z$, and thus a hierarchy between the string scale and the gaugino condensation scale. Since the gauge coupling constant at the gaugino condensation scale is very large, higher-loop corrections become very important; a conclusive result will depend on the inclusion of the higher-loop corrections to the dilaton K\"ahler potential. We defer this discussion to future work. \section{Conclusion} We have shown that the one-loop correction to the dilaton K\"ahler potential may significantly change the dynamics of gaugino condensation coupled to the dilaton. In a specific model including a dynamical field $H$ for the gaugino bilinear, we find that supersymmetry is broken by gaugino condensation in the Yang-Mills sector. We also find that the dilaton and the $H$ field have masses of the same order of magnitude as the gaugino condensation scale. We therefore propose that the determination of the dilaton VEV through gaugino condensation may depend sensitively on the dynamics at the gaugino condensation scale. This is very different from the usual scenario, in which the dilaton is lighter than the gaugino condensation scale and is treated as a light field in an effective theory below that scale. We also find that the large value of the gauge coupling at the gaugino condensation scale might lead to a hierarchy between the string scale and the gaugino condensation scale, and fix the dilaton VEV at a realistic value. The higher-loop corrections to the dilaton K\"ahler potential may also be important, and we defer their investigation to future work. The main conclusion of this work is that the inclusion of the loop correction to the dilaton K\"ahler potential may dramatically change the scenario of determining the dilaton VEV through gaugino condensation, and may lead to a solution of the dilaton runaway problem.
\vskip 28pt \noindent{\bf Acknowledgement} \vskip 12pt I would like to thank Mary K. Gaillard, Jim Liu and Markus Luty for very helpful discussions. \vskip 28pt \vfill\eject
\section{Objective} In countless situations of basic experimental and theo\-retical interest, ranging from superconducting charge pumps~\cite{GasparinettiEtAl13} over Dirac fermions in graphene coupled to acoustic phonons~\cite{IadecolaEtAl13} and quantum-dot devices~\cite{StaceEtAl13} to superconductors under phonon driving~\cite{MurakamiEtAl17}, emitters in laser-driven cavities~\cite{PagelFehske17}, or few-level systems coupled to transmission lines~\cite{ReimerEtAl18}, one encounters periodically driven quantum systems interacting with their environment~\cite{BlumelEtAl91,GrahamHuebner94,Kohn01, HoneEtAl09,ShiraiEtAl15,Liu15,IadecolaEtAl15,SeetharamEtAl15,DaiEtAl16, VajnaEtAl16,RestrepoEtAl16,LazaridesMoessner17,TuorilaEtAl17,HartmannEtAl17, VorbergEtAl13,SchnellEtAl18}. Such systems usually adopt a quasistationary state which may depend substantially on that interaction, even to the extent that the dynamics are fully environment-governed~\cite{GasparinettiEtAl13}. An investigation of a periodically driven dissipative Bose-Hubbard model has demonstrated that the interaction of a driven system with a reservoir can protect the system against heating~\cite{IwahoriKawakami17}. In the same spirit, a recent study of steady states of interacting Floquet insulators has confirmed that a heat bath can stabilize certain low-entropy states of periodically driven interacting systems~\cite{SeetharamEtAl19}. This is what one may expect on intuitive grounds, since the system can dump the energy absorbed from the drive into the reservoir~\cite{LangemeyerHolthaus14, BulnesCuetaraEtAl15}. 
With the present contribution we direct this line of research into previously unknown territory, and establish a fairly counter\-intuitive phenomenon: A periodically driven quantum system interacting with a thermal bath can effectively be made even {\em colder\/} than the bath it is coupled to, ``effectively'' meaning here that a Floquet state of a driven-dissipative quantum system can carry much {\em higher\/} population than the undriven system's ground state in thermal equilibrium. \section{Model system} To demonstrate the feasibility of such unexpected Floquet-state cooling we employ the model of a harmonic oscillator with a periodically time-dependent spring function~$k(t)$, which often has served as a workhorse in studies of quantum thermodynamics~\cite{KohlerEtAl97,OchoaEtAl18,FreitasPaz18, DiermannEtAl19}. It is given by the Hamiltonian \begin{equation} H_0(t) = \frac{p^2}{2M} + \frac{1}{2} k(t) x^2 \; , \label{eq:HAM} \end{equation} with $M$ denoting the mass of the oscillator particle. We take the spring function to be of the form \begin{equation} k(t) = M \Omega_0^2 - M \Omega_1^2 \cos(\omega t) \; , \label{eq:MSF} \end{equation} so that $\Omega_0$ is the angular frequency of the undriven oscillator, and the frequency~$\Omega_1$ quantifies the driving strength; with this choice the classical equation of motion $M\ddot\xi(t) = -k(t) \,\xi(t)$ becomes equal to the famous Mathieu equation~\cite{AbramowitzStegun70,MagnusWinkler04}. 
Provided the parameters $\Omega_0/\omega$ and $\Omega_1/\omega$ are chosen such that the Mathieu equation admits stable solutions, the time-dependent Schr\"odinger equation with the Hamiltonian~(\ref{eq:HAM}) possesses a complete set of square-integrable Floquet states, that is, of solutions \begin{equation} \psi_n(x,t) = u_n(x,t) \exp(-\ri\varepsilon_n t/\hbar) \label{eq:FLS} \end{equation} with square-integrable Floquet functions $u_n(x,t)$ which acquire the period $T = 2\pi/\omega$ of the drive, \begin{equation} u_n(x,t) = u_n(x,t+T) \; . \end{equation} These Floquet states~(\ref{eq:FLS}) have been obtained independently by several authors~\cite{PopovPerelomov70,Combescure86,Brown91}, and been utilized, {\em e.g.\/}, to describe the quantum dynamics of particles in a Paul trap~\cite{Paul90}. The key element entering into their construction are stable classical Floquet solutions \begin{equation} \xi(t) = v(t)\exp(\ri \nu t) \label{eq:CFS} \end{equation} with a $T$-periodic function $v(t) = v(t+T)$ and a corresponding characteristic exponent~$\nu$~\cite{AbramowitzStegun70,MagnusWinkler04}. The latter can be chosen such that it connects continuously to the oscillator frequency~$\Omega_0$ when the drive is switched off, so that $\Omega_1$ goes to zero~\cite{DiermannEtAl19}. The quasienergies~$\varepsilon_n$ which accompany the time evolution of the Floquet states~(\ref{eq:FLS}) then take the form \begin{equation} \varepsilon_n = \hbar\nu(n + 1/2) \qquad \bmod \; \hbar\omega \; , \label{eq:QES} \end{equation} where $n = 0, 1,2,3, \ldots$ is the usual integer oscillator quantum number. If the combination of parameters $\Omega_0/\omega$ and $\Omega_1/\omega$ gives rise to unstable solutions of the Mathieu equation, the quasienergy spectrum of the parametrically driven oscillator becomes absolutely continuous~\cite{Howland92} so that the system can absorb an infinite amount of energy from the drive; this case is not being considered here. 
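The characteristic exponent~$\nu$ and the stability of the classical Mathieu equation can be obtained numerically from the monodromy matrix over one driving period. The following sketch (illustrative parameter values; a plain fixed-step RK4 integrator and helper names of our own, not code from the references) recovers $\nu = \Omega_0$ in the undriven limit $\Omega_1 = 0$:

```python
import math

def monodromy(Omega0, Omega1, omega, steps=4000):
    """One-period monodromy matrix of xi'' = -(Omega0^2 - Omega1^2 cos(omega t)) xi."""
    T = 2.0 * math.pi / omega
    h = T / steps

    def deriv(t, y):
        x, v = y
        return (v, -(Omega0**2 - Omega1**2 * math.cos(omega * t)) * x)

    def evolve(y):
        t = 0.0
        for _ in range(steps):
            k1 = deriv(t, y)
            k2 = deriv(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
            k3 = deriv(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
            k4 = deriv(t + h,   (y[0] + h*k3[0],   y[1] + h*k3[1]))
            y = (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
                 y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
            t += h
        return y

    col1 = evolve((1.0, 0.0))
    col2 = evolve((0.0, 1.0))
    return ((col1[0], col2[0]), (col1[1], col2[1]))

def floquet_exponent(Omega0, Omega1, omega):
    """Return nu (reduced to [0, omega/2]) if stable, else None."""
    M = monodromy(Omega0, Omega1, omega)
    tr = M[0][0] + M[1][1]
    if abs(tr) > 2.0:
        return None            # unstable: parametric resonance
    T = 2.0 * math.pi / omega
    return math.acos(tr / 2.0) / T

omega = 1.0
nu0 = floquet_exponent(0.3 * omega, 0.0, omega)   # undriven limit: nu = Omega0
```

Stability corresponds to $|{\rm tr}\,M|\le 2$; for $|{\rm tr}\,M|>2$ the quasienergy spectrum becomes continuous, the case excluded above.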
Now let us couple this system~(\ref{eq:HAM}) to an infinite phonon bath modeled by thermally occupied harmonic oscillators with frequencies~$\wo$. To this end, we adopt the interaction Hamiltonian~\cite{BreuerPetruccione02} \begin{equation} H_{\rm int} = \gamma x \sum_{\wo} \left( b^{\phantom\dagger}_{\wo} + b^\dagger_{\wo} \right) \; , \label{eq:SBC} \end{equation} where the constant $\gamma$ carries the dimension of energy per length, and the operators $b^{\phantom\dagger}_{\wo}$ and $b^\dagger_{\wo}$ describe, respectively, annihilation and creation processes in the bath. Following the approach pioneered by Breuer {\em et al.\/}~\cite{BreuerEtAl00}, the rate~$\Gamma_{fi}$ of bath-induced transitions from an initial Floquet state~$i$ to a final one~$f$ then is obtained as a sum \begin{equation} \Gamma_{fi} = \sum_\ell \Gamma_{fi}^{(\ell)} \; , \label{eq:TOR} \end{equation} where the partial rates \begin{equation} \Gamma_{fi}^{(\ell)} = \frac{2\pi}{\hbar^2} \left| V_{fi}^{(\ell)} \right|^2 N(\omega_{fi}^{(\ell)}) \, J(|\omega_{fi}^{(\ell)}|) \label{eq:PAR} \end{equation} correspond to the individual Floquet transition frequencies \begin{equation} \omega_{fi}^{(\ell)} = (\varepsilon_f - \varepsilon_i)/\hbar + \ell\omega \label{eq:FTF} \end{equation} with $\ell = 0,\pm 1, \pm 2, \ldots \;$. The quantities $V_{fi}^{(\ell)}$ are given by the Fourier components of the system's transition matrix elements, \begin{equation} \langle u_f(t) | \, \gamma x \, | u_i(t) \rangle = \sum_{\ell } {\mathrm e}^{\ri \ell \omega t} \, V_{fi}^{(\ell)} \; , \label{eq:FTM} \end{equation} and the numbers $N(\wo)$ specify the thermal occupation of the phonon modes, \begin{equation} N(\wo) = \left\{ \begin{array}{ll} \displaystyle \frac{1}{\exp(\beta\hbar\wo) - 1} &; \quad \wo > 0 \\ N(-\wo) + 1 &; \quad \wo < 0 \phantom{\displaystyle\int^a} \end{array} \right. 
\label{eq:OPM} \end{equation} with $\beta = 1/(\kB T_{\rm bath})$ encoding the inverse of the bath temperature $T_{\rm bath}$, invoking the Boltzmann constant~$\kB$. Observe that negative transition frequencies~(\ref{eq:FTF}) correspond to processes during which the system loses energy to the bath so that a bath phonon is created, explaining the ``$+1$'' in the second case~(\ref{eq:OPM}) which will turn out to be crucial. The third input required for evaluating the partial rates~(\ref{eq:PAR}) is the spectral density~$J(\wo)$ of the oscillator bath; observe that the transition frequencies~(\ref{eq:FTF}) enter into this density with their absolute value only. Knowing the total rates~(\ref{eq:TOR}), the quasistationary distribution $\{ p_n \}_{n = 0,1,2,\ldots}$ of the Floquet-state occupation probabilities characterizing the steady state is obtained as solution to the master equation~\cite{BreuerEtAl00} \begin{equation} 0 = \sum_m \big( \Gamma_{nm}p_m - \Gamma_{mn} p_n \big) \; . \end{equation} For the system~(\ref{eq:HAM}) with system-bath coupling~(\ref{eq:SBC}) the matrix~$\Gamma$ becomes tridiagonal, connecting neighboring Floquet states $m = n \pm 1$ only. Moreover, for each such transition the ratio~$r$ of the ``upward'' rate~$\Gamma_{n+1,n}$ to the matching ``downward'' rate~$\Gamma_{n,n+1}$, namely, \begin{equation} \frac{\Gamma_{n+1,n}}{\Gamma_{n,n+1}} = r \end{equation} becomes independent of the oscillator quantum number~$n$. Therefore, the Floquet-state occupation probabilities for the quasistationary state are given by the geometric distribution \begin{equation} p_n = (1 - r) \, r^n \; , \end{equation} provided $r < 1$. If $r > 1$ the system does not reach a steady state, but keeps on climbing the oscillator ladder, its ``upward'' transitions being favored over the ``downward'' ones.
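These statements are easy to verify numerically: for a single transition channel at frequency $\wo$ the occupation numbers~(\ref{eq:OPM}) obey the detailed-balance relation $N(\wo)/N(-\wo) = \exp(-\beta\hbar\wo)$, and for $r<1$ the geometric distribution is normalized with mean quantum number $r/(1-r)$. A minimal sketch in units $\hbar = \kB = 1$ (the numerical values are illustrative assumptions):

```python
import math

beta = 2.0            # inverse bath temperature, hbar = kB = 1

def N(w):
    """Thermal phonon occupation; N(-w) = N(w) + 1 for w > 0."""
    if w > 0:
        return 1.0 / (math.exp(beta * w) - 1.0)
    return N(-w) + 1.0

w = 0.7               # an illustrative transition frequency
r = N(w) / N(-w)      # upward/downward rate ratio for a single channel

# Geometric quasistationary distribution p_n = (1 - r) r^n
p = [(1.0 - r) * r**n for n in range(2000)]
norm = sum(p)
mean_n = sum(n * pn for n, pn in enumerate(p))
```

The ratio reproduces the Boltzmann factor $e^{-\beta\wo}$ exactly, and the truncated sums confirm normalization and mean to numerical precision.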
Since the system's quasienergies~(\ref{eq:QES}) are equidistant, one may introduce a quasitemperature~$\tau$ by regarding~$r$ as a Boltzmann factor, \begin{equation} r = \exp\!\left(-\frac{\hbar\nu}{\kB\tau}\right) \; . \label{eq:DQT} \end{equation} Positive quasitemperatures then characterize a steady state with $r < 1$; the smaller~$r$, the lower~$\tau$. In contrast, negative~$\tau$ signal quasithermal instability. Needless to say, we are considering a nonequilibrium system which does not possess a temperature in the sense of equilibrium thermodynamics; the quasitemperature simply serves as a convenient parameter for comparing the driving-engineered quasistationary state to the state that would be adopted in thermal equilibrium. Now the evaluation of the general expression~(\ref{eq:PAR}) leads to the explicit result~\cite{DiermannEtAl19} \begin{equation} r = \frac{\displaystyle{\sum_\ell} \left| v^{(\ell)} \right|^2 \, N(+\nu + \ell\omega) \; J(|\nu + \ell\omega |)} {\displaystyle{\sum_\ell} \left| v^{(\ell)} \right|^2 \, N(-\nu - \ell\omega) \; J(|\nu + \ell\omega |)} \; , \label{eq:RAT} \end{equation} where $v^{(\ell)}$ denote the Fourier coefficients of the periodic parts of the classical Floquet solutions~(\ref{eq:CFS}), \begin{equation} v(t) = \sum_\ell {\mathrm e}^{\ri\ell\omega t} \, v^{(\ell)} \; . \end{equation} This representation~(\ref{eq:RAT}) contains the heart of the Floquet-state cooling mechanism. Let us assume that the spectral density $J(\wo)$ is particularly large at an upward transition with {\em positive\/} frequency $\wo = \nu + \ell_1\omega$ accompanied by a reasonably large squared Fourier coefficient $\left| v^{(\ell_1)} \right|^2$, but relatively small at all others, so that all contributions to the ratio~(\ref{eq:RAT}) with $\ell \neq \ell_1$ may be neglected. 
In this case one has approximately \begin{equation} r \approx \frac{N(\nu + \ell_1\omega)}{N(\nu + \ell_1\omega) + 1} = \exp\!\big[ - \beta\hbar(\nu + \ell_1\omega)\big] < 1 \label{eq:APR} \end{equation} by virtue of Eq.~(\ref{eq:OPM}). The larger $\ell_1$ can be made, that is, the more sizeable Fourier coefficients are available, the smaller~$r$ can be reached. It needs to be stressed that both the Fourier coefficient labeled $\ell_1$ and the density of states drop out here. These quantities set the scale of the corresponding partial rate~(\ref{eq:PAR}) and, hence, determine the time required for relaxing to the quasithermal steady state; if the Fourier coefficient picked out by the density of states should be small, this relaxation time may be quite long. Quite intriguingly, the geometric Floquet-state distribution implied by this approximate identity~(\ref{eq:APR}) looks as if the {\em driven nonequilibrium\/} system were mapped to an {\em undriven equilibrium\/} system characterized by a Boltzmann distribution with the actual temperature of the ambient bath, but with a ``renormalized'' effective level spacing $\hbar(\nu + \ell_1\omega)$ selected by the density of states, although this density itself does not figure in the end. It will be interesting to explore whether this particular feature exhibited by the present model is capable of generalization. Here we stick to the idea of characterizing the quasistationary steady state of the driven system in terms of the quasitemperature introduced through Eq.~(\ref{eq:DQT}). Evidently, the approximation~(\ref{eq:APR}) allows one to cover practically the entire interval $0 < r < 1$, implying that the range of quasitemperatures accessible to the system is $0 < \tau/T_{\rm bath} < \infty$. Thus, the quasitemperature may be quite different from the bath temperature $T_{\rm bath}$; in particular, the driven system can effectively be much colder than its environment. 
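Two consequences of Eq.~(\ref{eq:APR}) can be confirmed in a few lines of Python (a check added here for illustration, not taken from the original work): the identity $N/(N+1) = \exp(-\beta\hbar\omega)$, and the fact that combining Eq.~(\ref{eq:APR}) with the quasitemperature definition~(\ref{eq:DQT}) yields $\tau/T_{\rm bath} = \nu/(\nu + \ell_1\omega)$ in this single-transition limit. For $\nu/\omega \approx 1.387$ and $\ell_1 = 6$ this gives approximately $0.19$, the value reached on the highest plateau of Fig.~\ref{F_3}:

```python
import math

def bose(omega, beta_hbar):
    """Thermal occupation N(omega) for omega > 0, Eq. (OPM), with hbar = 1."""
    return 1.0 / math.expm1(beta_hbar * omega)

nu, omega, beta_hbar = 1.387, 1.0, 0.1   # parameters used in the figures

# identity behind Eq. (APR): N/(N+1) = exp(-beta*hbar*(nu + l1*omega))
for ell1 in range(-1, 7):                # the plateaus seen in Figs. 2 and 3
    n_occ = bose(nu + ell1 * omega, beta_hbar)
    assert abs(n_occ / (n_occ + 1.0)
               - math.exp(-beta_hbar * (nu + ell1 * omega))) < 1e-12

# quasitemperature, Eq. (DQT): r = exp(-hbar*nu/(kB*tau)) combined with
# Eq. (APR) gives tau/T_bath = nu/(nu + l1*omega) in this limit
tau_over_Tbath = nu / (nu + 6 * omega)   # plateau l1 = 6
print(round(tau_over_Tbath, 2))          # 0.19, the value quoted for Fig. 3
```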
\section{Results} In order to substantiate this key issue we now specify a Gaussian spectral density \begin{equation} J(\widetilde\omega) = J_0 \exp\!\left(-\frac{(\wo - \wo_0)^2}{(\Delta\wo)^2}\right) \label{eq:GSD} \end{equation} centered around a frequency $\wo_0$ with width $\Delta\wo$. The parameters entering into the Mathieu spring function~(\ref{eq:MSF}) are chosen as $\Omega_0/\omega = \sqrt{2}$ and $\Omega_1/\omega = 1.0$, giving the characteristic exponent $\nu/\omega \approx 1.387$, only slightly down-shifted against the unperturbed oscillator frequency by the ac Stark effect~\cite{DiermannEtAl19}. Finally, the bath temperature is adjusted to $\hbar\omega/(\kB T_{\rm bath}) = \beta\hbar\omega = 0.1$. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{FIG_L1.eps} \caption{Ratio~$r$ (full line; black), scaled quasi\-temperature $\tau/T_{\rm bath}$ (dashed line; red), and scaled occupation probability $p_0/P_0$ (dotted line; blue) of the Floquet state $n = 0$ vs.\ the center $\wo_0/\omega$ of a Gaussian spectral density~(\ref{eq:GSD}) with large width $\Delta\wo/\omega = 1.0$. The system parameters are $\Omega_0/\omega = \sqrt{2}$ and $\Omega_1/\omega = 1.0$; the bath temperature has been set to $\beta\hbar\omega = 0.1$.} \label{F_1} \end{figure} In Figure~\ref{F_1} we display the ratio~$r$, the scaled quasi\-temperature $\tau/T_{\rm bath}$, and the scaled population~$p_0/P_0$ of the Floquet state $n = 0$, where $P_0 = 1 - \exp(-\beta\hbar\Omega_0)$ is the thermal occupation of the oscillator ground state without periodic driving, attained when $\Omega_1/\omega = 0$. Here we have chosen the width $\Delta\wo/\omega = 1.0$ of the spectral density~(\ref{eq:GSD}); data are plotted vs.\ its center~$\wo_0/\omega$. 
We observe a steady decrease of~$r$ with increasing center frequency~$\wo_0/\omega$, accompanied by the corresponding decrease of the quasi\-temperature, and a fairly significant increase of the population of the Floquet state $n = 0$, such that the latter exceeds the thermal equilibrium value by a factor of about~$2.5$ for $\wo_0/\omega \approx 8$. This finding already provides an encouraging verification of the mechanism underlying the approximation~(\ref{eq:APR}): Here the density~(\ref{eq:GSD}) successively favors Floquet transition frequencies with $\ell_1 = -1,0,1,2,\ldots\;$, but the Gaussian is still so wide that these transitions are not individually resolved. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{FIG_L2.eps} \caption{As Fig.~\ref{F_1}, but for reduced width $\Delta\wo/\omega = 0.316$. The sequence of plateaus is well captured by Eq.~(\ref{eq:APR}) with $\ell_1 = -1,0,1,2,\ldots\;$.} \label{F_2} \end{figure} This changes when the density width is reduced to $\Delta\wo/\omega = 0.316$, while all other parameters are left unchanged, as shown in Fig.~\ref{F_2}: Here the scaled $n\!=\!0$-population features a series of well-developed plateaus with increasing center frequency which are explained with remarkable accuracy by Eq.~(\ref{eq:APR}), recalling $\nu/\omega \approx 1.387$ and $\beta\hbar\omega = 0.1$. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{FIG_L3.eps} \caption{Ratio~$r$ (full line; black) and scaled occupation probability $p_0/P_0$ (dotted line; blue) of the Floquet state $n = 0$. All parameters are identical to those employed in Figs.~\ref{F_1} and~\ref{F_2}, except for the density width which is reduced further to $\Delta\wo/\omega = 0.1$ here. Plateau values of $p_0/P_0$ with $r < 1$ are governed by Eq.~(\ref{eq:APR}) with $\ell_1 = -1,0,\ldots, 6$; intervals of quasithermal instability with $r > 1$ by Eq.~(\ref{eq:BAD}) with $\ell_2 = -2,-3,\ldots,-9$. 
Observe that $p_0/P_0 \approx 4$ for $7 \lesssim \wo_0/\omega \lesssim 7.5$, corresponding to $\tau/T_{\rm bath} \approx 0.19$.} \label{F_3} \end{figure} A noteworthy phenomenon occurs when the density peak is made still sharper: Figure~\ref{F_3} depicts both $r$ and $p_0/P_0$ for $\Delta\wo/\omega = 0.1$. Now one observes {\em two\/} plateau sequences: intervals of successively lower~$r$ and, hence, lower quasitemperatures allowing for higher $n\!=\!0$-population alternate with intervals of successively higher~$r$, indicating successively stronger quasithermal instability. For explaining this numerical finding one has to go back to the exact Eq.~(\ref{eq:RAT}), and to appreciate the fact that only the absolute value of the Floquet transition frequencies enters into the density~$J(|\wo|)$. Thus, when a peaked density such as described by Eq.~(\ref{eq:GSD}) is appreciably large at a positive frequency $\nu + \ell_1\omega$, it may also be large at a {\em negative\/} frequency $\nu + \ell_2\omega$, then leading to \begin{equation} r \approx \frac{N(-\nu - \ell_2\omega) + 1}{N(-\nu - \ell_2\omega)} = \exp\!\big[ - \beta\hbar(\nu + \ell_2\omega)\big] > 1 \; . \label{eq:BAD} \end{equation} This is the key to understanding Fig.~\ref{F_3}: The sequence of stable plateaus with $r < 1$ is again explained by Eq.~(\ref{eq:APR}) with $\ell_1 = -1,0,\ldots,6$ to an accuracy on the sub-percent level, while the zones of quasithermal instability are governed by Eq.~(\ref{eq:BAD}) with $\ell_2 = -2,-3,\ldots,-9$. In principle, both kinds of processes have been competing already in the situation considered in Fig.~\ref{F_2}, but there the ``bad'' processes have been overshadowed because of their smaller Fourier weights. It is only the narrow peak width employed in Fig.~\ref{F_3} that allows one to disentangle the ``good'' transitions from the ``bad'' ones.
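The competition between ``good'' and ``bad'' transitions can be reproduced with a toy evaluation of Eq.~(\ref{eq:RAT}). The Fourier weights $|v^{(\ell)}|^2$ below are illustrative placeholders with an exponential decay, not the actual Mathieu coefficients; only the placement of the Gaussian peak matters for whether $r$ falls below or above unity:

```python
import math

def bose(omega, beta_hbar):
    """Eq. (OPM) for either sign of omega (hbar = 1)."""
    if omega > 0.0:
        return 1.0 / math.expm1(beta_hbar * omega)
    return bose(-omega, beta_hbar) + 1.0

def ratio_r(weights, nu, omega, beta_hbar, w0, dw):
    """Eq. (RAT) with a Gaussian density, Eq. (GSD), and toy Fourier weights."""
    num = den = 0.0
    for ell, w2 in weights.items():
        f = nu + ell * omega                       # Floquet transition frequency
        J = math.exp(-((abs(f) - w0) / dw) ** 2)   # J(|f|), with J0 set to 1
        num += w2 * bose(+f, beta_hbar) * J
        den += w2 * bose(-f, beta_hbar) * J
    return num / den

nu, omega, beta_hbar = 1.387, 1.0, 0.1
weights = {ell: math.exp(-abs(ell)) for ell in range(-9, 7)}  # toy decay

# peak on the positive transition nu + 3*omega: cooling plateau, r < 1
r_good = ratio_r(weights, nu, omega, beta_hbar, w0=nu + 3 * omega, dw=0.1)
# peak on |nu - 5*omega|, a negative transition frequency: instability, r > 1
r_bad = ratio_r(weights, nu, omega, beta_hbar, w0=abs(nu - 5 * omega), dw=0.1)
print(r_good < 1.0, r_bad > 1.0)
```

Centering the density on the positive transition $\nu + 3\omega$ reproduces a cooling plateau with $r \approx \exp[-\beta\hbar(\nu + 3\omega)]$, while centering it on $|\nu - 5\omega|$, the absolute value of a negative transition frequency, drives $r$ above one.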
It deserves to be pointed out that one reaches a scaled $n\!=\!0$-population $p_0/P_0 \approx 4$ for $7 \lesssim \wo_0/\omega \lesssim 7.5$, corresponding to the scaled quasi\-temperature $\tau/T_{\rm bath} \approx 0.19$. Thus, our present proof-of-principle study vindicates that Floquet-state cooling can be fairly effective. Among other things, our results imply that an ideal Bose gas, stored in a parametrically driven oscillator trap, can condense into a macroscopically occupied single-particle Floquet state even if the ambient temperature is higher than the usual critical temperature. We also point out that this novel type of ``cooling by driving'' is thermodynamically consistent: The non-equilibrium steady state is characterized by an energy flow which is always directed from the driven system into the bath, even if the quasitemperature of the former is lower than the actual temperature of the latter~\cite{DiermannEtAl19}.
Moreover, for such chaotic systems it may no longer be feasible to assign meaningful quantum numbers to the Floquet states which depend on the parameters in a continuous manner, because their quasienergies exhibit a dense net of avoided crossings~\cite{HoneEtAl09}; in particular, it may no longer be feasible to identify a ``Floquet ground state'' by continuity. Nonetheless, it will still be possible to select some Floquet state of interest, and to guide population into that state by means of suitable bath densities of states. Therefore, the mechanism that has been exemplified with the help of the model~(\ref{eq:HAM}) is not restricted to that model, but fairly general. For these reasons we expect that Floquet-state cooling may find practical applications, {\em e.g.\/}, with periodically driven solid-state systems interacting with a phonon bath predominantly at certain well-defined frequencies. One may also envision deliberate quasithermal engineering, amounting to the design of either favorable system environments, or of particular driving forms in order to enrich the Floquet system's Fourier content. The theoretical framework employed here, based on the golden rule-type rates~(\ref{eq:PAR}), is equivalent to the standard Born-Markov approximation~\cite{BreuerPetruccione02}, but a parametrically driven harmonic oscillator coupled to $N$ bath oscillators constitutes an integrable system for any~$N$~\cite{HagedornEtAl86}. Thus, it will be worthwhile to explore whether and how the findings reported in the present matter-of-principle study are recovered in the proper limit $N \to \infty$ without invoking any approximation at all. It is known that in many situations a periodic drive can be switched off adiabatically, such that the occupation probabilities of the Floquet states remain almost constant, even if the switch-off takes place within only a few driving cycles~\cite{BreuerHolthaus89,DreseHolthaus99}. 
On the other hand, the time scales required for relaxing to the quasistationary distribution of Floquet-state occupation probabilities depend on the coupling to the bath, which is weak by assumption. Therefore, under appropriate conditions it should be feasible to switch off the drive in an effectively adiabatic manner on time scales significantly shorter than the relaxation times after the quasistationary state has been reached, so that its Floquet-state occupation probabilities determine the occupation probabilities of the system's proper energy eigenstates. This sketch might yield a blueprint for an actual cooling mechanism, providing higher-than-thermal ground-state populations. \vspace{3ex} {\bf DATA AVAILABILITY} \vspace{1ex} No datasets other than those plotted in Figs.~\ref{F_1} -- \ref{F_3} were generated or analysed during the current study.
\section{Introduction} The emergence of atomically thin, single-layer graphene spawned a new class of materials, known as two-dimensional (2D) materials~\cite{Xu2013, Novoselov2011}. These extraordinary 2D materials have attracted significant attention within the scientific community due to their wide range of properties - from large band-gap insulators to the very best conductors, the mechanically tough to the soft and malleable, and semi-metals to topological insulators~\cite{Singh2015, Paul2017,Blonsky2015,Akiyama2021}. The diverse pool of properties that 2D materials possess promises many novel next-generation device applications in nanoelectronics, quantum computing, field-effect transistors, microwave and terahertz photonics, and catalysis~\cite{Rode2017, Xu2015, Yu2014, Kang2013, Amani2014, Li2019, Luo2016, Yu2014}. Despite the excitement surrounding these promising materials, surprisingly few 2D materials are used in industry. Roughly 55 of the $>$5,000 theoretically predicted 2D materials have been experimentally synthesized~\cite{Mounet2018, Ashton2017, c2db, Singh2014, Zhou2019}. Of the various methods used to synthesize 2D materials, substrate-assisted methods such as chemical vapor deposition result in large-area, low-defect flakes at a reasonable cost per mass~\cite{Novoselov2012}. Substrate-assisted methods have the added benefit of being able to synthesize 2D materials that have non-van der Waals (vdW) bonded bulk counterparts. On the other hand, exfoliation techniques, like mechanical exfoliation~\cite{Singh2015}, can only be used to generate 2D flakes from vdW-bonded bulk counterparts. Currently, substrate-assisted synthesis of 2D materials relies on expensive trial-and-error processes requiring significant experimental effort and intuition for choosing the substrate, precursors, and growth conditions (substrate temperatures, growth rate, etc.), resulting in slow progress toward realizing and utilizing these materials.
Furthermore, the properties of 2D materials can be dramatically altered by placing them on substrates. For example, the mobility of carriers in 2D-MoS$_2$ is reduced by more than an order of magnitude by placing it on a sapphire substrate~\cite{singh2015al2o3}. To enable the functionalization and to assist in the selection of substrates for synthesis, a detailed understanding of the substrate-assisted modification of energetic, physical, and electronic properties of 2D materials is required. In this work, we present the $Hetero2d$ workflow package inspired by existing community workflow packages. $Hetero2d$ is tailored to address scientific questions regarding the stability and properties of 2D-substrate heterostructured materials. $Hetero2d$ provides automated routines for the generation of low-lattice mismatched heterostructures for arbitrary 2D materials and substrate surfaces, the creation of vdW-corrected density-functional theory (DFT) input files, the submission and monitoring of simulations on computing resources, and the post-processing needed to extract the key quantities of interest, namely, (a) the interface interaction energy of 2D-substrate heterostructures, (b) substrate-induced changes in the interfacial structure, and (c) charge doping of the 2D material. The 2D-substrate information generated by our routines is stored in a MongoDB database tailored for 2D-substrate heterostructures. As an example, we demonstrate the use of $Hetero2d$ in screening for substrate surfaces that stabilize the following four 2D materials - $2H$-MoS$_2$, $1T$- and $2H$-NbO$_2$, and hexagonal-ZnTe. We considered the low-index planes of a total of 50 cubic metallic materials as potential substrates. Using the $Hetero2d$ workflow, we determine that Cu, Hf, Mn, Nd, Ni, Pd, Re, Rh, Sc, Ta, Ti, V, W, Y, and Zr substrates sufficiently stabilize the formation energies of these 2D materials, with binding energies in the range of $\sim$0.1 -- 0.6 eV/atom.
Upon examining the $z$-separation, the charge transfer, and the electronic density of states at the 2D-substrate interface using post-processing tools of $Hetero2d$, we find covalent-type bonding at the interface, which suggests that these substrates can be used as contact materials. \href{https://github.com/cmdlab/Hetero2d}{Hetero2d} is shared on GitHub as an open-source package under the GNU license. \section{DFT Approach to Identifying Stable 2D-Substrate Heterostructures} 2D materials are inherently meta-stable materials and are often created by peeling 2D films from layered, vdW bonded bulk counterparts. Their meta-stability arises from the removal of the vdW bonds between the individual flakes. However, the vdW bonds are an order of magnitude weaker than the in-plane covalent or ionic bonds of 2D materials; thus, many 2D materials can remain stable at room temperature or above. A quantitative measure of the stability of 2D materials to remain as a free-standing 2D film is given by the formation energy, $\Delta E_{\mathrm{vac}}^f$, with respect to the bulk phase \begin{equation} \label{eq:Eform} \begin{aligned}[t] \hspace*{-1.5cm} \Delta E_{\mathrm{vac}}^f &= \dfrac{ E_{\mathrm{2D}}}{ N_{\mathrm{2D}} } - \dfrac{E_{\mathrm{3D}}}{N_{\mathrm{3D}}},\\ \end{aligned} \end{equation} where $E_{\mathrm{2D}}$\ is the energy of a 2D material in vacuum, $E_{\mathrm{3D}}$\ is the energy of the bulk counterpart of the 2D material, and $N_{\mathrm{2D}}$\ and $N_{\mathrm{3D}}$\ are the number of atoms in the unit cell of 2D and bulk counterpart, respectively. The $\Delta E_{\mathrm{vac}}^f$\ of a 2D material indicates the stability of a 2D flake to retain the 2D form over its bulk counterpart, where the higher the $\Delta E_{\mathrm{vac}}^f$, the larger the driving force to lower the free energy. Singh et al.
and others have shown that when the $\Delta E_{\mathrm{vac}}^f$\ < 0.2 eV/atom, the 2D materials are stable as a free-standing film, but for larger $\Delta E_{\mathrm{vac}}^f$'s they are highly unstable and may only be synthesized using substrate-assisted methods~\cite{Singh2015, c2db}. For substrate surfaces to stabilize a 2D material during the growth processes, the 2D-substrate heterostructure should be energetically stable. Thus the interactions between the 2D material and substrate surface have to be attractive in nature. This interaction energy, known as the binding energy, can be estimated as $\Delta E_{\mathrm{b}} = (E_{\mathrm{2D}} + E_{\mathrm{S}} - E_{\mathrm{2D+S}} )/N_{\mathrm{2D}}$, where $E_{\mathrm{2D+S}}$\ is the energy of the 2D material adsorbed on the surface of a substrate, $E_{\mathrm{S}}$\ is the energy of the substrate slab, $E_{\mathrm{2D}}$\ is the energy of the free-standing 2D material, and $N_{\mathrm{2D}}$\ is the number of atoms in the unit cell of the 2D material. Note that strain is applied to the 2D material to place it on the substrate surface due to the lattice-mismatch between the two lattices. For the 2D-substrate heterostructure interaction to be attractive, the $\Delta E_{\mathrm{b}}$\ > 0. In addition, this $\Delta E_{\mathrm{b}}$\ should be greater than the $\Delta E_{\mathrm{vac}}^f$\ of 2D materials to ensure that the 2D materials remain in their 2D form on the substrate. Singh et al. have shown previously that the successful synthesis of a 2D material on a particular substrate surface is feasible when the adsorption formation energy, $\Delta E_{\mathrm{ads}}^f$\ = $\Delta E_{\mathrm{vac}}^f$\ - $\Delta E_{\mathrm{b}}$\ < 0. \section{Hetero2d: The High-Throughput Implementation of the DFT Approach} \subsection{Introduction} The $Hetero2d$ package is an all-in-one workflow approach to model the heterostructures formed by arbitrary combinations of 2D materials and substrate surfaces.
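Condensing the three stability criteria above into code, a minimal sketch reads as follows; the total energies are illustrative placeholders, since in practice they come from the vdW-corrected DFT steps described below:

```python
def formation_energy_vac(E_2d, N_2d, E_3d, N_3d):
    """Eq. (1): Delta E^f_vac = E_2D/N_2D - E_3D/N_3D, in eV/atom."""
    return E_2d / N_2d - E_3d / N_3d

def binding_energy(E_2d, E_slab, E_hetero, N_2d):
    """Delta E_b = (E_2D + E_S - E_2D+S)/N_2D; attractive when > 0."""
    return (E_2d + E_slab - E_hetero) / N_2d

def adsorption_formation_energy(dE_f_vac, dE_b):
    """Delta E^f_ads = Delta E^f_vac - Delta E_b; synthesis feasible when < 0."""
    return dE_f_vac - dE_b

# illustrative total energies (eV); real values come from the DFT workflow
dE_f = formation_energy_vac(E_2d=-19.0, N_2d=3, E_3d=-39.2, N_3d=6)
dE_b = binding_energy(E_2d=-19.0, E_slab=-210.0, E_hetero=-230.5, N_2d=3)
print(adsorption_formation_energy(dE_f, dE_b) < 0)  # 2D form is stabilized
```

With these placeholder numbers, $\Delta E_{\mathrm{vac}}^f = 0.2$ eV/atom and $\Delta E_{\mathrm{b}} = 0.5$ eV/atom, so $\Delta E_{\mathrm{ads}}^f < 0$ and the substrate-stabilization criterion is met.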
$Hetero2d$ can calculate the $\Delta E_{\mathrm{vac}}^f$, $\Delta E_{\mathrm{b}}$, and $\Delta E_{\mathrm{ads}}^f$\ for each 2D-substrate heterostructure and store the relevant simulation parameters and post-processing in a queryable MongoDB database that can be interfaced to and accessed by an application programming interface (API) or a web-portal. $Hetero2d$ is written in Python 3.6, a high-level coding language widely used on modern scientific computing resources. $Hetero2d$ utilizes \textit{MPInterfaces}~\cite{Mathew2016} routines and the robust high-throughput computational tools developed by the Materials Project~\cite{atomate,Jain2013,Jain2015,Ong2013} (MP), namely \textit{atomate}, \textit{FireWorks}, \textit{pymatgen}, and \textit{custodian}. $Hetero2d$'s framework is inspired by \textit{atomate}'s straightforward statement-based workflow design to perform complex materials science computations with pre-built workflows that automate various types of DFT calculations. Figure \ref{fig:Figure1} illustrates the framework of our workflow within the $Hetero2d$ package. $Hetero2d$ extends some powerful high-throughput techniques available in existing community packages and combines them with new routines created for this work to generate 2D-substrate heterostructures, perform vdW-corrected DFT calculations, store the stability related data within a queryable database, and analyze key properties of the heterostructure. In the following sections, we discuss each step outlined in Figure \ref{fig:Figure1} underscoring the new computational tools developed for $Hetero2d$. \begin{figure}[!th] \centering \includegraphics[width=\textwidth]{img/WorkflowFlowChart.pdf} \caption{Outline for our computational workflow used in our study to investigate the properties of the 2D-substrate heterostructures as coded in the $Hetero2d$ package. 
All structures imported from an external database are relaxed using vdW-corrected DFT with our parameters (discussed below) to maintain consistency. Boxes in gold denote a DFT simulation step and boxes in silver denote a pre-processing or post-processing step.} \vspace{-0.25\intextsep} \label{fig:Figure1} \end{figure} \subsection{Workflow Framework} $Hetero2d$'s \textit{atomate}-inspired framework utilizes the \textit{FireWorks} package to break down and organize each task within a workflow. Workflows within the \textit{FireWorks} package are organized into three task levels -- (1) workflow, (2) firework, and (3) firetask. A workflow is a set of fireworks with dependencies and information shared between them through the use of a unique specification file that determines the order of execution of each firework (FW) and firetask. Each FW is composed of one or more related firetasks designed to accomplish a specific task such as DFT structure relaxation. Firetasks are the lowest level task in the workflow. Firetasks can be simple tasks such as writing files, copying files from a previous directory, or more complex tasks such as calling script-based functions to generate 2D-substrate heterostructures, starting and monitoring a DFT calculation, or post-processing a DFT calculation and updating the database. $Hetero2d$'s workflow \textit{get\_heterostructures\_stabilityWF} shown in Figure \ref{fig:Figure1}, has a total of five firework steps (1) FW$_1$: the DFT structural optimization of the 2D material, (2) FW$_2$: the DFT structural optimization of the bulk counterpart of the 2D material, (3) FW$_3$: the DFT structural optimization of the substrate, (4) FW$_4$: the creation and DFT structural optimization of the substrate slab, and (5) FW$_5$: the generation and DFT structural optimization of the 2D-substrate heterostructure configurations. Each firework can be composed of a single or many related firetasks. 
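The dependency structure of these five fireworks can be pictured as a small directed acyclic graph. The wiring below is inferred from the description above (the actual links created by \textit{get\_heterostructures\_stabilityWF} may differ), and a topological sort yields one valid execution order:

```python
# Firework dependency graph: parent -> children. The wiring is inferred from
# the text; the actual links set by the workflow package may differ.
deps = {
    "FW1_opt_2d": ["FW5_heterostructure"],
    "FW2_opt_bulk_of_2d": ["FW5_heterostructure"],  # supplies the dE_f_vac input
    "FW3_opt_substrate": ["FW4_opt_slab"],
    "FW4_opt_slab": ["FW5_heterostructure"],
    "FW5_heterostructure": [],
}

def topo_order(graph):
    """Kahn's algorithm: one valid execution order for the firework DAG."""
    indegree = {node: 0 for node in graph}
    for children in graph.values():
        for child in children:
            indegree[child] += 1
    ready = sorted(node for node, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        node = ready.pop(0)
        order.append(node)
        for child in graph[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
        ready.sort()
    return order

print(topo_order(deps)[-1])  # the heterostructure firework always runs last
```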
The tasks are gathered from the specification file that controls the execution of each firetask. For example, FW$_1$ is used to perform a vdW-corrected DFT structure optimization of the 2D material. Note that the DFT simulations are performed using the Vienna \textit{ab initio} simulation package ~\cite{Kresse5, Kresse4, Kresse1, Kresse2, Kresse3}. FW$_1$ is composed of firetasks which (1) write VASP input files to the job's launch directory, (2) write the structure file, (3) run VASP using \textit{custodian}~\cite{Ong2013} to perform just-in-time job management, error checking, and error recovery, (4) collect information regarding the location of the calculation and update the specification file, and (5) perform analysis and convergence checks for the calculation and store all pre-defined information about the calculation in our MongoDB database. A more detailed explanation of each firework in the workflow is discussed in section 3.6, \textit{Workflow Steps}. \subsection{Package Functionalities} As mentioned earlier, $Hetero2d$ adapts and extends existing community packages to assess the stability of 2D-substrate heterostructures. Table \ref{tab:Table1} lists the functionalities of $Hetero2d$ compared with two other workflow-based packages, \textit{MPInterfaces}~\cite{Mathew2016} and \textit{atomate}~\cite{atomate}, highlighting new and common features within the three packages. \begin{table} \centering \caption{A list of functionalities present in the $Hetero2d$ package compared with two other workflow-based packages \textit{MPInterfaces} and \textit{atomate}. 
$Hetero2d$ is the only workflow package with all the specific features needed to create 2D-substrate heterostructures using high-throughput computational methods.} \begin{adjustbox}{width=0.5\textwidth} \begin{tabular}{|c|c|c|c|} \hline & $Hetero2d$ & \textit{MPInterfaces} & \textit{Atomate} \\ \hline Structure processing & \checked & \checked & \checked \\ \hline Error recovery & \checked & \checked & \checked \\ \hline Database integration & \checked & \checked & \checked \\ \hline \textit{FireWorks} compatible & \checked & & \checked \\ \hline 2D hetero. routines & \checked & \checked & \\ \hline 2D hetero. workflow & \checked & & \\ \hline 2D post-processing & \checked & & \\ \hline \end{tabular} \end{adjustbox} \label{tab:Table1} \end{table} All three packages utilize the \textit{pymatgen} package to perform various structure processing tasks. \textit{Pymatgen} is used to perform various types of structure-manipulation processes such as reducing/increasing simulation cell size, creating a vacuum, or creating a slab during the execution of the workflow. Throughout $Hetero2d$, we utilized \textit{pymatgen} to handle structure-manipulation for (a) the bulk materials and (b) some basic pre-/post-processing of structures and generation of files for the DFT calculations. Within $Hetero2d$, \textit{pymatgen}'s structure-manipulation tools are used to create conventional unit cells for the substrate and create the substrate slab surface. Additionally, we have integrated \textit{pymatgen}'s structure analysis modules to decorate the fireworks in the workflow with structural information for each input structure to populate our database. The pre-processing enables one to differentiate crystal phases with similar compound formulas, easily reference and sort data within the database, and perform analysis in later fireworks. All three packages use the \textit{custodian} package~\cite{Ong2013} to perform error recovery. 
Error recovery routines are pivotal for any workflow package to reduce the need for human intervention and correct simple run-time errors with pre-defined functions. Additionally, \textit{custodian} alerts the user if an unrecoverable error has occurred. Database integration is another functionality present in all three packages that stores and analyzes the vast amount of information generated by each calculation. Only $Hetero2d$ and \textit{atomate} are \textit{FireWorks} compatible, whereas \textit{MPInterfaces} uses the Python package \textit{fabric} to remotely launch jobs over SSH. \textit{FireWorks} is a single package used to define, manage, and execute scientific workflows with built-in failure-detection routines capable of concurrent job execution and remote job tracking over an arbitrary number of computing resources accessible from a clean and flexible Python API. Routines used to automate the generation of 2D-substrate heterostructures given user constraints are available in $Hetero2d$ and \textit{MPInterfaces}. \textit{MPInterfaces} implements a mathematical algorithm developed by Zur et al.~\cite{Zur1984} for generating supercells of lattice-matched heterostructures given two arbitrary lattices and user-specified tolerances for the lattice-mismatch and heterostructure surface area. $Hetero2d$ incorporates functions from \textit{MPInterfaces} to create 2D-substrate heterostructures and enable our package to utilize \textit{FireWorks}, with which \textit{MPInterfaces} is currently incompatible. Additionally, by incorporating these routines in $Hetero2d$, we can modify the function to return critical information regarding the 2D-substrate heterostructures that is not returned by the \textit{MPInterfaces} function.
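As an illustration of the kind of matching information involved, a simplified strain and angle-mismatch check might look as follows; the conventions and the example numbers are ours, not the package's:

```python
def strain_and_mismatch(a_2d, b_2d, gamma_2d, a_sup, b_sup, gamma_sup):
    """Percent strain applied to the 2D lattice to fit a matched substrate
    supercell, and the angle mismatch in degrees (conventions illustrative)."""
    strain_a = 100.0 * (a_sup - a_2d) / a_2d
    strain_b = 100.0 * (b_sup - b_2d) / b_2d
    return strain_a, strain_b, gamma_sup - gamma_2d

# e.g. 2H-MoS2 (a = b = 3.16 A, gamma = 120 deg) on a hypothetical supercell
sa, sb, dg = strain_and_mismatch(3.16, 3.16, 120.0, 3.25, 3.25, 120.0)
within_tolerance = max(abs(sa), abs(sb)) < 5.0 and abs(dg) < 1.0
print(round(sa, 2), within_tolerance)  # ~2.85% strain, inside a 5% cutoff
```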
Our 2D-substrate heterostructure function returns the strain of the 2D material along the \textbf{a} and \textbf{b} lattice vectors, the angle mismatch between the \textbf{ab} lattice vectors of the substrate and the 2D material, and the scaling matrix used to generate the aligned 2D-substrate heterostructures. The 2D-substrate heterostructure workflow and post-processing routines are uniquely available in $Hetero2d$. The workflow automates all steps needed to study 2D-substrate heterostructure stability and properties via the DFT method. The post-processing routines populate a curated database from which one can view all calculation results and perform additional analyses or calculations. \subsection{Default Computational Parameters} \textit{CMDLInterfaceSet} is based on \textit{pymatgen}'s \textit{VASPInputSet} class that creates custom input files for DFT calculations. Our new class \textit{CMDLInterfaceSet} has all the functionality of the parent \textit{pymatgen} class but is tailored to perform structural optimizations of 2D-substrate heterostructures and implements vdW corrections, on-the-fly dipole corrections for slabs, generation of custom $k$-point mesh grid density, and addition of selective dynamics tags for the 2D-substrate structures. All DFT calculations are performed using the projector-augmented wave method as implemented in the plane-wave code VASP~\cite{Kresse5, Kresse4, Kresse1, Kresse2, Kresse3}. The vdW interactions between the 2D material and substrate are modeled using the vdW–DF~\cite{Rydber2003} functional with the optB88 exchange functional~\cite{Klimes2011}. The \textit{CMDLInterfaceSet} has a default energy cutoff of 520 eV used for all calculations to ensure consistency between structures that have the cell shape and volume relaxed and those that only have ionic positions relaxed.
The default $k$-point grid density is automated using \textit{pymatgen}~\cite{Ong2013} routines at 20 $k$-points/unit length, taking the nearest integer value after multiplying $\frac{1}{\textbf{a}}$ and $\frac{1}{\textbf{b}}$ by 20. These settings were sufficient to converge all calculations to a total force per atom of less than 0.02 eV/\AA. Additional information regarding the default settings of the \textit{CMDLInterfaceSet} and the convergence tests performed to benchmark our calculations is given in sections 1 and 2 of the SI. \subsection{Workflow Initialization and Customization} To use $Hetero2d$'s workflow, \textit{get\_heterostructures\_stabilityWF}, we import the 2D structure, its bulk counterpart, and the substrate structure from existing databases through their APIs. When initialized, the workflow can accept up to three structures: (1) the 2D structure, (2) the bulk counterpart of the 2D structure, and (3) the substrate structure in bulk or slab form. To perform the structure transformations that generate the substrate slabs and the 2D-substrate heterostructures, our workflow requires two dictionaries during initialization: (1) the \textit{h\_params} and (2) the \textit{slab\_params} dictionary. Figure \ref{fig:Figure2} is a code excerpt demonstrating the parameters one can supply to generate a 2D-substrate heterostructure on a (111) substrate slab surface. In Figure \ref{fig:Figure2}, the \textit{slab\_params} dictionary generates a substrate slab with a vacuum spacing of 19 \AA\ and a substrate slab thickness of at least 12 \AA. The \textit{h\_params} dictionary creates the lattice-matched, symmetry-matched 2D-substrate heterostructures with a 3.0 \AA\ $z$-separation distance between the 2D material and the substrate surface. 
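The $k$-point rule above amounts to a one-line computation per in-plane lattice vector. The sketch below reproduces it; the single $k$-point along the third axis is our assumption for a slab geometry with vacuum, not a value stated explicitly in the text.

```python
def kpoint_grid(a, b, density=20):
    """In-plane k-point grid from the default density rule: the nearest
    integer to density/a and density/b (k-points per unit length, lattice
    constants in Angstrom), with a single k-point assumed along the
    vacuum direction of a slab."""
    return (max(1, round(density / a)), max(1, round(density / b)), 1)
```

For a hexagonal cell with $a = b = 3.16$ \AA\ this yields a $6 \times 6 \times 1$ grid.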
The \textit{h\_params} dictionary also sets the maximum allowed lattice-mismatch along \textbf{ab} to be less than 5\%, the maximum surface area to be less than 130 \AA$^2$, and the selective dynamics tags in the DFT input file to relax all layers of the 2D material and the top two layers of the substrate slab. \begin{wrapfigure}[11]{r}{0.55\textwidth} \vspace{-1.4\intextsep} \hspace*{-0.4\columnsep}\includegraphics[width=0.55\textwidth]{img/CodeExcerpt.pdf} \vspace{-0.55\intextsep} \caption{Code excerpt illustrating the setup necessary to initialize the 2D-substrate heterostructure workflow \textit{get\_heterostructures\_stabilityWF} used throughout this work. A full example jupyter notebook is located in the SI.} \vspace{10\intextsep} \label{fig:Figure2} \end{wrapfigure} The workflow has commands for two VASP executables, both compiled with vdW-corrections, used to perform DFT calculations for (1) 2D materials and (2) 3D materials. The first is a custom executable that relaxes 2D materials with a large vacuum and prevents the vacuum from shrinking by not letting the cell length change in the direction of the vacuum spacing. The second executable allows the cell volume to change in all directions. Other optional arguments used to initialize the workflow include dipole corrections for substrate slabs, tags for database entries, and avenues to modify the INCAR of each firework in the workflow. The parameters \textit{vis} and \textit{vis\_i}, where $i$ = 2d, 3d2d, bulk, trans, or iface, are used to override the default \textit{VaspInputSet} with one provided by the user; this can be done for all fireworks using \textit{vis} or for a specific firework using \textit{vis\_i}. The parameters \textit{uis} and \textit{uis\_i} can be set to change the default settings in the INCAR. The parameter \textit{uis} will set the specified parameters for all INCARs in the workflow, while \textit{uis\_i} will set the INCAR parameters for the corresponding firework. 
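A minimal sketch of the two dictionaries described above, encoding the settings quoted in the text. The key names here are hypothetical stand-ins; the authoritative names appear in Figure \ref{fig:Figure2} and the example jupyter notebook in the SI.

```python
# Hypothetical key names illustrating the (key: value) format; consult
# Figure 2 / the SI notebook for the exact arguments accepted by
# get_heterostructures_stabilityWF.
slab_params = {
    "miller_index": [1, 1, 1],   # (111) substrate surface
    "min_vacuum_size": 19.0,     # vacuum spacing (Angstrom)
    "min_slab_size": 12.0,       # minimum substrate slab thickness (Angstrom)
}
h_params = {
    "max_mismatch": 0.05,        # <5% lattice mismatch along a and b
    "max_area": 130.0,           # supercell surface area < 130 Angstrom^2
    "separation": 3.0,           # 2D-substrate z-separation (Angstrom)
    "nlayers_2d": "all",         # relax every layer of the 2D material
    "nlayers_sub": 2,            # relax the top two substrate layers
}
```

The dictionary format mirrors how \textit{pymatgen}'s \textit{SlabTransformation} consumes its keyword arguments, which is what makes the workflow extendable without changing its signature.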
Additional details regarding workflow customization options and the current functionality available in \textit{Hetero2d} are discussed in SI section 3 as well as in an example jupyter notebook. \subsection{Workflow Steps} As mentioned previously, our workflow has five firework steps. Here, we discuss the pre-processing steps that occur when initializing the workflow, each firework, and the firetasks composing each firework for the 2D-substrate heterostructure workflow introduced in section 3.2, \textit{Workflow Framework}. The first firework, FW$_1$, in the workflow optimizes the 2D material structure. During initialization of the workflow, the 2D material is centered within the simulation cell, crystallographic information regarding the structure is obtained, the \textit{CMDLInterfaceSet} is initialized to create VASP input files, and a list of user-defined/default tags is created for the 2D material. The structure, tags, and \textit{CMDLInterfaceSet} are used to initialize the firework \textit{HeteroOptimizeFW}, which performs the structure optimization. The default tags appended to the firework are the unique identification tags (provided to the workflow by the user), the crystallographic information, the workflow and firework names, and the structure's composition. In FW$_1$, \textit{HeteroOptimizeFW} executes firetasks that -- (a) create directories for the firework, (b) write all input files initialized using \textit{CMDLInterfaceSet}, (c) submit the VASP calculation to supercomputing resources to perform a full structure optimization and monitor the calculation to correct errors, (d) run our \textit{HeteroAnalysisToDb} class to store all information necessary for data analysis within the database, and (e) lastly pass the information to the next firework. Details regarding \textit{HeteroAnalysisToDb} can be found in the next section. 
Similar to FW$_1$, FW$_2$ and FW$_3$ perform a full structural optimization for the bulk counterpart of the 2D material and for the substrate, respectively. FW$_2$ and FW$_3$ differ from FW$_1$ only in the pre-processing steps: the step to center the 2D material is not performed, and the conventional standard structure is utilized during the pre-processing for FW$_3$. FW$_3$ spawns a child firework passing the optimized substrate structure to FW$_4$, which transforms the conventional unit cell of the substrate into a substrate slab using the \textit{slab\_params} dictionary and performs the structure optimization. When the workflow is initialized, FW$_4$ undergoes similar pre-processing steps, which are used to initialize the firework \textit{SubstrateSlabFW} that creates a substrate slab from the substrate. \textit{SubstrateSlabFW} is the firework that transforms the conventional unit cell of the substrate into a slab, sets the selective dynamics tags on the surface layers, and sets the number of compute nodes necessary to relax the substrate slab. The \textit{slab\_params} variable is the input dictionary that initializes \textit{pymatgen}'s \textit{SlabTransformation} module, which creates the substrate slab. All required and optional input arguments used in the \textit{SlabTransformation} module must be supplied using this dictionary (key: value) format. This dictionary format is implemented to keep $Hetero2d$ flexible and extendable in future updates. Additionally, the \textit{slab\_params} dictionary is only required when creating a new substrate slab from a substrate. After the first four fireworks have completed and been successfully stored in the database, the fifth firework (FW$_5$) obtains the optimized structures and information from the previous fireworks and the specification file. 
FW$_5$ calls the \textit{GenHeteroStructuresFW} firework to generate the 2D-substrate heterostructure configurations using \textit{h\_params} and spawns a firework to perform the structure optimization of each configuration. The inputs required for the \textit{h\_params} dictionary are those required by $Hetero2d$'s \textit{hetero\_interfaces} function. This function attempts to find a matching lattice between the substrate surface and the 2D material. The parameters used to initialize \textit{hetero\_interfaces} are listed in the \textit{h\_params} dictionary shown in Figure \ref{fig:Figure2} and in the jupyter notebook in the SI. Our function \textit{hetero\_interfaces} generates the 2D-substrate heterostructure configurations utilizing \textit{MPInterfaces}'s interface matching algorithm. We developed \textit{hetero\_interfaces} to ensure functions within the workflow are compatible with \textit{FireWorks} and to return key variables from the interface matching algorithm, such as the strain or angle mismatch, and store these values in our database. \textit{MPInterfaces} is used to (a) generate heterostructures within an allowed lattice-mismatch and surface area of the supercell at any rotation between the 2D material and the bulk material surface and (b) create distinct configurations in which the 2D material can be placed on the bulk material surface based on the Wyckoff positions of the near-interface atoms. When FW$_5$ calls \textit{GenHeteroStructuresFW}, the 2D-substrate heterostructure configurations are generated, the total number of configurations is computed, and each unique configuration is labeled from 0 to $n$-1, where $n$ is the total number of configurations, and stored under the \textit{Interface Config} tag. For each configuration, a new firework is spawned to optimize that 2D-substrate heterostructure configuration. The data generated within FW$_5$ are stored in the database. 
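The 0 to $n$-1 labeling of configurations can be sketched as a simple enumeration over pairings of high-symmetry sites. The Wyckoff labels below are illustrative only; the actual site enumeration is performed by \textit{MPInterfaces} on the relaxed structures.

```python
from itertools import product

def label_configurations(sites_2d, sites_sub):
    """Enumerate unique 2D-on-substrate stackings (one per pairing of a
    2D-material high-symmetry site with a substrate-surface site) and
    label them 0..n-1, mimicking the 'Interface Config' tag."""
    return {i: pair for i, pair in enumerate(product(sites_2d, sites_sub))}

# Illustrative: one 2D-material site paired with two substrate sites
configs = label_configurations(["2h"], ["2a", "2b"])
```

Each labeled entry would then seed its own structure-optimization firework.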
After all previous FWs have successfully converged, \textit{HeteroAnalysisToDb} is called one final time to compute the $\Delta E_{\mathrm{vac}}^f$, $\Delta E_{\mathrm{b}}$, and $\Delta E_{\mathrm{ads}}^f$\ for each heterostructure configuration generated by the workflow. The calculation of the $\Delta E_{\mathrm{vac}}^f$\ references the simulations of the 2D material and its bulk counterpart; the bulk counterpart is simulated using a standard periodic simulation cell. The calculation of $\Delta E_{\mathrm{b}}$\ references the 2D material, substrate slab, and 2D-substrate heterostructure simulations, which all employ a standard supercell slab model. The calculation of the $\Delta E_{\mathrm{ads}}^f$\ references both $\Delta E_{\mathrm{b}}$\ and $\Delta E_{\mathrm{vac}}^f$. Once each value is computed, all the information is curated and stored in the MongoDB database. \subsection{Post-Processing Throughout Our Workflow} After each VASP simulation completes, post-processing is performed within the calculation directory using our \textit{HeteroAnalysisToDb} class, an adaptation of \textit{atomate}'s \textit{VaspToDb} module. It is used to parse the calculation directory, perform error checks, and curate a wide range of quantities -- calculation input parameters and output, energetic parameters, and structural information -- for storage in our MongoDB. \textit{HeteroAnalysisToDb} detects the type of calculation performed within the workflow and parses the calculation accordingly. 
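A plausible per-atom sketch of how the three quantities reference one another, assuming the conventional definitions (2D formation energy relative to bulk, binding energy relative to the isolated film and slab, and their sum as the adsorption formation energy). The exact expressions are implemented in \textit{HeteroAnalysisToDb} and may differ in normalization.

```python
def e_form_vac(e_2d_per_atom, e_bulk_per_atom):
    """Formation energy of the free-standing 2D film with respect to its
    bulk counterpart (eV/atom); positive values mean the film is
    metastable in vacuum."""
    return e_2d_per_atom - e_bulk_per_atom

def e_bind(e_hetero, e_2d, e_slab, n_2d_atoms):
    """Binding energy per 2D atom from the heterostructure, isolated 2D
    film, and isolated substrate slab total energies (eV); negative
    values indicate binding to the surface."""
    return (e_hetero - e_2d - e_slab) / n_2d_atoms

def e_form_ads(dE_vac_f, dE_b):
    """Adsorption formation energy, referencing both quantities above
    (assumed additive here); negative values indicate the substrate
    stabilizes the 2D film."""
    return dE_vac_f + dE_b
```

With toy energies, a film that is 0.2 eV/atom metastable in vacuum but binds at $-0.5$ eV/atom is stabilized by the substrate ($\Delta E_{\mathrm{ads}}^f = -0.3$ eV/atom).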
\textit{HeteroAnalysisToDb} has the same functionality as \textit{VaspToDb}, with additional analyzers developed for 2D-substrate heterostructures that -- (a) identify layer-by-layer interface atom IDs for the substrate and 2D material, (b) store the initial and final configurations of all structures, (c) compute the $\Delta E_{\mathrm{vac}}^f$, $\Delta E_{\mathrm{b}}$, and $\Delta E_{\mathrm{ads}}^f$, (d) store the results obtained from the interface matching, and (e) ensure each database entry has any custom tags added to the database, such as those appended by the user. The workflow design ensures that the DFT simulations for each 2D-substrate surface pair are performed independently of each other, but as soon as all simulations are completed for a given 2D-substrate surface pair, the data are analyzed and curated in the MongoDB database immediately. \section{An Example of Substrate Screening via Hetero2d} \subsection{Materials Selection} To demonstrate the functionalities of the $Hetero2d$ package, we screened for suitable substrates for four 2D materials, namely $2H$-MoS$_2$, $1T$-NbO$_2$, $2H$-NbO$_2$~\cite{c2db}, and hexagonal-ZnTe~\cite{Torrisi2020}. The four 2D materials in consideration possess hexagonal symmetry, as illustrated in Figure \ref{fig:2ds}. MoS$_2$ was selected because there is a large amount of experimental and computational~\cite{Chen2013, Zhuang2013b, Yun2012, singh2015al2o3} data available in the literature which we can use to validate the computed properties from our $Hetero2d$ workflow. The hexagonal-ZnTe~\cite{Torrisi2020}, $1T$-NbO$_2$, and $2H$-NbO$_2$~\cite{c2db} are yet to be synthesized. In addition, these particular 2D materials have diverse predicted properties, see Table \ref{tab:2dProp}. It is noteworthy that hexagonal-ZnTe has been predicted to be an excellent CO$_2$ reduction photocatalyst~\cite{Torrisi2020}. 
\begin{table}[!htbp] \centering \caption{The electronic properties and band gap of the four selected 2D materials used in this work. FM represents ferromagnetic.} \begin{adjustbox}{width=\textwidth} \begin{tabular}{|c|c|c|c|c|} \hline 2D Mat. & MoS$_2$ & $1T$-NbO$_2$ & $2H$-NbO$_2$ & ZnTe \\ \hline Classification & Semiconductor & FM~\cite{c2db} & FM~\cite{c2db} & Semiconductor\\ \hline Band Gap (eV) & 1.88~\cite{Gusakova2017} & 0.0~\cite{c2db} & 0.0~\cite{c2db} & 2.88~\cite{Torrisi2020} \\ \hline \end{tabular} \end{adjustbox} \label{tab:2dProp} \end{table} \begin{table}[!htbp] \centering \caption{A list of matching substrate surfaces for the four 2D materials given our heterostructure search criteria discussed in the next section.} \begin{adjustbox}{width=\textwidth} \begin{tabular}{|l|c|c|} \hline 2D Mat. & (111) Substrate & (110) Substrate \\ \hline MoS$_2$ & Hf, Ir, Pd, Zr, Re, Rh & Ta, Rh, Sc, Pb, W, Y \\ \hline $1T$-NbO$_2$ & Ni, Mn, V, Nd, Pd, Ir, Hf, Zr, Cu & Rh, Ta, Sc, W \\ \hline $2H$-NbO$_2$ & Ni, Mn, Nd, Ir, Hf, Al, Te, Ag, Ti, Cu, Au & Ta, Sc, W, Y, Rh \\ \hline ZnTe & Sr, Ni, Mn, V, Al, Ti, Cu & W\\ \hline \end{tabular} \end{adjustbox} \label{tab:iface} \end{table} The properties of a 2D material can differ when it is placed on different Miller-index planes of the same substrate. Thus, we investigated all unique low-index substrate surfaces (with $h$, $k$, $l$ equal to 1 or 0) for these 2D materials. A material available in the Materials Project (MP)~\cite{Ong2013} database was considered a potential substrate if it satisfied all of the following criteria: (a) it is metallic, (b) it is a cubic phase, (c) it has a single-element composition, (d) it has a valid ICSD ID~\cite{ICSD} (and has thus been experimentally synthesized), and (e) it has an $E_{above\ hull}<0.1$ eV/atom. There are 50 total substrates that satisfy the criteria above when queried from the MP database. 
\begin{wrapfigure}[19]{r}{0.5\textwidth} \centering \vspace{-1.25\intextsep} \includegraphics[width=0.5\textwidth]{img/StructureModels.pdf} \vspace{-2\columnsep} \caption{Structure models illustrating the crystal structure of the 2D films. The top view demonstrates the hexagonal symmetry of each 2D material. The $1T$ and $2H$ phases of NbO$_2$ are labeled to distinguish the two phases.} \label{fig:2ds} \end{wrapfigure} The bulk counterpart of each 2D material is also obtained from the MP database. We query the database for bulk materials that have the same composition as the 2D material and select the structure with the lowest $E_{above\ hull}$. SI Tables 1-3 provide additional reference information regarding all the optimized substrate slabs, 2D materials, and their bulk counterparts. SI Table 1 contains the Materials Project material\_id, $E_{above\ hull}$, ICSD ID, crystal system, and Miller plane for each substrate surface. SI Table 2 contains the reference database ID, $\Delta E^{f}_{vac}$ (eV/atom), and crystal system for each 2D material, and SI Table 3 contains the reference database ID, $E_{above\ hull}$, E$_{gap}$, and crystal system for the bulk counterpart of each 2D material. \subsection{Symmetry-Matched, Lattice-Matched 2D-Substrate Heterostructures} In this study, we restrict our search for 2D-substrate heterostructures to substrate planes with indices $h$, $k$, $l$ equal to 0 or 1. The following studies focus on heterostructures with the (111) and (110) substrate surfaces because we find that only these two Miller planes have an appreciable number of heterostructures; the (001) substrate plane resulted in only one heterostructure. Restricting our search for 2D-substrate matches to only the (111) and (110) surfaces yields a total of 4 (\# of 2D materials) $\times$ 2 (\# of planes) $\times$ 50 (\# of substrates) = 400 potential 2D-substrate heterostructure combinations. 
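The five screening criteria can be expressed as a simple filter over Materials Project records. The field names and the example entry below are hypothetical stand-ins for the actual MP API response, shown only to make the selection logic concrete.

```python
def is_candidate_substrate(entry):
    """Apply the five substrate screening criteria from the text to a
    (hypothetical) Materials Project record represented as a dict."""
    return (entry["band_gap"] == 0.0                  # (a) metallic
            and entry["crystal_system"] == "cubic"    # (b) cubic phase
            and entry["nelements"] == 1               # (c) single element
            and bool(entry["icsd_ids"])               # (d) experimentally synthesized
            and entry["e_above_hull"] < 0.1)          # (e) near the convex hull

# Placeholder record; real entries carry actual ICSD collection codes.
hf_like = {"band_gap": 0.0, "crystal_system": "cubic", "nelements": 1,
           "icsd_ids": [0], "e_above_hull": 0.0}
```

Applying this filter to the MP database yields the 50 elemental substrates used in this work.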
As illustrated in Figure \ref{fig:Workflow}, after introducing our constraints for the surface area to be $< 130$ \AA$^2$ and the applied strain on the 2D material to be $< 5\%$, a total of 49 2D-substrate heterostructure workflows are found. Table \ref{tab:iface} lists all metallic substrates matching each of the 2D materials given our heterostructure criteria. \begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{img/WorkflowDataPipeline.pdf} \caption{Schematic representing the materials selection process identifying stable 2D-substrate heterostructures using the $Hetero2d$ workflow. Tier 1 represents choosing 2D materials, substrates, and their surfaces. Tier 2 applies constraints on the surface area and lattice strain. Tier 3 shows the energetic stability of the heterostructures stored in the database.} \vspace{-0.25\intextsep} \label{fig:Workflow} \end{figure} Of the total 49 workflows, 33 correspond to (111) substrate surfaces and 16 correspond to (110) substrate surfaces. Generally, the (111) surface has more substrate matches than the (110) surface due to the intrinsic hexagonal symmetry of the (111) surface, which matches the hexagonal symmetry of the selected 2D materials. Each workflow generates between 2--4 2D-substrate heterostructure configurations for a given 2D-substrate surface pair, resulting in a total of 123 2D-substrate heterostructure configurations. Of those 2D-substrate heterostructures, 78 configurations, spanning 29 workflows, stabilize the meta-stable 2D materials when placed upon the substrate slab. Additional details regarding these simulations can be found in section 4 of the SI. 
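The counting in this section can be checked directly; the per-orientation split below is taken from the text.

```python
# Candidate space before the area/strain filters
n_2d, n_planes, n_substrates = 4, 2, 50
n_candidates = n_2d * n_planes * n_substrates   # potential 2D-substrate pairings

# Workflows surviving the <130 Angstrom^2 area and <5% strain constraints
n_workflows = {"(111)": 33, "(110)": 16}
```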
\subsection{Stability of Free-Standing 2D Films and Adsorbed 2D-Substrate Heterostructures} \begin{wrapfigure}[13]{r}{0.56\textwidth} \vspace{-1\intextsep} \hspace*{-0.75\columnsep}\includegraphics[width=0.56\textwidth]{img/FormationEnergy.pdf} \vspace{-0.1\intextsep} \caption{The $\Delta E_{\mathrm{vac}}^f$\ for 2D MoS$_2$ (\tikzcircle[gray, fill=orange]{2pt}), $1T$-NbO$_2$ (\tikzcircle[gray, fill=red]{2pt}), $2H$-NbO$_2$ (\tikzcircle[gray, fill=green]{2pt}), and ZnTe (\tikzcircle[gray, fill=blue]{2pt}). The $\Delta E_{\mathrm{vac}}^f$\ is used to assess the thermodynamic stability of the free-standing 2D film with respect to its bulk counterpart. MoS$_2$ and ZnTe have relatively low $\Delta E_{\mathrm{vac}}^f$, while the $1T$ and $2H$ phases of NbO$_2$ have high $\Delta E_{\mathrm{vac}}^f$.} \vspace{10\intextsep} \label{fig:form} \end{wrapfigure} Figure \ref{fig:form} shows the $\Delta E_{\mathrm{vac}}^f$\ of the isolated unstrained 2D materials with respect to their bulk counterparts. We find that the $\Delta E_{\mathrm{vac}}^f$\ for both MoS$_2$ and ZnTe is low, less than 0.2 eV/atom. Both the $1T$ and $2H$ phases of NbO$_2$ possess high $\Delta E_{\mathrm{vac}}^f$, as shown by the red shaded region in Figure \ref{fig:form}, making substrate-assisted synthesis the most feasible route to realize these 2D films. The $\Delta E_{\mathrm{vac}}^f$\ values in Figure \ref{fig:form} are consistent with prior computational~\cite{c2db, Torrisi2020} and experimental work~\cite{Lee2013}. Figures \ref{fig:Eads}a and \ref{fig:Eads}b show the $\Delta E_{\mathrm{ads}}^f$\ for the four 2D materials on the (110) and (111) substrate surfaces, respectively. The black lines in Figure \ref{fig:Eads} separate the 2D materials, while the shaded regions indicate stabilization of the 2D material on the substrate surface. When generating a 2D-substrate heterostructure, the first challenge is finding a matching lattice between the 2D material and the substrate surface. 
The next challenge is identifying ``ideal'' or likely locations to place the 2D material on the substrate surface to generate stable low-energy heterostructures. To reduce the large number of possible in-plane shifts for a given 2D-substrate heterostructure, we selectively placed the 2D material on the substrate slab by enumerating combinations of high-symmetry points (Wyckoff sites) between the 2D material and the substrate slab, stacking the 2D material on top of these sites $z$ \AA\ away from the substrate surface. Each unique 2D-substrate heterostructure configuration is represented by 0=$\triangle$, 1=\textbf{x}, 2=$\circ$, and 3=$\square$ in Figure \ref{fig:Eads}. \begin{figure}[t!] \centering \vspace{-1\intextsep} \includegraphics[width=\textwidth]{img/AdsEnergy.pdf} \vspace{-2\intextsep} \caption{Adsorption formation energy, $\Delta E_{\mathrm{ads}}^f$, for the symmetry-matched, low lattice-mismatched (a) (110) and (b) (111) substrate surfaces. The rectangular symmetry of the (110) surface results in fewer matches, while the hexagonal symmetry of the (111) substrate surface results in numerous matches within the given constraints on the surface area and lattice strain. Negative $\Delta E_{\mathrm{ads}}^f$\ values indicate stabilization of the 2D material. Each set of symbols (up to 4 points per substrate) represents the unique 2D-substrate configurations. } \vspace{-1\intextsep} \label{fig:Eads} \end{figure} The $\Delta E_{\mathrm{ads}}^f$\ on the (110) surface is shown in Figure \ref{fig:Eads}a, where 9 substrates yield a negative $\Delta E_{\mathrm{ads}}^f$\ for the 2D materials. The $\Delta E_{\mathrm{ads}}^f$\ appears to be correlated with the substrate on which the 2D material is placed; however, there are not enough data points in Figure \ref{fig:Eads}a to establish the origin of this trend. 
Interestingly, when MoS$_2$ is placed on the (110) Ta substrate surface, the 2D material buckles, which likely increases the $\Delta E_{\mathrm{ads}}^f$\ significantly above that of the other substrates. SI Figure 6 shows both configurations for MoS$_2$ on the (110) Ta substrate surface. An additional 5 2D-(110) substrate pairs were studied but are not shown in Figure \ref{fig:Eads}a because the 2D material/substrate interface becomes highly distorted or completely disintegrates. These cases are shown in SI Figure 4a and discussed in section 5 of the SI. The (111) substrate surface matches for each 2D material are shown in Figure \ref{fig:Eads}b, where 15 substrates result in a $\Delta E_{\mathrm{ads}}^f$\ $<$ 0. An additional 8 2D-substrate pairs, shown in SI Figure 4b, have 2D material/substrate surfaces that disintegrate and are discussed in section 5 of the SI. A correlation between the substrate surface and the $\Delta E_{\mathrm{ads}}^f$\ is more apparent for the (111) surface in Figure \ref{fig:Eads}b due to the increased number of 2D-substrate pairs. For MoS$_2$ on Zr and Hf, the triangle configurations have $\Delta E_{\mathrm{ads}}^f$\ significantly lower than the other configurations; see SI Figure 6 for the structures of the three configurations. The lower $\Delta E_{\mathrm{ads}}^f$\ is correlated with smaller bond distances between the substrate surface and the 2D material. When the $\Delta E_{\mathrm{ads}}^f$\ is lower for these structures, we find that the $2h$ Wyckoff site of the 2D material is stacked on top of the $2a$ Wyckoff site of the substrate surface. The location of a 2D material on a substrate surface has previously been shown to influence the type of bonding present between the 2D material and substrate surface~\cite{Singh2014a,Zhuang2017}. The $1T$ phase of NbO$_2$ on the Hf, Zr, and Ir substrates has a $\Delta E_{\mathrm{ads}}^f$\ difference between configurations that is larger than for other 2D-substrate pairs. 
The difference in $\Delta E_{\mathrm{ads}}^f$\ for $1T$-NbO$_2$ on Ir is partly due to some structural disorder of the 2D material arising from the O atoms bonding strongly with the substrate surface, shown in SI Figure 7. For both Hf and Zr, the differences in $\Delta E_{\mathrm{ads}}^f$\ do not arise from structural disorder; the $\Delta E_{\mathrm{ads}}^f$\ of $1T$-NbO$_2$ on Hf and Zr is more strongly affected by the location of the 2D material on the substrate surface. $2H$-NbO$_2$ has two substrate surfaces, Ti and Au, for which the $\Delta E_{\mathrm{ads}}^f$\ varies strongly with the configuration of the 2D material on the substrate, unlike other 2D-substrate pairs for $2H$-NbO$_2$. $2H$-NbO$_2$ on Ti and Au shows no structural distortions that explain the difference in $\Delta E_{\mathrm{ads}}^f$. For $2H$-NbO$_2$ on Ti, each configuration possesses a different $\Delta E_{\mathrm{ads}}^f$\ arising from the unique placement of the 2D material on the substrate surface. The strong bonding between the 2D material and the substrate surface may be due to the affinity of Ti to form a metal oxide. SI Figure 8 shows each configuration for $2H$-NbO$_2$ on the (111) Ti substrate surface. For $2H$-NbO$_2$ on Au, the circle configuration has a lower $\Delta E_{\mathrm{ads}}^f$\ because the bottom layer of the $2H$-NbO$_2$ is stacked directly on the top layer of the Au substrate surface. The properties of MoS$_2$ have been studied both computationally and experimentally, and previous computational works~\cite{Zhuang2013b, Singh2015} have found similar values for the $\Delta E_{\mathrm{vac}}^f$\ of MoS$_2$. Chen et al. found that Ir bonds more strongly with MoS$_2$ than Pd does~\cite{Chen2013}. This may explain the small structural modulations observed in our study for MoS$_2$ on the Ir (111) substrate surface, whereas no such modulation is observed for MoS$_2$ on the Pd (111) substrate surface. 
Additionally, the $z$-separation distance between the 2D material and substrate surface found in this work agrees well with the values of Chen et al. despite our use of a different functional; our $z$-separation distances are within 0.05 \AA\ for Ir and 0.16 \AA\ for Pd~\cite{Chen2013}. \subsection{Separation Distance of Adsorbed 2D Films on Substrate Slab Surfaces} The change in the thickness of the adsorbed 2D material may provide insight into the nature of bonding within the 2D-substrate heterostructures. For instance, vdW bonds are weak and thus typically result in minimal structural and electronic changes in the 2D material. Using our database, we determine the change in the thickness of post-adsorbed 2D materials from that of the free-standing 2D material. The thickness of the free-standing/adsorbed 2D material is computed by finding the average $z$ coordinates of the top and bottom layers of the 2D material, given by $\bar{d}_z = \sum\limits_{i=1}^n d^{top}_{i,z}/n - \sum\limits_{i=1}^m d^{bottom}_{i,z}/m$, where $d_{i,z}$ is the $z$ coordinate of the $i^{th}$ atom, summed up to $n$ and $m$, the total numbers of atoms in the top and bottom layers, respectively. The change in thickness is obtained by taking the difference between the average thickness of the adsorbed 2D material and that of the free-standing 2D material, $\delta d$=$\bar{d}^{adsorbed}_z-\bar{d}^{free}_z$, with positive (negative) values corresponding to an increase (decrease) in the thickness of the adsorbed 2D material. Figure \ref{fig:Zdiff} illustrates the change in thickness from the free-standing 2D material to the adsorbed 2D material for each 2D-substrate heterostructure. Typically, for vdW-type bonding, each atom should deviate minimally from its free-standing position due to the weak interaction between the adsorbed 2D material and substrate surface that characterizes vdW bonding. 
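The thickness bookkeeping above reduces to a few layer averages. A sketch with made-up coordinates (in \AA) follows; the coordinates are illustrative only.

```python
def layer_avg_z(z_coords):
    """Average z coordinate of one atomic layer."""
    return sum(z_coords) / len(z_coords)

def thickness(top_z, bottom_z):
    """Film thickness d_z: average z of the top layer minus average z of
    the bottom layer."""
    return layer_avg_z(top_z) - layer_avg_z(bottom_z)

def thickness_change(adsorbed_layers, free_layers):
    """delta d = adsorbed thickness - free-standing thickness; positive
    values mean the film thickens upon adsorption."""
    return thickness(*adsorbed_layers) - thickness(*free_layers)

# Illustrative z coordinates (Angstrom): (top-layer list, bottom-layer list)
free_layers = ([4.56, 4.56], [1.44, 1.44])      # free-standing film
adsorbed_layers = ([14.60], [11.40])            # same film on a slab
```

Here the free-standing thickness is 3.12 \AA\ and the adsorbed thickness is 3.20 \AA, so $\delta d = +0.08$ \AA, i.e., the film thickens slightly upon adsorption.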
Figure \ref{fig:Zdiff} shows that many of the 2D-substrate pairs have a significant change in the thickness of the 2D material, which may indicate more covalent/ionic-type bonding. The change in thickness for the majority of the MoS$_2$-substrate configurations is minimal ($\textless$0.1 \AA), which may indicate weak interactions between the 2D material and the substrate surface. Figure \ref{fig:Zdiff} indicates that for the majority of the adsorbed 2D materials, the substrates tend to induce an increase in the thickness of the adsorbed 2D material. \begin{figure}[!h] \centering \includegraphics[width=0.50\textwidth]{img/2dThickness.pdf} \caption{Each 2D material is separated spatially along the $x$-axis using a violin plot. The change in the 2D material's thickness, $\delta d$, for all substrates is plotted along the $y$-axis. A positive $y$-value indicates the 2D material's thickness has increased during adsorption onto the substrate slab. The width of the violin plot is non-quantitative because the density curve is scaled by the number of counts per violin; however, within one violin plot, the relative $x$-width does represent the frequency with which a 2D material's thickness changes by a given $y$ amount relative to the total number of data points in the plot.} \label{fig:Zdiff} \end{figure} \subsection{Charge Layer Doping of Adsorbed 2D Films} The $Hetero2d$ workflow package has a similar infrastructure to \textit{atomate}, which allows our package to integrate seamlessly with the workflows developed within \textit{atomate}. These workflows enable us to expand our database by performing additional calculations, such as Bader~\cite{Tang2009,Henkelman2006} charge analysis and high-quality density of states (DOS) calculations, to assess the charge transfer that occurs between the adsorbed 2D material and the substrate surface, changes in the DOS between the adsorbed and pristine 2D material, and changes in the charged state of the 2D-substrate pairs. \begin{table}[h!] 
\centering \caption{Q$_x$ is obtained with Bader analysis and represents the average number of electrons transferred to/from (positive/negative) specific atomic layers, with the initial number of electrons taken from the POTCAR. The first four columns are the electrons transferred to/from the Hf substrate atoms, Q$_{sub}$, the bottom layer of S atoms, Q$_{S_b}$, the Mo atoms, Q$_{Mo}$, and the top layer of S atoms, Q$_{S_t}$, for the adsorbed 2D-substrate heterostructure. The last three columns denote the charge transfer in the pristine MoS$_2$ structure. MoS$_2$ shows an increased charge accumulation on the bottom layer of the 2D material due to the substrate slab.} \begin{adjustbox}{width=3in} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline electrons & Q$_{sub}$ & Q$_{S_b}$ & Q$_{Mo}$ & Q$_{S_t}$ & Q$^{prist}_{S_b}$ & Q$^{prist}_{Mo}$ & Q$^{prist}_{S_t}$ \\ \hline Q$_x$ & -0.11 & 1.10 & -1.03 & 0.57 & 0.60 & -1.20 & 0.60 \\ \hline \end{tabular} \end{adjustbox} \label{tab:bader} \end{table} \begin{figure}[hb!] \centering \includegraphics[width=\textwidth]{img/BaderCharges_DOS.pdf} \caption{(a) The element-projected density of states (DOS), where red and blue lines correspond to S and Mo states, respectively, for the isolated strained 2D material (dashed lines), the adsorbed 2D material (solid lines), and the pristine MoS$_2$ material (dashed-dotted lines). The Hf (111) substrate influences the DOS of MoS$_2$, causing a semiconductor-to-metal transition. (b) The $z$ plane-averaged electron density difference ($\Delta\rho$) for MoS$_2$ on Hf. The electron density difference is computed by summing the charge densities of the isolated MoS$_2$ and the isolated Hf and subtracting that from the charge density of the interacting MoS$_2$-on-Hf system. The charge densities were computed with fixed geometries. The red and blue colors indicate electron accumulation and depletion, respectively, in the combined MoS$_2$-on-Hf system compared to the isolated MoS$_2$ and isolated Hf atoms. 
(c) The charge density distribution for MoS$_2$ on the (111) Hf substrate. The cross section is taken along the (110) plane passing through Mo, S, and Hf atoms. The charge density is in units of electrons/\AA$^3$.} \label{fig:DosChg} \end{figure} Most 2D materials are desirable due to their unique electronic properties. We selected MoS$_2$ on the Hf (111) surface to demonstrate the capability of \textit{Hetero2d} in providing detailed electronic and structural information. Our Bader analysis, illustrated in Table \ref{tab:bader}, shows that there is charge transfer from the substrate to the bottom layer of the 2D material, which is consistent with the findings presented by Zhuang et al.~\cite{Zhuang2017} In Figure \ref{fig:DosChg}a, the DOS for the isolated unstrained, isolated strained, and adsorbed MoS$_2$ is shown, where the black dashed line represents the Fermi level. There is a small shift in the DOS when comparing the unstrained and strained MoS$_2$. Comparing the DOS of the adsorbed MoS$_2$ to the others, however, reveals a significant change: the substrate influences the DOS of MoS$_2$ when it is placed on the Hf (111) surface, causing a semiconductor-to-metal transition. This change in the DOS is consistent with the Bader analysis, which indicates that electron doping of the MoS$_2$ material occurs. Figure \ref{fig:DosChg}b shows the redistribution of charge due to the interaction of the 2D material and the substrate surface, where red and blue regions indicate charge accumulation (gaining electrons) and depletion (losing electrons), respectively, of the combined system due to the interaction between MoS$_2$ and Hf. The charge density difference is computed as the difference between the sum of the isolated MoS$_2$ and isolated Hf substrate slab densities and that of the combined MoS$_2$-on-Hf system. Figure \ref{fig:DosChg}c shows the charge density of the combined MoS$_2$-on-Hf system along the (110) plane. 
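The layer-resolved doping implied by Table \ref{tab:bader} can be recovered with a few lines; the values below are copied directly from the table, and the layer keys are our own labels.

```python
# Bader charges (electrons gained, positive) from the Bader table in the
# text: adsorbed MoS2 on Hf (111) versus the pristine MoS2 film.
adsorbed = {"S_bottom": 1.10, "Mo": -1.03, "S_top": 0.57}
pristine = {"S_bottom": 0.60, "Mo": -1.20, "S_top": 0.60}

def doping_per_layer(ads, prist):
    """Extra electrons per layer gained upon adsorption relative to the
    pristine film, rounded to the table's precision."""
    return {layer: round(ads[layer] - prist[layer], 2) for layer in ads}
```

The bottom S layer gains the most charge (+0.50 electrons), consistent with the substrate-to-film electron transfer discussed above.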
Thus, the electronic properties of MoS$_2$ are dramatically affected by the substrate. \textit{Hetero2d} can analyze such substrate-induced changes in the electronic structure of 2D materials, which will lead to a fundamental understanding and engineering of complex interfaces. \section{Conclusions} In summary, we have developed an open-source workflow package, \textit{Hetero2d}, that automates the generation of 2D-substrate heterostructures, the creation of DFT input files, the submission and monitoring of computational jobs on supercomputing facilities, and the storage of relevant parameters alongside the post-processed results in a MongoDB database. Using the example of four candidate 2D materials and the low-index planes of 50 potential substrates, we demonstrate that our open-source package can address the immense number of 2D material-substrate surface pairs to guide the experimental realization of novel 2D materials. Among the 123 configurations studied, we find that only 78 configurations (29 workflows) result in stable 2D-substrate heterostructures. We exemplify the use of \textit{Hetero2d} in examining the changes in thickness of the adsorbed 2D materials, the Bader charges, and the electronic density of states of the heterostructures to study the fundamental changes in the properties of the 2D material post adsorption on the substrate. \textit{Hetero2d} is freely available on our GitHub website under the GNU license along with example Jupyter notebooks. \section{Acknowledgements} The authors acknowledge start-up funds from Arizona State University and National Science Foundation grant number DMR-1906030. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number TG-DMR150006. The authors acknowledge Research Computing at Arizona State University for providing HPC resources that have contributed to the research results reported within this paper.
This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The authors acknowledge Akash Patel for his dedicated work maintaining our database and API. We thank Peter A. Crozier for valuable discussions and suggestions. \section{Supporting Information} The Supporting Information provides additional descriptions, figures, and tables supporting the results described in the main text. \section{Data Availability} The results reported in this article and the workflow package can be found on our GitHub website \href{https://github.com/cmdlab/Hetero2d}{Hetero2d}.
\section{Introduction} Profound changes in the properties of cavity-bound molecular systems can be achieved in regimes where the quantum nature of light becomes important. A few notable examples are the change of conductivity in semiconductors due to vacuum field hybridization \cite{Orgiu2015}, the appearance of mixed states due to strong coupling \cite{Chikkaraddy2016,Casey2016}, and multiple Rabi splittings caused by ultrastrong vibrational coupling \cite{George2016}. \textcolor{black}{Although the forefront of the rapidly expanding domain of cavity-modified chemistry has been strongly driven by experiments, theoretical investigations have offered complementary insights into the various possibilities opening up with this new field of research\cite{AddR1, AddR2, Add3, Add4, Add5,Add6,Feist2015,Schachenmayer2015,Cirio2016,Flick2017,ruggenthaler2018quantum,schafer2019modification}.} Describing chemical processes that are strongly correlated with quantum light\cite{thomas2016,hiura2018cavity,thomas2019tilting} requires an accurate, flexible, and computationally efficient treatment of the light-matter interactions. Meeting the demand for an \textit{ab initio} theoretical description of cavity-modified chemical systems thus requires extensions to the traditional theoretical tool-kits of quantum optics and quantum chemistry. In this paper we therefore focus on semiclassical dynamics methods, which, due to their simplicity, efficiency, and especially their scalability, present an interesting alternative or extension to existing quantum electrodynamical wavefunction\cite{galego2015,Flick2017a,luk2017multiscale,schafer2018ab} and density-functional (QEDFT) based approaches\cite{ruggenthaler2014,Pellegrini2015,flick2017ab,ruggenthaler2018quantum}. The semiclassical concept has the advantage of providing an intuitive qualitative understanding of the dynamics through trajectories in phase space.
Furthermore, many semiclassical methods do not exhibit an exponential scaling of the computational effort with system size or simulation time. However, these methods can fail to quantitatively, and \textcolor{black}{sometimes} even qualitatively, describe all of the relevant physical features in a variety of nonadiabatic reactive scattering and excited state relaxation processes, such as nuclear interference and detailed balance\cite{Miller2001,Kelly2016}. Hence, benchmark tests of these approaches are needed in this regime to verify their viability. In order to address some of these challenges, we have recently shown the potential of the Multi-Trajectory Ehrenfest (MTEF) method to capture the correlated dynamics of a one-dimensional QED cavity-setup with a two-level atomic system coupled to a large set of cavity photon-modes\cite{HSRKA19}. Furthermore, we note that in contrast to the recent work of Subotnik and co-workers, who investigated light-matter interaction with an adjusted Ehrenfest-theory-based method to simulate spontaneous emission of classical light \cite{CLSNS18,CLSNS218,LNSMCS18}, we focus on the description of quantized light fields. Here we broaden our scope by investigating the performance of a comprehensive class of approximate quantum dynamics methods for simulating spontaneous emission in an optical cavity, including Ehrenfest mean-field theory\cite{Ehrenfest1927,McLachlan1964}, Tully's surface hopping algorithm\cite{Tully1990}, fully linearized \cite{Wang1998} and partially linearized \cite{Hsieh2012,fbts2} semiclassical dynamics techniques, and a selection of approximate closures for the quantum mechanical Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy. Through benchmark comparisons with exact numerical results, we assess the accuracy and efficiency of each method and highlight the possibilities and theoretical challenges involved with extending these approaches towards realistic systems.
The remainder of this work is divided into four sections: Sec.~\ref{Se1} gives a short overview of general quantum mechanical light-matter interactions, and a brief introduction of the class of model systems used in this study. Sec.~\ref{Se2} contains a short introduction to each of the selected dynamics methods that we consider in this work. In Sec.~\ref{Se3} we report the results of our benchmark tests of the performance of these techniques in describing spontaneous emission, stimulated absorption and strongly correlated light-matter dynamics. In Sec.~\ref{Se4} we offer some conclusions and outlooks. \section{Electron-Photon Correlated Systems}\label{Se1} The total Hamiltonian for a coupled light-matter system can be written as \begin{equation} \hat{H} = \hat{H}_{A} + \hat{H}_{F} + \hat{H}_{AF}. \label{G9} \end{equation} The first term, $\hat{H}_{A}$, is the matter Hamiltonian, which may be generally expressed in the spectral representation, \begin{equation} \hat{H}_{A} = \sum_k \varepsilon_k | k \rangle \langle k |. \notag \end{equation} Here $\{\varepsilon_k,|k\rangle\}$ are the energies and stationary states of the electron-nuclei system in the absence of coupling to the cavity. The second term is the Hamiltonian of the uncoupled cavity field $\hat{H}_{F}$, \begin{equation} \hat{H}_{F} = \frac{1}{2}\sum_{\alpha = 1}^{2N} \left(\hat{P}^{2}_{\alpha} + \omega^{2}_{\alpha}\hat{Q}_{\alpha}^{2} \right).
\label{G12} \end{equation} The photon-field operators, $\hat{Q}_{\alpha}$ and $\hat{P}_{\alpha}$, obey the canonical commutation relation, $[\hat{Q}_{\alpha},\hat{P}_{\alpha'}] = \imath\hbar\delta_{\alpha,\alpha'}$, and can be expressed using creation and annihilation operators for each mode of the cavity field, \begin{eqnarray} \hat{Q}_{\alpha} = \sqrt{\frac{\hbar}{2\omega_{\alpha}}}(\hat{a}^{\dagger}_{\alpha} + \hat{a}_{\alpha}),\quad \hat{P}_{\alpha} = i\sqrt{\frac{\hbar\omega_{\alpha}}{2}}(\hat{a}^{\dagger}_{\alpha} - \hat{a}_{\alpha}),\notag \end{eqnarray} where $\hat{a}^\dagger_\alpha$ and $\hat{a}_\alpha$ denote the usual photon creation and annihilation operators for photon mode $\alpha$. The coordinate-like operators, $\hat{Q}_{\alpha}$, are directly proportional to the electric displacement operator, while the conjugate momenta-like operators, $\hat{P}_{\alpha}$, are related to the magnetic field \cite{Craig1998,Pellegrini2015,Flick2015}. The upper limit of the sum in Eq.~(\ref{G12}) is $2N$, as there are (in principle) two independent polarization degrees of freedom for each photon mode, however in the 1D cavity models presented here only a single polarization will be considered. The final term in Eq.~(\ref{G9}) represents the coupling between the electron-nuclei system and the cavity field. In Coulomb gauge, and the dipole approximation \cite{Craig1998,ruggenthaler2018quantum}, this term can be written \begin{equation} \label{Gint} \hat{H}_{AF} = \sum_{\alpha=1}^{2N}\Big(\omega_{\alpha}\hat{Q}_{\alpha}(\lambda_{\alpha}\cdot \hat{\mu}) + \frac{1}{2}(\lambda_{\alpha} \cdot \hat{\mu})^2 \Big), \end{equation} where we denote ${\hat{\mu}}$ as the electronic plus nuclear dipole moment, and ${\lambda}_{\alpha}$ as the matter-photon coupling vector \cite{Tokatly2013,ruggenthaler2014,ruggenthaler2018quantum}. The featured methodologies can be generically applied to arbitrary complex matter systems. 
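To make the structure of the total Hamiltonian concrete, the following sketch assembles $\hat{H} = \hat{H}_{A} + \hat{H}_{F} + \hat{H}_{AF}$ numerically for a two-level matter system coupled to a single cavity mode in a truncated Fock basis. All parameter values here are illustrative placeholders rather than the model parameters used later in the text, and the quadratic dipole self-energy term is retained for completeness.

```python
import numpy as np

# Illustrative placeholder parameters: two matter levels, one cavity mode
eps = np.array([-0.5, 0.0])  # matter energies eps_k (arbitrary)
omega = 0.5                  # mode frequency (arbitrary)
lam_mu = 0.05                # scalar stand-in for (lambda_alpha . mu)
n_max = 4                    # Fock-space truncation

# Photon operators in the truncated Fock basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)
Q = np.sqrt(1.0 / (2.0 * omega)) * (a.T + a)  # Q = (a^dag + a)/sqrt(2 omega)
I_ph, I_at = np.eye(n_max), np.eye(2)

H_A = np.diag(eps)                       # matter part in its eigenbasis
mu = np.array([[0.0, 1.0], [1.0, 0.0]])  # dipole operator (mu_12 = 1)
H_F = omega * (a.T @ a + 0.5 * I_ph)     # (P^2 + omega^2 Q^2)/2

# Bilinear coupling plus the quadratic dipole self-energy term
H_AF = omega * lam_mu * np.kron(mu, Q) + 0.5 * lam_mu**2 * np.kron(mu @ mu, I_ph)
H = np.kron(H_A, I_ph) + np.kron(I_at, H_F) + H_AF

assert np.allclose(H, H.conj().T)  # the assembled Hamiltonian is Hermitian
```

Note that for this two-level dipole $\hat{\mu}^2 = \mu_{12}^2\,\hat{1}$, so the self-energy contribution only shifts all energies by a constant in this special case.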
With the demand for exact reference solutions, as part of the benchmarking procedure, we are forced to restrict the Hilbert-space of interest. Focusing on the evolution of the photonic degrees of freedom, we restrict the matter part to a highly simplified few-level atomic system trapped in a cavity \cite{buzek1999,Flick2017,HSRKA19} as depicted in Fig.~\ref{AB1}. The fundamental limitations of the few-level approximation have been presented in a variety of recent publications\cite{flick2017ab,schafer2018ab,bernadrdis2018breakdown,schafer2019modification,schaefer2019rs}. While this approximation results in a strongly simplified problem, it has the advantage that exact numerical results, although nontrivial to obtain, are still achievable with a reasonable computational effort. In the case of a two-level approximation of the matter system the quadratic term $(\lambda_{\alpha} \cdot \hat{\mu})^2$ simply results in a constant energy shift and hence can be discarded \cite{schafer2018ab}. For simplicity, we also neglect this term in the case of the three level model system, \textcolor{black}{to remain consistent across set-ups and previous publications \cite{buzek1999,Flick2017,HSRKA19}.~\footnote{\textcolor{black}{We have verified that in the parameter regimes studied in this work including the quadratic term into adjusted eigenstates, according to the Hamiltonian $\hat{H}_{A} + \sum_{\alpha=1}^{2N} \frac{1}{2}(\lambda_\alpha \cdot \hat{\mathbf{\mu}})^2$, has no qualitative influence on the time-evolution of the observables associated with the cavity-bound emission process.}} However, the quadratic term is generally important to consider as it stems from a proper definition of field observables, renders the system stable, and is essential to retain gauge and translational invariance. Applications to realistic systems should of course consider this term; for a detailed discussion of this topic, one may refer to Ref. 
\cite{schaefer2019rs}, for example.} \begin{figure}[h!]{ \includegraphics[width=0.6\linewidth]{cavity_mode_3d_2.pdf}} \caption{Cavity-setup: Few-level approximated atomic system (green) trapped in a cavity and coupled with coupling strength $\lambda_{\alpha}$ to 400 photon modes with photonic frequencies $\omega_{\alpha}$, where $\alpha = \{1,2, ...,400\}$.} \label{AB1} \end{figure} In the case of a two-level atomic system, this corresponds to a special case of the spin-boson model. With the position of the atom fixed at $r_A = \frac{L}{2}$ in this study, half of the $2N$ cavity modes decouple from the atomic system by symmetry. We adopt the same parameters as in Refs. \cite{Flick2017,Su1991}, which are based on a 1D Hydrogen atom with a soft Coulomb potential (in atomic units): $\{\varepsilon_1,\varepsilon_2\} = \{-0.6738, -0.2798\}$, $\lambda_{\alpha}(\frac{L}{2}) = 0.0103\cdot(-1)^{\alpha}$, $L = 2.362\cdot 10^{5}$ and $\mu_{12} = 1.034$. For the three-level atom, we adopt the same parameters for the field and the atom-field coupling as in the two-level case. The atomic energies for the three-level model are $\{\varepsilon_1,\varepsilon_2,\varepsilon_3\} = \{-0.6738, -0.2798, -0.1547\}$, and as before the numerical parameters are based on the 1D soft-Coulomb Hydrogen atom. The dipole moment operator only couples adjacent states, such that the only nonzero matrix elements are $\{\mu_{12},\mu_{23}\} = \{1.034,-2.536\}$ and their conjugates. Furthermore, with $\frac{g_{2,1}}{\epsilon_{2} - \epsilon_{1}} = 1.2\cdot10^{-2}$ for the two-level system and $\frac{g_{3,2}}{\epsilon_{3} - \epsilon_{2}} = 2.1\cdot10^{-2}$ for the three-level system, where $g_{i,j} = \mu_{k,l}\sqrt{\frac{\epsilon_{i} - \epsilon_{j}}{2}}\lambda$ is the coupling strength for the resonant mode, our system is beyond \textcolor{black}{common perturbative approaches such as the rotating wave approximation and the well-known analytic Wigner-Weisskopf solution.
The appearance of a bound photon peak in the intensity is the most illustrative indication of this regime. Cavity losses are not considered at this point but could be included in future developments.} \section{Methods}\label{Se2} \subsection{Multi-Trajectory Methods} In this section we briefly review a selection of semiclassical dynamics methods that are based on ensembles of independent trajectories. These methods were traditionally introduced to study electron-nuclear systems, and they typically involve the use of the Wigner representation for the non-subsystem degrees of freedom. In this work we extend the application of these methods to treat coupled quantum mechanical light-matter systems, in which the degrees of freedom of the photon field will be partially Wigner transformed. The structural similarity allows for the trivial inclusion of nuclear degrees of freedom. The general expression for the average value of any observable, $\langle B (t) \rangle $, in the partial Wigner representation can be written as \begin{eqnarray} \langle B (t) \rangle &=& \text{Tr}_A \int dX \hat{B}_W(X,t) \hat{\rho}_W(X,t=0), \notag \\ &=& \sum_{\lambda \lambda'}\int dX B^{\lambda \lambda'}_W(X,t) \rho^{\lambda' \lambda}_W(X), \notag \end{eqnarray} where the subscript $W$ denotes the partial Wigner transform over the photonic degrees of freedom, which are represented on the continuous phase space $X=(R,P)$. \textcolor{black}{The partial Wigner transforms for an arbitrary operator $\hat{B}$ and the density matrix $\hat{\rho}$ are defined as \cite{Wigner1984} \begin{align} \hat{B}_{W}(R,P) &= \int dZ e^{i P \cdot Z} \langle R - \frac{Z}{2} | \hat{B} | R +\frac{Z}{2}\rangle, \notag\\ \hat{\rho}_{W}(R,P) &= \frac{1}{(2\pi\hbar)^{2N}}\int dZ e^{i P \cdot Z} \langle R - \frac{Z}{2} | \hat{\rho} | R +\frac{Z}{2}\rangle \notag.
\end{align}} Thus, in order to assemble the average value, a multi-trajectory method may be employed, which is essentially a hybrid Monte Carlo-molecular dynamics method in which initial conditions are sampled from the initial Wigner distribution, and then an ensemble of molecular dynamics trajectories is used to evaluate the time-evolution of the property of interest. \subsubsection{Ehrenfest Mean-Field Theory} The Ehrenfest equations of motion may be derived by assuming that the total density can be written as an uncorrelated product of the atomic and field reduced densities at all times, and then taking the appropriate classical limit\cite{Ehrenfest1927,McLachlan1964}, or by starting with the quantum-classical Liouville equation, which is formally exact for the class of systems studied here\cite{qcle}, and then making the uncorrelated approximation, i.e., \begin{equation} \hat{\rho}_W(X,t)=\hat{\rho}_A(t) \rho_{F,W}(X,t), \notag \end{equation} where the reduced density matrix of the atomic system is \begin{equation} \hat{\rho}_{A} (t) = \text{Tr}_F \Big( \hat{\rho}_W(X,t) \Big) = \int dX \hat{\rho}_W (X,t), \notag \end{equation} and the Wigner function of the cavity field is $\rho_{F,W}(X, t) = Tr_A ( \hat{\rho}_W (X,t))$. The Ehrenfest mean-field equations of motion for the atomic system are: \begin{equation} \partial_t \hat{\rho}_A(t) = -i\Big[ \hat{H}_A + \hat{H}_{AF,W}(X(t)), \hat{\rho}_A(t)\Big], \notag \end{equation} where $\hat{H}_{AF,W}$ denotes the Wigner transform of the bilinear coupling and $\hat{H}_{A}$ the atomic Hamiltonian.
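A minimal numerical sketch of the atomic half of this scheme is given below: for a field configuration $X$ held fixed over one time step, the commutator equation above is integrated exactly by diagonalizing the instantaneous Hamiltonian. The coupling matrix used here is an arbitrary placeholder, not a parameter of the cavity model.

```python
import numpy as np

def ehrenfest_rho_step(rho, H_A, H_AFW, dt):
    """One step of d(rho_A)/dt = -i [H_A + H_AF,W(X(t)), rho_A] (hbar = 1),
    propagating exactly with the Hamiltonian frozen over the interval dt."""
    w, V = np.linalg.eigh(H_A + H_AFW)
    U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    return U @ rho @ U.conj().T

# Two-level atom initially excited; illustrative frozen mean-field coupling
H_A = np.diag([-0.6738, -0.2798])                  # atomic energies from the text
H_AFW = 0.01 * np.array([[0.0, 1.0], [1.0, 0.0]])  # placeholder field coupling
rho = np.diag([0.0, 1.0]).astype(complex)

for _ in range(100):
    rho = ehrenfest_rho_step(rho, H_A, H_AFW, dt=0.1)

assert np.isclose(np.trace(rho).real, 1.0)  # unitary steps preserve Tr(rho_A)
assert np.allclose(rho, rho.conj().T)       # and Hermiticity
```

In the full MTEF algorithm this quantum step alternates, for each member of the trajectory ensemble, with the classical update of the mode variables.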
The evolution of the Wigner function of the photon field can be represented as a statistical ensemble of independent trajectories, with $\mathcal{N}$ being the ensemble size, where we select uniform weights $w^j=1/\mathcal{N}$, \begin{equation} \rho_{F,W} (X,t) = \frac{1}{\mathcal{N}}\sum_{j=1}^{\mathcal{N}} \delta(X-X^j(t)), \notag \end{equation} that evolve according to Hamilton's equations of motion, \begin{equation} \frac{d Q_{\alpha}}{dt} = \frac{\partial H_{F,W}^{Eff}}{\partial P_{\alpha}},\quad \frac{d P_{\alpha}}{d t} = - \frac{\partial H_{F,W}^{Eff}}{\partial Q_{\alpha}}. \notag \end{equation} The mean-field photonic Hamiltonian is \begin{equation} H^{Eff}_{F,W} = \frac{1}{2}\sum_{\alpha}\Big( P_{\alpha}^2 + \omega_{\alpha}^2 Q_{\alpha}^2 + 2 \omega_{\alpha} \lambda_{\alpha} Q_{\alpha} \mu (t) \Big), \notag \end{equation} where $ \mu (t) = \text{Tr}_A(\hat{\rho}_A(0) \hat{\mu}(t))$. \subsubsection{Fewest Switches Surface-Hopping} \label{sec:FBTS} \textcolor{black}{In the following we outline the fewest switches surface hopping (FSSH) method for the electron-photon coupled system. FSSH allows feedback between the classical and quantum subsystems, but requires the photons to always propagate on a particular electronic adiabatic state, with hops between adiabatic surfaces.} \cite{T90,TP71,SS11,BR95,PR97} Considering the mode displacement moving along some classical trajectory $R(t) = \{R_{\alpha=1}(t),...,R_{2N}(t)\}$, the effective electronic Hamiltonian \begin{align*} \hat{H}^{el}[R(t)] = \hat{H}_A + \hat{H}_{AF}[R(t)] + \frac{1}{2} \sum\limits_{\alpha=1}^{2N} \omega_\alpha^2 R_\alpha(t)^2 \end{align*} then becomes parametrically dependent on time through the photonic trajectory.
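The construction of this $R$-dependent electronic Hamiltonian can be sketched as follows; the mode frequencies and couplings below are illustrative placeholders, and the classical field energy enters as a state-independent identity shift.

```python
import numpy as np

def h_el(R, H_A, mu_op, omegas, lams):
    """Effective electronic Hamiltonian H^el[R(t)] along a photonic trajectory:
    bare atomic part + bilinear coupling + classical field-energy shift."""
    dim = H_A.shape[0]
    H = H_A + np.sum(omegas * lams * R) * mu_op
    return H + 0.5 * np.sum(omegas**2 * R**2) * np.eye(dim)

# Placeholder numbers: two photon modes, two-level atom
H_A = np.diag([-0.6738, -0.2798])
mu_op = 1.034 * np.array([[0.0, 1.0], [1.0, 0.0]])
omegas = np.array([0.02, 0.04])
lams = np.array([0.0103, -0.0103])

# At R = 0 the adiabatic surfaces reduce to the bare atomic energies
H0 = h_el(np.zeros(2), H_A, mu_op, omegas, lams)
assert np.allclose(np.linalg.eigvalsh(H0), np.diag(H_A))
```

Diagonalizing $H^{el}[R(t)]$ along the trajectory yields the adiabatic surfaces and states on which the photonic variables are propagated between hops.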
Expanding the electronic wave function in the adiabatic basis yields \begin{equation} \Psi(r,R,t) = \sum_{i}c_{i}(t)\phi_{i}(r,R(t)), \notag \end{equation} where $r$ denotes the collection of all electronic degrees of freedom and $c_{i}(t)$ are time-dependent complex expansion coefficients. Assuming the photonic motion with the momentum $P(t)$ to be classical, the equation of motion is given by \begin{align*} \partial_t \rho_{ij} &= - i \sum_{k}(H_{ik}^{el}[R(t)]\rho_{kj} - \rho_{ik}H_{kj}^{el}[R(t)]) \\ &- P(t) \cdot \sum_{k}(\textbf{d}_{ik}^{\alpha}[R(t)]\rho_{kj} - \rho_{ik}\mathbf{d}_{kj}^{\alpha}[R(t)] ), \end{align*} with the photon mode $\alpha$ and $\rho_{ij}= c_{i}(t)c^{*}_{j}(t)$ being the corresponding electronic density matrix. Furthermore, the photonic degrees of freedom move along a single potential energy surface, except for instantaneous switches. The probability for a switch from the current state $i$ to another state $j$ is defined by \begin{equation} g_{ij} = \frac{ b_{ij} \Delta t}{\rho_{ii}}, \notag \end{equation} where $\Delta t$ is the time interval from $t$ to $t+\Delta t$ and $b_{ij} = -2 \text{Re}(\rho_{ij}P(t)\cdot \textbf{d}_{ij})$ with $\textbf{d}_{ij} = \langle \phi_i(r,R(t)) \vert \partial_{R} \phi_j(r,R(t))\rangle$ being the nonadiabatic coupling vector. \subsubsection*{Semiclassical Mapping Methods} Here we briefly sketch two semiclassical methods that are based on the mapping representation. These approaches can be rigorously derived from the path-integral formulation of the dynamics, or, for example, using the quantum-classical Liouville equation (QCLE)\cite{HK13}. Originally, however, the linearized semiclassical (LSC) approach was developed by applying a stationary-phase approximation to the full path-integral, and subsequently a linearization approximation to the resulting subsystem propagator\cite{Wang1998}.
With the intention of providing only the essential information about these techniques, we will briefly introduce the representation in a mapping basis, and then simply give the expressions for the corresponding equations of motion and expectation values. The interested reader may refer to the specific literature (e.g. references \cite{meyermiller,stockthoss,Miller2001,StockThoss_2005,Ki08,Hsieh2012,fbts2}) for further information and technical details. In order to achieve a classical-like description of the quantum subsystem, the Meyer-Miller-Stock-Thoss mapping representation\cite{meyermiller,stockthoss} is used. Each subsystem state $|\lambda\rangle$ is represented by a mapping state $|m_{\lambda}\rangle$, that is an eigenfunction of a system of $N$ fictitious harmonic oscillators with occupation numbers constrained to be 0 or 1: $|\lambda\rangle \rightarrow |m_{\lambda}\rangle$ = $|0_1, ... , 1_{\lambda},... 0_N\rangle$. \subsubsection{Linearized Semiclassical Dynamics}\label{sec:LSC} In the LSC method, the mapping version of an operator on the subsystem Hilbert space, $\hat{B}_{m}(X)$, is defined such that its matrix elements are equivalent to those of the corresponding operator, $\hat{B}_{W}(X)$. For example, the mapping form of an arbitrary operator can be written as\cite{KNK08} \begin{equation} \hat{B}_m(X) = \sum_{\lambda \lambda'} B_W^{\lambda \lambda'}(X) \hat{a}^{\dagger}_{\lambda}\hat{a}_{\lambda'}, \notag \end{equation} where the creation and annihilation operators on the subsystem mapping states, $\hat{a}^{\dagger}_{\lambda}$ and $\hat{a}_{\lambda}$, satisfy the usual bosonic commutation relation $[\hat{a}_{\lambda}, \hat{a}^{\dagger}_{\lambda'}] = \delta_{\lambda \lambda'}$.
Completing the Wigner transform over the subsystem, the mapping Hamiltonian can be written as a function of continuous phase space variables $(X,x) = (R,P,r,p)$, \begin{equation} B_m(X) = \sum_{\lambda \lambda'} B_W^{\lambda \lambda'}(X) (r_{\lambda} r_{\lambda'}+ p_{\lambda} p_{\lambda'}-\delta_{\lambda \lambda'}). \notag \end{equation} The LSC time-evolution of an arbitrary operator in the mapping representation, $B_m(X)$, can be written as a classical-like dynamics in the extended Wigner-mapping phase space, \begin{equation} \frac{\partial}{\partial t}B_m (X,x,t) = \big\{ H_m(X,x), B_m(X,x,t) \big\}_{{X,x}}. \notag \end{equation} Due to the Poisson bracket structure of this equation the density can be obtained from the evolution of an ensemble of independent trajectories, $\rho_m(X,t)={\mathcal N}^{-1}\sum_{i=1}^{{\mathcal N}} \delta(X-X_i(t))$, where the $X_i(t)=(R_i(t),P_i(t))$ are given by the solutions of the following set of ordinary differential equations \cite{NBK10}: \begin{eqnarray} \frac{d r_{\lambda}}{dt}&=& \frac{\partial H_m}{\partial p_\lambda}, \qquad \frac{d p_{\lambda}}{dt}=-\frac{\partial H_m}{\partial r_{\lambda}}, \notag\\ \frac{d R}{dt} &=& \frac{\partial H_m}{\partial P}, \qquad \frac{d P}{dt}= -\frac{\partial H_m}{\partial R}.\nonumber \end{eqnarray} \subsubsection{Partially Linearized Quantum - Classical Dynamics} A less severe approximation to the QCLE\cite{Huo2011,Hsieh2012} uses a partially linearized approximation to the equations of motion for the coupled system, using the mapping representation for the forward and backward time-propagators separately. This doubles the number of mapping variables used to describe each subsystem state, but yields an efficient approximate solution to the QCLE in this forward-backward mapping form. This forward-backward trajectory solution (FBTS) describes a classical-like dynamics in the extended phase space of the environmental and the mapping variables that represent the subsystem degrees of freedom. 
The effective Hamiltonian function that generates the FBTS evolution is \begin{equation} H_e(X,x,x') = \frac{1}{2}(H_m(X,x)+H_m(X,x')) \notag \end{equation} where $(X,x,x') = (R,P,r,r',p,p')$. The continuous trajectories that define the FBTS solution to the quantum-classical Liouville equation can be represented by the following Hamiltonian equations of motion \cite{KZSK12}, \begin{align*} &\frac{dr_{\mu}}{dt} = \frac{\partial H_{e}(X,x,x')}{\partial p_{\mu}} , ~~\quad \frac{dp_{\mu}}{dt} = -\frac{\partial H_{e}(X,x,x')}{\partial r_{\mu}},\nonumber \\ &\frac{dr'_{\mu}}{dt} = \frac{\partial H_{e}(X,x,x')}{\partial p'_{\mu}} , \quad \frac{dp'_{\mu}}{dt} = -\frac{\partial H_{e}(X,x,x')}{\partial r'_{\mu}},\\ &\frac{d R}{dt} = \frac{P}{M}, ~\quad\qquad\qquad \frac{d P}{dt} = -\frac{\partial H_{e}(X,x,x')}{\partial R}.\nonumber \end{align*} In the FBTS simulation algorithm, the matrix elements of the operator $\hat{B}_W(t)$ are approximated using the following expression, \begin{align*}\nonumber B^{\lambda \lambda'}_W(X,t) =& \sum_{\mu \mu'} \int dx dx' \phi(x) \phi(x') (r_{\lambda} + i p_{\lambda})(r_{\lambda}' - i p_{\lambda}') \\ \times B^{\mu \mu'}_W&(X_t) (r_{\mu}(t) + i p_{\mu}(t))(r_{\mu'}'(t) - i p_{\mu'}'(t)), \end{align*} where $\phi(x) = e^{-\sum_{\mu}(r_{\mu}^2+p_{\mu}^2)}$ are normalised Gaussian distribution functions, and evaluation of the integrals over the time-independent $\phi(x)$ functions is carried out by Monte Carlo sampling. \subsection{Quantum BBGKY Hierarchy} In the following we briefly describe the quantum mechanical BBGKY hierarchy, which is an exact reformulation of many-body quantum dynamics. As such it can capture quantum interference and fluctuations. In practice, approximate closures for the hierarchy have to be employed to reduce the computational cost of this approach.
For a system of interacting fermions and bosons according to Eq.~\eqref{Gint}, where we focus on the explicit Pauli-spin representation of the 2-level system, i.e., \begin{align} \label{eq:ham3} \hat{H} &=-\frac{\Delta\varepsilon}{2} \hat{\sigma}_z + \frac{1}{2}\sum_{\alpha=1}^{2N}\Big(\hat{P}^{2}_{\alpha}+ \omega^{2}_{\alpha}\hat{Q}^{2}_{\alpha}\Big) + \hat{E}(r_A)\hat{\sigma}_x,\\ &\hat{E}(r_A) = \sum_{\alpha=1}^{2N}\mu_{12}\omega_{\alpha}\lambda_{\alpha}(r_A)\hat{Q}_{\alpha},\notag \end{align} with $\Delta\varepsilon=\varepsilon_2-\varepsilon_1$, the underlying equations of motion, known as the quantum BBGKY hierarchy~\cite{ShunJin1985,Fricke1996,Bonitz2016}, follow from the Heisenberg equations of motion for the Hamiltonian. Consistent with previous publications\cite{sakkinen2014,sakkinen2015}, we introduce the short-hand notation $\hat{X}_{1\alpha}\equiv\hat{Q}_{\alpha}$, $\hat{X}_{2\alpha}\equiv\hat{P}_{\alpha}$ such that the correlation functions are given by \begin{align*} \Lambda_{i\alpha,j\beta} &\equiv\langle\hat{X}_{i\alpha}\hat{X}_{j\beta}\rangle -X_{i\alpha}X_{j\beta} \, ,\\ \Lambda_{\varepsilon;j\alpha} &\equiv\langle\hat{X}_{j\alpha}\hat{\sigma}_{\varepsilon}\rangle -X_{j\alpha}\sigma_{\varepsilon}, \, \end{align*} with \textcolor{black}{ $i,j \in \{1,2\},~\varepsilon \in \{x,y,z\}$}, where we choose to suppress the time-arguments for brevity. In this work we truncate the infinite hierarchy of equations of motion at the doublet level for the correlation functions \cite{Hoyer2004}, resulting in an approximation conventionally referred to as the second Born approximation \cite{Zimmermann1994,Lohmeyer2005}. This extends the Hartree-Fock-type approximation as presented in \cite{Pellegrini2015,Flick2017} to the next higher consistent approximation level of the hierarchy.
With $\textbf{X}\equiv (Q_{\alpha=1},\dots,Q_{\alpha=(2N)},P_{\alpha=1},\dots,P_{\alpha=(2N)})^{T} = (X_{11},\dots,X_{1(2N)},X_{21},\dots,X_{2(2N)})^{T}$ the normal coordinate averages satisfy \begin{align*} \dot{\textbf{X}} &=\big\{\textbf{X}, H_{\mathrm{cl}}(\sigma_{x},\textbf{X})\big\} \, , \end{align*} where $\{\cdot,\cdot\}$ denotes the canonical Poisson bracket. Furthermore, $H_{\mathrm{cl}}$ defines the classical Hamiltonian function, i.e., providing the classical equivalent to Eq.~\eqref{eq:ham3} $\hat{B} \rightarrow \langle B \rangle$. The spin-projection averages in turn obey the equations \begin{align*} \dot{\sigma}_{z} &=2E(r_A)\sigma_{y} +2\boldsymbol{\lambda}_{\mathrm{eff}}^T\cdot \boldsymbol{\Lambda}_{y} \, ,\\ \dot{\sigma}_{y} &=-\Delta\varepsilon\sigma_{x} -2E(r_A)\sigma_{z} -2\boldsymbol{\lambda}_{\mathrm{eff}}^T\cdot\boldsymbol{\Lambda}_{z} \, ,\\ \dot{\sigma}_{x} &=\Delta\varepsilon\sigma_{y} \, , \end{align*} where $\boldsymbol{\lambda}_{\mathrm{eff}}\equiv(\omega_{1}\lambda_{1}(r_A)\mu_{\mathrm{12}}, \dots,\omega_{M}\lambda_{(2N)}(r_A)\mu_{\mathrm{12}})^{T}$ represents the effective light-matter coupling. Moreover, we introduced the vector notation $\boldsymbol{\Lambda}_{\varepsilon}\equiv(\Lambda_{\varepsilon;11},\dots, \Lambda_{\varepsilon;1(2N)},\Lambda_{\varepsilon;21},\dots, \Lambda_{\varepsilon;2(2N)})^{T}$ for the correlation functions. 
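The mean-field parts of these spin equations follow directly from the Heisenberg equation of motion, $\mathrm{d}\hat{\sigma}_{\varepsilon}/\mathrm{d}t = i[\hat{H},\hat{\sigma}_{\varepsilon}]$, applied to Eq.~\eqref{eq:ham3} with the field operator replaced by a c-number $E$. They can be checked numerically with Pauli matrices; the values of $\Delta\varepsilon$ and $E$ below are arbitrary.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

d_eps, E = 0.394, 0.07  # arbitrary splitting and c-number field value
H = -0.5 * d_eps * sz + E * sx

def heis(s):
    """Heisenberg equation of motion: d sigma/dt = i [H, sigma] (hbar = 1)."""
    return 1j * (H @ s - s @ H)

# Mean-field terms of the BBGKY spin equations above
assert np.allclose(heis(sz), 2 * E * sy)
assert np.allclose(heis(sy), -d_eps * sx - 2 * E * sz)
assert np.allclose(heis(sx), d_eps * sy)
```

The correlation-function contributions $\boldsymbol{\lambda}_{\mathrm{eff}}^T\cdot\boldsymbol{\Lambda}_{y,z}$ then arise from keeping the field operator character of $\hat{E}(r_A)$ instead of the c-number replacement.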
The dynamics of the correlation functions are determined by \begin{align*} \dot{\boldsymbol{\Lambda}}_{z} &=\big\{\boldsymbol{\Lambda}_{z}, H_{\mathrm{cl}}(-\imath\sigma_{y}-\sigma_{x}\sigma_{z},\boldsymbol{\Lambda}_{z})\big\} \notag\\ &\phantom{=} +2E\boldsymbol{\Lambda}_{y} +2\sigma_{y}\boldsymbol{\Lambda}\cdot \boldsymbol{\lambda}_{\mathrm{eff}}(r_A) \, ,\\ \dot{\boldsymbol{\Lambda}}_{y} &=\big\{\boldsymbol{\Lambda}_{y}, H_{\mathrm{cl}}(-\imath\sigma_{z}-\sigma_{y}\sigma_{x},-\boldsymbol{\Lambda}_{y})\big\} \notag\\ &\phantom{=} -\Delta\varepsilon\boldsymbol{\Lambda}_{x} +2E\boldsymbol{\Lambda}_{z} +2\sigma_{z}\boldsymbol{\Lambda}\cdot\boldsymbol{\lambda}_{\mathrm{eff}}(r_A) \, ,\\ \dot{\boldsymbol{\Lambda}}_{x} &=\big\{\boldsymbol{\Lambda}_{x},H_{\mathrm{cl}}(1-\sigma_{x}^{2},\boldsymbol{\Lambda}_{x})\big\} \notag\\ &\phantom{=} -\Delta\varepsilon\boldsymbol{\Lambda}_{y} \, , \end{align*} where the matrix $\boldsymbol{\Lambda}$ with the elements $\Lambda_{i\alpha,j\beta}$ is the covariance matrix satisfying the equation \begin{align*} \dot{\boldsymbol{\Lambda}} &=\boldsymbol{J}\cdot\boldsymbol{\Omega}\cdot\boldsymbol{\Lambda} -\boldsymbol{\Lambda}\cdot\boldsymbol{\Omega} \cdot\boldsymbol{J} -\boldsymbol{\lambda}_{\mathrm{eff}}\cdot\boldsymbol{\Lambda}_{x}^{T} -\boldsymbol{\Lambda}_{x}\cdot\boldsymbol{\lambda}_{\mathrm{eff}}^{T} \, . \end{align*} Here $\boldsymbol{J}$ is the standard symplectic matrix \begin{align*} \boldsymbol{J} = \begin{pmatrix} 0 & 1 & 0 & \\ -1 & 0 & 1 & \dots \\ 0 & -1 & 0 & \\ &\dots \end{pmatrix} \end{align*} and $\boldsymbol{\Omega}$ denotes a matrix such that $\Omega_{1\alpha,1\alpha}=\omega_{\alpha}^{2}$, $\Omega_{2\alpha,2\alpha}=1$, and otherwise zero.\\ Evolving the covariance matrix in time allows the field fluctuations to dynamically respond to the polarizable matter. 
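As a simple consistency check on the covariance equation, its free part $\dot{\boldsymbol{\Lambda}} = \boldsymbol{J}\boldsymbol{\Omega}\boldsymbol{\Lambda} - \boldsymbol{\Lambda}\boldsymbol{\Omega}\boldsymbol{J}$ leaves the vacuum fluctuations of an uncoupled mode, $\langle\hat{Q}^2\rangle = 1/2\omega$ and $\langle\hat{P}^2\rangle = \omega/2$ ($\hbar = 1$), stationary. A one-mode sketch with an arbitrary frequency:

```python
import numpy as np

omega = 0.35  # arbitrary mode frequency
J = np.array([[0.0, 1.0], [-1.0, 0.0]])            # symplectic matrix, one mode
Om = np.diag([omega**2, 1.0])                      # Omega in the (Q, P) ordering
Lam = np.diag([1.0 / (2.0 * omega), omega / 2.0])  # vacuum covariance <QQ>, <PP>

dLam = J @ Om @ Lam - Lam @ Om @ J  # free part (lambda_eff = 0) of the EOM
assert np.allclose(dLam, 0.0)       # the vacuum covariance is stationary
```

Only the coupling terms involving $\boldsymbol{\lambda}_{\mathrm{eff}}$ and $\boldsymbol{\Lambda}_{x}$ therefore drive the field fluctuations away from their vacuum values.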
Deriving the equations of motion from the many-body perturbation hierarchy sets an implicit condition on the dynamic fluctuations, as the 2-particle reduced density matrix has to be identically zero to guarantee that only a single electron is acting in our system. In the following section we will show that enforcing this condition removes almost all of the nonphysical negative intensities that otherwise arise, and overall improves the performance of the second Born approximation considerably. \subsection{\textcolor{black}{Configuration Interaction Expansion}}\label{sec:ci} \textcolor{black}{ To obtain accurate reference solutions, considered as exact benchmarks for this low dimensional model, we truncate the Configuration Interaction (CI) expansion such that we allow at most two photons per mode, featuring 400 modes, while retaining the full two- and three-state representation for the atomic system.} \begin{align} \label{eq:ci} \begin{split} \vert \Psi (t) \rangle &= \sum\limits_{k} c_{k,0}(t) \vert k\rangle \otimes \vert 0\rangle\\ &+ \sum\limits_{k}\sum\limits_{n_1}^{2N} c_{k,n_1}(t) \vert k\rangle \otimes \hat{a}^\dagger_{n_1}\vert 0\rangle\\ &+ \sum\limits_{k}\sum\limits_{n_1,n_2}^{2N^2+N} c_{k,n_1,n_2}(t) \vert k\rangle \otimes \hat{a}^\dagger_{n_1}\hat{a}^\dagger_{n_2}\vert 0\rangle \end{split} \end{align} \textcolor{black}{In line with the nature of CI expansions, the numerical cost grows exponentially when increasing the number of allowed photonic excitations. Exploiting the bosonic symmetry of the photons, in total $1 + 2N + 2N(2N+1)/2$ photon basis functions span the zero-photon (vacuum), one-photon (1pt) and two-photon (2pt) space. Combined with the low-dimensional matter system featuring the eigenstates $\vert k \rangle$, it is computationally non-trivial but feasible to propagate this CI expanded wavefunction using the Lanczos algorithm \cite{park1986unitary,flick2016exact}.
We ensured that the above (vacuum+1pt+2pt) CI basis is sufficient for the observables and parameters studied in this work.\footnote{ \textcolor{black}{As the exponential scaling prohibits the inclusion of higher photon states for the given model, we ensured convergence by investigating a related 3-level system based on a screened Hydrogen atom with $1/10$ of the atomic binding potential coupled to the 100 lowest harmonics of the former cavity. Including the three-photon states resulted in marginal numerical changes, such that we deem the selected two-photon states sufficient for the investigated model.} } Although spontaneous decay from the 2-level atomic system will lead to at most a single observable photon, the photonic fluctuations can reach the 2pt state space, which makes it possible to bind photon intensity at the atomic position (see Fig.~\ref{ABNew}). } \section{Results and Discussion}\label{Se3} As in earlier work\cite{HSRKA19}, we note that the Wick normal-ordered form for operators (denoted $:\hat{B}:$ for some operator $\hat{B}$) is used when calculating average values in this study. The reason for using the normal-ordered form, in practice, is to remove the typically non-measurable \cite{riek2015direct, benea2019electric} effect of vacuum fluctuations from the results, which ensures that both $\langle E \rangle = 0$ and $\langle I \rangle = 0$, irrespective of the number of photon modes in the cavity field, when the field is in the vacuum state. \textcolor{black}{ In order to guarantee a distinct spatial resolution for the dynamics of the photonic wavepacket in the cavity and to ensure the inclusion of all possible interference effects, we use 400 photon modes to represent the cavity field that is coupled to a two- or three-level atomic system in all calculations shown below.} We choose the atom to be initially in the highest excited state, and the cavity field in the vacuum state at zero temperature.
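The exact CI propagation mentioned above relies on a short-iterate Lanczos step. A minimal Krylov-propagator sketch is given below (illustrative dimensions and a random Hermitian generator, not the paper's implementation):

```python
import numpy as np

def lanczos_step(H, psi, dt, k=15):
    """Approximate exp(-1j*dt*H) @ psi for Hermitian H in a k-dimensional
    Krylov space; minimal sketch with full reorthogonalization."""
    nrm = np.linalg.norm(psi)
    V = np.zeros((k, psi.size), dtype=complex)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    V[0] = psi / nrm
    m = k
    for j in range(k):
        w = H @ V[j]
        alpha[j] = np.real(np.vdot(V[j], w))
        w = w - (V[: j + 1].conj() @ w) @ V[: j + 1]  # reorthogonalize
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:   # invariant subspace found; stop early
                m = j + 1
                break
            V[j + 1] = w / beta[j]
    # tridiagonal projection of H onto the Krylov space
    T = np.diag(alpha[:m]) + np.diag(beta[: m - 1], 1) + np.diag(beta[: m - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    coeff = evecs @ (np.exp(-1j * dt * evals) * evecs[0])
    return nrm * (V[:m].T @ coeff)

rng = np.random.default_rng(0)
n = 60
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (M + M.conj().T) / 2                       # Hermitian test generator
psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi1 = lanczos_step(H, psi0, dt=0.05)
```

Since the projected propagator is unitary, the norm of the state is conserved to machine precision, which is one practical advantage of this class of propagators.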
For our benchmark numerical treatment we solved the time-dependent Schr\"odinger equation using a truncated Configuration Interaction expansion as introduced in Sec.~\ref{sec:ci}. \textcolor{black}{The atomic state populations are given by $\hat{\sigma}_{i}(t) = |c_{i}(t)|^2 ,$ where $c_{i}(t)$ denotes the time-dependent CI coefficient for the corresponding atomic energy level. Furthermore, we define the normal-ordered electric-field intensity operator as \begin{equation} :\hat{I}(r,t): = :\hat{E}^2(r,t): = 2\sum_{\alpha=1}^{2N} \omega_{\alpha}\zeta^{2}_{\alpha}(r)\hat{Q}_{\alpha}^{2}(t) - \sum_{\alpha=1}^{2N}\zeta^{2}_{\alpha}(r) \nonumber \label{G15} \end{equation} with \begin{equation} \zeta_{\alpha}(r) = \sqrt{\frac{ \omega_{\alpha}}{\epsilon_{0} L}} \sin\Big(\frac{\alpha \pi}{L} r\Big). \nonumber \end{equation}} \subsection{2-Level Atom: One-Photon Emission Process} \begin{figure}[h!]{ \includegraphics[width=1\linewidth]{overview.pdf}} \caption{A schematic sketch of the photon-field intensity propagating through the cavity for four time snapshots: (a) $t=100~a.u.$, (b) $t=600~a.u.$, (c) $t=1200~a.u.$, (d) $t=2100~a.u.$.} \label{overview} \end{figure} In Fig.~\ref{overview} we show a schematic sketch of the propagating photon-field intensity along the axis of the cavity for four different time snapshots. As the spontaneous emission process evolves, a photon wave-packet with a sharp front is emitted from the atom (e.g. panel (a) of Fig.~\ref{overview}) and travels towards the boundaries (e.g. panel (b) of Fig.~\ref{overview}), where it is reflected, and then travels back to the atom (e.g. panel (c) of Fig.~\ref{overview}). The emitted photon is then absorbed and re-emitted by the atom, which results in the emergence of interference phenomena in the electric field. This produces a photonic wave packet with a more complex shape (e.g. panel (d) of Fig.~\ref{overview}).
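As a consistency check of the normal-ordered intensity defined above, its vacuum expectation value vanishes mode by mode, since $\langle \hat{Q}_{\alpha}^{2}\rangle_{\mathrm{vac}} = 1/(2\omega_{\alpha})$. A small numerical sketch (the grid, cavity length and the dispersion $\omega_{\alpha} = \alpha\pi/L$ are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

L = 100.0            # cavity length (illustrative, atomic units)
n_modes = 400
alphas = np.arange(1, n_modes + 1)
omegas = alphas * np.pi / L            # assumed harmonic dispersion, c = 1
r = np.linspace(0.0, L, 501)

# mode functions zeta_alpha(r) = sqrt(omega_alpha/(eps0*L)) * sin(alpha*pi*r/L), eps0 = 1
zeta = np.sqrt(omegas[:, None] / L) * np.sin(alphas[:, None] * np.pi * r[None, :] / L)

# normal-ordered intensity in the vacuum: <Q_alpha^2> = 1/(2*omega_alpha)
Q2_vac = 1.0 / (2.0 * omegas)
intensity_vac = (2.0 * np.sum(omegas[:, None] * zeta**2 * Q2_vac[:, None], axis=0)
                 - np.sum(zeta**2, axis=0))

# the two sums cancel exactly: no vacuum contribution survives normal ordering
assert np.max(np.abs(intensity_vac)) < 1e-9
```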
In Figs.~\ref{Appendix1} and \ref{AB3} we plot this spontaneous emission process for the different methods compared to the exact result (black dashed line). Here we observe that the essential differences among the methods lie in (i) determining the correct amplitude of the wave-packet, (ii) capturing the re-emission interference pattern and (iii) reproducing the bound photon at the atomic position. \subsubsection{Finite size corrections to the BBGKY hierarchy} \label{SecApp1} \begin{figure}[h!]{ \includegraphics[width=0.8\linewidth]{bbgky_intensity_2.pdf}} \caption{Intensity of the emitted (normal-ordered) photon field using different finite-size corrections at three different time snapshots: (a) $t = 100 ~a.u.$, (b) $t = 1200 ~a.u.$, (c) $t = 2100 ~a.u.$; no correction, single-photon correction (1pfsc), single-electron correction (1efsc) and single-photon and single-electron correction (1fsc) for the BBGKY hierarchy within the second Born approximation. The arrow indicates the direction of the wave packet.} \label{Appendix1} \end{figure} By partially summing the infinite series of perturbative diagrams that arise as a consequence of the Heisenberg equation of motion \textcolor{black}{using Hamiltonian \eqref{eq:ham3}}, we intrinsically introduce spurious interactions between physically non-existent particles, since we consider more diagrams than particles are present in the physical system. This is a well-known subject of interest in electronic structure theory \cite{kremp1997non,von2009successes,verdozzi2011some,stefanucci2013nonequilibrium,florian2013equation,leymann2014expectation,richter2009few}. Specifically for our problem, this can result in such fundamental violations as producing negative atomic state occupations or photon field intensities (see Fig.~\ref{Appendix1}).
Enforcing the correct fermionic truncation of the many-body hierarchy cures most of the nonphysical features that appear, i.e., negative intensities after the re-emission and strong oscillations around the exact solution. This restriction to the single-electron subspace (1efsc) is performed by enforcing that the two-particle reduced density matrix be identically zero, $\rho^{(2)}_{ijkl} = 0$. For one-body reduced density matrices $\rho^{(1)}_{ij}$, the cluster expansion on the exchange-only level, $\rho^{(2)}_{ijkl} \approx \rho^{(1)}_{il} \rho^{(1)}_{jk} - \rho^{(1)}_{ik} \rho^{(1)}_{jl}$, guarantees this if $\rho^{(1)}_{ij}$ is idempotent. A further correction is possible in the photonic subspace, i.e., enforcing at most a single photon in the cavity for the two-level system (1pfsc). This is achieved by substituting higher correlation matrices with lower-order expansions such that the equations of motion do not connect to higher excitations; this corrects the bound photon intensity to excellent accuracy. Employing both restrictions at the same time (1fsc) leads to the overall best performance, and we focus on those results in Sec.~\ref{Se3}. For multiple electrons and photonic excitations such corrections will become less relevant and less straightforward to apply. \subsubsection{Trajectory-based Semiclassical methods} \begin{figure}[h!]{ \includegraphics[width=1\linewidth]{2-level_intensity.pdf}} \caption{Time-evolution of the average field intensity for the one-photon emission process, at three different time snapshots: (a) $t = 100 ~a.u.$, (b) $t = 1200 ~a.u.$, (c) $t = 2100 ~a.u.$. Exact solution (black-dashed), FSSH (purple), MTEF (red), LSC (orange), FBTS (blue) and (1fsc)BBGKY (green).
The arrow indicates the direction of the wave packet.} \label{AB3} \end{figure} To perform numerical simulations using the semiclassical dynamics methods, we first employ Monte Carlo sampling from the Wigner transform of the initial density operator of the photon field, $\hat{\rho}_{F,W}(X,0)$, to generate an ensemble of initial conditions $(Q_{\alpha}^j(0),P_{\alpha}^j(0))$ for the trajectory ensemble. The Wigner transform of the zero-temperature vacuum state is given by \begin{equation} \rho_{F,W}(X,0)=\prod_{\alpha=1}^{2N} \frac{1}{\pi}\exp{\left[-\frac{P^{2}_{\alpha}}{\omega_{\alpha}} - \omega_{\alpha} Q_{\alpha}^2\right]}. \notag \end{equation} We then evolve each initial condition independently according to the corresponding equations of motion to produce a trajectory. Average values are then constructed by summing over the entire trajectory ensemble and normalizing the result with respect to $\mathcal{N}$, the total number of trajectories. We use an ensemble of $\mathcal{N} = 10^5$ independent trajectories for the MTEF, FSSH, LSC, and FBTS calculations, sampled from the Wigner transform of the initial field density operator. This level of sampling is sufficient to converge the atomic observables to graphical accuracy, while the field intensity would require a slightly larger trajectory ensemble for graphical convergence. In order to illustrate the comparison more accurately, zoom-ins of Fig.~\ref{AB3} are depicted in Figs.~\ref{AB4} and \ref{AB6} in the same coloring. We find that the shapes of the (2B-1fsc) BBGKY and FBTS results agree nicely with the exact wave-packet shape at time $100$ [a.u.], while the MTEF and LSC simulations are qualitatively accurate, but miss the correct wave-packet amplitude. We find that FSSH performs rather poorly, as it fails to capture the qualitative structure of the outgoing wave-packet. Further, we observe at time $2100$ [a.u.]
that the FSSH method has broken down completely, as it fails to reproduce the wave-packet structure in addition to exhibiting a time delay. Considering the other trajectory-based methods, we find that MTEF is not able to reproduce the photon re-emission, since mean-field methods cannot capture the required interferences. On the other hand, FBTS and LSC predict a substantial amount of interference in the form of a second maximum, which is however shifted to earlier times relative to the exact solution. As seen previously, the corrected second Born truncation of the BBGKY hierarchy is in very good agreement with the exact simulation; nevertheless it still develops very small unphysical negative intensity values in between the first and second wave-packet maxima. \begin{figure}[h!]{ \includegraphics[width=0.5\textwidth]{2-level-intensity-zoom.pdf}} \caption{Zoom-in onto the wavefronts of Fig.~\ref{AB3} (same color code) at time $t = 100~a.u.$ (upper panel) and $t = 2100~a.u.$ (lower panel).} \label{AB4} \end{figure} All methods are capable of describing the remaining intensity at the atomic position. This intensity corresponds to the bound photon intensity, which emerges from effects beyond the rotating-wave approximation (RWA). \textcolor{black}{More precisely, in Fig. \ref{ABNew} we show the photon field intensity for the exact reference solution calculated in four different ways according to Eq.~\eqref{eq:ci}: first including all two-photon states (2pt) without RWA (blue), and then performing the same calculation within RWA (cyan). Here we find that using the RWA erases the bound photon state. Furthermore, we find that only including the one-photon states (1pt) is also not sufficient to capture this higher-order effect, as in both cases without RWA (red) and with RWA (orange) no bound photon is observed. \begin{figure}[h!]{ \includegraphics[width=0.5\textwidth]{Fig_1.pdf}} \caption{Photon field intensity for the exact reference solution at time $600$[a.u.]
for Blue: including two-photon states (2pt) and no RWA, Cyan: including two-photon states (2pt) with RWA, Red: including only one-photon states (1pt) and no RWA, Orange: including only one-photon states (1pt) with RWA.} \label{ABNew} \end{figure} These results therefore show that all methods are indeed capable of describing effects beyond the perturbative regime, such as bound photon states.} In Fig.~\ref{AB6} we depict this signature feature of the bound photon state at time $1200$ [a.u.]. Here we find that BBGKY and MTEF perform best, while FBTS, LSC and FSSH overestimate the amplitude of the remaining intensity. Without the single-photon correction the BBGKY amplitude is comparable to that of FBTS, i.e., finite-size corrections in both the fermionic and the photonic subspace are important to obtain excellent results. \textcolor{black}{In Fig.~\ref{AB5} we plot the atomic adiabatic state population in the same color code as in Fig.~\ref{AB3}.} Here BBGKY leads to excellent accuracy, while among the trajectory methods LSC performs best. The initial decay, which is connected to the shape of the wave-front, is however captured best by FBTS, with the drawback of an incomplete de-excitation. While MTEF is capable of qualitatively describing the process, it fails on quantitative scales, and FSSH performs even worse, as it does not reproduce the process even qualitatively. \begin{figure}[h!]{ \includegraphics[width=1\linewidth]{2-level-polariton.pdf}} \caption{Left: Zoom-in on the bound photon state of Fig.~\ref{AB3} (same color code). Right: Zoom-in on the bound photon state of Fig.~\ref{Appendix1} (same color code).} \label{AB6} \end{figure} \begin{figure}[h!]{ \includegraphics[width=1.1\linewidth]{2-level-population.pdf}} \caption{Time-evolution of the atomic state population in the same color code as Fig.~\ref{AB3}.
Solid lines represent the atomic ground state, and dashed lines represent the excited state.} \label{AB5} \end{figure} \subsection{3-Level Atom: Two-Photon Emission Process} Let us turn our attention to the slightly more complex three-level system, where we focus on the approaches most promising for extrapolation towards realistic systems. We thus exclude FSSH due to its relatively poor performance and BBGKY due to its high computational effort, which we will later discuss in more detail. \begin{figure}[h!]{ \includegraphics[width=1\linewidth]{3-level-intensity.pdf}} \caption{Time-evolution of the average field intensity for the two-photon emission process, at three different time snapshots: (a) $t = 100 ~a.u.$, (b) $t = 1200 ~a.u.$, (c) $t = 2100 ~a.u.$. Exact solution (black-dashed), MTEF (red), LSC (orange) and FBTS (blue). Please note that in this plot the amplitude of the bound photon state for the FBTS simulation is reduced in order to improve the illustration of the results. Explicit quantitative results for the bound photon state can be found in Fig.~\ref{AB9}. The arrow indicates the direction of the wave packet. } \label{AB7} \end{figure} \begin{figure}[h!]{ \includegraphics[width=0.5\textwidth]{3-level-intensity-zoom.pdf}} \caption{A zoom-in onto the wave fronts of Fig.~\ref{AB7} (same color code) for time $t = 100~a.u.$ (upper panel) and $t = 2100~a.u.$ (lower panel). } \label{AB8} \end{figure} \begin{figure}[h!]{ \includegraphics[width=1\linewidth]{3-level-polariton.pdf}} \caption{A zoom-in on the bound photon state of Fig.~\ref{AB7} (same color code).} \label{AB9} \end{figure} \begin{figure}[h!]{ \includegraphics[width=1.1\linewidth]{3-level-population.pdf}} \caption{Time-evolution of the atomic state population in the same color-code as Fig.~\ref{AB7}.
Solid lines represent the atomic ground state, dashed lines the first excited state and dotted lines the second excited state.} \label{AB10} \end{figure} In Fig.~\ref{AB7} we show the intensity of the cavity field during the two-photon emission process for MTEF, LSC and FBTS compared to the exact solution. Furthermore, in order to allow a more quantitative and accurate comparison, zoom-ins of Fig.~\ref{AB7} are depicted in the same color-code in Figs.~\ref{AB8} and \ref{AB9}. Here similar dynamics are observed compared to the two-level case. However, due to the additional intermediate atomic state, we now observe a double-peak feature in the emitted photonic wavepacket. This feature corresponds to the emission of two photons, as the excited atom initially decays to the first excited state emitting one photon, and then further relaxes to the ground state, emitting a second photon. We find, in accordance with the two-level case, that the FBTS wave-packet shape is in good agreement with the exact one at time $100$ [a.u.], while the MTEF and LSC simulations are qualitatively in line, but underestimate the wave-packet amplitude. Further, we observe that at time $2100$ [a.u.] none of the methods sufficiently captures the complex re-emission structure, while all overestimate the bound photon peak in Fig.~\ref{AB9}. In Fig.~\ref{AB10} we show the time evolution of the atomic state populations. As before, the emitted photonic wavepacket moves through the cavity, is reflected at the mirrors, and returns to the atom. The first and second excited states are then repopulated due to stimulated absorption. A second spontaneous emission process ensues, and the emitted field again takes on a more complex profile due to interference. While MTEF features a pronounced incomplete emission, LSC and especially FBTS quite accurately capture the short-time decay dynamics.
Each method provides a qualitative indication of the re-absorption and consecutive emission, with LSC and FBTS performing clearly better, as they suffer less from an incomplete (de)excitation than MTEF. \subsection{Computational Effort and Scaling} For the BBGKY method, the computational cost for this specific model is similar to that of the exact time propagation in a two-photon subspace. This makes BBGKY, also in view of the highly accurate results it provides, the most rigorous method for the model when the finite-size corrections are included. Depending on the selected approximation and numerical details such as sparsity, it however features a rather unfavourable high-order polynomial scaling, which restricts this method to comparatively small systems. In terms of the other semiclassical approaches, we have found that different numbers of trajectories are needed to converge different observables to the same statistical accuracy. In particular, for subsystem observables like the atomic populations the FSSH and MTEF data are relatively well converged with $10^3$--$10^4$ trajectories, while LSC and FBTS require $\sim\!10^4$--$10^5$. However, for observables related to the photon field, such as the intensity, the results remain rather noisy for all the trajectory-based simulation methods with $10^5$ trajectories. As all the independent-trajectory methods employ a Monte Carlo sampling procedure, their statistical error is proportional to the inverse square root of the number of trajectories in the ensemble. However, as shown in this work, we have observed that more trajectories are required to converge photon-field (environmental) quantities than atomic (subsystem) quantities to within the same relative error. Further, as the trajectories are not coupled during their time evolution, the corresponding algorithms can be implemented in a highly parallel manner to reduce the total run-time.
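The vacuum Wigner sampling and the $1/\sqrt{\mathcal{N}}$ error scaling discussed above can be sketched as follows (illustrative frequencies and ensemble size, not the production settings of this work):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_vacuum_wigner(omegas, n_traj):
    """Draw (Q, P) from rho_W ~ prod_alpha exp(-P_a^2/omega_a - omega_a*Q_a^2),
    i.e. Q_a ~ N(0, 1/(2*omega_a)) and P_a ~ N(0, omega_a/2), independently."""
    Q = rng.normal(0.0, np.sqrt(1.0 / (2.0 * omegas)), size=(n_traj, omegas.size))
    P = rng.normal(0.0, np.sqrt(omegas / 2.0), size=(n_traj, omegas.size))
    return Q, P

omegas = np.linspace(0.1, 5.0, 20)
Q, P = sample_vacuum_wigner(omegas, 100_000)

# the ensemble reproduces the vacuum second moments <Q_a^2> = 1/(2 omega_a), <P_a^2> = omega_a/2
assert np.allclose(Q.var(axis=0), 1.0 / (2.0 * omegas), rtol=0.05)
assert np.allclose(P.var(axis=0), omegas / 2.0, rtol=0.05)

# the Monte Carlo standard error of an averaged observable falls off as 1/sqrt(N_traj)
std_err = Q[:, 0].std() / np.sqrt(Q.shape[0])
```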
\section{Conclusion}\label{Se4} In this work we have adapted and benchmarked a variety of approximate quantum dynamics methods, i.e., multi-trajectory Ehrenfest (MTEF), linearized and partially linearized semiclassical mapping (LSC and FBTS) methods, Tully's fewest switches surface hopping (FSSH), as well as a set of finite-size-corrected second Born BBGKY truncations, to treat correlated electron-photon systems. We have applied these methods to model cavity-bound atomic systems in QED in order to simulate the one- and two-photon spontaneous emission and interference processes, and to analyze the performance of these approaches. Consistently for the one- and two-photon emission processes, we find that MTEF, LSC and FBTS are able to qualitatively characterize the correct dynamics. The initial spontaneous emission, the associated atomic occupations and the emitted photon wavepacket improve from qualitative agreement within MTEF, to slightly better agreement while overestimating the decay rate in LSC, to almost quantitative agreement using FBTS. However, these methods perform poorly when interference patterns emerge in the reabsorbed and re-emitted photonic wavepacket; MTEF totally fails to capture any of the interference effects associated with the excitation and re-emission processes, while LSC and FBTS qualitatively recover some of the characteristics of the outgoing intensity. The FSSH method, in contrast, is not capable of properly reproducing the wavefront of the photonic wavepacket, and furthermore exhibits an incorrect time delay in the re-emitted wavepacket. Consequently this technique performs rather poorly compared to the other trajectory-based methods. It is possible, however, that improved versions of this algorithm may offer improvement over these initial results.
The self-consistent perturbative expansion form of the BBGKY hierarchy behaves exceptionally well when restricted to the physical subspace, although small unphysical effects such as slightly negative photon intensities can still result. Finally, all methods investigated here can, in fact, capture the bound photonic state. Here MTEF and BBGKY present the best performance, while LSC and FBTS consistently overestimate the amplitude of this feature. For the two-photon emission process we focused on the most promising approaches considering the balance between performance and computational scalability. Here we find, in accordance with the two-level system, that MTEF, LSC and FBTS are able to qualitatively characterize the correct dynamics of this process, but suffer from quantitative drawbacks, especially pronounced for interference features. Moreover, as experimental advances drive the need for realistic \emph{ab initio} descriptions of light-matter coupled systems, trajectory-based quantum-classical algorithms emerge as a promising route towards treating more complex and realistic systems, more precisely, towards extending to molecular systems beyond the few-level description and incorporating ionic dynamics. In particular, combining the \textit{ab initio} light-matter coupling methodology recently presented by Jest\"adt et al.\cite{JRORA18} with the multi-trajectory approach could provide a computationally feasible way to simulate photon-field fluctuations and correlations in realistic three-dimensional systems, and work along these lines is already in progress. \section{Acknowledgements} We would like to thank J. Flick and N. T. Maitra for insightful discussions and acknowledge financial support from the European Research Council (ERC-2015-AdG-694097). AK acknowledges support from the National Sciences and Engineering Research Council (NSERC) of Canada. \bibliographystyle{unsrt}
\section{Introduction} A famous conjecture of Heil, Ramanathan, and Topiwala~\cite{HRT96}, often called the HRT-conjecture, states that finitely many time-frequency shifts of a non-zero $L^2$-function are linearly independent. Denoting a time-frequency shift of $g\in L^2(\rd) $ along $z=(x,\xi ) \in {\bR^{2d}} $ by $$ \pi (z) g(t) = M_\xi T_x g(t) = e^{2\pi i \xi\cdot t} g(t-x), \qquad \qquad t\in \bR^d\, , $$ the question is whether $$ \sum _{j=1}^n c_j \pi (z_j) g = 0 \quad \Longrightarrow \quad c_j = 0 \qquad \forall j \, , $$ for arbitrary points $z_1, \dots , z_n\in {\bR^{2d}} $. To this day the conjecture remains open; it is known to be true only under restrictive conditions on either $g$ or the set $\{z_j \}$. (a) Linnell's Theorem~\cite{Lin99}: Let $\Lambda \subseteq {\bR^{2d}} $ be a lattice and $g\in L^2(\rd) $ arbitrary; then for every finite subset $F\subseteq \Lambda $ the set $\{\pi (\lambda ) g: \lambda \in F\}$ is linearly independent. This is a deep result obtained with von Neumann algebra techniques; special cases have been reproved with more analytic arguments in~\cite{BS10,DG13a}. (b) Bownik and Speegle~\cite{BS13} proved the HRT-conjecture for $g$ with one-sided super-exponential decay. This result contains the early results of~\cite{HRT96}. In view of these general results, it is rather surprising that it is not known whether four arbitrary time-frequency shifts of $g\in L^2(\rd) $ are linearly independent. Even for rather special constellations the linear independence of four time-frequency shifts is highly non-trivial~\cite{DZ12}. Further contributions to the HRT-conjecture investigate the kernel of a linear combination of time-frequency shift operators~\cite{balan08} and estimates of the frame bounds of finite sets of time-frequency shifts~\cite{CL01}. For a detailed survey of the linear independence conjecture we refer to Heil's article~\cite{heil06}.
In this note we adopt a different point of view and investigate the numerical linear independence of time-frequency shifts. In other words, can we determine numerically whether a given finite set of time-frequency shifts is linearly independent? We will argue that the answer is negative. To formulate a precise result, we will study the lower Riesz bound of finite sections of a Gabor frame and estimate its asymptotics. By taking larger and larger finite sections, the lower Riesz bound converges to zero, and in many cases this convergence is super-fast. Thus from a numerical point of view even small sets of time-frequency shifts may look linearly dependent. The main result will illustrate the spectacular difference between a conjectured mathematical truth and a computationally observable truth. Let us explain the problem in detail. Let $\lambda = (\lambda _1, \lambda _2) \in \bR^d \times \bR^d \simeq {\bR^{2d}} $ be a point in the time-frequency\ plane (or phase space in the terminology of physics). The time-frequency shift\ $\pi (\lambda )$ acts on a function $g\in L^2(\rd) $ by $$ \pi (\lambda ) g(t) = e^{2\pi i \lambda _2\cdot t} g(t-\lambda _1) \, . $$ For fixed $g \in L^2(\rd) $ and a countable subset $\Lambda \subseteq {\bR^{2d}} $, the set $\mathscr{G} (g, \Lambda ) = \{ \pi (\lambda ) g: \lambda \in \Lambda \} $ is called a Gabor system, and for $n>0$ the set $$ \mathscr{G} (g, \Lambda _n ) = \mathscr{G} (g, \Lambda \cap B_n(0)) = \{ \pi (\lambda ) g: \lambda \in \Lambda , |\lambda | \leq n\} $$ is a \emph{finite section} of $\mathscr{G} (g, \Lambda ) $. We are interested in the quantity \begin{equation} \label{eq:10} A_n = A(g, \Lambda _n) = \min _{c\neq 0} \frac{\|\sum _{|\lambda | \leq n} c_\lambda \pi (\lambda ) g\|_2^2}{\sum _{|\lambda |\leq n} |c_\lambda |^2} \, . \end{equation} Since $\mathscr{G} (g, \Lambda _n )$ spans a finite-dimensional subspace of $L^2(\rd) $, the minimum exists.
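The quantity $A_n$ can be explored numerically in a toy discrete analogue on $\bC^L$ (an illustrative sketch, not the continuous setting of this note): the minimum of the quotient in \eqref{eq:10} is the smallest eigenvalue of the Gram matrix of the finite system, and it decays rapidly as the section grows.

```python
import numpy as np

L = 60
t = np.arange(L)
g = np.exp(-np.pi * ((t - L // 2) / np.sqrt(L)) ** 2)  # discretized Gaussian window
g /= np.linalg.norm(g)

def tf_shift(g, x, xi):
    """Discrete time-frequency shift: modulation after cyclic translation."""
    return np.exp(2j * np.pi * xi * np.arange(len(g)) / len(g)) * np.roll(g, x)

# an oversampled lattice of time-frequency shifts (illustrative choice)
lam = [(x, xi) for x in range(0, L, 2) for xi in range(0, L, 2)]

def lower_riesz_bound(points):
    V = np.array([tf_shift(g, x, xi) for (x, xi) in points])  # rows: pi(lambda) g
    G = V.conj() @ V.T                                        # Gram matrix
    return np.linalg.eigvalsh(G)[0].real                      # smallest eigenvalue

# growing finite sections: the lower Riesz bound is non-increasing (eigenvalue
# interlacing) and collapses to ~0 once the vectors outnumber the dimension
bounds = [lower_riesz_bound(lam[:n]) for n in (10, 40, 120)]
```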
Moreover, $A_n= 0$ if and only if\ $\mathscr{G} (g, \Lambda _n ) $ is linearly dependent. Thus we may take $A_n$ as a quantitative measure for the numerical linear dependence of $\mathscr{G} (g, \Lambda _n ) $. Our main result is an asymptotic estimate for $A_n$ as $n\to \infty $. Before formulating this estimate, we need to explain some of the basic concepts of Gabor analysis and time-frequency analysis. We refer to the textbooks~\cite{chr03,book,heil11} for detailed expositions of time-frequency analysis\ and frame theory. A Gabor system $\mathscr{G} (g, \Lambda ) $ is a frame, a so-called Gabor frame, if there exist frame bounds $A,B>0$, such that $$ A \|f\|_2^2 \leq \sum _{\lambda \in \Lambda } |\langle f, \pi (\lambda ) g\rangle |^2 \leq B \|f\|_2^2 \quad \forall f\in L^2(\rd) \, . $$ For an equivalent and more suitable condition we define the synthesis operator $D_{_{g,\Lambda} } $ by $$ D_{_{g,\Lambda} } c = \sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda )g \, , $$ which is well-defined on finite sequences $c$. Then $\mathscr{G} (g, \Lambda ) $ is a frame if and only if\ $D_{_{g,\Lambda} } : \ell ^2(\Lambda ) \to L^2(\rd) $ is bounded and maps onto $L^2(\rd) $. If, in addition to the frame property, $\textrm{ker}\, D_{g,\Lambda} = \{0\}$, then $\mathscr{G} (g, \Lambda ) $ is a Riesz basis for $L^2(\rd) $. In this case there exist $A',B' >0$, such that $$ A' \|c\|_2^2 \leq \|\sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda )g\|_2^2 \leq B' \|c\|_2^2 \quad \forall c\in \ell ^2(\Lambda ) \, . $$ In other words, a Riesz sequence is $\ell ^2$-linearly independent. In particular, every finite subset of a Riesz sequence $\mathscr{G} (g, \Lambda ) $ is linearly independent. If $\mathscr{G} (g, \Lambda ) $ is a frame, but not a Riesz basis, then by definition $\textrm{ker}\, D_{g,\Lambda} \neq \{0\}$. However, if the linear independence conjecture is true, then certainly $\mathrm{ker}\, D_{g, \Lambda _n} = \{ 0 \}$ for all $n\in \bN $.
This means that for $n\to \infty $, the finite sets $\mathscr{G} (g, \Lambda _n ) $ must get ``more and more linearly dependent''. Quantitatively, this means that the lower Riesz bound $A_n $ must tend to $0$. Our main theorem shows that this transition to linear dependence may happen very fast. \begin{tm} \label{tm-main} Let $v: {\bR^{2d}} \to \bR ^+$ be a submultiplicative weight function such that $\lim _{n\to \infty } v(nz)^{1/n} =1 $ for all $z\in {\bR^{2d}} $ ($v$ satisfies the Gelfand-Raikov-Shilov condition). Assume that \begin{equation} \label{eq:11} \int_{\rdd} |\langle g, \pi (z) g\rangle | v(z) \, dz < \infty \, . \end{equation} If $\mathscr{G} (g, \Lambda ) $ is a frame for $L^2(\rd) $, but not a Riesz basis, then the lower Riesz bound $A_n$ of $\mathscr{G} (g, \Lambda _n )$ decays like \begin{equation} \label{eq:3} A_n \leq C \sup _{|\lambda | >n } v(\lambda )^{-2} \, . \end{equation} \end{tm} For the polynomial weight $v(z) = (1+|z|)^s$, the lower bound decays like $A_n = \mathscr{O} ( n^{-2s} ) $, and for the sub-exponential weight $v(z) = e^{a|z|^b}$ with $a>0$ and $0<b<1$ we have $A_n = \mathscr{O} ( e^{-2an^b}) $. This means that the lower bound $A_n $ tends to zero almost exponentially. The finite Gabor system $\mathscr{G} (g, \Lambda _n )$ is extremely badly conditioned, and numerically $\mathscr{G} (g, \Lambda _n ) $ behaves like a linearly dependent set. On the other hand, if $\Lambda $ is a lattice, then by Linnell's theorem $\mathscr{G} (g, \Lambda _n )$ is always linearly independent. Theorem~\ref{tm-main} thus exhibits a striking contrast between the numerical linear dependence of finite sets of time-frequency shifts and their conjectured abstract linear independence. \vspace{3 mm} In the remainder of this note we prepare the necessary background on time-frequency analysis\ and spectral invariance of matrix algebras and then prove Theorem~\ref{tm-main} and a variation.
The proof will be relatively short, but it combines several non-trivial statements from harmonic analysis. In a sense, we extend the quantitative analysis of the finite section method in~\cite{GRS10} to elements in the kernel of a matrix. \vspace{3 mm} \textbf{Operators related to Gabor systems.} If $D_{_{g,\Lambda} } $ is bounded from $\ell ^2(\Lambda ) $ to $L^2(\rd) $, then $\mathscr{G} (g, \Lambda ) $ is called a Bessel sequence. Its adjoint operator is the analysis operator $D^*_{_{g,\Lambda} } f = \big( \langle f, \pi (\lambda ) g\rangle : \lambda \in \Lambda \big)\in \ell ^2(\Lambda ) $ for $f\in L^2(\rd) $. We also consider the frame operator of $\mathscr{G} (g, \Lambda ) $, defined to be \begin{equation} \label{eq:12} S_{_{g,\Lambda} } f = D_{_{g,\Lambda} } D_{_{g,\Lambda} } ^* f = \sum _{\lambda \in \Lambda } \langle f, \pi (\lambda ) g\rangle \pi (\lambda ) g \ \end{equation} for $f$ in a suitable space of test functions. The Gram matrix is the matrix $ G_{_{g,\Lambda} } = D_{_{g,\Lambda} } ^* D_{_{g,\Lambda} }$ acting on $\ell ^2(\Lambda )$ with entries $$ (G_{_{g,\Lambda} } )_{\lambda , \mu } = \langle \pi (\mu )g, \pi (\lambda )g\rangle \, , \qquad \lambda , \mu \in \Lambda \, . $$ The algebraic identity \begin{align*} \| \sum _{|\lambda | \leq n } c_\lambda \pi (\lambda )g\|_2^2 = \sum _{|\lambda | , |\mu | \leq n} \langle \pi (\lambda ) g, \pi (\mu )g\rangle c_\lambda \overline{c_\mu } \end{align*} shows that the Riesz bounds of $\mathscr{G} (g, \Lambda _n )$ are just the extremal eigenvalues of the finite sections of the Gram matrix of $\mathscr{G} (g, \Lambda ) $. \vspace{3 mm} \textbf{Weights and modulation spaces.} To measure the time-frequency\ concentration of a function, we use weighted modulation spaces.
In time-frequency analysis\ one uses several conditions for weight functions~\cite{gro07c}: \\ (i) a weight $v: {\bR^{2d}} \to \bR ^+$ is \emph{submultiplicative}, if $v(z_1+z_2) \leq v(z_1) v(z_2) $ for all $z_1,z_2\in {\bR^{2d}} $, and \\ (ii) $v$ is \emph{subconvolutive}, if $(v^{-1} \ast v^{-1} )(z) \leq C v(z)^{-1} $ for all $z\in {\bR^{2d}} $. \\ (iii) A weight $v$ satisfies the \emph{Gelfand-Raikov-Shilov (GRS) condition} if $$\lim _{n\to \infty } v(nz)^{1/n} =1 \quad \text{ for all } z\in {\bR^{2d}} \, . $$ The main examples for weights are the polynomial weights $z\mapsto (1+|z|)^s$ for $s\geq 0$ and the sub-exponential weights $z\mapsto e^{a|z|^b}$ for $a>0$ and $0<b<1$. The exponential weight $z\mapsto e^{a|z|}$ for $a>0$ does not satisfy the GRS-condition. Let $\phi (t) = e^{-\pi t^2}$ be the Gaussian and $v$ a weight function on ${\bR^{2d}} $. A function $g$ belongs to the modulation space\ $M^1_v(\bR^d )$, if $$ \|g\|_{M^1_v} := \int_{\rdd} |\langle g, \pi (z) \phi \rangle | \, v(z) \, dz < \infty \, . $$ Likewise $g\in M^\infty _v(\bR^d )$, if $$ \|g\|_{M^\infty _v} := \sup _{z\in {\bR^{2d}} } |\langle g, \pi (z) \phi \rangle | \, v(z) < \infty \, . $$ From the theory of modulation spaces we need the following facts about the spaces $M^1_v$ and $M^\infty _v$. See \cite{book} and \cite{feiSTSIP} for a historical survey of modulation spaces. \begin{lemma} \label{amalg} (A) Assume that $v$ is a submultiplicative weight on ${\bR^{2d}} $. Then the following conditions are equivalent: (i) $g\in M^1_v (\bR^d )$ (ii) $\int_{\rdd} |\langle g, \pi (z) g \rangle | \, v(z) \, dz < \infty $. (iii) The function $z\mapsto \langle g, \pi (z) g\rangle $ belongs to the amalgam space $W(C, \ell ^1_v)$, i.e., it is continuous and \begin{equation} \label{eq:c7} \sum _{k\in {\bZ^{2d}} } \sup _{z\in [0,1]^{2d}} |\langle g, \pi (k+z)g\rangle | v(k) < \infty \, . \end{equation} (B) Assume that $v$ is submultiplicative and subconvolutive.
Then $g\in M^\infty _v(\bR^d )$ if and only if\ $ \sup _{z\in {\bR^{2d}} }|\langle g, \pi (z) g \rangle | \, v(z) <\infty $. \end{lemma} For a proof see~\cite{book}, Propositions~12.1.2, 12.1.11 and Theorem~13.5.3. Note that condition~\eqref{eq:11} in Theorem~\ref{tm-main} amounts to saying that $g\in M^1_v(\bR^d )$. \vspace{3 mm} \textbf{Spectral invariance of matrices with off-diagonal decay.} Let $\Lambda $ be a countable set in ${\bR^{2d}} $ satisfying the condition $$ \max _{z\in {\bR^{2d}} } \# \{ \lambda \in \Lambda : |\lambda - z | \leq 1\} < \infty \, ; $$ such a set $\Lambda $ is said to be relatively separated. Let $v$ be a submultiplicative weight on ${\bR^{2d}} $. We will use the following classes of infinite matrices over the index set $\Lambda $. (i) The class $\mathcal{C} _v^\infty (\Lambda )$ consists of matrices $A = (a_{\lambda \mu })_{\lambda ,\mu \in \Lambda }$ with \emph{off-diagonal decay} $v^{-1} $ and is equipped with the norm \begin{equation} \label{eq:5} \|A\|_{\mathcal{C}_v ^\infty } = \sup _{\lambda ,\mu \in \Lambda } |a_{\lambda \mu }| v(\lambda -\mu ) \, . \end{equation} For polynomial weights $v(z) = (1+|z|)^s$, $\mathcal{C}_v ^\infty $ is often called the Jaffard class. (ii) A matrix $A $ belongs to the class $\mathcal{C}_v = \mathcal{C}_v (\Lambda )$ of \emph{convolution-dominated matrices}, if there exists an envelope function $\Theta \in W(C,\ell ^1_v)$, such that $$ |a_{\lambda \mu } | \leq \Theta (\lambda -\mu ) \qquad \forall \lambda , \mu \in \Lambda \, . $$ The norm on $\mathcal{C}_v $ is $\|A\|_{\mathcal{C}_v } = \inf \{ \|\Theta \|_{W(C,\ell ^1_v)} : \Theta \,\, \text{ is an envelope }\}$. If $v$ is submultiplicative, then $\mathcal{C}_v $ is a Banach algebra. If $v^{-1} \in \ell ^1(\Lambda ) $ and $v $ is subconvolutive, then $\mathcal{C}_v ^\infty $ is a Banach algebra. Both algebras can be embedded into the $C^*$-algebra of bounded operators $\mathscr{B}( \ell ^2(\Lambda ))$.
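The inverse-closedness expressed by the spectral invariance theorem stated next (Theorem~\ref{spec}) can also be observed in a small numerical experiment. The following sketch is purely illustrative: the concrete matrix, the truncation of the index set to $\{0,\dots,N-1\}$, and the weight parameter $s=4$ are invented for the purpose of the test. It builds a matrix in the Jaffard class for $v(z)=(1+|z|)^{4}$ and checks that its inverse still has a finite (and small) $\mathcal{C}_v^\infty$-norm.

```python
import numpy as np

# Illustrative check (not part of the proof): a matrix with polynomial
# off-diagonal decay |a_{jk}| <= C (1+|j-k|)^{-s} whose inverse exhibits
# the same type of decay.  The index set is truncated to {0,...,N-1}
# and the concrete matrix A = I + B below is an invented example with
# ||B|| < 1, so that A is invertible.
N, s = 80, 4
j = np.arange(N)
dist = np.abs(j[:, None] - j[None, :])
A = np.eye(N) + 0.3 * (1.0 + dist) ** (-s)

Ainv = np.linalg.inv(A)

def jaffard_norm(M, s):
    """sup_{j,k} |M_{jk}| (1+|j-k|)^s, the C_v^infty norm for v(z)=(1+|z|)^s."""
    return np.max(np.abs(M) * (1.0 + dist) ** s)

print(jaffard_norm(A, s), jaffard_norm(Ainv, s))
```

The weighted sup-norm of the inverse stays of order one, i.e., the entries of $A^{-1}$ decay at the same polynomial rate as those of $A$, whereas a generic dense inverse would show no such structure.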
The most important result about these matrix algebras is their spectral invariance, asserting that the off-diagonal decay is preserved under inversion. \begin{tm}\label{spec} Assume that $\Lambda $ is relatively separated and that $v$ is a submultiplicative weight satisfying the GRS-condition. (i) If $A\in \mathcal{C}_v $ and $A $ is invertible on $\ell ^2(\Lambda )$, then $A^{-1} \in \mathcal{C}_v $. (ii) Assume in addition that $v$ is subconvolutive. If $A\in \mathcal{C}_v ^\infty $ and $A $ is invertible on $\ell ^2(\Lambda )$, then $A^{-1} \in \mathcal{C}_v ^\infty $. \end{tm} We say that both $\mathcal{C}_v $ and $\mathcal{C}_v ^\infty $ are inverse-closed in $\mathscr{B} (\ell ^2(\Lambda ))$. Theorem~\ref{spec} has been proved several times and on several levels of generality. We refer to the original work of Baskakov~\cite{Bas90}, Kurbatov~\cite{Kur90}, Gohberg-Kaashoek-Woerdeman~\cite{GKW89}, and Sj\"ostrand~\cite{Sjo95} for (i), and to Baskakov~\cite{Bas90}, Jaffard~\cite{jaffard90}, and~\cite{GL04a} for (ii). The attributions for the algebra $\mathcal{C}_v$ are a bit subtle, because the cited references deal only with the case when $\Lambda $ is a lattice. The case of a relatively separated index set $\Lambda $ follows by a simple reduction described in~\cite{BCHL06a}: since $\max \# \big(\Lambda \cap (k+[0,1]^{2d})\big) = N <\infty$, one can define an explicit map $a: \Lambda \to {\bZ^{2d}} $ that preserves the off-diagonal decay properties after re-indexing a given matrix $A$. For the spectral invariance one may therefore assume without loss of generality that $\Lambda $ is a lattice. Also, Sj\"ostrand's argument~\cite{Sjo95} works for relatively separated index sets and weights without any change of the proof. An extended survey about spectral invariance including matrix algebras can be found in~\cite{Gr10}. These matrix classes arise naturally in the analysis of Gabor frames, as is shown by the following lemma.
\begin{lemma} \label{mat} Assume that $\Lambda \subseteq {\bR^{2d}} $ is relatively separated and that $v$ is a submultiplicative weight on ${\bR^{2d}} $. (i) If $g\in M^1_v(\bR^d )$, then the Gramian $G_{_{g,\Lambda} } $ of $\mathscr{G} (g, \Lambda ) $ is in $\mathcal{C}_v (\Lambda )$. (ii) If, in addition, $v$ is subconvolutive and if $g\in M^\infty _v(\bR^d )$, then $G_{_{g,\Lambda} } \in \mathcal{C}_v ^\infty (\Lambda ) $. \end{lemma} \begin{proof} Since $$ |(G_{g,\Lambda } )_{\lambda , \mu }| = | \langle \pi (\mu )g, \pi (\lambda )g\rangle | = |\langle g, \pi (\lambda -\mu )g\rangle | \, , $$ we may take $\Theta (z) = |\langle g, \pi (z) g\rangle | $ as an envelope function. If $g\in M^1_v(\bR^d )$, then $\Theta \in W(C,\ell ^1_v) $ by Lemma~\ref{amalg}. (ii) is clear from the definitions. \end{proof} \vspace{3 mm} \textbf{Proof of Theorem~\ref{tm-main}.} Theorem~\ref{tm-main} follows from the combination of several observations. First an easy lemma. \begin{lemma}\label{easy} Assume that $\mathscr{G} (g, \Lambda ) $ is a Bessel sequence with bound $B$ and that $\textrm{ker}\, D_{g,\Lambda} \neq \{0\}$. If $c\in \textrm{ker}\, D_{g,\Lambda} , \|c\|_2=1$, then for sufficiently large $n$ we have \begin{equation} \label{eq:c8} A_n \leq 2B \sum _{\lambda \in \Lambda : |\lambda |>n} |c_\lambda |^2 \, . \end{equation} \end{lemma} \begin{proof} We split the sum $\sum _{\lambda \in \Lambda } c_\lambda \pi (\lambda )g = 0$ into two parts and then take norms. We obtain \begin{align*} \|\sum _{|\lambda | \leq n} c_\lambda \pi (\lambda )g\|_2^2&= \|\sum _{|\lambda | > n} c_\lambda \pi (\lambda )g\|_2^2 \leq B \sum _{|\lambda | >n} |c_\lambda |^2 \, . 
\end{align*} For $n$ large enough we have $\sum _{|\lambda |\leq n} |c_\lambda |^2 \geq \tfrac{1}{2} $, whence the lower Riesz bound $A_n$ of $\mathscr{G} (g, \Lambda _n )$ obeys the following estimate: \begin{equation} \label{eq:13} A_n = \inf _{c\neq 0} \frac{\| \sum _{|\lambda |\leq n} c_\lambda \pi (\lambda )g \|_2^2}{\sum _{|\lambda |\leq n} |c_\lambda | ^2} \leq 2B \sum _{|\lambda | > n} |c_\lambda |^2\, . \end{equation} This estimate holds for every normalized $c\in \mathrm{ker}\, D_{g,\Lambda } $. \end{proof} Lemma~\ref{easy} states the obvious fact that the finite sets $\mathscr{G} (g, \Lambda _n )$ become ``more and more linearly dependent'' in the sense that $A_n \to 0$. To estimate the asymptotic behavior of $A_n$ more precisely, we need to construct a ``bad'' sequence $c$ with fast decay in $\mathrm{ker}\, D_{g,\Lambda }$. The possible decay depends on the time-frequency\ concentration of the window $g$, as we will prove now. \begin{prop} \label{ker} If $g\in M^1_v(\bR^d )$ and $\mathscr{G} (g, \Lambda ) $ is a frame, but not a Riesz basis for $L^2(\rd) $, then $\mathrm{ker}\, D_{g,\Lambda } \cap \ell ^1_v(\Lambda ) \neq \{0\}$. \end{prop} \begin{proof} 1. Recall that $G_{g,\Lambda } = D_{g,\Lambda } ^* D_{g,\Lambda }$ is the Gramian operator associated to $\mathscr{G} (g, \Lambda ) $. Consequently, $c\in \mathrm{ker} \, D_{g,\Lambda } $ if and only if\ $\|D_{g,\Lambda } c\|_2^2 = \langle G_{g,\Lambda } c,c \rangle = 0$ if and only if\ $c\in \mathrm{ker} \, G_{g,\Lambda } $. \\ 2. To relate the spectrum of the frame operator $S_{_{g,\Lambda} } $ on $L^2(\rd) $ and of $G_{_{g,\Lambda} } $ on $\ell ^2(\Lambda )$, we use the identity $$ \sigma (S_{g,\Lambda }) \cup \{0\} = \sigma (D_{g,\Lambda } D^*_{g,\Lambda }) \cup \{0\} = \sigma (D^*_{g,\Lambda } D_{g,\Lambda }) \cup \{0\} = \sigma (G_{g,\Lambda }) \cup \{0\} \, , $$ which follows from a purely algebraic manipulation~\cite[p.\ 199]{conway90}.
From this identity we draw the following conclusions: Since $\mathscr{G} (g, \Lambda ) $ is a frame, we have $\sigma (S_{g,\Lambda }) \subseteq [A,B]$ for $A,B>0$. Since $\mathscr{G} (g, \Lambda ) $ is not a Riesz basis, $\mathrm{ker}\, G_{_{g,\Lambda} } \neq \{0\}$ and thus $0\in \sigma (G_{_{g,\Lambda}} )$. Consequently, \begin{equation} \label{eq:c9} \sigma (G_{_{g,\Lambda}} ) \subseteq \{0\} \cup [A,B] \, . \end{equation} The main point is the spectral gap between $0$ and $A$. 3. We now apply an argument developed by Baskakov~\cite{Bas97a} to show that the orthogonal projection onto the kernel of $G_{_{g,\Lambda} } $ is a matrix with off-diagonal decay. Let $P$ be the orthogonal projection from $\ell ^2(\Lambda )$ onto $\mathrm{ker}\, G_{_{g,\Lambda} }$. With the Riesz functional calculus~\cite{conway90}, this projection can be written as \begin{equation} \label{eq:6} P = \frac{1}{2\pi i} \int _\gamma (z I - G_{g,\Lambda })^{-1} \, dz \, , \end{equation} where $\gamma $ is a closed curve in $\bC $ around $0$ disjoint from the interval $[A,B]$, for instance $\gamma (t) = \frac{A}{2} e^{2\pi i t}, t\in [0,1]$. 4. Spectral invariance: By Lemma~\ref{mat} $G_{_{g,\Lambda} }$ and $z\mathrm{I} - G_{_{g,\Lambda} }$ are matrices in $ \mathscr{C} _v $. Since $zI- G_{g,\Lambda }$ is invertible for $z\in \gamma $, Theorem~\ref{spec} implies that $(zI - G_{g,\Lambda })^{-1} $ is also in $\mathscr{C} _v$. From the continuity of the resolvent function $z\mapsto (zI - G_{g,\Lambda } )^{-1} $ we conclude that $\sup _{z\in \gamma } \|(zI-G_{g,\Lambda })^{-1} \|_{\mathscr{C} _v} <\infty $. Consequently, the integral defining the orthogonal projection onto the kernel of $G_{g,\Lambda } $ is in the algebra of convolution-dominated matrices $\mathscr{C} _v$: $$ P \in \mathscr{C} _v \, . $$ This means that there exists an envelope $\Theta \in W(C,\ell ^1_v)$, such that $|P_{\lambda \mu } | \leq \Theta (\lambda -\mu )$. 
If $\{ e_\lambda : \lambda \in \Lambda \}$ with $e_\lambda (\mu ) = \delta _{\lambda , \mu }$ denotes the standard orthonormal basis of $\ell ^2(\Lambda )$, then $$ |\langle e_\lambda , Pe_\mu \rangle | = |P_{\lambda , \mu } | \leq \Theta (\lambda -\mu ) \, , $$ or, equivalently, $Pe_\mu \in \ell ^1_v (\Lambda )$ for all $\mu \in \Lambda $. As the projection $P$ is non-zero by assumption, $Pe_\mu \neq 0$ for some $\mu$, and thus we have found a non-trivial vector in $\mathrm{ker}\, G_{_{g,\Lambda} } \cap \ell ^1_v = \textrm{ker}\, D_{g,\Lambda} \cap \ell ^1_v $, and we are done. \end{proof} Combining Lemma~\ref{easy} and Proposition~\ref{ker}, we now can conclude the proof of Theorem~\ref{tm-main}. Choose an $\ell ^2$-normalized $c\in \mathrm{ker}\, D_{g,\Lambda} \cap \ell ^1_v(\Lambda )$. Then by \eqref{eq:13} we obtain that \begin{align} A_n &\leq 2 B \sum _{|\lambda | >n} |c_\lambda |^2 \notag \\ &\leq 2 B \sup _{|\lambda | >n} v(\lambda )^{-2} \, \sum _{|\lambda | >n} |c_\lambda |^2 v(\lambda )^2 \notag \\ & \leq 2 B \sup _{|\lambda | >n} v(\lambda )^{-2} \, \sum _{|\lambda | >n} |c_\lambda | v(\lambda ) = C \sup _{|\lambda | >n} v(\lambda )^{-2} \, . \label{final} \end{align} Theorem~\ref{tm-main} is proved completely. \hfill $\Box$ The same proof yields the following variation of Theorem~\ref{tm-main}. \begin{tm} \label{tm-mainb} Let $v$ be a submultiplicative and subconvolutive weight function satisfying the Gelfand-Raikov-Shilov condition. Assume that $g\in M^\infty _v(\bR^d )$ and that $\mathscr{G} (g, \Lambda ) $ is a frame for $L^2(\rd) $, but not a Riesz basis. Then the lower Riesz bound $A_n$ of $\mathscr{G} (g, \Lambda _n )$ decays like \begin{equation} \label{eq:3a} A_n \leq C \sum _{|\lambda | >n} v(\lambda )^{-2} \, . \end{equation} \end{tm} \begin{proof} The proof is similar, we just use the versions of Lemma~\ref{mat} and Theorem~\ref{spec} that are valid for $M^\infty _v(\bR^d )$. 
Instead of Proposition~\ref{ker} we use the following statement: If $g\in M^\infty _v(\bR^d )$ and $\mathscr{G} (g, \Lambda ) $ is a frame, but not a Riesz basis for $L^2(\rd) $, then $\textrm{ker}\, D_{g,\Lambda} \cap \ell ^\infty _v (\Lambda ) \neq \{0\}$. Equation~\eqref{final} is replaced by \begin{align*} A_n &\leq 2B \sum _{|\lambda | >n} |c_\lambda |^2 \notag \\ &\leq 2 B \sup _{|\lambda | >n} |c_\lambda |^2 v(\lambda )^{2} \, \sum _{|\lambda | >n} v(\lambda )^{-2} \, . \end{align*} \end{proof} \noindent\textsl{REMARKS:}\ 1. Note the importance of the assumptions: $\mathscr{G} (g, \Lambda ) $ must be a frame so that there exists a spectral gap for the Gramian. Theorem~\ref{tm-main} fails when $\mathscr{G} (g, \Lambda ) $ is not a frame and the spectral gap is missing. This may be the case for Gabor systems at the critical density: for instance, with $\phi (t) = e^{-\pi t^2}$ the Gabor system $\mathscr{G} (\phi , \bZ ^2)$ is neither a frame nor a Riesz basis (but is still complete in $L^2(\bR )$). In this case, the asymptotic decay of the lower Riesz bound $A_n$ can be investigated with different methods, see~\cite{Ban14}. 2. Theorem~\ref{tm-main} quantifies the degree of linear dependence of the finite sets $\mathscr{G} (g, \Lambda _n ) $. Note that good time-frequency\ localization of $g$ (corresponding to fast growth of $v$) yields a faster decay of the constants $A_n$. This is somewhat counter-intuitive, because the fast decay of $z\mapsto \langle g , \pi (z) \phi \rangle $ implies that the function $z \mapsto |\langle g , \pi (z) \phi \rangle |^2$ is sharply peaked in ${\bR^{2d}} $, and shifts of sharply peaked bumps (corresponding to the time-frequency shifts $\pi (\lambda )g$) tend to be linearly independent with good constants. According to Theorem~\ref{tm-main} this is not the case here.
This phenomenon indicates the existence of subtle cancellations in linear combinations of time-frequency shifts and seems to be yet another manifestation of the uncertainty principle. 3. To obtain an upper estimate for $A_n$, we needed to find only a \emph{single} sequence $c \in \ell ^2(\Lambda )$ such that $ \|\sum _{|\lambda |\leq n} c_\lambda \pi (\lambda )g\|_2^2 \approx A_n \|c\|_2^2$. In the course of the proof we have constructed such a sequence by using the spectral invariance and the properties of the window function $g$. It is natural to ask whether the decay rate of $A_n$ in Theorem~\ref{tm-main} is best possible. This question, however, is much more difficult, because it amounts to showing that $ \|\sum _{|\lambda |\leq n} c_\lambda \pi (\lambda )g\|_2^2 \geq \mathrm{const} \, A_n \|c\|_2^2$ for \emph{all} $c$. Since every finite set of time-frequency shifts can be extended to a Gabor frame, this statement seems equivalent to the original linear independence conjecture. 4. If $v$ is an exponential weight, $v(z) = e^{a|z|}$ for some $a>0$, then the matrix algebras $\mathcal{C}_v $ and $\mathcal{C}_v ^\infty $ are no longer inverse-closed in $\mathscr{B} (\ell ^2(\Lambda ))$. The statement of Theorem~\ref{spec} is false and has to be replaced by a weaker version. Nevertheless one can show~\cite{Ban14} that for $g\in M^1_v$ with exponential weight $v(z) = e^{a|z|}$ the lower Riesz bound decays exponentially, $A_n \lesssim e^{-\epsilon n}$ for some $\epsilon >0$. 5. In our analysis we have only used that $\mathscr{G} (g, \Lambda ) $ is a frame with $\textrm{ker}\, D_{g,\Lambda} \neq \{0\}$ and the decay properties of the Gramian $G_{_{g,\Lambda} }$. The statement about the asymptotic behavior of the lower Riesz bound $A_n$ carries over without change to general localized frames~\cite{FoG05} indexed by a relatively separated subset of ${\bR^{2d}} $.
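The decay of the lower Riesz bounds $A_n$ asserted in Theorem~\ref{tm-main} can also be observed in a small numerical experiment. The following sketch is illustrative only: it works in dimension $d=1$ with the Gaussian window $g(t)=e^{-\pi t^2}$ and a square lattice with spacing $\alpha=\beta=0.8$ (density $1/0.64>1$, so the Gabor system is a frame but not a Riesz basis); the sampling grid and the square truncation used in place of $\Lambda_n$ are ad hoc choices.

```python
import numpy as np

# Numerical illustration in d = 1: Gaussian window, square lattice with
# spacing 0.8 in time and frequency.  The lower Riesz bound A_n of a finite
# section is the smallest eigenvalue of the corresponding finite section of
# the Gram matrix.  Grid and truncation parameters are ad hoc.
dt = 0.01
t = np.arange(-12, 12, dt)
alpha = beta = 0.8

def atom(x, w):
    """Time-frequency shift pi(x, w) applied to the Gaussian, sampled on t."""
    return np.exp(2j * np.pi * w * t) * np.exp(-np.pi * (t - x) ** 2)

def lower_riesz_bound(n):
    # square truncation |k|, |l| <= n of the lattice (a variant of Lambda_n)
    lam = [(alpha * k, beta * l)
           for k in range(-n, n + 1) for l in range(-n, n + 1)]
    F = np.array([atom(x, w) for (x, w) in lam])   # rows are the atoms
    G = (F.conj() @ F.T) * dt                      # Gram matrix (Hermitian)
    return np.linalg.eigvalsh(G)[0]                # smallest eigenvalue

A = [lower_riesz_bound(n) for n in (1, 2, 3)]
print(A)   # decreasing towards 0
```

By eigenvalue interlacing the computed bounds must decrease as the sections grow, and in accordance with the theorem they do so very rapidly for the well-localized Gaussian window.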
\section{Introduction} \label{intro} For more than half a century, it has been possible to confine charged particles by radiofrequency (rf) quadrupole fields~\cite{Paul1990}. Such rf (or Paul) traps rely on the strong direct interaction of the charge of the particles with the electrical field, whereby trap depths of up to hundreds of eV (i.e., trapping temperatures of the order of a million Kelvin) can easily be achieved for atomic ions. This makes it generally easy to trap ions by e.g. introducing a buffer gas at room temperature to provide a friction force which damps the ion motion. Conversely, trapping of neutral atomic species by electric fields has to rely on the interaction of an induced electric dipole moment with the field itself. Although such interactions can be enhanced significantly by choosing electric fields oscillating close to an electronic transition frequency of the atom, the depths of such traps typically do not exceed 100 mK~\cite{Grimm2000}. Hence, in order to achieve trapping, laser cooling has to be implemented. In the past decades, cooling and trapping of neutral atoms in various dipole-induced lattice trap configurations has led to the studies of a wealth of physics phenomena, by mimicking for instance idealized solid-state physics scenarios, such as Bloch oscillations~\cite{BenDahan1996} and the superfluid-Mott insulator transition~\cite{Greiner2002}. Laser cooling of atomic ions~\cite{Wineland1975} in Paul~\cite{Leibfried2003} and Penning~\cite{Brown1986} traps has likewise facilitated the studies of a large variety of physics phenomena, ranging from pure plasma physics~\cite{Dubin1999} and non-linear physics~\cite{Blumel1988,Hoffnagle1988,Blumel1989,Brewer1990} to quantum physics~\cite{Leibfried2003}, including quantum information processing~\cite{Blatt2008}, and has even opened a whole new field of cold molecular ion-based research~\cite{Willitsch2008}.
However, electric field-induced dipolar forces have only recently been applied to trap, or alter the trapping conditions of, atomic ions~\cite{Schneider2010,Enderlein2012,Linnet2012,Karpa2013}. The interest here has been partly to demonstrate trapping of a single ion with localized fields in order to e.g. enable coherent interactions between atoms and ions without perturbation induced by trapping fields~\cite{Cormick2011}, and partly to superpose a steep periodic potential on a shallow rf trap potential with the aim of studying structural~\cite{Drewsen2012,Horak2012,Cormick2012} and dynamical phase transitions (e.g. Coulomb-Frenkel-Kontorova model~\cite{GarciaMata2007,Benassi2011,Pruttivarasin2011,Schneider2012review}), as well as enhancing the coupling strength between ions and cavity photons with quantum memory~\cite{Herskind2009,Albert2011} and photon counter~\cite{Clausen2013} applications in mind. With respect to the latter applications, standing wave fields generated in a Fabry-P\'{e}rot cavity are of special interest, since they provide a means to achieve a well-controlled spatial phase between a localizing field mode and an interrogation field mode at the position of the ions~\cite{Linnet2012}. Fine probing of the longitudinal cavity field spatial structure has been performed with single ions~\cite{Guthohrlein2001,Mundt2002}; however, the application of a single standing-wave field does not allow for the absolute positioning discussed here. Due to the boundary conditions for the fields in a Fabry-P\'{e}rot cavity, all modes of the same parity (even or odd) have overlapping nodes and antinodes at the center (waist) of the cavity, i.e. an in-phase relation is imposed at the center, regardless of the frequency of the modes~\cite{note}. Similarly, an out-of-phase relation can here be obtained by combining field modes with even and odd number of nodal planes.
With an exact knowledge of the center position of an applied optical cavity one can hence deterministically switch between having the trapped ions at a node or anti-node of the potential, as has e.g. been exploited in some of the experiments reported in~\cite{Linnet2012}. For large ensembles of ions, it could also be interesting to perform this with respect to a corrugated super-lattice created through the interference of two cavity modes. In this paper, we demonstrate a simple method to determine the center of a near-confocal symmetric optical Fabry-P\'{e}rot cavity having its rotational symmetry axis aligned with the rf nodal line of a linear rf trap~\cite{Herskind2009JPB}, by utilizing an ion Coulomb crystal as an imaging medium. In the following section (Sec. \ref{CenterCavTheory}), we first consider the idealized case of a two-level atom interacting with two cavity modes: a probe field oscillating at a frequency close to the two-level system resonance frequency, and an off-resonant lattice field providing a periodic AC Stark potential. In Sec. \ref{Exp_setup}, we describe the essential parts of the experimental setup used in our investigations with Ca$^+$ ions. This section is followed by a description of the experimental procedure and the obtained results (Sec. \ref{sec:findcentre}). In Sec. \ref{Future}, we discuss some future prospects of this method for multi-cavity mode operation with cold ions and atoms, before concluding in Sec. \ref{Conclusion}.
However, in order to provide a clear physical picture, in this section we provide only a description for an ensemble of two-level atoms. \begin{figure} \centering \includegraphics[width=1\linewidth]{Simple_model_fig.pdf} \caption{a) Schematics of the considered atomic two-level system with energy levels and applied fields. In b) and c) the parameters $n_p=20$ and $n_l=22,23$ have been used, together with $L=10$, $\Gamma=1$, $\Delta_S=10$ and $s_0=0.1$, in order to illustrate the effect. The real values of $n_p$ and $n_l$ in the experiment are around $8.49 \times 10^{4}$. The lower part of the figures (blue peak structure with a wide envelope) shows the variation of the probe scattering, $\Gamma_{\textrm{scat}}$, along the cavity axis, for a lattice detuning of b) 2$\omega_{FSR}$ and c) 3$\omega_{FSR}$. The top part of the figures (sinusoidal curves) shows the probe standing wave (blue) and the effective Stark-shifted detuning $\Delta(z)$ (red) along the cavity axis; the two sinusoidal curves in this part of the figure have been rescaled to the same amplitude in order to better illustrate the spatial beating.} \label{fig:CenterCavResults} \end{figure} More specifically, we consider two-level atoms with a ground state \textit{g} and an excited state \textit{e} positioned inside a near-confocal, symmetric optical Fabry-P\'{e}rot cavity (see Fig.~\ref{fig:CenterCavResults}a). The atoms are trapped by an external mechanism which keeps them confined within the cavity mode-volume. A so-called lattice field, which is far-detuned from the atomic resonance, but on resonance with a longitudinal mode of the cavity, is applied. By keeping one of the cavity modes resonant with the \textit{g}-\textit{e} transition, the lattice detuning from the two-level resonance amounts to a whole number of cavity free-spectral-ranges, $\omega_{FSR}$.
The effect of the lattice field is to induce a spatially modulated AC Stark shift of the atomic transition given by: \begin{equation} \Delta_S(z) = \Delta_S \sin^2(k_{l} z) \label{eq:S_z} \end{equation} where $\Delta_S$ is the maximum Stark shift, $k_{l}=(n_{l} \pi) / L$ the lattice field wavenumber and $n_{l}$ the longitudinal mode number of the lattice field. The effect of this lattice field is monitored by a near-resonant intracavity probe field detuned by $\Delta_{p}$ with respect to the bare two-level transition frequency. The spatially dependent photon scattering rate of this probe field is given by: \begin{equation} \Gamma_{\textrm{scat}} = \frac{1}{2} \frac{s(z)}{1+s(z)} \Gamma \label{eq:Pie} \end{equation} where $s(z)$ is the saturation parameter, defined as: \begin{equation} s(z)=\frac{s_0 \sin^2(k_p z)}{1+\left(2 \Delta(z) / \Gamma\right)^2}, \label{eq:satpar} \end{equation} with $\Gamma$ being the decay rate of the excited state, $s_0=I_0/I_{sat}$ the maximum on-resonance saturation parameter for the probe field, $k_{p}=(n_{p} \pi) / L$ the probe field wavenumber expressed in terms of $n_{p}$, the longitudinal mode number of the probe field, and the spatially varying effective detuning is given by \begin{equation} \Delta(z) = \Delta_{p} + \Delta_S \sin^2(k_{l} z) \, . \label{eq:delta} \end{equation} The effect of the Stark-shifting lattice field on the probe photon scattering rate (eq.~\ref{eq:Pie}) is a beating signal arising from the wavenumber difference of the probe and lattice fields. This is illustrated in Fig.~\ref{fig:CenterCavResults}b-c where the parameters have been chosen to illustrate the effect ($n_p = 20$ and $n_l =22,23$). In the experiment the values of $n_p$ and $n_l$ are around $8.49 \times 10^{4}$. In most practical realizations the imaging system is not able to resolve the fine-structured pattern, as the individual lattice sites are typically separated by only a few hundreds of nm.
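Equations~\eqref{eq:S_z}-\eqref{eq:delta} can be evaluated directly. The following numerical sketch uses the illustrative parameters of Fig.~\ref{fig:CenterCavResults} ($n_p=20$, $n_l=22$, $L=10$, $\Gamma=1$, $\Delta_S=10$, $s_0=0.1$) together with the assumption $\Delta_p=0$, and compares the locally averaged scattering rate at the cavity center with that a quarter beat period away:

```python
import numpy as np

# Evaluate the model of Eqs. (1)-(4) with the illustrative parameters of
# Fig. 1 (n_p = 20, n_l = 22, L = 10, Gamma = 1, Delta_S = 10, s0 = 0.1)
# and the assumption Delta_p = 0.
n_p, n_l, L = 20, 22, 10.0
Gamma, Delta_S, s0, Delta_p = 1.0, 10.0, 0.1, 0.0
k_p, k_l = n_p * np.pi / L, n_l * np.pi / L

z = np.linspace(-L / 2, L / 2, 20001)
Delta = Delta_p + Delta_S * np.sin(k_l * z) ** 2                 # Eq. (4)
s = s0 * np.sin(k_p * z) ** 2 / (1 + (2 * Delta / Gamma) ** 2)   # Eq. (3)
Gamma_scat = 0.5 * s / (1 + s) * Gamma                           # Eq. (2)

def local_mean(z0, half=0.5):
    """Average the scattering rate over a window covering full lattice periods."""
    m = np.abs(z - z0) < half
    return Gamma_scat[m].mean()

# n_l - n_p = 2 (even): probe and lattice are in phase at z = 0, so the
# beat envelope has a minimum there and a maximum a quarter beat period
# away (the beat period is L/(n_l - n_p) = 5).
print(local_mean(0.0), local_mean(2.5))
```

The suppressed average at the center and the enhanced average at $z=2.5$ reproduce the envelope behavior shown in Fig.~\ref{fig:CenterCavResults}b.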
The beating signal, on the other hand, occurs on a much larger length scale (proportional to the inverse of the probe-lattice wavevector difference) and can be resolved using standard imaging techniques, as will be shown below. At the center of the cavity ($z=0$) the beat pattern has an extremum because of the boundary conditions imposed by the mirrors. If the lattice detuning is an even number of free-spectral-ranges (FSRs), the scattering rate is minimized, as the probe and lattice fields overlap so that the transition is shifted away from the probe frequency wherever the probe field is strong. A lattice detuning by an odd number of FSRs produces a maximum of the scattering rate, as the probe field is strongest where the transition is unshifted. This is also illustrated in Fig.~\ref{fig:CenterCavResults}b-c. As mentioned earlier, the two-level description cannot be expected to give a precise account of measured scattering rates. However, around the center of the Fabry-P\'{e}rot cavity, the scattering rate will always vary periodically with a spatial period set by the inverse of the probe-lattice wavevector difference, $\lambda_{beat} \propto 1/(k_{l}-k_{p})$, with a maximum (minimum) for $n_p-n_l$ being odd (even). The length scale of $\lambda_{beat}$ for our experimental parameters is hundreds of $\mu$m. \section{The experimental setup} \label{Exp_setup} The setup used in the experiments has been described in detail in~\cite{Herskind2008} and is depicted schematically in Fig.~\ref{fig:setup}. It consists of a symmetrically-driven four-rod linear Paul trap operating at a 4 MHz drive frequency. The trap incorporates a pair of mirrors forming a near-confocal Fabry-P\'{e}rot resonator whose optical axis is aligned with the nodal line of the trap electric fields~\cite{Herskind2009JPB}. The cavity has a length of 11.7 mm, corresponding to a free-spectral-range $\omega_{\mathrm{FSR}} = 2 \pi \times 12.7$ GHz.
It has a finesse of $\sim 3000$, and a zeroth-order mode waist radius of 37 $\mu$m for light at 866 nm. The trap is loaded with $^{40}$Ca$^+$ ions, whose relevant energy levels and transitions are depicted in Fig.~\ref{fig:setup}b). The $S_{1/2}-P_{1/2}$ transition at 397 nm is used for Doppler cooling and imaging, while the $D_{3/2}-P_{1/2}$ transition at 866 nm is used either for repumping ions shelved in the metastable $D_{3/2}$ state during cooling or for interactions with the cavity light. In Fig.~\ref{fig:setup}a) the applied laser fields are also sketched. A bias magnetic field of 1 Gauss is applied in the $y$-direction. The 397 nm cooling light is applied along the axial ($z$) direction (counterpropagating beams with opposite circular polarizations) and the 866 nm repump light along the radial ($x$) direction with linear polarization along the $z$-direction. The ions can be imaged by collecting the 397 nm fluorescence onto an intensified CCD camera with line-of-sight along the $y$-direction (not shown). Two different 866 nm laser fields with circular polarization can be coupled into the cavity: the resonant \textit{probe} field and the off-resonant \textit{lattice} field. The cavity length is stabilized using an additional off-resonant laser (not shown), which is locked to an external reference cavity and which has negligible influence on both the internal and external states of the ions. \begin{figure} \centering \includegraphics[width=1\linewidth]{Setup_fig.pdf} \caption{a) Schematic picture of the cavity ion trap and laser fields used in the experiments in top view (see text for details).
b) Relevant energy levels for $^{40}$Ca$^+$.} \label{fig:setup} \end{figure} \section{Finding the center of the optical cavity} \label{sec:findcentre} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{Crystal_images_fig.pdf} \caption{Experimental images ($224 \times 640$ pixel) of a $^{40}$Ca$^+$ ion Coulomb crystal with $\approx 6000$ ions (length 650 $\mu$m, diameter 150 $\mu$m, density $\sim 6 \times 10^8$ cm$^{-3}$). As repumping is only performed by the intracavity fields, only the central part of the crystal, contained in the cavity mode volume, is visible. For reference, an outline of the actual crystal shape is shown by the dashed ellipsoids in the middle pictures. The top figure shows the ion scattering signal when applying both the lattice and probe fields. Two situations are shown, corresponding to a lattice field detuned by $+15$ FSR and $+16$ FSR, respectively; a clear beating signal is observed, with a maximum at the cavity center in the first case and a minimum in the second. After background subtraction, taking the pixel-to-pixel ratio of the top and middle images gives the bottom image, thus isolating the fluorescence beat pattern due to the lattice field. The images are obtained from $60\times 200$ ms exposures.} \label{fig:CenterCavCrystals} \end{figure*} In a recent study~\cite{Linnet2012} it was shown that an ion could be pinned by the intracavity standing wave lattice field in the described cavity ion trap. One important issue in such experiments is to ensure that the overlap between the probe and the lattice fields is well-controlled at the ion position. This means that the ion has to be precisely located at the center of the cavity to obtain reproducible situations for all lattice detunings.
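As a quick consistency check of the relevant length scales, one can combine the cavity length $L=11.7$ mm quoted above with a lattice detuning of 15 FSR (one of the detunings used below); the identification of the spatial beat period with $L/\Delta n$ follows from the two-level model of Sec.~\ref{CenterCavTheory}. A small computation (numbers taken from the text, the speed of light rounded):

```python
# Length-scale sanity check with the numbers quoted in the text.
c = 2.998e8                # speed of light, m/s
L = 11.7e-3                # cavity length, m
FSR = c / (2 * L)          # free spectral range, Hz (~12.8 GHz)
delta_n = 15               # lattice detuning in units of FSR
beat_period = L / delta_n  # spatial beat period pi/(k_l - k_p), m (~780 um)
print(FSR / 1e9, beat_period * 1e6)
```

The resulting beat period of roughly 780 $\mu$m is indeed "hundreds of $\mu$m" and slightly exceeds the 650 $\mu$m crystal length, consistent with the roughly one beat cycle visible across the crystal in the images below.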
To locate this center we trap a large ion Coulomb crystal of $^{40}$Ca$^+$ ions and inject into the cavity both the resonant probe field (see Fig.~\ref{fig:setup}) and the lattice field, detuned by an odd or even integer number of FSRs away from the atomic resonance. Due to the lattice-field-induced Stark shifts of the energy levels of the $P_{1/2}$ and $D_{3/2}$ states, the scattering rate of the probe field becomes spatially modulated, as discussed in Sec.~\ref{CenterCavTheory}. In regions where the high (low) intensity of the lattice overlaps with the high intensity of the probe, this shift is largest (smallest) and the probe scattering is suppressed (unchanged). As mentioned earlier, our imaging resolution ($\sim \mu$m) does not allow us to resolve standing waves at the single lattice site scale, but the resulting beating signal is observable on a larger scale. In the experiment, an ion Coulomb crystal containing $\sim 6000$ $^{40}$Ca$^+$ ions (length 650 $\mu$m, diameter 150 $\mu$m, density $\sim 6 \times 10^8$ cm$^{-3}$) is trapped and Doppler-cooled using the axial 397 nm cooling beams and the 866 nm radial repumper. The bias magnetic field applied in the $y$-direction ensures that the circularly polarized cavity fields address all four Zeeman sublevels of the $D_{3/2}$ state. The cavity length is chosen so that, in the absence of a lattice field, the probe field can be simultaneously resonant with the $D_{3/2}-P_{1/2}$ transition and with a cavity mode. When the cavity fields are applied, the 866 nm side repumper is blocked, so as to perform all repumping through the cavity only. The amplitude of the spatially varying saturation parameter for the probe is $s_{max} \approx 4$ (eq.~\ref{eq:satpar}) while that of the lattice is at least $1000$ times smaller, in order to realize a situation where scattering essentially comes from the probe field and the lattice field only produces a spatially dependent Stark shift.
We set the probe laser on resonance, with $\Delta_p = 2 \pi \times (0 \pm 2)$ MHz; the lattice Stark shift depends on the detuning of the lattice field and lies in the range $\Delta_s = 2 \pi \times (3-9)$ MHz. The 397 nm laser is red-detuned by 40 MHz, so that the lattice Stark shift does not appreciably affect its scattering rate. The observed fluorescence modulation is therefore dominated by the variation in the repumping rate out of the $D_{3/2}$ state. \begin{figure} \centering \includegraphics[width=1\linewidth]{Exp_plots_fig.pdf} \caption{Beating signals along the axis of a Coulomb crystal when both the lattice and probe fields are injected into the cavity. Depending on the lattice detuning from the probe, the characteristics of the beating change; this is shown for four different detunings: 15 FSR (full blue), 16 FSR (full red), 27 FSR (dashed blue), 28 FSR (dashed red). Detuning by an even number of FSRs (here, 16 and 28) results in a minimum scattering at the cavity center, while detuning by an odd number of FSRs (here, 15 and 27) produces a maximum. At the center of the cavity, maxima and minima line up, as expected. The black parabolic fit is performed on $\pm100$ pixels around the center.} \label{fig:Exp_results_cav_center} \end{figure} In Fig.~\ref{fig:CenterCavCrystals} projection images of the crystal are shown for lattice detunings of $+15$~FSR and $+16$~FSR. The cavity fundamental mode is clearly visible, as only the ions contained within the cavity mode volume participate in the cooling cycle and sympathetically cool the ions outside it. The top figures show images obtained when applying both the lattice and probe fields, while the middle figures show images with the probe field only. After background subtraction, taking the ratio of the images with and without the lattice corrects for inhomogeneities of the imaging system and thus isolates the fluorescence modulation pattern due to the lattice.
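The processing used to extract the center from such images, a box running average followed by axial summation and a parabolic fit around the approximate center, can be sketched on synthetic data. The signal model, pixel scale and center position below are illustrative placeholders, not the experimental values:

```python
import numpy as np

# Synthetic axial beat signal with a fluorescence dip whose envelope is
# close to parabolic near the (hypothetical) true center at pixel 320.7.
true_center = 320.7
x = np.arange(640, dtype=float)
signal = 1.0 - 0.5 * np.cos(2 * np.pi * (x - true_center) / 600.0)

# Running average: 1-D analogue of the 10x10 box blur applied to the images.
kernel = np.ones(10) / 10.0
smooth = np.convolve(signal, kernel, mode="same")

# Parabolic fit on +/-100 pixels around the approximate center; the vertex
# of the parabola gives the cavity center position.
lo, hi = int(true_center) - 100, int(true_center) + 100
c2, c1, c0 = np.polyfit(x[lo:hi], smooth[lo:hi], 2)
fitted_center = -c1 / (2.0 * c2)
assert abs(fitted_center - true_center) < 1.5  # near-pixel recovery
```

The vertex formula $-c_1/2c_2$ is insensitive to the overall fluorescence scale, which is why a purely parabolic fit suffices to locate the center.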
A running average is applied across the beat pattern image, setting each pixel value equal to the mean of the pixel values in a $10 \times 10$ square around it. In this way, unevenness in the crystal structure, e.g. from crystal shells, is blurred out to ensure a smoother beating signal along the axial direction. The central part of the crystal is then isolated and 37 pixels (33~$\mu$m) are summed in the vertical direction. The result is proportional to the fluorescence signal of ions with the same axial position along the cavity. In Fig.~\ref{fig:Exp_results_cav_center} the resulting modulated signals are shown for a lattice field detuned by 15 (blue), 16 (red), 27 (dashed blue) and 28 (dashed red) FSRs, respectively. As mentioned, the experimental system is more complicated than the simple two-level illustrative picture presented in Sec. \ref{CenterCavTheory} and the envelope of the beat pattern is not given by a simple analytical function that we could use as a model for fitting. Rather than fitting the full beat pattern to a numerical model, we fit the points around the center of the cavity with a parabolic function in order to establish the exact cavity center position, which is the only quantity of interest here. Beat patterns with only one or a few cycles within the length of the cavity give unambiguous position information, but do not offer much position resolution because of their coarse spatial structure. Beat patterns with many cycles, obtained using a far-detuned lattice, provide fine resolution but do not distinguish between positions separated by the beat period. By combining images taken for different lattice detunings, one can obtain more precise location information anywhere in the cavity without losing track of the overall position. For example, for the set of beat patterns shown in Fig.~\ref{fig:Exp_results_cav_center}, all maxima and minima line up, as they must, at the cavity center. Since e.g.
$15$ and $28$ are coprime, there is only one such location over the entire length of the cavity. As expected, when the lattice field is detuned by an even number of FSRs from the probe field, the fluorescence is suppressed at the cavity center. For a detuning by an odd number of FSRs the suppression occurs half a period away from the cavity center. The purely parabolic fits are performed including $\pm100$ pixels around the approximate cavity center position. From these we confirm that the cavity center is at an axial position of $320.70 \pm 0.15$ pixels. The center positions of the individual measurements agree with the mean value to within $+1.5$ and $-2.5$ pixels, and their error bars overlap with the mean. Converting the fit results into a physical length gives an uncertainty in the absolute positioning of the cavity center of only $\pm$135 nm, smaller than both the beating periods (here 400--700 $\mu$m) and the periodicity of the two standing waves (433 nm). As mentioned above, an even better precision could be obtained in principle by using more sets of detunings. \section{Future prospects} \label{Future} As briefly mentioned in the introduction, the positioning of single or ensembles of ions with respect to the center of a Fabry-P\'{e}rot cavity may have several applications for ion-based cavity QED. First, it adds to the tools for trapping ions in localized optical fields~\cite{Schneider2010,Enderlein2012} for coherent atom-ion studies~\cite{Grier2009,Zipkes2010,Schmid2010,Cetina2012}. Second, superposing a steep and short-scale periodic optical potential to a shallow rf trap allows studies of structural~\cite{Drewsen2012,Horak2012,Cormick2012} and dynamical phase transitions (e.g. Coulomb-Frenkel-Kontorova model~\cite{GarciaMata2007,Benassi2011,Pruttivarasin2011,Schneider2012review,Cetina2013}).
As demonstrated in~\cite{Linnet2012}, localizing an ion in a standing wave cavity field also allows for a better control of the ion-cavity coupling strength, which can be of interest for quantum information processing applications, such as single-photon generation~\cite{Keller2004,Barros2009}, quantum memory~\cite{Herskind2009,Albert2011}, photon counters~\cite{Clausen2013}, single ion-photon interfaces~\cite{Stute2012,Stute2013}, or for cavity-mediated cooling~\cite{Leibrandt2009}. In addition to ionic systems in cavities, this positioning technique should also be applicable to e.g. cold neutral atomic species, trapped in magnetic or optical dipole traps. One can envision its application in single atom quantum dynamics studies~\cite{Pinkse2000,Hood2000,Thompson2013}, which would benefit from accurate positioning. This includes feedback~\cite{Kubanek2012}, cavity~\cite{Maunz2004,Nussmann2005} or ground-state cooling~\cite{Reiserer2013}. It also naturally applies to cavity QED studies with ensembles, e.g. to investigations of the quantum dynamics of cold atoms in cavity-generated optical potentials~\cite{Ritsch2013}, to applications involving the simultaneous interaction with multiple standing wave fields~\cite{Albert2011,Botter2013}, or to cold atom cavity optomechanics studies~\cite{StamperKurn2013}. \section{Conclusion} \label{Conclusion} To conclude, we have demonstrated a simple method to accurately find the absolute center of a Fabry-P\'{e}rot resonator using the spatially modulated fluorescence of trapped ions probed by two simultaneously resonating cavity fields. This method has potential applications to a wide range of cavity QED investigations with cold ions or atoms. This work was supported by the European Commission (STREP PICC and ITN CCQED) and the Carlsberg Foundation.
\section*{Acknowledgements} We are grateful to Alan Schwartz for interesting discussions. This work was supported in part by DOE grant DE-FG02-91ER40671, the NSF Presidential Young Investigator Award PHY-9157482, James S. McDonnell Foundation grant No. 91-48, and an A. P. Sloan Foundation Research Fellowship.
\section{Introduction} Financial markets, especially the stock market, enjoy substantial day-to-day coverage on digital platforms such as YouTube. Besides the presenters, experts with a deep understanding of the stock markets act as contributors or panelists who share their perspectives on various topics. On YouTube, channels such as Yahoo Finance's Stock Market Coverage provide a wealth of information about the development of financial market events, allowing the audience to stay informed on trending topics, among others. Most studies on the impact of stock news on, for instance, stock prices have focused on headlines from renowned news agencies or blog posts \cite{Jariwala:20,VelayDaniel:18,NemesKiss:21}. News coverage in particular gives a topic context and meaning \cite{Chipidza:21}, much as financial television (TV) programs such as CNBC Markets do. However, in comparison to traditional news coverage, the dissemination of financial and economic news is either a segmented section or a dedicated channel on TV. While most financial market studies have focused on data sources such as news headlines, financial reports, and social media posts, continuous news coverage of any kind has seen limited use for analysis. In particular, one may question the authenticity of social media posts from either Facebook or Twitter, as misinformation is ubiquitous on such platforms \cite{Kogan:18}. Media coverage, by contrast, is oriented toward factually decoding market news and events, with the added advantage that the information is fact-checked. Besides this, the viewpoints of renowned experts can be contradicted, supported, or complemented by journalists or other high-profile persons in the world of finance and economics. Given the popularity of YouTube, its publicly available videos constitute a reliable source of data for further analysis.
However, analyzing videos in general requires either capturing image frames or manipulating snippets of an entire video \cite{Snelson:21}. The social science field is an example where videos are a data source for analysis, whether non-verbal or otherwise \cite{Luff:12}. In this paper, we build corpora of transcribed YouTube videos that focus on the financial market and the economy. We used OpenAI's Whisper \cite{Radford:22} to transcribe videos to text, since market coverage generates a large volume of video data. Watching large numbers of financial market coverage videos to derive actionable insights can be challenging and complex; to this end, we transcribe market coverage videos to text to simplify analysis. Specifically, we use a topic modeling approach for generating topics related to the markets. Further, we perform an n-gram analysis to understand the coverage narratives and extract the most frequently mentioned persons and organizations in the market coverage using named entity recognition. It should be noted that we kept topics related to the economy and markets. \noindent \textbf{Background and Related Works.} The characterization of financial and economic news has been explored in numerous aspects, including emotions and sentiments \cite{Schumaker:12,Griffith:20}, from social media posts to news headlines \cite{Mitra:11,Bukovina:16}. The role of news has been covered in \cite{Baker:21}, where the authors pose the question ``{\it What drives big moves in national stock markets?}''. According to the study, news about US economic and policy developments significantly impacts worldwide equity markets. Most previous works have centered on how media coverage affects financial markets \cite{Fang:09,Dogal:12,Strycharz:18}.
In comparison with previous works on the effects of media coverage on financial markets, we focus on how financial and economic news coverage narratives are similar across multiple platforms. Our work is closely related to the studies of \cite{Piao:15,McBeth:18,Bhargava:22}, who compare the narratives of news coverage. Relative to these works, we investigate the evolution of the popular topics addressed over time by financial media channels on YouTube and discover similarities between the topics addressed by different media channels. Additionally, we show that financial news coverage is centered on organizations and individuals, where individuals are either heads of organizations or knowledgeable panelists or experts in finance and economics. We focus on the following two research questions. \textit{RQ1:} How are major financial events identified through language use within news coverage topics? \textit{RQ2:} To what extent do news coverage topics exhibit content coordination regarding major financial events and entities (such as organizations and individuals) across different news channels? Specifically, this work makes the following contributions. (1) We show how effective our data collection and pre-processing strategy is for gathering digital videos and generating text information, which relates to the financial market and economic discourse. (2) We compare the narratives between reliable media channels; one of the advantages of utilizing datasets from these media channels is that they do not stem from bots. (3) We investigate the evolution of the topics addressed over time and examine the most frequently mentioned entities (organizations and persons) in financial market coverage and discover similarities between topics. (4) We publicly release our code as open source to support continued development.\footnote{\url{https://github.com/djeffkanda/market_coverage_analysis}} \begin{figure*}[t!]
\begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.46]{Plots/blw_plots.pdf} \caption{Bloomberg Wall Street Week (BLW)} \label{fig:coverage_bws_ngrams} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.46]{Plots/bsm_plots.pdf} \caption{Bloomberg Stock Market News and Analysis (BSM)} \label{fig:coverage_bsm_ngrams} \end{subfigure} \caption{Bi-grams with the highest tf-idf from Bloomberg data. Note that the \textit{x-axis} represents tf-idf values.} \vspace{-5mm} \end{figure*} \begin{figure}[htb] \centering \includegraphics[scale=0.45]{Plots/yh_plots.pdf} \caption{Bi-grams with the highest tf-idf from YFM. Note that the \textit{x-axis} represents tf-idf values.} \label{fig:coverage_yfm_ngrams} \vspace{-5mm} \end{figure} \begin{figure*}[t!] \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.46]{Plots/blw_sentiment.pdf} \caption{Bloomberg Wall Street Week (BLW)} \label{fig:sentiment_bws} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.46]{Plots/bsm_sentiment.pdf} \caption{Bloomberg Stock Market News and Analysis (BSM)} \label{fig:sentiment_bsm} \end{subfigure} \caption{Words preceded by either \textit{consumer, economy, inflation, loan, market} or \textit{recession} that had the greatest contribution to sentiment values, in a positive or negative direction in Bloomberg coverage.} \vspace{-5mm} \end{figure*} \begin{figure}[htb!]
\centering \includegraphics[scale=0.45]{Plots/yh_sentiment.pdf} \caption{Words preceded by either \textit{consumer, economy, inflation, loan, market} or \textit{recession} that had the greatest contribution to sentiment values, in a positive or negative direction in YFM.} \label{fig:sentiment_yfm} \vspace{-5mm} \end{figure} \section{Methods} \subsection{Data collection} The data used in this study was collected from the YouTube channels of Yahoo Finance and Bloomberg Markets and Finance and transcribed using OpenAI's Whisper speech-to-text model described in \cite{Radford:22}. Note that our choice of Yahoo Finance and Bloomberg is motivated by the fact that they (i) are among the world leaders in business news and real-time financial market coverage, (ii) provide financial news, data, and commentary including stock quotes, press releases, financial reports, original content, and video to the world of finance every Monday--Friday from 9 am to 5 pm (ET), and (iii) host world-class specialists to express their opinions and discuss market news and events; additionally, they are freely accessible on YouTube. They break down the markets and real-life financial issues for individual investors, industry leaders, and those seeking to invest in their future. \begin{table} \renewcommand{\arraystretch}{1.5} \setlength\tabcolsep{3.7pt} \centering \fontsize{8.5pt}{8.5pt}\selectfont \caption{Data summary of the collected coverage}\label{TblOne45XTiOH2vPq1} \begin{tabular}{cccc} \hline Media & Tot. collected files & Total hours & Avg.
time per file \\ \hline {BLW} & {744} & {171.16} & 14~minutes \\ {BSM} & {3885} & {398.15} & 6~minutes \\ {YFM} & {318} & {2467} & 8~hours \\ \hline \end{tabular} \vspace{-5mm} \end{table} Specifically, we collected one of the YouTube playlists of the Yahoo Finance Market's official channel called Stock Market Coverage, from 02 January 2020 to 30 September 2022. We extracted two different YouTube playlists from Bloomberg Markets and Finance's official channel, namely Wall Street Week, and Stock Market News and Analysis, for the same time period. {\bf Yahoo Finance Stock Market Coverage (YFM)} hosts top names in finance and economics to scrutinize the latest market news, contribute cogent evidence explaining the development of market events, identify untapped needs in the marketplace, and provide advanced analyses and opinions. {\bf Bloomberg Wall Street Week (BLW)} hosts influential personalities in finance and economics to talk about the week's biggest issues on Wall Street. {\bf Bloomberg Stock Market News and Analysis (BSM)} is a playlist of Bloomberg Markets and Finance's YouTube official channel where experts discuss the latest market news and perform market analysis in real-time coverage. Table \ref{TblOne45XTiOH2vPq1} summarizes statistics of the collected market coverage. For the three targeted YouTube playlists, BLW, BSM, and YFM, respectively, we extracted 744, 3885, and 318 videos, totaling approximately 171.16, 398.15, and 2467 hours, with average video durations of 14 minutes, 6 minutes, and 8 hours. We utilized OpenAI's Whisper, a speech recognition model, to transcribe the audio of the collected data into text corpora \cite{Radford:22}. Speech recognition remains a challenging problem in artificial intelligence and machine learning \cite{Chiu:18,Qin:19,YZhang:20}.
In a step toward solving it, OpenAI introduced Whisper, an automatic speech recognition system that approaches human-level robustness and accuracy in English speech recognition. Whisper substantially outperformed state-of-the-art speech recognition systems and has received immense interest for its multilingual transcription and translation capabilities spanning nearly 100 languages. Whisper was trained on 680,000 hours of multilingual and `multitask' data collected from the web, which leads to improved recognition of unique accents, background noise, and technical jargon. One of the advantages of Whisper is that it performs well even on diverse accents and technical language and is almost human-level in terms of recognizing speech even in extremely noisy situations. The architecture and the performance of Whisper relative to other speech recognition systems are briefly explained in its original paper \cite{Radford:22}. Specifically, we utilized the transcribed corpora (Table \ref{TblOne45XTiOH2vPq1}) to extract insights using natural language processing techniques including n-gram analysis (\S\ref{ngram}), topic modeling (\S\ref{topicmodels}) and named entity recognition (\S\ref{ner}). We removed stopwords, numbers, and special characters before performing n-gram analysis (\S\ref{ngram}) and topic modeling (\S\ref{topicmodels}). \subsection{N-gram analysis}\label{ngram} We analyzed n-grams to extract important insights from the text transcriptions and to understand language use within news coverage narratives. We extracted bi-grams from text transcriptions of financial market coverage by leveraging vectors based on the term frequency-inverse document frequency (\textit{tf{-}idf}) technique \cite{Ramos:2003,Gebre:2013}.
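A minimal sketch of this bi-gram weighting on a toy corpus, using the log-normalized form $(1+\log{n_{w,t}})\times\log(T/T_w)$ (the example transcripts below are placeholders, not actual coverage data):

```python
import math
from collections import Counter

def bigrams(text):
    toks = text.lower().split()
    return [f"{a} {b}" for a, b in zip(toks, toks[1:])]

def tfidf(docs):
    """tf-idf(w, t) = (1 + log n_{w,t}) * log(T / T_w) over bi-grams."""
    T = len(docs)
    counts = [Counter(bigrams(d)) for d in docs]
    df = Counter()                      # T_w: documents containing bi-gram w
    for c in counts:
        df.update(c.keys())
    return [
        {w: (1 + math.log(n)) * math.log(T / df[w]) for w, n in c.items()}
        for c in counts
    ]

docs = [
    "rate hikes dominate the market this week",
    "the market shrugs off rate hikes again",
    "delta variant fears weigh on the market",
]
scores = tfidf(docs)
# "rate hikes" occurs in 2 of 3 transcripts, "delta variant" in only 1,
# so "delta variant" receives the larger idf weight.
assert scores[2]["delta variant"] > scores[0]["rate hikes"]
```

Bi-grams appearing in every transcript (such as ``the market'' here) receive a zero weight, which is what pushes year-specific events to the top of the rankings.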
Specifically, we utilized \textit{tf{-}idf} as a statistical measure to evaluate how important a word is to each text transcription in the corpus; we converted each text transcription into its bag-of-words representation and computed the \textit{tf{-}idf} value of each word using the standard formula, \textit{tf{-}idf}${=}(1+\log{n_{w,t}})\times\log{\frac{T}{T_w}}$, where $n_{w,t}$ is the number of times word $w$ occurs in text transcription $t$, $T$ is the number of text transcriptions, and $T_w$ is the number of text transcriptions containing word $w$. \begin{figure*}[t!] \centering \includegraphics[scale=0.9]{NER/blw_ner_2020a.pdf} \caption{NER results of BLW for 2020} \label{fig:ner} \end{figure*} \subsection{Topic modeling}\label{topicmodels} Topic modeling refers to the machine learning task of automatically discovering the abstract `topics' that occur in a collection of documents, and one popular topic modeling technique is known as latent Dirichlet allocation (LDA) \cite{blei:03}. LDA is a probabilistic model that identifies latent topics in a document and can be trained using collapsed Gibbs sampling. Fundamentally, LDA assumes $k$ underlying topics, each of which is a distribution over a fixed vocabulary. While LDA models topics from text corpora \cite{Angelov:20}, it suffers from several shortcomings, including difficulty in setting the parameter $k$ (the number of topics) so as to produce semantically meaningful results, deficiencies in handling short texts \cite{Banda:21} and in capturing the contextual meaning of sentences \cite{ZukZuk:2020}, and an inability to model topic correlations and the evolution of topics over time \cite{WangMcCallum:06}. \begin{figure*}[t!]
\begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/blw_topic_m/blwplot_chunk_1.png} \caption{Topics ranging from 1 to 10} \label{blw:a} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/blw_topic_m/blwplot_chunk_2.png} \caption{Topics ranging from 11 to 20} \label{blw:b} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/blw_topic_m/blwplot_chunk_3.png} \caption{Topics ranging from 21 to 30} \label{blw:c} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/blw_topic_m/blwplot_chunk_4.png} \caption{Topics ranging from 31 to 40} \label{blw:d} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/blw_topic_m/blwplot_chunk_5.png} \caption{Topics ranging from 41 to 50} \label{blw:e} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/blw_topic_m/blwplot_chunk_6.png} \caption{Topics ranging from 51 to 55} \label{blw:f} \end{subfigure} \caption{Top 55 topics with frequencies over time extracted from Bloomberg Wall Street Week}\label{figure_blw} \vspace{-5mm} \end{figure*} To overcome these limitations, the new generation of topic models \cite{Peinelt:2020,Bianchi:2020,Angelov:20,Grootendorst:22} utilizes pre-trained representations such as BERT to enable topic modeling (i) to take into account the contextual meaning of sentences so as to match the adequate topics and (ii) to include more features for efficiently modeling topic correlations and topic evolution over time. Recent pre-trained contextualized representations like BERT have pushed the state-of-the-art in several areas of natural language processing due to their ability to expressively represent complex semantic relationships from being trained on massive datasets.
BERT is a bidirectional Transformer-based pre-trained contextual representation trained using a masked language modeling objective and a next sentence prediction task \cite{Devlin:2019}. A significant advantage of BERT is that it simultaneously gains the context of words from both the left and the right in all layers. To this end, BERT utilizes a multi-layer bidirectional Transformer encoder, where each layer contains multiple attention heads. In this paper we use BERTopic \cite{Grootendorst:22} to generate topics addressed in financial market coverage, analyze the evolution of these topics over time and discover similarities between the topics addressed in BLW, BSM and YFM (\textit{RQ1}). BERTopic leverages BERT embeddings and a class-based term frequency-inverse document frequency to create dense clusters and detect unique topics. In addition, BERTopic generates the topic representations at each timestamp for each topic. The traditional LDA model requires a predefined $k$ (the number of topics) for algorithms to cluster the corpus around $k$ topics \cite{blei:03}. BERTopic does not require a predefined $k$, reducing the need for various iterations of model finetuning.
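The class-based tf-idf at the core of BERTopic can be illustrated with a plain re-implementation of the weighting $W_{t,c} = \mathrm{tf}_{t,c}\cdot\log(1 + A/\mathrm{tf}_t)$ from \cite{Grootendorst:22}, where $A$ is the average number of words per class and $\mathrm{tf}_t$ the term's frequency across all classes. This is an illustrative sketch on toy clusters, not the library code:

```python
import math
from collections import Counter

def c_tf_idf(classes):
    """classes: dict mapping a class (topic cluster) name to the list of
    tokens of all documents in that class. Returns W_{t,c} per class."""
    tf_c = {c: Counter(toks) for c, toks in classes.items()}
    tf_all = Counter()                 # tf_t: term frequency across classes
    for cnt in tf_c.values():
        tf_all.update(cnt)
    A = sum(len(toks) for toks in classes.values()) / len(classes)
    return {
        c: {t: n * math.log(1 + A / tf_all[t]) for t, n in cnt.items()}
        for c, cnt in tf_c.items()
    }

clusters = {
    "rates":  "fed rate hike inflation rate hike".split(),
    "crypto": "bitcoin crypto bitcoin nft token rate".split(),
}
w = c_tf_idf(clusters)
# "bitcoin" is specific to the crypto cluster while "rate" is shared,
# so "bitcoin" is weighted higher there than "rate".
assert w["crypto"]["bitcoin"] > w["crypto"]["rate"]
```

The highest-weighted terms per class then serve as the human-readable topic representation.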
The performance of BERTopic over LDA-like models and other topic modeling techniques is reported in \cite{Grootendorst:22}.\footnote{The Python package of BERTopic:\\ \url{https://github.com/MaartenGr/BERTopic}} \begin{figure*} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/bsm_topic_m/bsmplot_chunk_1.png} \caption{Topics ranging from 1 to 10} \label{bsm:a} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/bsm_topic_m/bsmplot_chunk_2.png} \caption{Topics ranging from 11 to 20} \label{bsm:b} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/bsm_topic_m/bsmplot_chunk_3.png} \caption{Topics ranging from 21 to 30} \label{bsm:c} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/bsm_topic_m/bsmplot_chunk_4.png} \caption{Topics ranging from 31 to 40} \label{bsm:d} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/bsm_topic_m/bsmplot_chunk_5.png} \caption{Topics ranging from 41 to 50} \label{bsm:e} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/bsm_topic_m/bsmplot_chunk_6.png} \caption{Topics ranging from 51 to 55} \label{bsm:f} \end{subfigure} \caption{Top 55 topics with frequencies over time extracted from Bloomberg Stock Market News and Analysis}\label{figure_bsm} \end{figure*} \begin{figure*} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/yh_topic_m/yhplot_chunk_1.png} \caption{Topics ranging from 1 to 10} \label{yh:a} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/yh_topic_m/yhplot_chunk_2.png} \caption{Topics ranging from 11 to 20} \label{yh:b} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/yh_topic_m/yhplot_chunk_3.png} \caption{Topics ranging from 21 to
30} \label{yh:c} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/yh_topic_m/yhplot_chunk_4.png} \caption{Topics ranging from 31 to 40} \label{yh:d} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/yh_topic_m/yhplot_chunk_5.png} \caption{Topics ranging from 41 to 50} \label{yh:e} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering \includegraphics[scale=0.192]{Plots/yh_topic_m/yhplot_chunk_6.png} \caption{Topics ranging from 51 to 53} \label{yh:f} \end{subfigure} \caption{Top 55 topics with frequencies over time extracted from Yahoo Finance Stock Market Coverage}\label{figure_yfm} \end{figure*} \subsection{Named entity recognition}\label{ner} Named entity recognition (NER) aims at finding and categorizing specific entities in text with their corresponding semantic types, such as person names, organizations (companies, government organizations, etc.), locations (cities, countries, etc.), or date and time expressions \cite{Li:2020,Perera:2020}. In this paper, we utilized NER to extract the names of persons and organizations mentioned in financial market coverage, and map the frequency of the most mentioned entities over time (\textit{RQ2}). The rationale behind the extraction of NER entities is to identify entities that constitute the center of attention in the financial market and dominate the financial world at a specific time-step. This could support the understanding of the evolution of the topics addressed over time and indicate the entities around which the topics are concentrated. Note that we removed some names of Bloomberg and Yahoo anchors that appeared in the NER results. We used a fine-tuned BERT model called bert-base-NER.\footnote{Find the official page of bert-base-NER on HuggingFace, dslim/bert-base-NER} In this paper we used the initial model of bert-base-NER without modifying its architecture or implementation.
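Given per-transcript entity predictions (the toy tuples below mimic, as an assumption, the aggregated output of a token-classification model such as bert-base-NER; the entities themselves are placeholders), ranking the most mentioned persons and organizations reduces to a frequency count:

```python
from collections import Counter

# Toy NER output: one list of (entity_group, surface form) per transcript.
transcripts = [
    [("PER", "Paul Krugman"), ("ORG", "Federal Reserve"), ("ORG", "SEC")],
    [("PER", "Gary Gensler"), ("ORG", "SEC"), ("ORG", "Federal Reserve")],
    [("ORG", "Federal Reserve"), ("PER", "Gary Gensler")],
]

def top_entities(transcripts, entity_group, k=2):
    """Most frequently mentioned entities of a given type (PER or ORG)."""
    counts = Counter(
        name
        for doc in transcripts
        for group, name in doc
        if group == entity_group
    )
    return counts.most_common(k)

assert top_entities(transcripts, "ORG", 1) == [("Federal Reserve", 3)]
assert top_entities(transcripts, "PER", 1) == [("Gary Gensler", 2)]
```

Running this count per quarter, after dropping anchor names, yields the per-quarter rankings of the kind plotted in Fig.~\ref{fig:ner}.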
It is important to note that bert-base-NER is ready to use for NER and achieves state-of-the-art performance for the NER task, and it has been trained to extract four types of entities: location (LOC), organizations (ORG), person (PER) and miscellaneous (MISC). Specifically, bert-base-NER is a version of the bert-base-cased model that was fine-tuned on the English version of the standard CoNLL-2003 Named Entity Recognition dataset \cite{Sang:2003,Devlin:2019}. \section{Results} This section first investigates the relationships between words using n-grams (n=2) and language use within news coverage narratives, and provides sentiments of n-grams indicating economic and financial concerns. Second, topic modeling and named entity recognition are performed to examine the evolution of topics over time and discern the coverage narratives, and then understand the implication of the most mentioned persons and organizations in the stock market. \noindent \textbf{Bi-gram analysis.} As mentioned in \S\ref{ngram}, we sought to determine the link between words by examining which words typically immediately follow others. By utilizing the \textit{tf-idf} statistical measure, we identify the importance of each consecutive sequence of words in each year's corpus. Figures \ref{fig:coverage_bws_ngrams}, \ref{fig:coverage_bsm_ngrams} and \ref{fig:coverage_yfm_ngrams} show the top 55 bi-grams drawn from the market coverage. Generally, for each year, we observe that the bi-grams primarily concern economic and financial market-related topics, as well as salient issues that arose during that year. While examining the results obtained from the BLW, BSM and YFM datasets, we observe that the majority of bi-grams indicate economics-related topics, except for \textit{``divided government'', ``delta variant'', ``democratic national''} and \textit{``blue wave''}, among others.
In particular, we note that these datasets capture topics/events pertaining to aspects of finance with major impacts on the economy, for example, prices. Besides these, other interesting financial discourses are centered on the \textit{``game stop''} fiasco in 2021 \cite{Malz:21}; we observed this as a bi-gram in BSM. {\color{black} Interestingly, we note the presence of the bi-gram \textit{``digital assets''} along with \textit{``cyber security''}. Recent turmoil in the cryptocurrency market has underlined the critical risks involved with investing in or engaging with digital assets. Digital assets raise cybersecurity concerns requiring regulatory controls and measures to protect individuals from cybercrime and other critical risks \cite{Bauer:20}. We observe an important number of cryptocurrency-related results, including the bi-grams \textit{``bitcoin futures''} and \textit{``crypto space''} in 2021, the topics \textit{``bitcoin price''} and \textit{``nfts and crypto''} in Figure \ref{figure_yfm} and the topic \textit{``bitcoin and digital currency''} in Figures \ref{figure_blw} and \ref{figure_bsm}.} We also report bi-grams highlighting recent events in Ukraine and their continued discussion in the early months of 2022; these bi-grams, which include \textit{``russia ukraine'', ``russia oil''} and \textit{``ukraine conflict''}, are especially in line with the commodities market.\footnote{\href{https://shorturl.at/lqWZ3}{shorturl.at/lqWZ3 Accessed 23 December 2022}} Additionally, we note that the coverage bi-grams also identify persons and organizations including \textit{``central banks'', ``paul krugman''} and \textit{``gary gensler''}.
For instance, Paul Krugman is an economist and a contributor on Bloomberg\footnote{\href{https://shorturl.at/dHIX8}{shorturl.at/dHIX8 Accessed 23 December 2022}} and Gary Gensler has been Chairperson of the U.S. Securities and Exchange Commission since 2021.\footnote{https://www.sec.gov/} Overall, we find that some bi-grams depict language use in coverage attributable to events such as \textit{``president elect''} or \textit{``rate hikes''}, referring respectively to the general elections and to the (imminent) announcement of interest rate hikes, and to discussions of these types of events.\footnote{\href{https://shorturl.at/mwBGL}{shorturl.at/mwBGL Accessed 23 December 2022}} Figures \ref{fig:sentiment_bws}, \ref{fig:sentiment_bsm} and \ref{fig:sentiment_yfm} show the top 6 economic (financial)-related keywords identified in the bi-grams that best describe the financial markets. The keywords (\textit{consumer, economy, inflation, loan, market} and \textit{recession}) are not exhaustive and can be expanded. The choice of these keywords is arbitrary; we believe that they reflect and pertain to broad discussions regarding recent events. We examine how frequently sentiment-associated words are preceded by these keywords, contributing positive or negative sentiments, {\color{black}with positive or negative values indicating the direction of the sentiment}. {\color{black}We note that bi-grams stemming from the previously mentioned keywords identify the most common economic events}. For instance, the bi-gram \textit{``demand consumer''} has a negative sentiment while \textit{``confidence consumer''} has a positive sentiment.
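The keyword-anchored sentiment scoring can be sketched as follows; the miniature lexicon, its scores, and the sample counts are hypothetical placeholders (the study does not spell out its exact lexicon):

```python
from collections import defaultdict

# Hypothetical mini-lexicon: positive scores > 0, negative < 0.
LEXICON = {"confidence": 1.0, "growth": 1.0, "share": 0.5,
           "risk": -1.0, "worse": -1.0, "stalling": -0.8, "demand": -0.3}

# The six keywords examined in the sentiment figures.
KEYWORDS = {"consumer", "economy", "inflation", "loan", "market", "recession"}

def keyword_sentiment(bigram_counts):
    """Signed sentiment per keyword bi-gram: for every bi-gram containing a
    keyword, weight the companion word's lexicon score by the bi-gram count."""
    scores = defaultdict(float)
    for bigram, freq in bigram_counts.items():
        w1, w2 = bigram.split()
        for kw, other in ((w1, w2), (w2, w1)):
            if kw in KEYWORDS and other in LEXICON:
                scores[f"{other} {kw}"] += LEXICON[other] * freq
    return dict(scores)

counts = {"confidence consumer": 4, "demand consumer": 3, "stalling economy": 2}
print(keyword_sentiment(counts))
```

Under this toy lexicon, \textit{``confidence consumer''} scores positive while \textit{``demand consumer''} and \textit{``stalling economy''} score negative, matching the directions reported above.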
Such bi-grams reflect events such as supply-chain issues or consumers' ability to buy items.\footnote{\href{https://shorturl.at/brJOP}{shorturl.at/brJOP Accessed 23 December 2022}} Further, \textit{``inflation''} and \textit{``economy''} discourse cites either \textit{``growth/growing/good''} or \textit{``risk(s)/worse/stalling''}, painting pictures of positive and negative sentiment, respectively. The bi-gram \textit{``stalling economy''} essentially describes an economy with a growth rate below some threshold level, consistent with the possible effects of the COVID-19 pandemic. The bi-gram \textit{``growth loan''} carries the maximum positive sentiment across the BLW, BSM and YFM data, and our analysis correctly captures the direction of such a bi-gram. For the financial keyword \textit{``market''}, however, the bi-gram with \textit{``share''} is the most positive, with \textit{``demand/debt''} the opposite. A discussion relating to market share could be attributed to an organization, as the bi-grams of Figures \ref{fig:coverage_bws_ngrams}, \ref{fig:coverage_bsm_ngrams} and \ref{fig:coverage_yfm_ngrams} identify some organizations. Interestingly, we observe that the NER analysis also identified such organizations. \noindent \textbf{Named entity recognition.} Within the context of (financial) news coverage, individuals (or persons) are either introduced as panelists or contributors, or mentioned (cited) to affirm a statement. Likewise, some individuals are associated with an organization (or institution), or discussions are sometimes centered around an organization based on what might be trending. To distinguish between persons and organizations in our corpora, we employed an NER model as described in \S\ref{ner}. Figure \ref{fig:ner} shows the distinct entities for the BLW corpus in each quarter of 2020. Each quarter shows the frequencies of the top 60 entities (organizations and persons).
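Aggregating recognized entities into per-quarter frequency rankings, as in Figure \ref{fig:ner}, can be sketched with a counter over (quarter, type, name) triples; the sample mentions below are hypothetical stand-ins for the tagger's output:

```python
from collections import Counter, defaultdict

def top_entities(mentions, k=2):
    """mentions: iterable of (quarter, entity_type, name) triples, e.g. as
    emitted by an NER tagger over transcripts. Returns the k most frequent
    names per (quarter, entity_type) bucket."""
    buckets = defaultdict(Counter)
    for quarter, etype, name in mentions:
        buckets[(quarter, etype)][name] += 1
    return {key: [n for n, _ in c.most_common(k)] for key, c in buckets.items()}

# Hypothetical mentions illustrating the aggregation.
mentions = [
    ("2020Q1", "ORG", "Tesla"), ("2020Q1", "ORG", "Tesla"),
    ("2020Q1", "ORG", "Congress"), ("2020Q1", "PER", "Donald Trump"),
    ("2020Q2", "PER", "Larry Summers"), ("2020Q2", "PER", "Larry Summers"),
    ("2020Q2", "PER", "George Floyd"),
]
print(top_entities(mentions))
```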
A closer look at the organizations across all quarters reveals the following as having the highest mentions or being most often discussed: \textit{Tesla, Congress} and \textit{US Treasury}. Note that \textit{Congress} represents both the ``House of Representatives'' and the ``Senate''. Within the various quarters, we noticed that some major technology companies were frequently discussed: \textit{IBM, Huawei, Facebook, Apple} and \textit{Amazon}. Mentions of \textit{Congress}, for example, often concerned stimulus package discussions.\footnote{www.bloomberg.com/news/articles/2020-03-25/what-s-in-congress-2-trillion-coronavirus-stimulus-package} We further observed that the identified organizational entities are either governmental or private financial institutions, such as \textit{US Treasury, Goldman Sachs} and \textit{Rock Creek}. Besides, there were organizations of global and continental significance in the coverage, for example, the \textit{World Bank, OPEC} and \textit{ECB}---the European Central Bank. Similarly, the news coverage of persons ranges from world leaders and politicians to investment moguls, financial experts, heads of institutions and others. In the first quarter of 2020, we noticed that the name of \textit{Donald Trump}, the then president of the United States of America (USA), was frequently mentioned; this suggests that the events of January 6, 2020, and subsequent events generated tremendous discussion in the financial and economic news space. We also noticed a high frequency of mentions of \textit{Roger Ferguson}, the former president and Chief Executive Officer (CEO) of \textit{TIAA}---an organization and a contributor on BLW. In the subsequent quarters, \textit{Larry Summers}, a renowned financial expert and contributor; \textit{Joe Biden}, the current president of the USA; and \textit{Donald Trump} were highly mentioned.
Concerning \textit{Larry Summers}, he provides insight into the prospective economic and financial outlook following certain announcements. Of particular note was \textit{George Floyd} in the financial news coverage in the second quarter of 2020; his murder sparked numerous protests and moments of reckoning that reverberated far beyond the United States. Based on the NER sample results, we observed that financial news coverage covers not only finance and economic topics but also general topics. In the next section, we identify some common topics from the news coverage. \noindent \textbf{Topic modeling.} Figures \ref{figure_blw}, \ref{figure_bsm} and \ref{figure_yfm} show the top 55 salient topics from the BLW, BSM and YFM, respectively. The figures are organized into six sub-figures at the rate of ten topics per sub-figure to provide a better visualization of the frequency of topics over time for the period from January 2020 to September 2022. Figure \ref{blw:a} shows the ten most frequent topics addressed in BLW. These topics include ``\textit{inflation}'', ``\textit{vaccine}'', ``\textit{China-related news}'' and ``\textit{Tesla and electric vehicles}''. Particularly, for the topic ``\textit{Tesla and electric vehicles}'', a high spike was observed in early 2020, followed by a drop in frequency over the first quarter of 2021. Even though electric vehicles seemed less frequently discussed in favor of topics related to the vaccine and COVID-19, we observed that the discussions around electric vehicles remained one of the most salient topics in the market coverage. We noticed many spikes in Figures \ref{blw:a} and \ref{bsm:a} for the topic ``\textit{vaccine}'' during 2020 and for the topic ``\textit{mask-wearing}'' in Figure \ref{yh:c}. One reason that could partly explain this observation is that pharmaceutical companies such as Pfizer and BioNTech started research on developing vaccines for COVID-19 during that period and announced promising results.
In November 2020, Pfizer announced the release of its vaccine, followed by a vaccination campaign worldwide.\footnote{\href{https://www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-announce-vaccine-candidate-against}{Pfizer and BioNTech Announce Vaccine Candidate Against COVID-19. \url{shorturl.at/mHNV4}}} The COVID-19 pandemic has caused major social and economic impacts on the lives of people across the world. Its direct impacts include unemployment in the labor market~(Figures \ref{blw:b}, \ref{bsm:a} and \ref{yh:c}) and inflation in the financial market. Figure \ref{yh:c} shows an increase in frequency over time for the topic ``\textit{inflation}'' from 2021 to 2022. We note that this topic received considerable attention in the market coverage, along with related topics such as ``\textit{home and housing mortgage}'', due to the increase in mortgage rates, and ``\textit{recession}''. Figures \ref{blw:b}, \ref{bsm:a} and \ref{yh:c} highlight the evolution over time of topics such as ``\textit{Disney and Netflix content}'' and ``\textit{Russia sanctions and Ukraine}''. The Russian invasion of Ukraine in early 2022 caused knock-on effects worldwide. Sanctions imposed on Russia by the United States and other countries engendered multilateral effects on the global economy in general and the stock market in particular. This offers a compelling partial explanation for the many high spikes that we observed for the topic of Russia and Ukraine. The topic ``\textit{Disney and Netflix content}'' surged during the period of the first COVID-19 lockdown. Note that lockdown was one of the restrictive measures taken by governments to contain the ongoing pandemic.
During the lockdown period, many people spent most of their time on streaming platforms as they could not go out. Online streaming platform subscriptions increased, along with the corresponding stock prices. \section{Discussion} News coverage provides the context and analysis needed to aid viewers in gaining insight lacking from other news sources (newspapers or blogs) through discussions between anchors and guest experts. In this paper, we collected news coverage data from YouTube and Bloomberg related to financial and economic news to identify the most discussed topics from transcribed video news coverage. The primary goal of our research was to identify the similarities of news coverage topics regarding major financial events across different news channels. Our findings demonstrate the usefulness of considering video (visual) content as a data source. By analyzing the similarity across channels, we observe some related bi-gram keywords and entities (organizations and persons). The bi-grams provide an overview of the structure of language use during news coverage through discussions and headline briefings of news segments. Our results show that news coverage evolved, and discussions were often centered on recent events surrounding specific financial markets. Secondly, we identified major financial events through the evolution of topics over time and their frequencies. Our topic models broadly reflect the evolution and variation of topics related to financial events. Important to note are the global events documented in various studies that are in tandem with financial markets, such as the Russo-Ukrainian War \cite{Lo:22}. Prior work found an effect of news coverage on trading and prices \cite{Engelberg:11,Haroon:20}, while our results identify the narrative of news coverage without any relation to either trading behavior or price volatility.
Our results can be used to create dashboards portraying outputs stemming from financial market coverage from various reliable media channels. This can help anticipate investment actions or predict market pricing based on the news coverage, and identify the most frequently cited entities to inform investment choices. Further, the results surface the less frequently cited entities, which one can monitor to determine whether they constitute a new market opportunity or might surge overnight. \section{Conclusion} In this paper, we characterize financial market coverage from YouTube. To this end, we utilize OpenAI's Whisper speech-to-text model to generate a text corpus of market coverage YouTube videos from Bloomberg and Yahoo Finance. Then, we use natural language processing to gain insights into language usage in financial market coverage. Additionally, we investigate the prevalence and evolution of trending topics and the influence of certain persons and organizations on the financial market. We discover similarities between topics and exhibit content coordination regarding major financial events. Through this characterization, we gain a better understanding of the dynamics of financial market coverage and valuable insights into current financial events and the global economy. We show how our findings can be used to predict market performance and pricing and to support investment actions and decision-making. In the future, we would like to experiment with market forecasts using a holistic model that combines financial market coverage and stock prices and includes features such as n-grams, NER, topic modeling, and emotions.
\section{Introduction} The paradigm of adaptive nonparametric inference has developed a fairly complete theory for estimation and testing -- we mention the key references \cite{L90, DJKP95, DJKP96, LMS97, BBM99, BM01, S96} -- but the theory of adaptive confidence statements has not succeeded to the same extent, and consists in a significant part of negative results that are in a somewhat puzzling contrast to the fact that adaptive estimators exist. The topic of confidence sets is, however, of vital importance, since it addresses the question of whether the accuracy of adaptive estimation can itself be estimated, and to what extent the abundance of adaptive risk bounds and oracle inequalities in the literature are useful for statistical inference. In this article we give a set of necessary and sufficient conditions for when confidence sets that adapt to unknown smoothness in $L^2$-diameter exist in the problem of nonparametric density estimation on $[0,1]$. The scope of our techniques extends without difficulty to density estimation on the real line, and also to other common function estimation problems such as nonparametric regression or Gaussian white noise. Our focus on $L^2$-type confidence sets is motivated by the fact that they involve the most commonly used loss function in adaptive estimation problems, and so deserve special attention in the theory of adaptive inference. We can illustrate some main ideas by the simple example of two fixed Sobolev-type classes. Let $X_1, \dots, X_n$ be i.i.d.~with common probability density $f$ contained in the space $L^2$ of square-integrable functions on $[0,1]$. Let $\Sigma(r)=\Sigma(r,B)$ be a Sobolev ball of probability densities on $[0,1]$, of Sobolev-norm radius $B$ -- see Section \ref{inf} for precise definitions -- and consider adaptation to the submodel $\Sigma(s) \subset \Sigma(r)$, $s>r$. 
An adaptive estimator $\hat f_n$ exists, achieving the optimal rate $n^{-s/(2s+1)}$ for $f \in \Sigma(s)$ and $n^{-r/(2r+1)}$ otherwise, in $L^2$-risk; see for instance Theorem \ref{adapt} below. A confidence set is a random subset $C_n=C(X_1, \dots, X_n)$ of $L^2$. Define the $L^2$-diameter of a norm-bounded subset $C$ of $L^2$ as \begin{equation} \label{2diam} |C| = \inf \left\{ \tau: C \subset \{h: \|h-g\|_2 \le \tau\} \text{ for some } g \in L^2 \right\}, \end{equation} equal to the radius of the smallest $L^2$-ball containing $C$. For $G \subset L^2$ set $\|f-G\|_2= \inf_{g \in G}\|f-g\|_2$ and define, for $\rho_n \ge 0$ a sequence of real numbers, the separated sets $$\tilde \Sigma(r, \rho_n) \equiv \tilde \Sigma(r, s, B, \rho_n) = \{f \in \Sigma(r): \|f-\Sigma(s)\|_2 \ge \rho_n\}.$$ Obviously $\tilde \Sigma(r,0)=\Sigma(r)$, but for $\rho_n>0$ these sets are proper subsets of $\Sigma(r) \setminus \Sigma(s)$. We are interested in adaptive inference in the model $$\mathcal P_n \equiv \Sigma(s) \cup \tilde \Sigma(r, \rho_n)$$ under minimal assumptions on the size of $\rho_n$. We shall say that the confidence set $C_n$ is $L^2$-adaptive and honest for $\mathcal P_n$ if there exists a constant $M$ such that for every $n \in \mathbb N$, \begin{equation}\label{fixad} \sup_{f \in \Sigma(s)} {\Pr}_f\left\{|C_n| > M n^{-s/(2s+1)}\right\} \le \alpha', \end{equation} \begin{equation}\label{fixadr} \sup_{f \in \tilde \Sigma(r,\rho_n)} {\Pr}_f\left\{|C_n| > M n^{-r/(2r+1)}\right\} \le \alpha' \end{equation} and if \begin{equation} \label{hon} \inf_{f \in \mathcal P_n} {\Pr}_f\left\{f \in C_n \right\} \ge 1-\alpha -r_n \end{equation} where $r_n \to 0$ as $n \to \infty$. We regard the constants $\alpha, \alpha'$ as given 'significance levels'. \begin{theorem} \label{fix} Let $0<\alpha, \alpha ' <1, s>r>1/2$ and $B>1$ be given. 
\newline A) An $L^2$-adaptive and honest confidence set for $\tilde \Sigma(r, \rho_n) \cup \Sigma(s)$ exists if one of the following conditions is satisfied: \newline i) $s \le 2r$ and $\rho_n \ge 0$ \newline ii) $s>2r$ and $$\rho_n \ge M n^{-r/(2r+1/2)}$$ for every $n \in \mathbb N$ and some constant $M$ that depends on $\alpha, \alpha', r, B$. \newline B) If $s>2r$ and $C_n$ is an $L^2$-adaptive and honest confidence set for $\tilde \Sigma(r, \rho_n) \cup \Sigma(s)$, for every $\alpha, \alpha'>0$, then necessarily $$\liminf_n ~\rho_n n^{r/(2r+1/2)}>0.$$ \end{theorem} We note first that for $s \le 2r$ adaptive confidence sets exist without any additional restrictions -- this is a main finding of the papers \cite{JL03, CL06, RV06} and has important precursors in \cite{L98, HL02, B04}. It is based on the idea that under the general assumption $f \in \Sigma(r)$ we may estimate the $L^2$-risk of any adaptive estimator of $f$ at precision $n^{-r/(2r+1/2)}$ which is $O(n^{-s/(2s+1)})$ precisely when $s \le 2r$. As soon as one wishes to adapt to smoothness $s>2r$, however, this cannot be used anymore, and adaptive confidence sets then require separation of $\Sigma(s)$ and $\Sigma(r) \setminus \Sigma(s)$ (i.e., $\rho_n>0$). Maximal subsets of $\Sigma(r)$ over which $L^2$-adaptive confidence sets do exist in the case $s>2r$ are given in Theorem \ref{fix}, with separation sequence $\rho_n$ characterised by the asymptotic order $n^{-r/(2r+1/2)}$. This rate has, as we show in this article, a fundamental interpretation as the minimax rate of testing between the composite hypotheses \begin{equation} \label{compt} H_0: f \in \Sigma(s) ~~ \text{against} ~~H_1: f \in \tilde \Sigma(r, \rho_n). 
\end{equation} The occurrence of this rate in Theorem \ref{fix} parallels similar findings in Theorem 2 in \cite{HN11} in the different situation of confidence \textit{bands}, and is inspired by the general ideas in \cite{GN10, HN11, KNP11, B11}, which attempt to find 'maximal' subsets of the usual parameter spaces of adaptive estimation for which honest confidence statements can be constructed. Our results can be construed as saying that for $s>2r$ confidence sets that are $L^2$-adaptive exist precisely over those subsets of the parameter space $\Sigma(r)$ for which the target $s$ of adaptation is testable in a minimax way. Our solution of (\ref{compt}) is achieved in Proposition \ref{test} below, where we construct consistent tests for general composite problems of the kind $$H_0: f \in \Sigma ~~ \text{against} ~~H_1: f \in \Sigma(r), \|f-\Sigma\|_2 \ge \rho_n, ~~~\Sigma \subset \Sigma(r),$$ whenever the sequence $\rho_n$ is at least of the order $\max(n^{-r/(2r+1/2)}, r_n),$ where $r_n$ is related to the complexity of $\Sigma$ by an entropy condition. In the case $\Sigma = \Sigma(s)$ with $s>2r$ relevant here we can establish $r_n=n^{-s/(2s+1)}=o(n^{-r/(2r+1/2)}),$ so that this test is minimax in light of lower bounds in \cite{I86, I93}. While the case of two fixed smoothness classes in Theorem \ref{fix} is appealing in its conceptual simplicity, it does not describe the typical adaptation problem, where one wants to adapt to a continuous smoothness parameter $s$ in a window $[r,R]$. Moreover the radius $B$ of $\Sigma(s)$ is, unlike in Theorem \ref{fix}, typically unknown, and the usual practice of 'undersmoothing' to deal with this problem incurs a rate-penalty for adaptation that we wish to avoid here. Instead, we shall address the question of simultaneous exact adaptation to the radius $B$ and to the smoothness $s$. We first show that such strong adaptation is possible if $R<2r$, see Theorem \ref{rv}.
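The rate comparison invoked above, namely that $n^{-r/(2r+1/2)} = O(n^{-s/(2s+1)})$ precisely when $s \le 2r$, follows from an elementary manipulation of the exponents, spelled out here for convenience:
\begin{align*}
\frac{r}{2r+1/2} \ge \frac{s}{2s+1}
&\iff \frac{2r}{4r+1} \ge \frac{s}{2s+1}
\iff 2r(2s+1) \ge s(4r+1)\\
&\iff 4rs + 2r \ge 4rs + s
\iff s \le 2r.
\end{align*}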
In the general case $R\ge 2r$ we can use the ideas from Theorem \ref{fix} as follows: starting from a fixed largest model $\Sigma(r, B_0)$ with $r, B_0$ known, we discretise $[r,R]$ into a finite grid $\mathcal S$ consisting of progressions $r, 2r, 4r, \dots$, and then use the minimax test for (\ref{compt}) in an iterated way to select the optimal value in $\mathcal S$. We then use the methods underlying Theorem \ref{fix} Ai) in the selected window, and show that this gives honest adaptive confidence sets over 'maximal' parameter subspaces $\mathcal P_n \subset\Sigma(r, B_0)$. In contrast to what is possible in the $L^\infty$-situation studied in \cite{B11}, the sets $\mathcal P_n$ asymptotically contain all of $\Sigma(r, B_0)$, highlighting yet another difference between the $L^2$- and $L^\infty$-theory. See Proposition \ref{dish} and Theorem \ref{cont} below for details. We also present a new lower bound which implies that for $R>2r$ even 'pointwise in $f$' inference is impossible for the full parameter space of probability densities in the $r$-Sobolev space, see Theorem \ref{imp}. In other words, even asymptotically one has to remove certain subsets of the maximal parameter space if one wants to construct confidence sets that adapt to arbitrary smoothness degrees. One way to do so is to restrict the space a priori to a fixed ball $\Sigma(r,B_0)$ of known radius as discussed above, but other assumptions come to mind, such as 'self-similarity' conditions employed in \cite{PT00, GN10, KNP11, B11} for confidence intervals and bands. We discuss briefly how this applies in the $L^2$-setting. We state all main results other than Theorem \ref{fix} above in Sections \ref{inf} and \ref{drei}, and proofs are given, in a unified way, in Section \ref{prf}. \section{The Setting} \label{inf} \subsection{Wavelets and Sobolev-Besov Spaces} \label{sobsec} Denote by $L^2:=L^2([0,1])$ the Lebesgue space of square integrable functions on $[0,1]$, normed by $\|\cdot\|_2$.
For integer $s$ the classical Sobolev spaces are defined as the spaces of functions $f \in L^2$ whose (distributional) derivatives $D^\alpha f, 0<\alpha \le s,$ all lie in $L^2$. One can describe these spaces, for $s>0$ any real number, in terms of the natural sequence space isometry of $L^2$ under an orthonormal basis. We opt here to work with wavelet bases: for index sets $\mathcal Z \subset \mathbb Z, \mathcal Z_l \subset \mathbb Z$ and $J_0 \in \mathbb N$, let $$\{\phi_{J_0 m}, \psi_{lk}: m \in \mathcal Z, k \in \mathcal Z_l, l \ge J_0+1, l \in \mathbb N \}$$ be a compactly supported orthonormal wavelet basis of $L^2$ of regularity $S$, where as usual, $\psi_{lk}=2^{l/2}\psi_k(2^l\cdot)$. We shall only consider Cohen-Daubechies-Vial \cite{CDV93} wavelet bases where $|\mathcal Z_l|=2^l, |\mathcal Z| \le c(S)<\infty, J_0 \equiv J_0(S)$. We define, for $\langle f, g\rangle= \int_0^1 fg$ the usual $L^2$-inner product, and for $0 \le s <S$, the Sobolev (-type) norms \begin{eqnarray} \label{sobolev} \|f\|_{s,2}&:=& \max\left(2^{J_0s}\sqrt {\sum_{k \in \mathcal Z} \langle f, \phi_{J_0 k}\rangle^2},\sup_{l \ge J_0+1} 2^{ls} \sqrt{\sum_{k \in \mathcal Z_l}\langle f, \psi_{lk} \rangle^2 } \right) \notag \\ &=& \max \left(2^{J_0s}\|\langle f, \phi_{J_0 \cdot}\rangle\|_2, \sup_{l \ge J_0+1} 2^{ls}\|\langle f, \psi_{l \cdot} \rangle\|_2 \right) \end{eqnarray} where in slight abuse of notation we use the symbol $\|\cdot\|_2$ for the sequence norms on $\ell^2(\mathcal Z_l), \ell^2(\mathcal Z)$ as well as for the usual norm on $L^2$. Define moreover the Sobolev (-type) spaces $$W^s \equiv B^s_{2\infty} = \{f \in L^2: \|f\|_{s,2}<\infty \}.$$ We note here that $W^s$ is not the classical Sobolev space -- in this case the supremum over $l \ge J_0+1$ would have to be replaced by summation over $l$ -- but the present definition gives rise to the slightly larger Besov space $B^s_{2 \infty}$, which will turn out to be the natural exhaustive class for our results below. 
We still refer to them as Sobolev spaces for simplicity, and since the main idea is to measure smoothness in $L^2$. We understand $W^s$ as spaces of continuous functions whenever $s>1/2$ (possible by standard embedding theorems). We shall moreover set, in abuse of notation, $\phi_{J_0 k} \equiv \psi_{J_0k}$ (which does not equal $2^{-1/2}\psi_{J_0+1,k}(2^{-1}\cdot)$) in order for the wavelet series of a function $f \in L^2$ to have the compact representation $$f=\sum_{l=J_0}^\infty \sum_{k \in \mathcal Z_{l}} \psi_{lk} \langle \psi_{lk},f\rangle,$$ with the understanding that $\mathcal Z_{J_0}=\mathcal Z$. The wavelet projection $\Pi_{V_j}(f)$ of $f \in L^2$ onto the span $V_j$ in $L^2$ of $$\{\phi_{J_0 m}, \psi_{lk}: m \in \mathcal Z, k \in \mathcal Z_l, J_0+1 \le l \le j\}$$ equals $$K_j(f)(x) \equiv \int_0^1 K_j(x,y)f(y)dy \equiv 2^j \int_0^1 K(2^jx, 2^jy)f(y)dy = \sum_{l=J_0}^{j-1} \sum_{k \in \mathcal Z_l}\langle f, \psi_{lk} \rangle \psi_{lk}(x) $$ where $K(x,y)=\sum_k \phi_{J_0 k}(x) \phi_{J_0 k}(y)$ is the wavelet projection kernel. \subsection{Adaptive Estimation in $L^2$} Let $X_1, \dots, X_n$ be i.i.d.~with common density $f$ on $[0,1]$, with joint distribution equal to the first $n$ coordinate projections of the infinite product probability measure ${\Pr}_f$. Write $E_f$ for the corresponding expectation operator. We shall throughout make the minimal assumption that $f \in W^r$ for some $r>1/2$, which implies in particular, by Sobolev's lemma, that $f$ is continuous and bounded on $[0,1]$. The adaptation problem arises from the hope that $f \in W^s$ for some $s$ significantly larger than $r$, without wanting to commit to a particular a priori value of $s$. In this generality the problem is still not meaningful, since the regularity of $f$ is not only described by containment in $W^s$, but also by the size of the Sobolev norm $\|f\|_{s,2}$. 
If one defines, for $0<s<\infty, 1 \le B<\infty$, the Sobolev-balls of densities \begin{equation} \label{ball} \Sigma(s,B):= \left\{f:[0,1] \to [0, \infty), \int_T f =1, \|f\|_{s,2} \le B \right\}, \end{equation} then Pinsker's minimax theorem (for density estimation) gives, as $n \to \infty$, \begin{equation} \label{pinsker} \inf_{T_n} \sup_{f \in \Sigma(s,B)} E_f \|T_n-f\|_2^2 \sim c(s) B^{2/(2s+1)} n^{-2s/(2s+1)} \end{equation} for some constant $c(s)>0$ depending only on $s$, and where the infimum extends over all measurable functions $T_n$ of $X_1, \dots, X_n$ (cf., e.g., the results in Theorem 5.1 in \cite{E08}). So any risk bound, attainable uniformly for elements $f \in \Sigma(s,B)$, cannot improve on $B^{2/(2s+1)}n^{-2s/(2s+1)}$ up to multiplicative constants. If $s,B$ are known then constructing estimators that attain this bound is possible, even with the asymptotically exact constant $c(s)$. The adaptation problem poses the question of whether estimators can attain such a risk bound without requiring knowledge of $B,s$. The paradigm of adaptive estimation has provided us with a positive answer to this problem, and one can prove the following result. \begin{theorem} \label{adapt} Let $1/2 <r \le R<\infty$ be given. Then there exists an estimator $\hat f_n = f(X_1, \dots, X_n, r, R)$ such that, for every $s \in [r,R]$, every $B \ge 1, U>0$, and every $n \in \mathbb N$, $$\sup_{f \in \Sigma(s,B), \|f\|_\infty \le U} E_f \|\hat f_n - f\|_2^2 \le c B^{2/(2s+1)}n^{-2s/(2s+1)}$$ for a constant $0<c<\infty$ that depends only on $r, R, U$. \end{theorem} If one wishes to adapt to the radius $B \in [1,B_0]$ then the canonical choice for $U$ is \begin{equation} \label{supbd} \sup_{f \in \Sigma(r,B_0)}\|f\|_\infty \le c(r) B_0 \equiv U < \infty, \end{equation} but other choices will be possible below. 
More elaborate techniques allow for $c$ to depend only on $s$, and even to obtain the exact asymptotic minimax 'Pinsker'-constant, see for instance Theorem 5.1 in \cite{E08}. We shall not study exact constants here, mostly to simplify the exposition and to focus on the main problem of confidence statements, but also since exact constants are asymptotic in nature and we prefer to give nonasymptotic bounds. From a 'pointwise in $f$' perspective we can conclude from Theorem \ref{adapt} that adaptive estimation is possible over the full continuous Sobolev scale $$\bigcup_{s \in [r,R], 1 \le B < \infty} \Sigma(s,B) = W^r \cap \left\{f:[0,1] \to [0, \infty), \int_0^1 f =1 \right\}; $$ for any probability density $f \in W^s, s \in [r,R]$, the single estimator $\hat f_n$ satisfies $$E_f\|\hat f_n -f\|_2^2 \le c \|f\|_{s,2}^{2/(2s+1)} n^{-2s/(2s+1)}$$ where $c$ depends on $r,R, \|f\|_\infty$. Since $\hat f_n$ does not depend on $B, U$ or $s$ we can say that $\hat f_n$ adapts to both $s \in [r,R]$ and $B \in [1, B_0]$ simultaneously. If one imposes an upper bound on $U$ then adaptation even holds for every $B \ge 1$. Our interest here is to understand what remains of this remarkable result if one is interested in adaptive \textit{confidence statements} rather than in risk bounds. \section{Adaptive Confidence Sets for Sobolev Classes} \label{drei} \subsection{Honest Asymptotic Inference} We aim to characterise those sets $\mathcal P_n$ consisting of uniformly bounded probability densities $f \in W^r$ for which we can construct adaptive confidence sets. More precisely, we seek random subsets $C_n$ of $L^2$ that depend only on known quantities, cover $f \in \mathcal P_n$ at least with prescribed probability $1-\alpha$, and have $L^2$-diameter $|C_n|$ adaptive with respect to radius and smoothness with prescribed probability at least $1-\alpha'$. 
To avoid discussing measurability issues we shall tacitly assume throughout that $C_n$ lies within an $L^2$-ball of radius $O(|C_n|)$ centered at a random variable $\tilde f_n \in L^2$. \begin{definition} [$L^2$-adaptive confidence sets] \label{cc} Let $X_1, \dots, X_n$ be i.i.d.~on $[0,1]$ with common density $f$. Let $0<\alpha, \alpha' <1$ and $1/2 <r \le R$ be given and let $C_n=C(X_1, \dots, X_n)$ be a random subset of $L^2$. $C_n$ is called $L^2$-adaptive and honest for a sequence of (nonempty) models $\mathcal P_n \subset W^r \cap \{f: \|f\|_\infty \le U\}$, if there exists a constant $L=L(r,R,U)$ such that for every $n \in \mathbb N$ \begin{equation} \label{adap} \sup_{f \in \Sigma(s,B) \cap \mathcal P_n} {\Pr}_f\left\{|C_n| > L B^{1/(2s+1)}n^{-s/(2s+1)}\right\} \le \alpha'~~\text{for every}~s\in [r,R], B \ge 1, \end{equation} (the condition being void if $\Sigma(s,B) \cap \mathcal P_n$ is empty) and \begin{equation} \label{hon} \inf_{f \in \mathcal P_n} {\Pr}_f\left\{f \in C_n \right\} \ge 1-\alpha -r_n \end{equation} where $r_n \to 0$ as $n \to \infty$. \end{definition} To understand the scope of this definition some discussion is necessary. First, the interval $[r,R]$ describes the range of smoothness parameters one wants to adapt to. Besides the restriction $1/2<r \le R < \infty$ the choice of this window of adaptation is arbitrary (although the values of $R,r$ influence the constants). Second, if we wish to adapt to $B$ in a fixed interval $[1,B_0]$ only, we may take $\mathcal P_n$ a subset of $\Sigma(r, B_0)$ and the canonical choice of $U=c(r)B_0$ from (\ref{supbd}). In such a situation (\ref{adap}) will still hold for every $B \ge 1$ although the result will not be meaningful for $B > B_0$. Otherwise we may impose an arbitrary uniform bound on $\|f\|_\infty$ and adapt to all $B \ge 1$. We require here the sharp dependence on $B$ in (\ref{adap}) and thus exclude the usual 'undersmoothed', near-adaptive, confidence sets in our setting. 
A natural 'maximal' model choice would be $\mathcal P_n = \Sigma(r,B_0) ~ \forall n$ with $B_0 \ge 1$ arbitrary. \subsection{The Case $R < 2r$.} A first result, the key elements of which have been discovered and discussed in \cite{L98, HL02, JL03, CL06, RV06}, is that $L^2$-adaptive confidence statements that parallel the situation of Theorem \ref{adapt} exist without any additional restrictions whatsoever, in the case where $R < 2r$, so that the window of adaptation is $[r,2r)$. The sufficiency part of the following theorem is a simple extension of results in Robins and van der Vaart \cite{RV06} in that it shows that adaptation is possible not only to the smoothness $s$, but also to the radius $B$. The main idea of the proof is that, if $R<2r$, the squared $L^2$-risk of $\hat f_n$ from Theorem \ref{adapt} can be estimated at a rate compatible with adaptation, by a suitable $U$-statistic. \begin{theorem} \label{rv} A) If $R<2r$, then for any $\alpha, \alpha'$, there exists a confidence set $C_n=C(X_1, \dots, X_n, r, R, \alpha, \alpha')$ which is honest and adaptive in the sense of Definition \ref{cc} for any choice $\mathcal P_n \equiv \Sigma(r,B_0) \cap \{f: \|f\|_\infty \le U\}, B_0 \ge 1, U>0$. \newline B) If $R \ge 2r$, then for $\alpha, \alpha'$ small enough no $C_n$ as in A) exists. \end{theorem} We emphasise that the confidence set $C_n$ constructed in the proof of Theorem \ref{rv} depends only on $r,R,\alpha, \alpha'$ and does not require knowledge of $B_0$ or $U$. Note however that the sequence $r_n$ from Definition \ref{cc} does depend on $B_0$ -- one may thus use $C_n$ without any prior choice of parameters, but evaluation of its coverage is still relative to the model $\Sigma(r,B_0)$. Arbitrariness of $B_0, U$ implies, by taking $B_0=\|f\|_{s,2}, U=\|f\|_\infty$ in the above result, that 'pointwise in $f$' adaptive inference is possible for any probability density in the Sobolev space $W^r$.
\begin{corollary} \label{rvp} Let $0<\alpha, \alpha' <1$ and $1/2 <r \le R$. Assume $R<2r$. There exists a confidence set $C_n=C(X_1, \dots, X_n, r, R, \alpha, \alpha')$ such that \newline i) $\liminf_n {\Pr}_f\left\{f \in C_n \right\} \ge 1-\alpha~~~ \text{for every probability density } f \in W^r,$ and \newline ii) $\limsup_n {\Pr}_f\{|C_n| > L \|f\|_{s, 2}^{1/(2s+1)}n^{-s/(2s+1)}\} \le \alpha'~~~ \text{for every probability density } f \in W^s, s\in [r,R],$ and some finite positive constant $L=L(r,R, \|f\|_\infty)$. \end{corollary} \subsection{The Case of General $R$} If we allow for general $R \ge 2r$, honest inference is not possible without restricting $\mathcal P_n$ further. In fact even the weaker 'pointwise in $f$' result of the kind of Corollary \ref{rvp} becomes impossible when $R > 2r$. This is a consequence of the following lower bound. \begin{theorem} \label{imp} Fix $0<\alpha<1/2$, let $s \ge r$ be arbitrary. A confidence set $C_n=C(X_1, \dots, X_n)$ in $L^2$ cannot satisfy \newline i) $\liminf_n {\Pr}_f\{f \in C_n\} \ge 1- \alpha ~~~\text{for every probability density } f \in W^r$, and \newline ii) $|C_n| = O_{{\Pr}_f}(r_n) ~~~\text{for every probability density } f \in W^s$ \newline at any rate $r_n = o(n^{-r/(2r+1/2)})$. \end{theorem} For $R>2r$ we have $n^{-R/(2R+1)} = o(n^{-r/(2r+1/2)})$. Thus even from a 'pointwise in $f$' perspective a confidence procedure cannot adapt to the entirety of densities in a Sobolev space $W^r$ when $R>2r$. On the other hand if we restrict to proper subsets of $W^r$, the situation may qualitatively change. For instance if we wish to adapt to submodels of a fixed Sobolev ball $\Sigma(r, B_0)$ with $r, B_0$ known, we have the following result. \begin{proposition} \label{dish} Let $0<\alpha, \alpha' <1$ and $1/2 <r \le R, B_0 \ge 1$.
There exists a confidence set $C_n=C(X_1,\dots, X_n, B_0,r,R, \alpha, \alpha')$ such that \newline i) $\liminf_n {\Pr}_f\left\{f \in C_n \right\} \ge 1-\alpha~~~ \text{for every probability density } f \in \Sigma(r, B_0),$ and \newline ii) $\limsup_n {\Pr}_f\{|C_n| > L \|f\|_{s, 2}^{1/(2s+1)}n^{-s/(2s+1)}\} \le \alpha'~~~ \text{for every probability density } f \in \Sigma(s, B_0), s\in [r,R],$ and some finite positive constant $L=L(r,R, \|f\|_\infty)$. \end{proposition} Now if we compare Proposition \ref{dish} to Theorem \ref{rv} we see that there exists a genuine discrepancy between honest and pointwise in $f$ adaptive confidence sets when $R\ge 2r$. Of course Proposition \ref{dish} is not useful for statistical inference as the index $n$ from when onwards coverage holds depends on the unknown $f$. The question arises whether there are meaningful \textit{maximal} subsets of $\Sigma(r, B_0)$ for which honest inference is possible. The proof of Proposition \ref{dish} is in fact based on the construction of subsets $\mathcal P_n$ of $\Sigma(r,B_0)$ which grow dense in $\Sigma(r,B_0)$ and for which honest inference is possible. This approach follows the ideas from Part Aii) in Theorem \ref{fix}, and works as follows in the setting of continuous $s \in [r,R]$: assume without loss of generality that $2^{N-1}r<R<2^{N}r$ for some $N \in \mathbb N, N >1$, and define the grid $$\mathcal S=\{s_m\}_{m=1}^N = \{r, 2r, 4r, \dots, 2^{N-1}r\}.$$ Note that $\mathcal S$ is independent of $n$. Define, for $s \in \mathcal S \setminus \{s_N\}$, \begin{equation*} \tilde \Sigma(s, \rho):= \tilde \Sigma(s, B_0, \mathcal S, \rho) = \left\{f \in \Sigma(s, B_0): \|f-\Sigma(t, B_0)\|_2 \ge \rho ~\forall t>s, t \in \mathcal S \right\}. \end{equation*} We will choose the separation rates $$\rho_n(s) \sim n^{-s/(2s+1/2)},$$ equal to the minimax rate of testing between $\Sigma(s, B_0)$ and any submodel $\Sigma(t, B_0)$ for $t \in \mathcal S, t>s$.
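The grid and the separation rates can be sketched as follows; this is a minimal illustration in which the function names and the convention for the boundary case $2^{N-1}r < R \le 2^N r$ are our own choices:

```python
def smoothness_grid(r, R):
    """Geometric grid S = {r, 2r, 4r, ..., 2^{N-1} r}, assuming R > 2r and
    (without loss of generality) 2^{N-1} r < R <= 2^N r."""
    assert R > 2 * r
    S = [float(r)]
    while 2 * S[-1] < R:
        S.append(2 * S[-1])
    return S

def separation_rate(n, s):
    """rho_n(s) ~ n^{-s/(2s+1/2)}: the minimax rate of testing Sigma(s, B0)
    against the smoother submodels Sigma(t, B0), t > s in S."""
    return n ** (-s / (2 * s + 0.5))
```

Note that smoother classes are easier to separate: `separation_rate` is decreasing in $s$ for fixed $n$, so the innermost hypotheses in the nested family carry the smallest separation.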
The resulting model is therefore, for $M$ some positive constant, $$\mathcal P_n(M, \mathcal S) = \Sigma(s_N, B_0) \cup \left(\bigcup_{s \in \mathcal S \setminus \{s_N\}} \tilde \Sigma(s, M\rho_n(s))\right).$$ The main idea behind the following theorem is to first construct a minimax test for the nested hypotheses $$\{H_s: f \in \tilde \Sigma(s, M \rho_n(s))\}_{s \in \mathcal S \setminus \{s_N\}},$$ then to estimate the risk of the adaptive estimator $\hat f_n$ from Theorem \ref{adapt} under the assumption that $f$ belongs to the smoothness hypothesis selected by the test, and finally to construct a confidence set centered at $\hat f_n$ based on this risk estimate (as in the proof of Theorem \ref{rv}). \begin{theorem} \label{cont} Let $R > 2r$ and $B_0 \ge 1$ be arbitrary. There exists a confidence set $C_n=C(X_1,\dots, X_n, B_0,r,R, \alpha, \alpha')$, honest and adaptive in the sense of Definition \ref{cc}, for $\mathcal P_n = \mathcal P_n(M, \mathcal S), n \in \mathbb N,$ with $M$ a large enough constant and $U$ as in (\ref{supbd}). \end{theorem} First note that, since $\mathcal S$ is independent of $n$, $\mathcal P_n(M, \mathcal S) \nearrow \Sigma(r, B_0)$ as $n \to \infty$, so that the model $\mathcal P_n(M, \mathcal S)$ grows dense in the fixed Sobolev ball, which for known $B_0$ is the full model. This implies in particular Proposition \ref{dish}. An important question is whether $\mathcal P_n(M, \mathcal S)$ was taken to grow as fast as possible as a function of $n$, or in other words, whether a smaller choice of $\rho_n(s)$ would have been possible. The lower bound in Theorem \ref{fix} implies that any faster choice for $\rho_n(s)$ makes honest inference impossible.
Indeed, if $C_n$ is an honest confidence set over $\mathcal P_n(M, \mathcal S)$ with a faster separation rate $\rho_n'=o(\rho_n(s))$ for some $s \in \mathcal S \setminus \{s_N\}$, then we can use $C_n$ to test $H_0: f \in \Sigma(s')$ against $H_1: f \in \tilde \Sigma(s, \rho_n')$ for some $s'>2s$, which by the proof of Theorem \ref{fix} gives a contradiction. \subsubsection{Self-Similarity Conditions} \label{ssc} The proof of Theorem \ref{cont} via testing smoothness hypotheses is strongly tied to knowledge of the upper bound $B_0$ for the radius of the Sobolev ball, but as discussed above, this cannot be avoided without contradicting Theorem \ref{imp}. Alternative ways to restrict $W^r$, other than constraining the radius, and which may be practically relevant, are given in \cite{PT00, GN10, KNP11, B11}. The authors instead restrict to 'self-similar' functions, whose regularity is similar at large and small scales. As the results in \cite{GN10, KNP11, B11} prove adaptation in $L^\infty$, they naturally imply adaptation also in $L^2$; the functions excluded, however, are now those whose norm is hard to estimate, rather than those whose norm is merely large. In the $L^2$-case we need to estimate $s$ only up to a small constant; as this is more favourable than the $L^\infty$-situation, one may impose weaker self-similarity assumptions, tailored to the $L^2$-situation. This can be achieved arguing in a similar fashion to Bull \cite{B11}, but we do not pursue this further in the present paper. \section{Proofs} \label{prf} \subsection{Some Concentration Inequalities} \label{ci} Let $X_i, i=1, 2, \dots,$ be the coordinates of the product probability space $(T,{\cal T},P)^{\mathbb N}$, where $P$ is any probability measure on $(T, \mathcal T)$, $P_n=n^{-1}\sum_{i=1}^n \delta_{X_i}$ the empirical measure, and $E$ expectation under $P^\mathbb N \equiv \Pr$. For $M$ any set and $H:M \to \mathbb R$, set $\|H\|_M=\sup_{m \in M}|H(m)|$.
We also write $Pf=\int_T fdP$ for measurable $f: T \to \mathbb R$. The following Bernstein-type inequality for canonical $U$-statistics of order two is due to Gin\'{e}, Lata{\l}a and Zinn \cite{GLZ00}, with refinements about the numerical constants in Houdr\'{e} and Reynaud-Bouret \cite{HR03}: let $R(x,y)$ be a symmetric real-valued function defined on $T \times T$, such that $ER(X_1,x)=0$ for all $x$, and let $$\Lambda^2_1=\frac{n(n-1)}{2} ER(X_1,X_2)^2,$$ $$\Lambda_2=n\sup\{E[R(X_1,X_2)\zeta(X_1)\xi(X_2)]:E\zeta^2(X_1)\le 1,E\xi^2(X_1)\le1\},$$ $$ \Lambda_3=\|nER^2(X_1,\cdot)\|^{1/2}_\infty,\ \ \Lambda_4=\|R\|_\infty.$$ Let moreover $U_n^{(2)}(R) = \frac{2}{n(n-1)} \sum_{i<j} R(X_i, X_j)$ be the corresponding degenerate $U$-statistic of order two. Then, there exists a universal constant $0<C<\infty$ such that for all $u>0$ and $n \in\mathbb N$: \begin{equation} \label{glz} \Pr\left\{\frac{n(n-1)}{2}|U_n^{(2)}(R)|>C(\Lambda_1u^{1/2}+\Lambda_2u+\Lambda_3u^{3/2}+\Lambda_4u^2)\right \} \le 6\exp\{- u\}. \end{equation} We will also need Talagrand's \cite{T96} inequality for empirical processes. Let $\cal F$ be a countable class of measurable functions on $T$ that take values in $[-1/2,1/2]$, or, if $\mathcal F$ is $P$-centered, in $[-1,1]$. Let $\sigma \le 1/2$ (or $\sigma \le 1$ if $\mathcal F$ is $P$-centered) and $V$ be any two numbers satisfying \begin{equation*} \sigma^2 \geq \|Pf^2\|_{\cal F},\ \ V \geq n\sigma^2+2E\left\|\sum_{i=1}^n(f(X_i)-Pf)\right\|_{\cal F}. \end{equation*} Bousquet's \cite{B03} version of Talagrand's inequality then states: for every $u >0$, \begin{equation}\label{bous} \Pr\left\{\left\|\sum_{i=1}^n(f(X_i)-Pf)\right\|_{\cal F}\ge E\left\|\sum_{i=1}^n(f(X_i)-Pf)\right\|_{\cal F}+u\right\}\le \exp\left(-\frac{u^2}{2V+\frac{2}{3}u}\right). \end{equation} A consequence of this inequality, derived in Section 3.1 in \cite{GN11}, is the following.
If $T=[0,1]$, $P$ has bounded Lebesgue density $f$ on $T$, and $f_n(j)=\int_0^1 K_j(\cdot,y)dP_n(y)$, then for $M$ large enough, every $j \ge 0, n \in \mathbb N$ and some positive constants $c, c'$ depending on $U$ and the wavelet regularity $S$, \begin{equation} \label{tal2} \sup_{f: \|f\|_\infty \le U}{\Pr}_f \left \{ \left\|f_n(j) - Ef_n(j) \right\|_2 > M \sqrt{\|f\|_\infty \frac{2^j}{n}} \right \} \le c' e^{-cM^2 2^j}. \end{equation} \subsection{A General Purpose Test for Composite Nonparametric Hypotheses} \label{testsec} In this subsection we construct a general test for composite nonparametric null hypotheses that lie in a fixed Sobolev ball, under assumptions only on the entropy of the null-model. While of independent interest, the result will be a key step in the proofs of Theorems \ref{fix} and \ref{cont}. Let $X,X_1, \dots, X_n$ be i.i.d.~with common probability density $f$ on $[0,1]$, let $\Sigma$ be any subset of a fixed Sobolev ball $\Sigma (t,B)$ for some $t>1/2$ and consider testing \begin{equation} \label{genhyp} H_0: f \in \Sigma~\text{ against } H_1: f \in \Sigma(t, B)\setminus \Sigma, \|f-\Sigma\|_2 \ge \rho_n , \end{equation} where $\rho_n \ge 0$ is a sequence of nonnegative real numbers. For $\{\psi_{lk}\}$ a $S$-regular wavelet basis, $S>t$, $J_n \ge J_0$ a sequence of positive integers such that $2^{J_n} \simeq n^{1/(2t+1/2)}$ and for $g \in \Sigma$, define the $U$-statistic \begin{equation} T_n(g) = \frac{2}{n (n-1)} \sum_{i<j} \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} (\psi_{lk}(X_i)-\langle \psi_{lk}, g\rangle)(\psi_{lk}(X_j)-\langle \psi_{lk}, g \rangle) \end{equation} and, for $\tau_n$ some thresholds to be chosen below, the test statistic \begin{equation} \label{stat} \Psi_n = 1\left\{\inf_{g \in \Sigma} |T_n(g)| > \tau_n \right\}. \end{equation} Measurability of the infimum in (\ref{stat}) can be established by standard compactness/continuity arguments. 
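For concreteness, the statistic $T_n(g)$ from (\ref{stat}) can be mimicked numerically. The sketch below uses the Haar basis (so $J_0=0$ and $\mathcal Z_l = \{0,\dots,2^l-1\}$; Haar is only $1$-regular, so the requirement $S>t$ is ignored here purely for illustration) and represents a candidate $g$ through its wavelet coefficients; all names are ours:

```python
import numpy as np

def haar_psi(l, k, x):
    """Haar wavelet psi_{lk}(x) = 2^{l/2} (1_{[0,1/2)} - 1_{[1/2,1)})(2^l x - k)."""
    y = 2.0 ** l * np.asarray(x, dtype=float) - k
    return 2.0 ** (l / 2.0) * (((0.0 <= y) & (y < 0.5)) * 1.0
                               - ((0.5 <= y) & (y < 1.0)) * 1.0)

def T_n(X, g_coef, J=3, J0=0):
    """T_n(g) = (2/(n(n-1))) sum_{i<j} sum_{l=J0}^{J-1} sum_k
    (psi_lk(X_i) - <psi_lk,g>)(psi_lk(X_j) - <psi_lk,g>).
    g_coef[(l, k)] holds the coefficients <psi_lk, g> of the null candidate g.
    The pair sum is computed via sum_{i<j} c_i c_j = ((sum_i c_i)^2 - sum_i c_i^2)/2."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    total = 0.0
    for l in range(J0, J):
        for k in range(2 ** l):
            c = haar_psi(l, k, X) - g_coef.get((l, k), 0.0)
            total += (c.sum() ** 2 - (c ** 2).sum()) / 2.0
    return 2.0 * total / (n * (n - 1))
```

With $g$ the uniform density all coefficients $\langle \psi_{lk}, g\rangle$ vanish and `T_n(X, {})` reduces to the fully degenerate $U$-statistic appearing in the type-one error bound of the proof below.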
We shall prove a bound on the sum of the type-one and type-two errors of this test under some entropy conditions on $\Sigma$, more precisely, on the class of functions $$\mathcal G(\Sigma) = \bigcup_{J > J_0} \left\{\sum_{l=J_0}^{J-1} \sum_{k \in \mathcal Z_l} \psi_{lk}(\cdot) \langle \psi_{lk}, g \rangle: g \in \Sigma \right\}.$$ Recall the usual covering numbers $N(\varepsilon, \mathcal G, L^2(P))$ and bracketing metric entropy numbers $N_{[]}(\varepsilon, \mathcal G, L^2(P))$ for classes $\mathcal G$ of functions and probability measures $P$ on $[0,1]$ (e.g., \cite{G00, VW96}). \begin{definition} \label{entropy} Say that $\Sigma$ is $s$-regular if one of the following conditions is satisfied for some fixed finite constants $A$ and every $0<\varepsilon <A$: \newline a) For any probability measure $Q$ on $[0,1]$ (and $A$ independent of $Q$) we have $$\log N(\varepsilon, \mathcal G(\Sigma), L^2(Q)) \le (A/\varepsilon)^{1/s}.$$ \newline b) For $P$ such that $dP=fd\lambda$ with Lebesgue density $f:[0,1] \to [0, \infty)$ we have $$\log N_{[]}(\varepsilon, \mathcal G(\Sigma), L^2(P)) \le (A/\varepsilon)^{1/s}.$$ \end{definition} Note that a ball $\Sigma(s,B)$ satisfies this condition for the given $s, 1/2<s<S,$ since any element of $\mathcal G(\Sigma(s,B))$ has $\|\cdot\|_{s,2}$-norm no more than $B$, and since $$\log N(\varepsilon, \Sigma(s,B), \|\cdot\|_\infty) \le (A/\varepsilon)^{1/s},$$ see, e.g., p.506 in \cite{LGM96}. \begin{proposition} \label{test} Let $$\tau_n = L d_n \max(n^{-2s/(2s+1)}, n^{-2t/(2t+1/2)}), ~~~\rho^2_n = \frac{L_0}{L} \tau_n$$ for real numbers $1 \le d_n \le d(\log n)^\gamma$ and positive constants $L, L_0, \gamma,d$. Let the hypotheses $H_0, H_1$ be as in (\ref{genhyp}), the test $\Psi_n$ as in (\ref{stat}), and assume $\Sigma$ is $s$-regular for some $s>1/2$. 
Then for $L=L(B, t, S)$, $L_0=L_0(L, B,t, S)$ large enough and every $n \in \mathbb N$ there exist constants $c_i, i=1,2,3$, depending only on $L,L_0, t, B$ such that $$\sup_{f \in H_0} E_f\Psi_n + \sup_{f \in H_1} E_f(1-\Psi_n) \le c_1 e^{-d_n^2} + c_2 e^{-c_3 n \rho_n^2}.$$ \end{proposition} The main idea of the proof is as follows: for the type-one errors our test statistic is dominated by a degenerate $U$-statistic which we can bound with inequality (\ref{glz}), carefully controlling the four regimes present. For the alternatives the test statistic can be decomposed into a degenerate $U$-statistic, which can be dealt with as before, and a linear part, which is the critical one. The latter can be compared to a ratio-type empirical process which we control by a slicing argument applied to $\Sigma$, combined with Talagrand's inequality. \begin{proof} 1) We first control the type-one errors. Since $f \in H_0 = \Sigma$ we see \begin{equation} \label{h0} E_f\Psi_n = {\Pr}_f \left\{\inf_{g \in \Sigma} |T_n(g)| > \tau_n \right\} \le {\Pr}_f \left\{|T_n(f)| > \tau_n \right\}. \end{equation} $T_n(f)$ is a $U$-statistic with kernel $$R_f(x,y)= \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l}(\psi_{lk}(x)-\langle \psi_{lk}, f\rangle)(\psi_{lk}(y)-\langle \psi_{lk}, f \rangle),$$ which satisfies $E R_f(x,X_1)=0$ for every $x$, since $E_f(\psi_{lk}(X)-\langle \psi_{lk}, f\rangle)=0$ for every $k,l$. Consequently $T_n(f)$ is a degenerate $U$-statistic of order two, and we can apply inequality (\ref{glz}) to it, which we shall do with $u=d^2_n$. We thus need to bound the constants $\Lambda_1, \dots, \Lambda_4$ occurring in inequality (\ref{glz}) in such a way that, for $L$ large enough, \begin{equation} \label{bd0} \frac{2C}{n(n-1)}(\Lambda_1 d_n+\Lambda_2 d_n^2+\Lambda_3 d_n^3+\Lambda_4 d_n^4) \le L d_n n^{-2t/(2t+1/2)} \le \tau_n, \end{equation} which is achieved by the following estimates, noting that $n^{-2t/(2t+1/2)} \simeq 2^{J_n/2}/n$.
First, by standard $U$-statistic arguments, we can bound $ER^2_f(X_1,X_2)$ by the second moment of the uncentred kernel, and thus, using orthonormality of $\psi_{lk}$, \begin{eqnarray*} ER_f^2(X_1,X_2) &\le& \int \int \left(\sum_{k,l} \psi_{lk}(x) \psi_{lk}(y)\right)^2 f(x)f(y)dxdy \\ &\le& \|f\|_\infty ^2 \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} \int_0^1 \psi_{lk}^2(x)dx \int_0^1 \psi_{lk}^2(y)dy \\ &\le & C(S)2^{J_n} \|f\|^2_\infty \end{eqnarray*} for some constant $C(S)$ that depends only on the wavelet basis. We obtain $\Lambda_1^2 \leq C(S) n(n-1) 2^{J_n} \|f\|_\infty^2/2$ and it follows, using (\ref{supbd}) that for $L$ large enough and every $n$, $$\frac{2C\Lambda_1 d_n}{n(n-1)} \le C(S, B, t) \frac{2^{J_n/2}d_n}{n} \le \tau_n/4.$$ For the second term note that, using the Cauchy-Schwarz inequality and that $K_j$ is a projection operator \begin{eqnarray*} \left|\int \int \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l}\psi_{lk}(x) \psi_{lk}(y) \zeta(x) \xi(y) f(x)f(y)dxdy \right| &=& \left|\int K_{J_n} (\zeta f)(y) \xi(y) f(y)dy \right| \\ &\le& \|K_{J_n}(\zeta f)\|_2 \|\xi f\|_2 \le \|f\|_\infty^2, \end{eqnarray*} and similarly \begin{equation*} |E[E_{X_1} [K_{J_n}(X_1,X_2)] \zeta(X_1) \xi(X_2)]| \leq \|f\|^2_\infty, \ |EK_{J_n}(X_1,X_2)| \leq \|f\|^2_\infty. \end{equation*} Thus $$E[R_f(X_1,X_2)\zeta(X_1)\xi(X_2)] \leq 4\|f\|^2_\infty $$ so that, using (\ref{supbd}), $$\frac{2C\Lambda_2 d_n^2}{n(n-1)} \le \frac{C'(B,t) d_n^2}{n} \le \tau_n/4$$ again for $L$ large enough and every $n$. 
For the third term, using the decomposition $R_f(x_1,x)=(r(x_1,x)-E_{X_1}r(X_1,x))+(E_{X_1,X_2}r(X_1,X_2)-E_{X_2}r(x_1,X_2))$ for $r(x,y)=\sum_{k,l} \psi_{lk}(x)\psi_{lk}(y)$, the inequality $(a+b)^2 \le 2a^2+2b^2$ and again orthonormality, we have that for every $x\in\mathbb R$, $$n|E_{X_1}R_f^2(X_1,x)| \leq 2n \left[\|f\|_\infty \sum_{l=J_0}^{J_n-1}\sum_{k \in \mathcal Z_l} \psi^2_{lk}(x) + \|f\|_\infty \|\Pi_{V_{J_n}}(f)\|_2^2 \right]$$ so that, using $\|\psi_{lk}\|_\infty \le d2^{l/2}$, again for $L$ large enough and by (\ref{supbd}), $$\frac{2C\Lambda_3d_n^{3}}{n (n-1)}\le C''(B,t) \frac{2^{J_n/2}d_n^3}{n}\frac{1}{\sqrt n} \le \tau_n/4.$$ Finally, we have $\Lambda_4=\|R_f\|_\infty \leq c 2^{J_n}$ and hence $$\frac{2C\Lambda_4 d_n^4}{n(n-1)} \le C' \frac{2^{J_n} d_n^4}{n^2} \le \tau_n/4,$$ so that we conclude for $L$ large enough and every $n \in \mathbb N$, from inequality (\ref{glz}), \begin{equation} {\Pr}_f \left\{|T_n(f)| > \tau_n \right\} \le 6\exp\left\{-d^2_n\right\} \label{second} \end{equation} which completes the bound for the type-one errors in view of (\ref{h0}). 2) We now turn to the type-two errors.
In this case, for $f \in H_1$, \begin{equation} \label{h1} E_f(1-\Psi_n) = {\Pr}_f \left\{ \inf_{g \in \Sigma} |T_n(g)| \le \tau_n \right\}, \end{equation} and the typical summand of $T_n(g)$ has Hoeffding decomposition \begin{eqnarray*} && (\psi_{lk}(X_i)-\langle \psi_{lk},g \rangle)(\psi_{lk}(X_j)-\langle \psi_{lk}, g \rangle) \\ && = (\psi_{lk}(X_i)-\langle \psi_{lk}, f \rangle + \langle \psi_{lk},f-g\rangle)(\psi_{lk}(X_j)-\langle \psi_{lk}, f \rangle + \langle \psi_{lk}, f-g \rangle) \\ && = (\psi_{lk}(X_i)- \langle \psi_{lk}, f \rangle)(\psi_{lk}(X_j)- \langle \psi_{lk}, f \rangle) \\ && ~~~~+ (\psi_{lk}(X_i)-\langle \psi_{lk}, f\rangle) \langle \psi_{lk}, f-g \rangle + (\psi_{lk}(X_j)-\langle \psi_{lk}, f \rangle) \langle \psi_{lk}, f-g \rangle \\ && ~~~~+ \langle \psi_{lk}, f-g \rangle^2, \end{eqnarray*} so that by the triangle inequality, writing \begin{equation} \label{linear} L_n(g)= \frac{2}{n} \sum_{i=1}^n \sum_{l=J_0}^{J_n-1}\sum_{k \in \mathcal Z_l}(\psi_{lk}(X_i)-\langle \psi_{lk}, f\rangle) \langle \psi_{lk}, f-g \rangle \end{equation} for the linear terms, we conclude \begin{eqnarray} \label{2lb} \left| T_n(g) \right | &\ge& \sum_{l=J_0}^{J_n-1}\sum_{k \in \mathcal Z_l}\langle \psi_{lk}, f-g \rangle^2 - \left|T_n(f)\right| -|L_n(g)| \notag \\ & = & \|\Pi_{V_{J_n}}(f-g)\|_2^2 - |T_n(f)| - |L_n(g)| \end{eqnarray} for every $g \in \Sigma$. We can find random $g^*_n \in \Sigma$ such that $\inf_{g \in \Sigma} |T_n(g)| = |T_n(g^*_n)|$. (If the infimum is not attained the proof below requires obvious modifications; for the case $\Sigma=\Sigma(s,B), s>t$, relevant below, the infimum can be shown to be attained at a measurable minimiser by standard continuity and compactness arguments.)
We bound the probability in (\ref{h1}), using (\ref{2lb}), by $$ {\Pr}_f \left\{|L_n(g_n^*)| > \frac{\|\Pi_{V_{J_n}}(f-g_n^*)\|_2^2 - \tau_n}{2}\right\} +{\Pr}_f \left\{|T_n(f)| > \frac{\|\Pi_{V_{J_n}}(f-g_n^*)\|_2^2-\tau_n}{2}\right\}.$$ Now by the standard approximation bound (cf.~(\ref{sobolev})) and since $g^*_n \in \Sigma \subset \Sigma(t, B)$, \begin{equation} \label{2sep} \|\Pi_{V_{J_n}}(f-g_n^*)\|_2^2 \ge \inf_{g \in \Sigma}\|f-g\|_2^2 - c(B)2^{-2J_nt} \ge 4 \tau_n \end{equation} for $L_0$ large enough depending only on $B$ and the choice of $L$ from above. We can thus bound the sum of the last two probabilities by $$ {\Pr}_f \{|L_n(g_n^*)| > \|\Pi_{V_{J_n}}(f-g_n^*)\|_2^2/4\} +{\Pr}_f \{|T_n(f)| > \tau_n\}.$$ For the second, degenerate part the proof of Step 1 applies, as only boundedness of $f$ was used there. In the linear part somewhat more care is necessary. We have \begin{eqnarray} \label{ratio} {\Pr}_f \{|L_n(g^*_n)| > \|\Pi_{V_{J_n}}(f-g^*_n)\|_2^2/4\} \le {\Pr}_f \left\{\sup_{g \in \Sigma}\frac{|L_n(g)|}{\|\Pi_{V_{J_n}}(f-g)\|_2^2} > \frac{1}{4}\right\}. \end{eqnarray} Note that the variance of the linear process from (\ref{linear}) can be bounded, for fixed $g \in \Sigma$, using independence and orthonormality, by \begin{eqnarray} \label{weakvar} Var_f(L_n(g)) &\le& \frac{4}{n} \int \left(\sum_{l=J_0}^{J_n-1}\sum_{k \in \mathcal Z_l} \psi_{lk}(x) \langle \psi_{lk}, f-g\rangle\right)^2 f(x)dx \notag \\ &\le& \frac{4 \|f\|_\infty}{n} \sum_{l=J_0}^{J_n-1}\sum_{k \in \mathcal Z_l} \int \psi^2_{lk}(x)dx \cdot \langle \psi_{lk}, f-g \rangle^2 \notag \\ &\le& \frac{4\|f\|_\infty \|\Pi_{V_{J_n}}(f-g)\|_2^2}{n} \end{eqnarray} so that the supremum in (\ref{ratio}) is that of a self-normalised, ratio-type empirical process. Such processes can be controlled by slicing the supremum into shells of almost constant variance, cf.~Section 5 in \cite{G00} or \cite{GK06}.
Define, for $g \in \Sigma$, $$\sigma^2(g):=\|\Pi_{V_{J_n}}(f-g)\|_2^2 \ge \|f-g\|_2^2 - c(B)2^{-2J_n t} \ge c \rho_n^2,$$ the inequality holding for $L_0$ large enough and some $c>0$, as in (\ref{2sep}). Define moreover, for $m \in \mathbb Z$, the class of functions $$\mathcal G_{m, J_n} = \left\{2\sum_{l=J_0}^{J_n-1}\sum_{k \in \mathcal Z_l} \psi_{lk}(\cdot) \langle \psi_{lk}, f-g \rangle: g \in \Sigma, \sigma^2(g)\le 2^{m+1} \right\},$$ which is uniformly bounded by a constant multiple of $\|f\|_{t,2}+\sup_{g \in \Sigma(t,B)}\|g\|_{t,2} \le 2B$ in view of (\ref{sobolev}) and since $t>1/2$. Then clearly, in the notation of Subsection \ref{ci}, $$\sup_{g \in \Sigma: \sigma^2(g) \le 2^{m+1}}|L_n(g)| = \|P_n-P\|_{\mathcal G_{m, J_n}} $$ and we bound the last probability in (\ref{ratio}) by \begin{eqnarray} \label{sup} && {\Pr}_f \left\{\max_{m \in \mathbb Z: c'\rho_n^2 \le 2^m \le C}\sup_{g \in \Sigma: 2^m \le \sigma^2(g) \le 2^{m+1}}\frac{|L_n(g)|}{\sigma^2(g)} > \frac{1}{4}\right\} \notag \\ && \le \sum_{m \in \mathbb Z: c'\rho_n^2 \le 2^m \le C} {\Pr}_f \left\{\sup_{g \in \Sigma: \sigma^2(g) \le 2^{m+1}}|L_n(g)| > 2^{m-2}\right\} \\ && \le \sum_{m \in \mathbb Z: c'\rho_n^2 \le 2^m \le C} {\Pr}_f \left\{ \|P_n-P\|_{\mathcal G_{m, J_n}}-E \|P_n-P\|_{\mathcal G_{m, J_n}} > 2^{m-2} -E \|P_n-P\|_{\mathcal G_{m, J_n}}\right\} \notag \end{eqnarray} where we may take $C<\infty$ as $\Sigma \subset \Sigma(t, B)$ is bounded in $L^2$, and where $c'$ is a positive constant such that $c' \rho_n^2 \le 2^m \le c \rho_n^2$ for some $m \in \mathbb Z$. We bound the expectation of the empirical process. Both the uniform and the bracketing entropy condition for $\mathcal G(\Sigma)$ carry over to $\cup_{J \ge 0, m \in \mathbb Z}\mathcal G_{m,J}$ since translation by $f$ preserves the entropy.
Using the standard entropy-bound plus chaining moment inequality (3.5) in Theorem 3.1 in \cite{GK06} in case a) of Definition \ref{entropy}, and the second bracketing entropy moment inequality in Theorem 2.14.2 in \cite{VW96} in case b), together with the variance bound (\ref{weakvar}) and with (\ref{supbd}), we deduce \begin{equation} \label{mom} E \|P_n-P\|_{\mathcal G_{m,J_n}} \le C \left( \sqrt{\frac{2^m}{n}} (2^m)^{-1/4s} + \frac{(2^m)^{-1/2s}}{n}\right). \end{equation} We see that $$2^{m-2} -E \|P_n-P\|_{\mathcal G_{m,J_n}} \ge c_0 2^m$$ for some fixed $c_0$ precisely when $2^m$ is of larger magnitude than $(2^m)^{\frac{1}{2}-\frac{1}{4s}} n^{-1/2} + (2^m)^{-1/2s}n^{-1}$, equivalent to $2^m \ge c'' n^{-2s/(2s+1)}$ for some $c''>0$, which is satisfied since $2^m \ge c' \rho_n^2 \ge c'' n^{-2s/(2s+1)}$ if $L_0$ is large enough, by hypothesis on $\rho_n$. We can thus rewrite the last probability in (\ref{sup}) as $$ \sum_{m \in \mathbb Z : c'\rho_n^2 \le 2^m \le C} {\Pr}_f \left\{ n\|P_n-P\|_{\mathcal G_{m, J_n}}- nE\|P_n-P\|_{\mathcal G_{m, J_n}} > c_0n2^m \right\}.$$ To this expression we can apply Talagrand's inequality (\ref{bous}), noting that the supremum over $\mathcal G_{m, J_n}$ can be realised, by continuity, as one over a countable subset of $\Sigma$, and since $\Sigma$ is uniformly bounded by $\sup_{f \in \Sigma(t, B)}\|f\|_\infty \le U \equiv U(t,B)$. Renormalising by $U$ and using (\ref{bous}), (\ref{weakvar}), (\ref{mom}) we can bound the expression in the last display, up to multiplicative constants, by \begin{eqnarray*} \sum_{m \in \mathbb Z: c'\rho_n^2 \le 2^m \le C} \exp \left\{-c_1 \frac{n^2(2^m)^2}{n2^m + nE \|P_n-P\|_{\mathcal G_{m, J_n}}+ n2^m } \right\} &\le& \sum_{m \in \mathbb Z: c'\rho_n^2 \le 2^m \le C} e^{-c_2 n2^m} \\ &\le & c_3 e^{-c_4 n \rho_n^2} \end{eqnarray*} since $2^m \ge c'\rho_n^2 \gg n^{-1}$, which completes the proof.
\end{proof} \subsection{Proof of Theorem \ref{adapt}} \begin{proof} We construct a standard Lepski type estimator: choose integers $j_{\min}, j_{\max}$ such that $J_0 \le j_{\min} < j_{\max}$, $$2^{j_{\min}} \simeq n^{1/(2R+1)} ~~\textrm{and} ~~ 2^{j_{\max}} \simeq n^{1/(2r+1)}$$ and define the grid $$ \mathcal J := \mathcal J_n = [j_{\min}, j_{\max}] \cap \mathbb N.$$ Let $f_n(j) \equiv f_n(j,\cdot)=\int_0^1 K_j(\cdot,y)dP_n(y)$ be a linear wavelet estimator based on wavelets of regularity $S>R$. To simplify the exposition we prove the result for $\|f\|_\infty$ known, otherwise the result follows from the same proof, with $\|f\|_\infty$ replaced by $\|f_n(j_{\max})\|_\infty$, a consistent estimator for $\|f\|_\infty$ that satisfies sufficiently tight uniform exponential error bounds (using inequality (26) in \cite{GN11} and proceeding as in Step (II) on p.1157 in \cite{GN10b}). Set \begin{equation} \bar j_n = \min \bigg \{ j \in \mathcal J: \|f_n(j) - f_n(l)\|_2^2 \le C(S) (\|f\|_\infty \vee 1) \frac{2^l}{n} ~~ \forall l>j, l\in {\mathcal J} \bigg \} \label{htf2} \end{equation} where $C(S)$ is a large enough constant, to be chosen below, in dependence of the wavelet basis. The adaptive estimator is $\hat f_n = f_n(\bar j_n)$. We shall need the standard estimates \begin{equation} E\|f_n(j) - Ef_n(j)\|_2^2 \leq D \frac{2^j }{n} := D \sigma^2 (j,n) \label{var} \end{equation} and, for $f \in W^s, s \in [r,R]$, \begin{equation} \|E f_n(j) - f\|_2 \leq 2^{-js} D' \|f\|_{s,2} := B(j, f) \label{bias} \end{equation} for constants $D, D'$ that depend only on the wavelet basis and on $r,R$. Define $j^*:=j^*(f)$ by \begin{equation*} j^*=\min\left\{j\in {\cal J}: B(j,f) \le \sqrt D \sigma(j,n) \right\} \end{equation*} so that, for every $f \in \Sigma(s,B)$ and $D''=D''(D,D')$ \begin{equation} \label{bal} D^{-1} B^2(j^*,f) \le \sigma^2(j^*,n) \le D'' \|f\|_{s,2}^{2/(2s+1)} n^{-2s/(2s+1)} \le D'' B^{2/(2s+1)} n^{-2s/(2s+1)}. 
\end{equation} We will consider the cases $\{\bar j_n \leq j^* \}$ and $\{\bar j_n > j^* \}$ separately. First, by the definition of $\bar j_n, j^*$ and (\ref{var}), (\ref{bias}), (\ref{bal}), \begin{eqnarray} \label{lowvar} E \left\|f_n(\bar j_n) - f \right\|^2_2 I_{\{\bar j_n \le j^* \}} &\le& 2E \left( \|f_n (\bar j_n) - f_n (j^*) \|^2_2 + \|f_n (j^*) - f\|^2_2 \right) I_{\{\bar j_n \le j^* \}} \notag \\ & \le & 2C(S)(\|f\|_\infty \vee 1)\frac{2^{j^*}}{n} + C' \sigma^2(j^*,n) \le C'' B^{2/(2s+1)} n^{-2s/(2s+1)} \notag \end{eqnarray} for $C''=C''(D,D', S,U)$, which is the desired bound. On the event $\{\bar j_n > j^* \}$ we have, using (\ref{var}) and the definition of $j^*$, \begin{eqnarray*} E \left\|f_n(\bar j_n) - f\right\|_2 I_{\{\bar j_n > j^* \}} & \leq & \sum_{j \in \mathcal{J}:j > j^*} \left(E \left\|f_n(j) -f\right\|_2^2 \right)^{1/2} ~ \left(EI_{\{\bar j_n = j\}}\right)^{1/2} \\ & \leq & \sum_{j \in \mathcal{J}: j > j^*} C''' \sigma(j,n) \cdot \sqrt{ {\Pr}_f\{\bar j_n = j\}} \\ &\le & C'''' \sum_{j \in \mathcal J: j>j^*} \sqrt{ {\Pr}_f\{\bar j_n = j\}} \end{eqnarray*} since $\sup_{j \in \mathcal J}\sigma(j,n) = \sigma(j_{\max},n)$ is bounded in $n$. Now pick any $j \in \mathcal{J}$ so that $j > j^*$ and denote by $j^-$ the previous element in the grid (i.e. $j^-= j-1$). One has, by definition of $\bar j_n$, \begin{equation} \label{pr} {\Pr}_f \{\bar j_n= j\} \leq \sum_{l \in \mathcal{J}: l \ge j} {\Pr}_f \left \{ \left\|f_n(j^-) - f_n(l) \right\|_2 > \sqrt {C(S) (\|f\|_\infty \vee 1) \frac{2^l}{n}}\right\}, \end{equation} and we observe that, by the triangle inequality, \begin{equation*} \left\|f_n(j^-) - f_n(l) \right \|_2 \leq \left\|f_n(j^-) - f_n(l) - Ef_n(j^-) + Ef_n(l) \right\|_2 + B(j^-, f) + B(l, f), \end{equation*} where $$ B(j^-, f) + B(l, f) \leq 2B(j^*, f) \leq c \sigma(j^*,n) \leq c' \sigma(l,n) $$ by definition of $j^*$ and since $l>j^- \geq j^*$.
Consequently, the probability in (\ref{pr}) is bounded by \begin{equation} \label{c1} {\Pr}_f \left \{ \left\|f_n(j^-) - f_n(l) -Ef_n(j^-) + Ef_n(l) \right\|_2 > (\sqrt{C(S)(\|f\|_\infty \vee 1)}-c') \sigma(l,n) \right \}, \end{equation} and by inequality (\ref{tal2}) above this probability is bounded by a constant multiple of $e^{-d2^l}$ if we choose $C(S)$ large enough. This gives the overall bound $$\sum_{l \in \mathcal J: l \geq j} c''e^{-d2^l} \le d'e^{-d''2^{j_{\min}}},$$ which is smaller than a constant multiple of $B^{1/(2s+1)} n^{-s/(2s+1)}$, uniformly in $s \in [r,R], n \in \mathbb N$ and for $B \ge 1$, by definition of $j_{\min}$. This completes the proof. \end{proof} \subsection{Proof of Theorem \ref{rv}} \begin{proof} A) Suppose for simplicity that the sample size is $2n$, and split the sample into two halves with index sets $\mathcal S^1, \mathcal S^2$, of equal size $n$; write $E_1, E_2$ for the corresponding expectations, and $E=E_1E_2$. Let $\hat f_n= f_n(\bar j_n)$ be the adaptive estimator from the proof of Theorem \ref{adapt} based on the sample $\mathcal S^1$. One shows by a standard bias-variance decomposition, using $\bar j_n \in \mathcal J$ and $\|K_j(f)\|_{r,2} \le \|f\|_{r,2}$, that for every $\varepsilon>0$ there exists a finite positive constant $B'=B'(\varepsilon, B_0)$ satisfying $$\inf_{f \in \Sigma (r,B_0)}{\Pr}_f\{\|\hat f_n\|_{r,2} \le B'\} \ge 1 - \varepsilon.$$ It therefore suffices to prove the theorem on the event $\{\|\hat f_n\|_{r,2} \le B'\}$.
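A toy numerical version of the selection rule (\ref{htf2}) behind $\hat f_n$ may help fix ideas. It uses dyadic histogram (Haar) estimators in place of the $S$-regular wavelet basis of the proof, and the threshold constant and grid choices are purely illustrative:

```python
import numpy as np

def hist_est(X, j, j_fine):
    """Linear (histogram/Haar) density estimator f_n(j), tabulated on the 2^{j_fine} grid."""
    counts, _ = np.histogram(X, bins=2 ** j, range=(0.0, 1.0))
    vals = counts * (2.0 ** j) / len(X)          # density value on each dyadic cell
    return np.repeat(vals, 2 ** (j_fine - j))    # piecewise constant on the finer grid

def lepski_level(X, j_min, j_max, C=1.0, sup_f=1.0):
    """bar j_n = min{ j : ||f_n(j) - f_n(l)||_2^2 <= C (sup_f v 1) 2^l / n for all l > j }."""
    n = len(X)
    ests = {j: hist_est(X, j, j_max) for j in range(j_min, j_max + 1)}
    cell = 2.0 ** (-j_max)                       # cell width used for the exact L2 norm
    for j in range(j_min, j_max + 1):
        if all(np.sum((ests[j] - ests[l]) ** 2) * cell
               <= C * max(sup_f, 1.0) * 2 ** l / n
               for l in range(j + 1, j_max + 1)):
            return j
    return j_max
```

On perfectly uniform data all dyadic histograms coincide and the rule stops at $j_{\min}$, while data concentrated on $[0,1/2)$ force the selected level one step higher.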
For a wavelet basis of regularity $S>R$ and for $J_n \ge J_0$ a sequence of integers such that $2^{J_n} \simeq n^{1/(2r+1/2)}$, define the $U$-statistic \begin{equation}\label{ustat0} U_n(\hat f_n)=\frac{2}{n(n-1)} \sum_{i<j, i,j \in \mathcal S^2} \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} (\psi_{lk}(X_i)-\langle \psi_{lk}, \hat f_n\rangle)(\psi_{lk}(X_j)-\langle \psi_{lk}, \hat f_n \rangle) \end{equation} which has expectation $$E_2 U_n(\hat f_n) = \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l}\langle \psi_{lk}, f- \hat f_n \rangle^2 = \|\Pi_{V_{J_n}}(f-\hat f_n)\|_2^2.$$ Using Chebyshev's inequality and that, by definition of the norm (\ref{sobolev}), $$\sup_{h \in \Sigma(r,b)}\|\Pi_{V_{J_n}}(h)-h\|_2^2 \le c(b) 2^{-2J_nr}$$ for every $0<b<\infty$ and some finite constant $c(b)$, we deduce \begin{eqnarray*} \label{cov} &&\inf_{f \in \Sigma(r,B_0)} {\Pr}_{f,2} \left\{U_n(\hat f_n) - \|f-\hat f_n\|_2^2 \ge -(c(B_0)+c(B'))2^{-2J_nr} -z(\alpha) \tau_n(f) \right\} \\ && \ge \inf_{f \in \Sigma(r,B_0)} {\Pr}_{f,2} \left\{U_n(\hat f_n) - \|\Pi_{V_{J_n}}(f-\hat f_n)\|_2^2 \ge -z(\alpha) \tau_n(f) \right\} \\ && \ge 1 - \sup_{f \in \Sigma(r,B_0)}\frac{Var_2(U_n(\hat f_n)-E_2U_n(\hat f_n))}{(z(\alpha) \tau_n(f))^2}. \end{eqnarray*} We now show that the last quantity is greater than or equal to $1-z(\alpha)^{-2} \ge 1- \alpha$ for quantile constants $z(\alpha)$ and with $$\tau^2_n(f)= \frac{C(S)2^{J_n}\|f\|^2_\infty}{n(n-1)}+\frac{4\|f\|_\infty}{n}\|\Pi_{V_{J_n}}(f-\hat f_n)\|_2^2,$$ which in turn gives the honest confidence set under $\Pr$ \begin{equation} \label{conf0} C_n(\|f\|_\infty, B_0) = \left \{f: \|f-\hat f_n\|_2 \le \sqrt{z(\alpha) \tau_n(f) + U_n(\hat f_n) + (c(B_0)+c(B'))2^{-2J_nr}} \right\}.
\end{equation} We shall comment on the role of the constants $\|f\|_\infty, c(B_0), c(B')$ at the end of the proof, and establish the last claim first: note that the Hoeffding decomposition for the centered $U$-statistic with kernel $$R(x,y)= \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} (\psi_{lk}(x)-\langle \psi_{lk}, \hat f_n\rangle)(\psi_{lk}(y)-\langle \psi_{lk}, \hat f_n \rangle)$$ is (cf.~the proof of Theorem 4.1 in \cite{RV06}) $$U_n(\hat f_n)-E_2U_n(\hat f_n) = \frac{2}{n} \sum_{i=1}^n (\pi_1R)(X_i) + \frac{2}{n(n-1)}\sum_{i<j}(\pi_2R)(X_i, X_j) \equiv L_n + D_n$$ where \begin{equation*} (\pi_1R)(x) =\sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} (\psi_{lk}(x)-\langle \psi_{lk}, f \rangle) \langle \psi_{lk}, f-\hat f_n \rangle \end{equation*} and $$(\pi_2R)(x,y)= \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} (\psi_{lk}(x)-\langle \psi_{lk}, f \rangle)(\psi_{lk}(y)-\langle \psi_{lk}, f \rangle).$$ The variance of $U_n(\hat f_n)-E_2U_n(\hat f_n)$ is the sum of the variances of the two terms in the Hoeffding decomposition. For the linear term we bound the variance $Var_2(L_n)$ by the second moment, using orthonormality of the $\psi_{lk}$s, \begin{equation*} \frac{4}{n} \int \left(\sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} \psi_{lk}(x) \langle \psi_{lk}, \hat f_n -f \rangle \right)^2 f(x) dx \le \frac{4 \|f\|_\infty}{n} \sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} \langle \psi_{lk}, \hat f_n-f \rangle^2, \end{equation*} which equals the second term in the definition of $\tau^2_n(f)$. For the degenerate term we can bound $Var_2(D_n)$ analogously by the second moment of the uncentered kernel (cf.~after (\ref{bd0})), i.e., by \begin{equation*} \frac{2}{n(n-1)} \int \left(\sum_{l=J_0}^{J_n-1} \sum_{k \in \mathcal Z_l} \psi_{lk}(x) \psi_{lk}(y) \right)^2 f(x) dx f(y)dy \le \frac{C(S) 2^{J_n} \|f\|^2_\infty}{n(n-1)}, \end{equation*} using orthonormality and the cardinality properties of $\mathcal Z_l$.
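As a purely illustrative aside, the $U$-statistic (\ref{ustat0}) is straightforward to compute. The toy Python sketch below uses the Haar basis on $[0,1)$ in place of the boundary-corrected wavelet basis of regularity $S>R$ assumed in the proof, takes the coefficients $\langle \psi_{lk}, \hat f_n \rangle$ of the preliminary estimator as given, and exploits the identity $\sum_{i \neq j} v_iv_j = (\sum_i v_i)^2 - \sum_i v_i^2$ to avoid the double loop over pairs.

```python
import numpy as np

def haar_psi(l, k, x):
    # Haar wavelet psi_{lk}(x) = 2^{l/2} psi(2^l x - k) on [0, 1)
    y = (2.0 ** l) * np.asarray(x, dtype=float) - k
    pos = ((y >= 0.0) & (y < 0.5)).astype(float)
    neg = ((y >= 0.5) & (y < 1.0)).astype(float)
    return (2.0 ** (l / 2.0)) * (pos - neg)

def u_stat(X, fhat_coef, J0, Jn):
    # U_n = 2/(n(n-1)) sum_{i<j} sum_{l,k} (psi_lk(X_i)-c_lk)(psi_lk(X_j)-c_lk),
    # where c_lk = <psi_lk, fhat> is supplied via the dict fhat_coef.
    X = np.asarray(X, dtype=float)
    n = len(X)
    total = 0.0
    for l in range(J0, Jn):
        for k in range(2 ** l):  # |Z_l| ~ 2^l translates per level for Haar
            v = haar_psi(l, k, X) - fhat_coef.get((l, k), 0.0)
            total += v.sum() ** 2 - (v ** 2).sum()  # sum over pairs i != j
    return total / (n * (n - 1))
```

On four points $\{0.1, 0.3, 0.6, 0.9\}$ with a single resolution level and all coefficients of $\hat f_n$ set to zero, the off-diagonal products sum to $-4$, giving $U_n = -1/3$.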
The confidence set so constructed has an adaptive expected maximal diameter: let $f \in \Sigma(s,B)$ for some $s \in [r,R]$ and some $1 \le B \le B_0$. The nonrandom terms are of order $$\sqrt{c(B_0)+c(B')}2^{-J_nr} + \|f\|_\infty^{1/2}2^{J_n/4}n^{-1/2} \le C(S, B_0, B', r, U) n^{-r/(2r+1/2)}$$ which is $o(n^{-s/(2s+1)})$ since $s \le R < 2r$. The random component of $\tau_n(f)$ has order $\|f\|_\infty^{1/4} n^{-1/4}E_1\|\Pi_{V_{J_n}}(\hat f_n - f)\|_2^{1/2}$ which is also $o(n^{-s/(2s+1)})$ for $s<2r$, since $\Pi_{V_{J_n}}$ is a projection operator and since $\hat f_n$ is adaptive, as established in Theorem \ref{adapt}. Moreover, by Theorem \ref{adapt} and again the projection properties, $$EU_n(\hat f_n) = E_1\|\Pi_{V_{J_n}}(\hat f_n-f)\|_2^2 \le E_1\|\hat f_n -f\|_2^2 \le c B^{2/(2s+1)}n^{-2s/(2s+1)}.$$ The term in the last display is the leading term in our bound for the diameter of the confidence set, and shows that $C_n$ adapts to both $B$ and $s$ in the sense of Definition \ref{cc}, using Markov's inequality. The confidence set $C_n(\|f\|_\infty, B_0)$ is not feasible if $B_0$ and $\|f\|_\infty$ are unknown, so in particular under the assumptions of Theorem \ref{rv}, but $C_n$ independent of $B_0, \|f\|_\infty$ can be constructed as follows: we replace $c(B_0)+c(B')$ in the definition of (\ref{conf0}) by a divergent sequence of positive real numbers $c_n$, which can still be accommodated in the diameter estimate from the last paragraph since $n^{-2r/(2r+1/2)}c_n$ is still $o(n^{-2s/(2s+1)})$ as long as $s \le R<2r$ for $c_n$ diverging slowly enough (e.g., like $\log n$). Define thus the confidence set \begin{equation} \label{conf} C_n = \left \{f: \|f-\hat f_n\|_2 \le \sqrt{z_\alpha \tau_n(f) + U_n(\hat f_n) + c_n2^{-2{J_n}r}} \right\}, \end{equation} with $\|f\|_\infty$ replaced by $\|f_n(j_{\max})\|_\infty$ in all expressions where $\|f\|_\infty$ occurs.
As stated before (\ref{htf2}), $\|f_n(j_{\max})\|_\infty$ concentrates around $\|f\|_\infty$ with exponential error bounds, so that the sufficiency part of Theorem \ref{rv} then holds for this $C_n$ with slightly increased $z_\alpha$. \medskip B) Necessity of $R \le 2r$ follows immediately from Part B of Theorem \ref{fix}. That $R<2r$ is also necessary is proved in Subsection \ref{lpt} below. \end{proof} \subsection{Proof of Theorem \ref{fix}} \begin{proof} That an $L^2$-adaptive confidence set exists when $s \le 2r$ follows from Theorem \ref{rv}; the case $s<2r$ is immediate, and the case $s=2r$ follows using the confidence set (\ref{conf0}). This set is feasible since, under the hypotheses of Theorem \ref{fix}, $B=B_0$ is known, as is $B'$ and the upper bound for $\|f\|_\infty$ (cf.~(\ref{supbd})). It is further adaptive since $n^{-r/(2r+1/2)}=n^{-s/(2s+1)}$ for $s=2r$. For part Aii we use the test $\Psi_n$ from Proposition \ref{test} with $\Sigma=\Sigma(s), t=r,$ and define a confidence ball as follows. Take $\hat f_n=f_n(\bar j_n)$ to be the adaptive estimator from the proof of Theorem \ref{adapt}, and let, for $0<L'<\infty$, \begin{eqnarray*} C_n=\begin{cases}\{f \in \Sigma(r): \|f-\hat f_n\|_2 \le L'n^{-s/(2s+1)}\}&\text{if} ~\Psi_n=0\\ \{f \in \Sigma(r): \|f-\hat f_n\|_2 \le L'n^{-r/(2r+1)}\}&\text{if}~\Psi_n=1 \end{cases} \end{eqnarray*} We first prove that $C_n$ is honest for $\Sigma(s) \cup \tilde \Sigma(r, \rho_n)$ if we choose $L'$ large enough. For $f \in \Sigma(s)$ we have from Theorem \ref{adapt}, by Markov's inequality, \begin{eqnarray*} \inf_{f \in \Sigma(s)}{\Pr}_f \left\{f \in C_n \right\}&\ge& 1- \sup_{f \in \Sigma(s)}{\Pr}_f \left \{ \|\hat f_n-f\|_2 > L'n^{-s/(2s+1)} \right\} \\ & \ge & 1-\frac{n^{s/(2s+1)}}{L'} \sup_{f \in \Sigma(s)}E_f \|\hat f_n -f\|_2 \\ &\ge& 1-\frac{c(B,s,r)}{L'} \end{eqnarray*} which can be made greater than $1-\alpha$ for any $\alpha>0$ by choosing $L'$ large enough depending only on $B, \alpha, r,s$.
When $f \in \tilde \Sigma(r, \rho_n)$, using again Markov's inequality \begin{equation*} \inf_{f \in \tilde \Sigma(r, \rho_n)}{\Pr}_f \left\{f \in C_n\right\} \ge 1 - \frac{\sup_{f \in \Sigma(r)}E_f\|\hat f_n -f\|_2}{L'n^{-r/(2r+1)}} - \sup_{f \in \tilde \Sigma(r, \rho_n)}{\Pr}_f\{\Psi_n =0\}. \end{equation*} The first subtracted term can be made smaller than $\alpha/2$ for $L'$ large enough as before. The second subtracted term can also be made less than $\alpha/2$ using Proposition \ref{test} and the remark preceding it, choosing $M$ and $d_n$ to be large but also bounded in $n$. This proves that $C_n$ is honest. We now turn to adaptivity of $C_n$: by the definition of $C_n$ we always have $|C_n| \le L'n^{-r/(2r+1)}$, so the case $f \in \tilde \Sigma(r, \rho_n)$ is proved. If $f \in \Sigma(s)$ then using Proposition \ref{test} again, for $M, d_n$ large enough depending on $\alpha'$ but bounded in $n$, $${\Pr}_f\{|C_n| > L'n^{-s/(2s+1)}\} = {\Pr}_f\{\Psi_n =1\} \le \alpha',$$ which completes the proof of part A. To prove part B of Theorem \ref{fix} we argue by contradiction and assume that the limit inferior equals zero. We then pass to a subsequence of $n$ for which the limit is zero, and still denote this subsequence by $n$. Let $f_0\equiv 1 \in \Sigma(s)$, suppose $C_n$ is adaptive and honest for $\Sigma(s) \cup \tilde\Sigma(r, \rho_n)$ for every $\alpha, \alpha'$, and consider testing $$H_0: f=f_0 ~~~\text{against}~~~H_1: f \in \tilde \Sigma(r, \rho_n)$$ where $\rho_n =o(n^{-r/(2r+1/2)})$. Since $s>2r$ we may assume $n^{-s/(2s+1)}=o(\rho_n)$ (otherwise replace $\rho_n$ by $\rho_n' \ge \rho_n$ s.t. $n^{-s/(2s+1)}=o(\rho_n')$). 
Accept $H_0$ if $C_n \cap \tilde \Sigma(r, \rho_n)$ is empty and reject otherwise, formally $$\Psi_n = 1\{C_n \cap \tilde \Sigma(r, \rho_n) \neq \emptyset\}.$$ The type-one errors of this test satisfy \begin{eqnarray*} E_{f_0}\Psi_n &=& {\Pr}_{f_0}\left\{C_n \cap \tilde \Sigma(r, \rho_n) \neq \emptyset \right\} \\ &\le & {\Pr}_{f_0} \{f_0 \in C_n, |C_n| \ge \rho_n\} + {\Pr}_{f_0} \{f_0 \notin C_n\} \\ &\le & \alpha + \alpha' + r_n \to \alpha + \alpha' \end{eqnarray*} as $n \to \infty$ by the hypothesis of coverage and adaptivity of $C_n$. The type-two errors satisfy, by coverage of $C_n$, as $n \to \infty$ $$E_f(1-\Psi_n) = {\Pr}_f \{C_n \cap \tilde \Sigma(r, \rho_n)=\emptyset\} \le {\Pr}_{f} \{f \notin C_n\} \le \alpha + r_n \to \alpha,$$ uniformly in $f \in \tilde \Sigma(r, \rho_n)$. We conclude that this test satisfies $$\limsup_n\left[E_{f_0}\Psi_n + \sup_{f \in H_1}E_f(1-\Psi_n)\right] \le 2\alpha + \alpha'$$ for arbitrary $\alpha, \alpha'>0$. For $\alpha, \alpha'$ small enough this contradicts (the proof of) Theorem 1i in \cite{I86}, which implies that the limit inferior of the term in brackets in the last display, even with an infimum over all tests, exceeds a fixed positive constant. Indeed, the alternatives (6) in \cite{I86} can be taken to be $$f_i(x) = 1 + \epsilon 2^{-j_n (r+1/2)} \sum_{k \in \mathcal Z_{j_n}} \beta_{ik} \psi_{j_n k}(x), ~~~~~i=1, \dots, 2^{2^{j_n}},$$ for $\epsilon>0$ a small constant, $\beta_{ik} = \pm 1$, and with $j_n$ such that $2^{j_n} \simeq n^{1/(2r+1/2)}$. Since $$\inf_{g \in \Sigma(s)}\|f_i-g\|_2 \ge \sqrt{\sum_{l\ge j_n, k}\langle f_i, \psi_{lk}\rangle^2} - \sup_{g \in \Sigma(s)} \sqrt{\sum_{l\ge j_n, k}\langle g, \psi_{lk}\rangle^2} \ge c\epsilon n^{-r/(2r+1/2)}$$ for every $\epsilon>0$, some $c>0$ and $n$ large enough, these alternatives are also contained in our $H_1$, so that the proof of the lower bound Theorem 1i in \cite{I86} applies also in the present situation. 
\end{proof} \subsection{Proof of Theorem \ref{cont}} We shall write $\Sigma(s)$ for $\Sigma(s,B_0)$ and $\tilde \Sigma_n(s)$ for $\tilde \Sigma(s, \rho_n(s))$ in this proof, and we write $\tilde \Sigma_n(s_N)$ also for $\Sigma(s_N)$ in slight abuse of notation. For $i=1,\dots, N,$ let $\Psi(i)$ be the test from (\ref{stat}) with $\Sigma=\Sigma(s_{i+1})$ and $t=s_i$. Starting from the largest model we first test $H_0: f \in \Sigma(s_2)$ against $H_1: f \in \tilde \Sigma_n(s_1)$, accepting $H_0$ if $\Psi(1)=0$. If $H_0$ is rejected we set $\hat s_n = s_1=r$, otherwise we proceed to test $H_0: f \in \Sigma (s_3)$ against $H_1: f \in \tilde \Sigma_n(s_2)$ using $\Psi(2)$. Iterating this procedure downwards, we define $\hat s_n$ to be the first element $s_i$ in $\mathcal S$ for which $\Psi(i)=1$ rejects. If no rejection occurs we set $\hat s_n$ equal to $s_N$, the last element in the grid. For $f \in \mathcal P_n(M, \mathcal S)$ let $s_{i_0}:=s_{i_0}(f)$ denote the unique element of $\{s \in \mathcal S: f \in \tilde \Sigma_n(s) \}$. We now show that for $M$ large enough \begin{equation} \label{consist} \sup_{f \in \mathcal P_n(M, \mathcal S)}{\Pr}_f \{\hat s_n \ne s_{i_0}(f)\} < \max(\alpha, \alpha')/2. \end{equation} Indeed, if $\hat s_n < s_{i_0}$ then the test $\Psi(i)$ has rejected for some $i < i_0$. In this case $f \in \tilde \Sigma_n(s_{i_0}) \subset \Sigma(s_{i_0}) \subseteq \Sigma(s_{i+1})$ for every $i<i_0$, and thus, \begin{eqnarray*} {\Pr}_f\{\hat s_n< s_{i_0}\} &=& {\Pr}_f\left\{\bigcup_{i < i_0} \{\Psi(i)=1\} \right\} \le \sum_{i<i_0} \sup_{f \in \Sigma(s_{i+1})}E_f\Psi(i) \\ &\le& C(N) e^{-cd_n^2} < \max(\alpha, \alpha')/2 \end{eqnarray*} using Proposition \ref{test} and the remark preceding it, choosing $M$ and $d_n$ to be large but also bounded in $n$. On the other hand if $\hat s_n > s_{i_0}$ (ignoring the trivial case $s_{i_0} =s_N$) then $\Psi(i_0)$ has accepted despite $f \in \tilde \Sigma_n(s_{i_0})$.
Thus $${\Pr}_f\{\hat s_n > s_{i_0}\} \le \sup_{f \in \tilde \Sigma_n(s_{i_0})}E_f (1-\Psi(i_0)) \le C e^{-cd_n^2} \le \max(\alpha, \alpha')/2 $$ again by Proposition \ref{test}, for $M, d_n$ large enough. Denote now by $C_n(s_i)$ the confidence set (\ref{conf0}) constructed in the proof of Theorem \ref{rv} with $r$ there being $s_i$, with $R=2s_i=s_{i+1}$, with $\|f\|_\infty$ replaced by $U$ and with $z_\alpha$ such that the asymptotic coverage level is $\alpha/2$ for any $f \in \Sigma(s_i)$. We then set $C_n=C_n(\hat s_n)$, which is a feasible confidence set as $B_0, r, U$ are known under the hypotheses of the theorem. We then have, from the proof of Theorem \ref{rv}, uniformly in $f \in \tilde \Sigma_n(s_{i_0}) \subset \Sigma(s_{i_0})$, $${\Pr}_f\{f \in C_n(\hat s_n)\} \ge {\Pr}_f\{f \in C_n(s_{i_0})\} - \alpha/2 \ge 1- \alpha.$$ Moreover, if $f \in \Sigma (s, B) \cap \tilde \Sigma_n(s_{i_0})$ for some $1 \le B \le B_0$ and for either $s \in [s_{i_0}, s_{i_0+1})$ or $s \in [s_N, R]$ (in case $s_{i_0}=s_N$), the expected diameter of $C_n$ satisfies, by the estimates in the proof of Theorem \ref{rv}, \begin{eqnarray*} && {\Pr}_f\{|C_n(\hat s_n)| > C B^{2/(2s+1)}n^{-s/(2s+1)}\} \\ && \le {\Pr}_f\{|C_n(s_{i_0})| > C B^{2/(2s+1)}n^{-s/(2s+1)}\} + \alpha'/2 \\ && \le \alpha' \end{eqnarray*} for $C$ large enough, so that this confidence set is adaptive as well, which completes the proof. \subsection{Proof of Theorem \ref{imp}} \begin{proof} Suppose such $C_n$ exists. We will construct functions $f_m \in W^s, m = 0, 1, \dots,$ and a further function $f_\infty \in W^r$, which serve as hypotheses for $f$. For each $m \in \mathbb N$, we will ensure that, at some time $n_m$, $C_{n_m}$ cannot distinguish between $f_m$ and $f_\infty$, and is too small to contain both simultaneously. 
We will thereby obtain a subsequence $n_m$ on which, for \(\delta = \tfrac15(1 - 2\alpha),\) \[\sup_m \Pr_{f_\infty} \{f_\infty \in C_{n_m}\} \le 1 - \alpha - \delta,\] contradicting our assumptions on \(C_n.\) For \(m = 0, 1, 2, \dots, \infty,\) construct functions $f_0=1$, \[f_m = 1 + \varepsilon \sum_{i=1}^m \sum_{k \in \mathcal Z_{j_i}} 2^{-j_i(r+1/2)} \beta_{ik} \psi_{j_ik},\] where $\varepsilon>0$ is a constant, and the parameters \(j_1, j_2, \ldots \in \mathbb N\), \(\beta_{ik} = \pm 1\) are chosen inductively satisfying \(j_i/j_{i-1} \ge 1 + 1/2r\). Pick \(\varepsilon > 0\) small enough that \(\|f_m - f_{m-1}\|_\infty \le 2^{-(m+1)}\) for all \(m < \infty,\) and any choice of \(j_i, \beta_{ik}.\) Then \[f_m = 1 + \sum_{i=1}^m (f_i - f_{i-1}) \ge \tfrac12,\] and \(\int f_m = \langle 1, f_m \rangle = 1,\) so the \(f_m\) are densities. By (\ref{sobolev}), \(f_m \in W^r,\) and for \(m < \infty,\) also \(f_m \in W^s.\) We have already defined $f_0$; for convenience let $n_0 = 1$. Inductively, suppose we have defined \(f_{m-1}, n_{m-1}.\) For \(n_m > n_{m-1}\) and \(D>0 \) large enough depending only on $f_{m-1}$, we have: \begin{enumerate} \item \(\Pr_{f_{m-1}}\{f_{m-1} \not\in C_{n_m}\} \le \alpha + \delta\); and \item \(\Pr_{f_{m-1}}\{|C_{n_m}| \ge Dr_{n_m}\} \le \delta.\) \end{enumerate} Setting $$T_n = 1(\exists\ f \in C_n, \|f - f_{m-1}\|_2 \ge 2Dr_n),$$ we then have \begin{equation} \label{eq:c-accurate} \Pr_{f_{m-1}}\{T_{n_m}=1\} \le \Pr_{f_{m-1}}\{f_{m-1} \not \in C_{n_m}\} + \Pr_{f_{m-1}}\{|C_{n_m}| \ge Dr_{n_m}\} \le \alpha + 2\delta. \end{equation} We claim it is possible to choose $j_m, \beta_{mk}$ and $n_m$, depending only on $f_{m-1}$, so that also: 1. if $m>1$, \begin{equation} \label{eq:f-separate} 3Dr_{n_m} \le \|f_m - f_{m-1}\|_2 \le \tfrac14 \|f_{m-1} - f_{m-2}\|_2, \end{equation} and 2. for any further choice of \(j_i, \beta_{ik},\) \begin{equation} \label{eq:f-indistinguishable} \Pr_{f_\infty}\{T_{n_m} = 0\} \ge 1 - \alpha - 4\delta.
\end{equation} We may then conclude that, since all further choices will satisfy \eqref{eq:f-separate}, \[\|f_\infty - f_{m-1}\|_2 \ge \|f_m - f_{m-1}\|_2 - \sum_{i=m+1}^\infty \|f_i - f_{i-1}\|_2 \ge 2Dr_{n_m},\] so \[\Pr_{f_\infty}\{f_\infty \in C_{n_m}\} \le \Pr_{f_\infty}\{T_{n_m} = 1\} \le \alpha + 4\delta = 1 - \alpha - \delta\] as required. It remains to verify the claim. For \(j \ge (1 + 1/2r)j_{m-1},\) \(\beta_k = \pm 1,\) set \[g_\beta = \varepsilon 2^{-j(r+1/2)} \sum_{k \in \mathcal Z_j} \beta_k \psi_{jk},\] and \(f_\beta = f_{m-1} + g_\beta.\) Allowing \(j \to \infty,\) set \[n \sim C2^{j(2r+1/2)},\] for \(C > 0\) to be determined. Then $$\|g_\beta\|_2 = \varepsilon 2^{-jr} \approx n^{-r/(2r+1/2)},$$ so for \(j\) large enough, \(f_\beta\) satisfies \eqref{eq:f-separate} with any choice of \(\beta.\) The density of \(X_1, \dots, X_n\) under \(f_\beta,\) w.r.t.\ that under \(f_{m-1},\) is $$Z_\beta = \prod_{i=1}^n \frac{f_\beta}{f_{m-1}}(X_i).$$ Set \(Z = 2^{-j}\sum_\beta Z_\beta,\) so \(E_{f_{m-1}}[Z] = 1,\) and \begin{align*} E_{f_{m-1}}[Z^2] &= 2^{-2j} \sum_{\beta, \beta'} \prod_{i=1}^n E_{f_{m-1}}\left[ \frac{f_\beta f_{\beta'}}{f_{m-1}^2}(X_i)\right]\\ &= 2^{-2j} \sum_{\beta, \beta'} \left \langle \frac{f_\beta}{\sqrt{f_{m-1}}}, \frac{f_{\beta'}}{\sqrt{f_{m-1}}} \right \rangle^n\\ &= 2^{-2j} \sum_{\beta, \beta'} \left(1 + \left \langle \frac{g_\beta}{\sqrt{f_{m-1}}}, \frac{g_{\beta'}}{\sqrt{f_{m-1}}} \right \rangle\right)^n\\ &\le 2^{-2j} \sum_{\beta, \beta'} (1 + \varepsilon^2 2^{1-j(2r+1)} \langle \beta, \beta' \rangle)^n\\ &= E[(1 + \varepsilon^22^{1-j(2r+1)} Y)^n],\\ \intertext{where \(Y = \sum_{i=1}^{2^j} R_i,\) for i.i.d.\ Rademacher random variables \(R_i,\)} &\le E[\exp(n\varepsilon^2 2^{1-j(2r+1)} Y)]\\ &= \cosh\left(D2^{-j/2}(1 + o(1))\right)^{2^j},\\ \shortintertext{as \(j \to \infty,\) for some \(D > 0,\)} &= \left(1 + D^2 2^{-j} (1 + o(1))\right)^{2^j}\\ &\le \exp\left(D^2(1 + o(1))\right)\\ &\le 1 + \delta^2, \end{align*} for \(j\) large, \(C\) small.
Hence \(E_{f_{m-1}}[(Z - 1)^2] \le \delta^2,\) and we obtain \begin{align*} \Pr_{f_{m-1}}\{T_n = 1\} + \max_\beta \Pr_{f_\beta}\{T_n = 0\} &\ge \Pr_{f_{m-1}}\{T_n=1\} + 2^{-j}\sum_\beta \Pr_{f_\beta}\{T_n = 0\}\\ &= 1 + E_{f_{m-1}}[(Z-1)1(T_n = 0)]\\ &\ge 1 - \delta. \end{align*} Set \(f_m = f_\beta,\) for \(\beta\) maximizing this expression. The density of \(X_1, \dots, X_n\) under \(f_\infty,\) w.r.t.\ under \(f_m,\) is \[Z' = \prod_{i=1}^n \frac{f_\infty}{f_m}(X_i).\] Now, \(E_{f_m}[Z'] = 1,\) and \[\norm{f_\infty - f_m}_2^2 = \sum_{i=m+1}^\infty \varepsilon^2 2^{-2j_ir} \le E'2^{-2j_{m+1}r} \le E'2^{-j(2r+1)},\] for some constant \(E' > 0,\) so similarly \begin{align*} E_{f_m}[{Z'}^2] &\le (1 + 2\norm{f_\infty - f_m}_2^2)^n\\ &\le (1 + E' 2^{1 - j(2r+1)})^n\\ &\le \exp(E'n2^{1 - j(2r+1)})\\ &= \exp\left(F2^{-j/2}(1+o(1))\right),\\ \shortintertext{for some \(F > 0,\)} &\le 1 + \delta^2, \end{align*} for \(j\) large. Hence \(E_{f_{m}}[(Z'-1)^2] \le \delta^2,\) and \begin{align*} \Pr_{f_{m-1}}\{T_n = 1\} + \Pr_{f_\infty}\{T_n = 0\} &= \Pr_{f_{m-1}}\{T_n=1\} + E_{f_m}[Z'1(T_n=0)] \\&\ge 1 - \delta + E_{f_m}[(Z'-1)1(T_n = 0)] \\&\ge 1 - 2\delta. \end{align*} If we take \(j_m = j,\) \(n_m = n\) large enough also that \eqref{eq:c-accurate} holds, then \(f_\infty\) satisfies \eqref{eq:f-indistinguishable}, and our claim is proved. \end{proof} \subsection{Proof of Part B of Theorem \ref{rv}} \label{lpt} \begin{proof} Suppose such \(C_n\) exists for $R=2r$. Set \(f_0 = 1,\) and \[f_1 = 1 + B2^{-j(r+1/2)} \sum_{k \in \mathcal Z_j} \beta_{jk} \psi_{jk},\] for \(B > 0,\) \(j > j_0,\) and \(\beta_{jk} = \pm 1\) to be determined. Having chosen \(B,\) we will pick \(j\) large enough that \(f_1 \ge \tfrac12.\) Since \(\int f_1 = \langle f_1, 1 \rangle = 1,\) \(f_1\) is then a density. 
Set \(\delta = \tfrac14(1 - 2\alpha).\) As \(f_0 \in \Sigma(R, 1),\) for \(n\) and $L$ large we have: \begin{enumerate} \item \(\Pr_{f_0}\{f_0 \not \in C_n\} \le \alpha + \delta;\) and \item \(\Pr_{f_0}\{\abs{C_n} \ge Ln^{-R/(2R+1)}\} \le \delta.\) \end{enumerate} Setting \(T_n = 1(\exists\, f \in C_n : \norm{f - f_0}_2 \ge 2Ln^{-R/(2R+1)}),\) we then have \[\Pr_{f_0}\{T_n = 1\} \le \alpha + 2\delta,\] as in the proof of Theorem \ref{imp}. For a constant \(C = C(\delta) > 0\) to be determined, set \(B = (3L)^{2R+1}C^{-R}.\) Allowing \(j \to \infty,\) set \(n \sim CB^{-2}2^{j(R+1/2)}.\) Then \[\norm{f_1 - f_0}_2 = B2^{-jr} \simeq 3Ln^{-R/(2R+1)},\] so for \(j\) large, \(\norm{f_1 - f_0}_2 \ge 2Ln^{-R/(2R+1)}.\) Arguing as in the proof of Theorem \ref{imp}, the density \(Z\) of \(f_1\) w.r.t.\ \(f_0\) has second moment \begin{align*} E_{f_0}[Z^2] &\le \cosh(nB^22^{1-j(2r+1)})^{2^j}\\ &= \cosh(C2^{1-j/2}(1 + o(1)))^{2^j}\\ &= (1 + C^22^{2-j}(1 + o(1)))^{2^j}\\ &\le \exp(4C^2(1 + o(1)))\\ &\le 1 + \delta^2, \end{align*} for \(C(\delta)\) small, \(j\) large. Hence \[\Pr_{f_0}\{T_n=1\} + \max_\beta \Pr_{f_1}\{T_n = 0\} \ge 1 - \delta,\] and for all \(j\) (and \(n\)) large enough, we obtain, for suitable \(\beta,\) \[\Pr_{f_1}\{f_1 \in C_n\} \le \Pr_{f_1}\{T_n = 1\} \le \alpha + 3\delta = 1 - \alpha - \delta.\] Since \(f_1 \in \Sigma(r, B)\) for all $n, \beta_{jk}$, this contradicts the definition of \(C_n.\) \end{proof} \bigskip \textbf{Acknowledgement.} The authors are very grateful to two anonymous referees for a careful reading of a preliminary manuscript that led to several substantial improvements. \bibliographystyle{plain}
\section{Conclusion} \label{sec:conclusion} We presented the first formal verification of the semantics of the concurrent revisions concurrency control model. We identified and resolved a number of ambiguities in the operational semantics, and simplified a proof of determinacy. Our paper can hopefully serve as a case study for the verification of concurrency control models, and the Isabelle\slash HOL artifact can be used as a basis for developing and verifying extensions of the concurrent revisions model. \section{Concurrent Revisions} \label{sec:concurrent-revisions} In this section we first give an informal, high-level overview of CR (Section \ref{sec:overview}) exhibiting the central ideas. Then, we systematically describe and comment on the formal semantics as defined in the original account (Section~\ref{sec:formal-semantics}). \subsection{Overview} \label{sec:overview} The central unit of concurrency in the CR model is the \emph{revision}. A revision can be thought of as a process evaluating an expression $e$ using a (conceptually) isolated, local \emph{store} $\gamma = \{ l_1 \mapsto v_1, \ldots, l_n \mapsto v_n \}$, which maps \emph{locations} $l_i$ to \emph{values} $v_i$. A revision is uniquely identified by an \emph{identifier}. All computation within the model takes place within some revision. Initially, there is only one revision called the \emph{main revision}. We write $\{ r \mapsto \langle \gamma, e \rangle \}$ to denote a program state in which revision $r$ evaluates $e$ using store $\gamma$. Revisions execute in complete isolation from one another, unless an explicit synchronization operation -- fork or join -- is performed. When a revision $r_1$ \emph{forks} some expression $e$, a fresh revision $r_2$ is created that evaluates $e$. Revision $r_2$ is initialized with a copy of $r_1$'s store (a \emph{snapshot}), and the identifier $r_2$ is exposed to $r_1$. 
Let $\mathcal{E}[e]$ denote an expression where $\mathcal{E}[ \ ]$ represents an evaluation context around $e$. Then \[ \{r_1 \mapsto \langle \gamma, \mathcal{E}[\mathsf{rfork} \ e] \rangle \} \to \{r_1 \mapsto \langle \gamma, \mathcal{E}[r_2] \rangle, r_2 \mapsto \langle \gamma, e \rangle \} \] represents an example in which $r_1$ forks $e$. (Informally, we also say that $r_1$ forks $r_2$.) When revision $r_1$ has a reference to $r_2$, then $r_1$ can \emph{join} $r_2$. This causes $r_1$ to block until $r_2$ terminates. Once $r_2$ terminates, the store of $r_2$ is \emph{merged} into $r_1$'s store, and $r_2$ ceases to exist. Joining a nonexistent revision is considered an error. If $e$ is in normal form (signifying termination of $r_2$), then \[ \begin{array}{llll} \{r_1 \mapsto \langle \gamma_1, \mathcal{E}[\mathsf{rjoin} \ r_2] \rangle, r_2 \mapsto \langle \gamma_2, e \rangle \} \to_{r_1} \\ \{r_1 \mapsto \langle \mathcal{M}(\gamma_1, \gamma_2), \mathcal{E}[ \mathsf{unit} ] \rangle \} \end{array} \] represents an example in which $r_1$ joins $r_2$, with $\mathcal{M}$ representing the merge function. To explain how the merge function $\mathcal{M}$ works, we first introduce the notion of a \emph{revision diagram}, which visualizes the interactions between revisions. In these diagrams, solid arrows depict steps within revisions, and dotted arrows depict fork and join relations between revisions. The following is a simple example, in which four states are labeled: \begin{center} \begin{tikzcd}[row sep=0.2em, column sep=1em, fork/.style={dotted, bend left=35pt, opacity=0.8}, join/.style={dotted, bend left=35pt, opacity=0.8}] r_2 \hspace{-2mm} & & & \cdot \arrow[r] & \cdot \arrow[r] & \cdot \arrow[r] & \cdot \arrow[r] & c \arrow[rd, join] & & \\ r_1 \hspace{-2mm} & \cdot \arrow[r] & a \arrow[r] \arrow[ru, fork] & \cdot \arrow[r] & b \arrow[rrrr] & & & & d \end{tikzcd} \end{center} In state $a$, main revision $r_1$ forks $r_2$. 
In state $b$, $r_1$ initiates a join on $r_2$, which blocks until $r_2$ reaches its terminal state $c$. State $d$ is the result of $r_1$ joining $r_2$. State $a$ is the \emph{greatest common ancestor} (\emph{gca}) of joiner state $b$ and joinee state $c$. (The initial state is regarded as the minimal element). Burckhardt and Leijen have shown that each pair of states $(x,y)$ has a unique gca: see Lemma 17 and Theorem 10 of the technical report~\cite{burckhardt2010semanticstech}. Let $x_\gamma$ denote the store at a state $x$, and $\mathcal{W}(x,y)$ the set of locations that were written to in the execution from state $x$ to state $y$. The merge $\mathcal{M}$ of stores $b_\gamma$ (belonging to a joining revision $r_1$) and $c_\gamma$ (belonging to a joined revision $r_2$) with gca store $a_\gamma$ (see the diagram above) is defined as follows: \[ \mathcal{M}(b_\gamma, c_\gamma) \ l = \begin{cases} b_\gamma \ l & \hspace{-5.95pt} l \notin \mathcal{W}(a, c)\\ c_\gamma \ l & \hspace{-5.95pt}l \in \mathcal{W}(a, c) \land l \notin \mathcal{W}(a, b) \\ f_l(a_\gamma \ l, b_\gamma \ l, c_\gamma \ l)& \hspace{-5.95pt} \text{otherwise} \end{cases} \] Here, $f_l$ is a deterministic \emph{merge function} that resolves the \emph{write-write} conflict on $l$. It is uniquely determined by the \emph{isolation type} of $l$: a user-definable type for shared locations that describes how conflicts should be resolved. We illustrate the concept of an isolation type using two standard examples: the \textit{Versioned} and \textit{Cumulative} isolation types. If $l$ stores a \textit{Versioned} integer, then $f_l(v_1,v_2,v_3) = v_3$, effectively prioritizing the joinee and possibly overwriting a modification by the joiner. 
This behavior is illustrated by the following revision diagram: \begin{center} \begin{tikzcd}[row sep=0.2em, column sep=2.5em, fork/.style={dotted, bend left=18pt, opacity=0.8}, join/.style={dotted, bend left=18pt, opacity=0.8}] r_2 \hspace{-6mm} & & & \cdot \arrow[r, "l \ := \ 2" ] & \cdot \arrow[rd, join] \\ r_1 \hspace{-6mm} & \cdot \arrow[r, "l \ := \ 3"] & \cdot \arrow[r] \arrow[ru, fork] & \cdot \arrow[r, "l \ := \ 7"] & \cdot \arrow[r] & {\{l \mapsto 2, \ldots\} } \end{tikzcd} \end{center} A datum can be declared \textit{Versioned}, for instance, when the joinee is performing some task enjoying higher priority than the joiner's task. If $l$ stores a \textit{Cumulative} integer, by contrast, then the merge function is $f_l(v_1,v_2, v_3) = v_2 + v_3 - v_1$, taking both modifications into account. In the following diagram, both revisions added $2$ to the original value of $3$, causing the result of the merge to be $7$: \begin{center} \begin{tikzcd}[row sep=0.2em, column sep=2.5em, fork/.style={dotted, bend left=18pt, opacity=0.8}, join/.style={dotted, bend left=18pt, opacity=0.8}] r_2 \hspace{-6mm} & & & \cdot \arrow[r, "l \ := \ 5"] & \cdot \arrow[rd, join] \\ r_1 \hspace{-6mm} & \cdot \arrow[r, "l \ := \ 3"] & \cdot \arrow[r] \arrow[ru, fork] & \cdot \arrow[r, "l \ := \ 5"] & \cdot \arrow[r] & {\{l \mapsto 7, \ldots \} } \end{tikzcd} \end{center} A typical use case for the \textit{Cumulative} isolation type is one in which $l$ functions as a counter. 
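The merge rule of Section~\ref{sec:overview} together with the two isolation types above can be condensed into a small executable sketch. This is a hypothetical Python rendering for illustration only (it is not the Isabelle\slash HOL artifact): stores are dictionaries, the write sets $\mathcal{W}(a,b)$ and $\mathcal{W}(a,c)$ are passed explicitly, and the merge function $f_l$ is looked up per location.

```python
def merge(gca, joiner, joinee, written_joiner, written_joinee, resolve):
    """Three-way merge M(joiner, joinee) of two stores with common ancestor gca.

    written_joiner and written_joinee are the write sets W(a, b) and W(a, c);
    resolve maps a location l to its merge function f_l, determined by l's
    isolation type. Assumes both stores share gca's domain (no fresh refs)."""
    merged = {}
    for l in joiner:
        if l not in written_joinee:            # joinee never wrote l
            merged[l] = joiner[l]
        elif l not in written_joiner:          # only the joinee wrote l
            merged[l] = joinee[l]
        else:                                  # write-write conflict: use f_l
            merged[l] = resolve[l](gca[l], joiner[l], joinee[l])
    return merged

versioned  = lambda v1, v2, v3: v3             # joinee's write wins
cumulative = lambda v1, v2, v3: v2 + v3 - v1   # combine both modifications
```

On the \textit{Versioned} diagram above (gca value $3$, joiner writes $7$, joinee writes $2$) the sketch returns $2$; on the \textit{Cumulative} diagram (both revisions write $5$) it returns $7$.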
Since identifiers can be exchanged through fork and join operations, valid revision diagrams can be quite complex: \begin{center} \begin{tikzcd}[row sep=0.2em, column sep=1em, fork/.style={dotted, bend left=35pt, opacity=0.8}, join/.style={dotted, bend left=35pt, opacity=0.8}] r_3 \hspace{-2mm} & & & \cdot \arrow[r] & \cdot \arrow[r] & \cdot \arrow[rd, join] & & \\ r_4 \hspace{-2mm} & & & & & \cdot \arrow[r] & \cdot \arrow[rdd, join] & \\ r_2 \hspace{-2mm} & & \cdot \arrow[r] \arrow[ruu, fork] & \cdot \arrow[rd, join] & & & & \\ r_1 \hspace{-2mm} & \cdot \arrow[r] \arrow[ru, fork] & \cdot \arrow[r] & \cdot \arrow[r] & \cdot \arrow[ruu, fork] \arrow[r] & \cdot \arrow[r] & \cdot \arrow[r] & \cdot \end{tikzcd} \end{center} Despite this, programs are \emph{determinate}, meaning that the outcome of a program is uniquely determined, even if scheduling is nondeterministic. This property assumes two simple conditions: (1) revisions do not perform nondeterministic behavior that affects the semantics of outcomes (e.g., generating a random number), and (2) revisions are joined only once (a second join operation would be undefined). \subsection{Formal Semantics} \label{sec:formal-semantics} \begin{figure*} $ \begin{array}{lllll} (\textit{apply}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ (\lambda x. 
e) \ v ] \rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau, \mathcal{E}[ [v/x] e] \rangle) \\ (\textit{if-true}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ \mathsf{true} \ \mathsf{?} \ e_1 \ \mathsf{:} \ e_2]\rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau, \mathcal{E}[ e_1]\rangle) \\ (\textit{if-false}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ \mathsf{false} \ \mathsf{?} \ e_1 \ \mathsf{:} \ e_2]\rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau, \mathcal{E}[ e_2]\rangle) \\ \\ (\textit{new}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ \mathsf{ref} \ v]\rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau(l \mapsto v), \mathcal{E}[ l ] \rangle) & \mathsf{if} \ l \notin s \\ (\textit{get}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ \mathsf{!} l]\rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau, \mathcal{E}[ (\sigma \mathsf{::} \tau) \ l] \rangle) & \mathsf{if} \ l \in \mathsf{dom} \ (\sigma \mathsf{::} \tau)\\ (\textit{set}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ l := v] \rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau(l \mapsto v), \mathcal{E}[ \mathsf{unit}] \rangle) & \mathsf{if} \ l \in \mathsf{dom} \ (\sigma \mathsf{::} \tau) \\ \\ (\textit{fork}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ \mathsf{rfork} \ e]\rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau, \mathcal{E}[ r'] \rangle, r' \mapsto \langle \sigma \mathsf{::} \tau, \epsilon, e \rangle) & \mathsf{if} \ r' \notin s \\ (\textit{join}) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ \mathsf{rjoin} \ r']\rangle, r' \mapsto \langle \sigma', \tau', v \rangle \rrbracket & \to_r & s(r \mapsto \langle \sigma, \tau \mathsf{::} \tau', \mathcal{E}[ \mathsf{unit}] \rangle, r' \mapsto \bot) \\ (\textit{join}_\epsilon) & s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[ \mathsf{rjoin} \ r'] \rangle, r' \mapsto 
\bot\rrbracket & \to_r & \epsilon \\ \end{array} $ \caption{The rules of the operational semantics.} \label{fig:operational_semantics} \end{figure*} The CR semantics is modeled by the \emph{revision calculus}, which consists of a programming language for revisions, a set of evaluation contexts, notions of local and global states, and an operational semantics on global states. The original account also introduces an equivalence relation on states and a vocabulary for discussing execution traces. \subparagraph{Preliminaries} We write $\mathsf{dom} \ f$ and $\mathsf{ran} \ f$ to denote respectively the domain and range of a partial function $f$, $\epsilon$ for the empty partial function, $f \ x = \bot$ for $x \notin \mathsf{dom} \ f$, and $f(x \mapsto y)$ for the partial function obtained by updating $x$ to $y$ in $f$. For $n \geq 1$, the expression $f (x_1 \mapsto y_1, \ldots , x_{n+1} \mapsto y_{n+1})$ abbreviates $(f (x_1 \mapsto y_1, \ldots, x_n \mapsto y_n))(x_{n+1} \mapsto y_{n+1})$. For a bijective function $f$, we write $f^{-1}$ to denote its inverse. Given partial functions $f$ and $g$, $f :: g$ is a partial function that maps $x$ to $g \ x$ if $x \in \mathsf{dom} \ g$ and to $f \ x$ otherwise (``$g$ shadows $f$''). For functions $f$ and sets $S$, $f \ ' \ S$ denotes the image of $S$ under $f$, i.e., $\{ f \ x \mid x \in S \}$. We write $\rightsquigarrow^=$, $\rightsquigarrow^*$ and $\rightsquigarrow^n$ for respectively the reflexive closure, reflexive transitive closure and $n$-fold composition of a relation $\rightsquigarrow$, use mirrored arrows $\leftsquigarrow$ to denote inverse relations, and write $R \circ R'$ for the composition of relations $R$ and $R'$, given by $(x,z) \in R \circ R' \iff \exists y. \ (x,y) \in R \land (y,z) \in R'$.
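The shadowing operation $f :: g$, which the rules of Figure~\ref{fig:operational_semantics} use to combine snapshot and local store, has a one-line rendering on Python dictionaries (an illustrative aside; later entries in a dict literal shadow earlier ones):

```python
def shadow(f, g):
    # (f :: g) x = g x if x is in dom g, and f x otherwise ("g shadows f");
    # for finite maps as dicts this is exactly right-biased dict union.
    return {**f, **g}
```

For example, if the snapshot is $\sigma = \{l_1 \mapsto 1, l_2 \mapsto 2\}$ and the local store is $\tau = \{l_2 \mapsto 9\}$, then $(\sigma :: \tau)$ maps $l_1$ to $1$ and $l_2$ to $9$, as used by rule (\textit{get}).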
\subparagraph{Expressions} The programming language is parameterized by three (typically infinite) sets: variables $x \in \textit{Var}$, revision identifiers $r \in \textit{Rid}$ and location identifiers $l \in \textit{Lid}$. It defines a set of constants $c \in \textit{Const}$, containing elements \textsf{unit}, \textsf{true} and \textsf{false}. The sets of values and expressions are mutually defined as follows: \[ \begin{array}{lcl} v \in \textit{Val} & ::= & c \bnfmid x \bnfmid l \bnfmid r \bnfmid \lambda x.e \\ e \in \textit{Expr} & ::= & v \bnfmid e \ e \bnfmid e \ \mathsf{?} \ e \ \mathsf{:} \ e \bnfmid \mathsf{ref} \ e \bnfmid \mathsf{!}e \bnfmid \\ & & e := e \bnfmid \mathsf{rfork} \ e \bnfmid \mathsf{rjoin} \ e\\ \end{array} \] For the properties of interest, we do not need to consider $\lambda$-terms modulo $\alpha$-equivalence. This is fortunate, since $\alpha$-equivalence has a reputation of being challenging to formalize \cite{berghofer2007a, urban2011general}. In some contexts, we will write $e_1 \bullet e_2$ rather than $e_1 \ e_2$ to improve readability. \subparagraph{Evaluation Contexts} The following set of evaluation contexts is defined: \[ \begin{array}{lclcl} \mathcal{E} \in \mathit{Cntxt} & ::= & \square \bnfmid \mathcal{E} \ e \bnfmid v \ \mathcal{E} \bnfmid \mathcal{E} \ \mathsf{?} \ e \ \mathsf{:} \ e \bnfmid \mathsf{ref} \ \mathcal{E} \bnfmid \\ & & \mathsf{!}\mathcal{E} \bnfmid \mathcal{E} := e \bnfmid l := \mathcal{E} \bnfmid \mathsf{rjoin} \ \mathcal{E} \end{array} \] The expression $\mathcal{E}[e]$ denotes the result of \emph{plugging} $e$ into the unique hole ($\square$) of $\mathcal{E}$. Evaluation contexts allow decomposing an expression $e = \mathcal{E}[r]$ into an evaluation site $r$ (a redex) and its surrounding context $\mathcal{E}$, enabling rewriting under contexts. A more detailed explanation of evaluation contexts is provided by Harper~\cite[pp. 44--46]{harper2016practical}. 
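Plugging and decomposition can be made concrete with a toy model restricted to application expressions (our own construction, not part of the calculus or the formalization; contexts are represented as plugging functions):

```python
# Toy model: values are atoms (strings); applications are tuples ("app", e1, e2).
# A context is a function that plugs an expression into its hole.

def is_value(e):
    return not (isinstance(e, tuple) and e[0] == "app")

def decompose(e):
    """Split e into (context, redex) following the grammar  E ::= hole | E e | v E.
    The function position is decomposed first; the argument only once the
    function position is a value."""
    if isinstance(e, tuple) and e[0] == "app":
        _, e1, e2 = e
        if not is_value(e1):                           # case  E e
            ctx, r = decompose(e1)
            return (lambda x: ("app", ctx(x), e2)), r
        if not is_value(e2):                           # case  v E
            ctx, r = decompose(e2)
            return (lambda x: ("app", e1, ctx(x))), r
        return (lambda x: x), e                        # case  hole: e is the redex
    raise ValueError("values cannot be decomposed")

e = ("app", ("app", "f", "x"), ("app", "g", "y"))
ctx, redex = decompose(e)
assert redex == ("app", "f", "x")                     # left application found first
assert ctx("v") == ("app", "v", ("app", "g", "y"))    # plugging: E[v]
```

The final assertions mirror the situation discussed next: for an application of two applications, only the left subterm is a valid redex of the whole expression.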
More strongly for CR, a \emph{unique decomposition} lemma holds: $\mathcal{E}[r] = \mathcal{E}'[r']$ implies $\mathcal{E} = \mathcal{E}'$ and $r = r'$ for redexes $r$ and $r'$. Since the operational semantics matches expressions $e$ against patterns of the form $\mathcal{E}[r]$, the unique decomposition lemma thus guarantees that a unique redex of $e$ is always evaluated. For example, the expression $((\lambda x \ldotp x) \ x) \ ((\lambda y \ldotp y) \ y)$ can match against the pattern $\mathcal{E}[(\lambda x \ldotp x) \ x]$, since $\square \ ((\lambda y \ldotp y) \ y)$ is a valid context. It cannot match against $\mathcal{E}[(\lambda y \ldotp y) \ y ]$, however, since $((\lambda x \ldotp x) \ x) \ \square$ is not a valid context. Uniqueness of decomposition is claimed, but not demonstrated in the original account. We describe its proof in Section~\ref{sec:formalization-preliminaries}. \subparagraph{State} Three notions of state are required: the state of a store, the local state of a revision, and the global state. A \textit{Store} is a partial function $\sigma,\tau \in \mathit{Lid} \rightharpoonup \mathit{Val}$, and a \textit{GlobalState} is a partial function $s \in \mathit{Rid} \rightharpoonup \mathit{LocalState}$. For technical reasons, the local state of a revision is not a tuple $\langle \gamma, e \rangle$, consisting of a store $\gamma$ and expression $e$, as informally described in Section~\ref{sec:overview}. Instead, a local state is a triple $L \in \textit{LocalState} = \textit{Snapshot} \times \textit{LocalStore} \times \textit{Expr}$, where $\textit{Snapshot}$ and $\textit{LocalStore}$ are type synonyms for $\textit{Store}$. To understand why, we note that the gca store, required to define the merge operation, always equals the snapshot (initial store) of the joinee. The diagrams of Section~\ref{sec:overview} provide examples, and a proof of this fact is given in the original account (Lemma 18 of the technical report \cite{burckhardt2010semanticstech}).
Thus, if a revision~$r'$ preserves the snapshot it inherits from its forker~$r$, while tracking its own updates separately, then the gca store can always be obtained from the local state of $r'$ when $r'$ is joined. In the operational semantics, snapshots are never modified and local stores track updates. We introduce the notations $L_\sigma$, $L_\tau$ and $L_e$ for respectively the first, second and third component of a local state $L$, and define $\mathsf{doms} \ L = \mathsf{dom} \ L_\sigma \cup \mathsf{dom} \ L_\tau$. \subparagraph{Occurrences} To avoid ambiguities in our discussion of the operational semantics, we introduce a family of functions not present in the original account. We write $\textit{RID} \ e$ to denote the set of all revision identifiers occurring in expression $e$, and $\textit{LID} \ e$ to denote the set of all location identifiers occurring in~$e$. We analogously define functions $\textit{RID}$ and $\textit{LID}$ for contexts. For stores $\sigma$, we define $\textit{RID} \ \sigma = \bigcup \textit{RID} \ ' \ \mathsf{ran} \ \sigma$ and $\textit{LID} \ \sigma = \mathsf{dom} \ \sigma \cup \bigcup \textit{LID} \ ' \ \mathsf{ran} \ \sigma$. For local states $L$, we define $\textit{RID} \ L = \textit{RID} \ L_\sigma \cup \textit{RID} \ L_\tau \cup \textit{RID} \ L_e$, and similarly for $\textit{LID} \ L$. For global states $s$, we define $\textit{RID} \ s = \mathsf{dom} \ s\cup \bigcup \textit{RID} \ ' \ \mathsf{ran} \ s$ and $\textit{LID} \ s = \bigcup \textit{LID} \ ' \ \mathsf{ran} \ s$. \subparagraph{Operational Semantics} The operational semantics (Figure~\ref{fig:operational_semantics}) defines a transition relation on global states, indexed by the revision~$r$ ``performing'' the step. The left hand side of each rule is of the form $s\llbracket r \mapsto L \rrbracket$, and matches any global state~$s$ for which $s \ r = L$. The first three rules affect only the expression local to~$r$. 
The original authors state that rule \formatrule{apply} is deterministic, but otherwise they make no explicit assumptions about the capture-avoiding substitution $[v/x]e$. The next three rules model store interactions. The side condition for \formatrule{new}, $l \notin s$, is a notational shorthand expressing that ``$l$ does not appear in any snapshot or local store of $s$'' \cite{burckhardt2011semantics}. We believe that \begin{equation}\label{eq:new_wrong} l \notin \bigcup \{\textit{LID} \ L_\sigma \cup \textit{LID} \ L_\tau \mid L \in \mathsf{ran} \ s \} \tag{$SC_\textit{new}$} \end{equation} is the literal interpretation of this informal characterization, rather than the more conservative side condition $l \notin \textit{LID} \ s$. We examine how the choice of interpretation influences determinacy in Section \ref{sec:operational-semantics}. Note that \formatrule{new} is nondeterministic. Like rule \formatrule{new}, rule \formatrule{fork} is nondeterministic: the side condition $r' \notin s$ is meant to express that $r'$ ``is not mapped by~$s$, and does not appear in any snapshot or local store of $s$''~ \cite{burckhardt2011semantics}. We believe that \begin{equation} \label{eq:fork_wrong} r' \notin \mathsf{dom} \ s \cup \bigcup \{\textit{RID} \ L_\sigma \cup \textit{RID} \ L_\tau \mid L \in \mathsf{ran} \ s \}\tag{$SC_\textit{fork}$} \end{equation} is the literal interpretation of this sentence, rather than $r' \notin \textit{RID} \ s$. In Section \ref{sec:operational-semantics} we will show that \eqref{eq:fork_wrong} leads to nondeterminacy. The join operation is modeled by rules \formatrule{join} and \formatrule{join$_\epsilon$}. Rule \formatrule{join} resolves all conflicts according to the \textit{Versioned} isolation type. The restriction to this isolation type is part of the original account, and we adopt it here in order to remain faithful. 
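Concretely, the merge performed by rule \formatrule{join} is just the shadowing operation $\tau :: \tau'$ on the joiner's and joinee's local stores. A toy Python sketch (our own names; stores are dicts recording only updates, as in the calculus):

```python
# Rule (join) under the Versioned isolation type: the joiner's new local store
# is tau :: tau', i.e., the joinee's updates shadow the joiner's on conflicts.

def join_stores(tau, tau_joinee):
    merged = dict(tau)           # joiner's updates
    merged.update(tau_joinee)    # joinee's updates win on conflicting locations
    return merged

tau        = {"x": 1, "y": 5}    # joiner wrote x and y
tau_joinee = {"y": 2, "z": 3}    # joinee wrote y and z
# The conflict on y is resolved in favour of the joinee (Versioned policy).
assert join_stores(tau, tau_joinee) == {"x": 1, "y": 2, "z": 3}
```

Locations untouched by either revision are not in either local store and remain governed by the snapshot, which is left unchanged by the rule.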
The original account argues that this rule can be generalized by using a custom merge function \[ \mathit{merge}_l : \mathit{Val} \times \mathit{Val} \times \mathit{Val} \to \mathit{Val} \] defined for the values at each location $l$ of respectively the snapshot, the local store of the joiner and the local store of the joinee. Because locations are randomly allocated in the calculus, we argue that it instead may be better to modify the calculus by introducing subtypes of \textit{Val}, which then determine which merge functions are used \cite{overbeek2018thesis}. In addition, one would have to forbid the definition of merge functions whose results depend on nondeterministic aspects, such as the occurrence of particular location and revision identifiers in argument values. Failure to do so would result in nondeterminacy. Rule \formatrule{join$_\epsilon$} ensures that the global state collapses to the empty function when an erroneous join is performed. It is needed to establish determinacy \cite{burckhardt2011semantics}. \subparagraph{Equivalence} Since location and revision identifiers are allocated nondeterministically, an equivalence relation on structures containing identifiers is introduced. Let $\alpha \in \mathit{Rid} \to \mathit{Rid}$, $\beta \in \mathit{Lid} \to \mathit{Lid}$ and let $S$ be some structure containing identifiers (expressions, stores, etc.). We write $\mathcal{R} \ \alpha \ \beta \ S$ to denote the structure that results from renaming every identifier in $S$ according to $\alpha$ and $\beta$, and $S \approx_{\alpha\beta} S'$ to express that $\alpha$ and $\beta$ are bijections and $\mathcal{R} \ \alpha \ \beta \ S = S'$. Structures $S$ and $S'$ are said to be \emph{renaming-equivalent}, denoted $S \approx S'$, if $S \approx_{\alpha\beta} S'$ for some $\alpha$ and $\beta$. 
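The renaming operator $\mathcal{R}$ and the relation $\approx$ can be illustrated on stores with a small Python sketch (our own, illustrative model: values are themselves identifiers or constants; a real $\approx$ check quantifies over all bijections, here we only test candidate bijections):

```python
# Toy model of renaming-equivalence on stores (dicts from location ids to
# values, where a value may itself be a location id).

def rename_store(beta, store):
    """R applied to a store: rename identifiers in domain and range via beta."""
    return {beta.get(l, l): beta.get(v, v) for l, v in store.items()}

def equivalent(s1, s2, betas):
    """s1 ~ s2 if some candidate bijection beta renames s1 into s2 exactly."""
    return any(rename_store(b, s1) == s2 for b in betas)

s1 = {"l1": "l2", "l2": "l1"}
s2 = {"l3": "l4", "l4": "l3"}
swap = {"l1": "l3", "l3": "l1", "l2": "l4", "l4": "l2"}
assert rename_store(swap, s1) == s2
assert equivalent(s1, s2, [swap])
assert not equivalent(s1, {"l3": "l3"}, [swap])   # structure must be preserved
```

The swap-style bijections shown here are exactly the renamings used later in the proofs of local determinism and strong local confluence.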
\subparagraph{Executions} The original account defines a \emph{program expression} as ``an expression containing no revision identifiers'', and an \emph{initial state} as a global state of the form $\epsilon(r \mapsto \langle \epsilon , \epsilon, e \rangle)$, with $e$ a program expression and $r \in Rid$. We contend that the characterization of a program expression can be interpreted as either $\textit{RID} \ e = \varnothing$ or as $\textit{RID} \ e = \textit{LID} \ e = \varnothing$. We choose the latter interpretation, since rules \formatrule{set} and \formatrule{get} would anyway block on manually introduced location identifiers. This is because only identifiers allocated by \formatrule{new} can end up in a store's domain. In addition, using the former interpretation causes nondeterminacy if side condition \eqref{eq:new_wrong} is used \cite{overbeek2018thesis}. Let $\rightarrow \ = \bigcup \{ \to_r \mid r \in \mathit{Rid} \}$. An \emph{execution} is a sequence $s \to^* s'$ with $s$ an initial state. The execution is \emph{maximal} if there does not exist an $s''$ such that $s' \to s''$, and $e \downarrow s$ expresses that there exists a maximal execution for a program expression $e$ that ends in global state $s$. Determinacy modulo~$\approx$ thus means that $e \downarrow s$ and $e \downarrow s'$ imply $s \approx s'$. A state $s'$ is \emph{reachable} if there exists an execution $s \to^* s'$ from an initial state $s$. We say that a property $P$ is an \emph{execution invariant} if $P \ s$ for all reachable states $s$. A property $P$ is an \emph{inductive invariant} if \begin{itemize} \item $P \ s$ for all initial states $s$, and \item for all states $s$ and $s'$, $\ s \to s' \land P \ s \Longrightarrow P \ s'$. \end{itemize} Every inductive invariant is an execution invariant, but not vice versa. \section{Determinacy} \label{sec:determinacy} Our proof of determinacy deviates from the one found in the original account. 
In this section we first explain and motivate the high-level differences (Section~\ref{sec:comparison}). We then explain how our proof is formalized in theory \theory{Determinacy} (Section~\ref{sec:formalization}). \subsection{Comparison} \label{sec:comparison} The original proof establishes determinacy through a sequence of linearly dependent claims: \begin{enumerate} \item \emph{Local determinism} is established: if $s_2 \leftarrow_r s_1 \approx_{\alpha\beta} s_1' \to_{\alpha \ r} s_2'$, then $s_2 \approx s_2'$.\footnote{% Where applicable, we make the formulations in the original account formally precise. In this case, the assumption was written as $s_2 \leftarrow_r s_1 \approx s_1' \to_r s_2'$, which is slightly incorrect: the relation between revision $r$ in $s_1$ and revision $r$ in $s_1'$ can be arbitrary.} The proof relies on the statement that ``for a fixed revision $r$, [an expression context $\mathcal{E}[e]$] is matched uniquely by at most one operational rule'', which we will call \emph{rule determinism}. Note that the local determinism lemma assumes, rather than infers, the existence of the step $s_1' \to_{\alpha \ r} s_2'$, which can be understood as ``mimicking'' the step $s_1 \to_r s_2$. \item \emph{Strong local confluence} is proven: for reachable states $s_1$ and $s_1'$ with $s_2 \leftarrow_r s_1 \approx_{\alpha\beta} s_1' \to_{r'} s_2'$, there exist states $s_3$ and $s_3'$ such that $s_2 \to^=_{\alpha^{-1} \ r'} s_3 \approx_{\alpha\beta} s_3' \leftarrow_{\alpha \ r}^= s_2'$. The case where $r' = \alpha \ r$ follows from local determinism, and the case $r' \neq \alpha \ r$ is proven by a double case analysis on $s_1 \to_r s_2$ and $s_1' \to_{r'} s_2'$. \item The relation $\to$ is lifted to a relation $\to_\mathcal{C}$ over classes of $\approx$-equivalent states, i.e., $C \to_\mathcal{C} C'$ if there exist states $s \in C$ and $s' \in C'$ such that $s \to s'$.
From strong local confluence, it follows that $C_2 \leftto_\mathcal{C} C_1 \to_\mathcal{C} C_3$ implies the existence of a class $C_4$ such that $C_2 \to_\mathcal{C}^= C_4 \leftto_\mathcal{C}^= C_3$. \item From this locally commuting property of $\to_\mathcal{C}$, it is claimed that a routine \emph{diagram tiling} \cite{bezem1998diagram} proof establishes \emph{confluence} of $\to_\mathcal{C}$, i.e., that $C_2 \leftto_\mathcal{C}^* C_1 \to_\mathcal{C}^* C_3$ implies $C_2 \to_\mathcal{C}^* C_4 \leftto_\mathcal{C}^* C_3$ for some $C_4$. The proof itself is not given. \item Without further comment, confluence of $\to$ modulo $\approx$ is concluded from confluence of $\to_\mathcal{C}$. \item Determinacy of $\to$ modulo $\approx$ is subsequently obtained as a corollary. \end{enumerate} From a formal perspective, we first observe that item (5) is problematic. Namely, a joining reduction $C \to_\mathcal{C}^* C'$ could be due to a noncontiguous $\to$ reduction sequence \[ S = \begin{array}{lllllllllllllll} s & \to & s_0 & & s_1 & \to & s_2 \\ & & \hspace{.6mm}\rotatebox[origin=c]{270}{$\approx$} & & \hspace{.6mm}\rotatebox[origin=c]{270}{$\approx$} & & \hspace{.6mm}\rotatebox[origin=c]{270}{$\approx$} & \\ & & s_0' & \to & s_1' & & s_2' & \to & \cdots & \to & s' \end{array} \] where $s \in C$, $s' \in C'$, and $s_i \neq s_i'$ for some $i$. However, the existence of a contiguous $\to$ reduction follows from such an $S$ if equivalent states can mimic each other's steps, i.e., if whenever $s_2 \leftarrow s_1 \approx s_1'$, there exists an $s_2'$ such that $s_1' \to s_2' \approx s_2$. This property, which we will call the \emph{mimicking property}, is stronger than local determinism. During the formalization process, we first proved the mimicking property. We then realized that strong local confluence and mimicking can be applied directly in a diagram tiling proof for proving confluence of $\to$, eliminating the need to lift and unlift the relation $\to$.
This simplifies items~\mbox{(3--5)} above. We also realized that the statements of local determinism and strong local confluence could be simplified: the equivalences in the sources of the divergences are not needed (e.g., the condition for local determinism becomes $s_2 \leftarrow_r s_1 \to_r s_2'$). This simplifies items~(1--2), which we found advantageous for the mechanization: we only have to reason about renamings (more specifically, swaps) whenever divergent nondeterministic steps are considered. Item~(6) is the same in our account. In summary, the outline of our proof is as follows: \begin{enumerate} \item Rule determinism is established. \item We prove our simplified statement of local determinism: if $s_2 \leftarrow_r s_1 \to_r s_2'$, then $s_2 \approx s_2'$. \item We prove our simplified statement of strong local confluence: if $s_1$ is reachable and $s_2 \leftarrow_r s_1 \rightarrow_{r'} s_2'$, then there exist $s_3$ and $s_3'$ such that $s_2 \rightarrow_{r'}^= s_3 \approx s_3' \leftarrow_r^= s_2'$. As a technical detail, this lemma additionally requires that \textit{Rid} and \textit{Lid} are infinite sets. \item Independently, we prove the mimicking property. \item From the mimicking property and strong local confluence, confluence of $\to$ modulo $\approx$ is proven using a straightforward diagram tiling proof. \item Determinacy of $\to$ modulo $\approx$ is obtained as a corollary. \end{enumerate} \subsection{Formalization} \label{sec:formalization} We now explain our proof in more detail, and immediately relate it to the Isabelle formalization. Theory \theory{Determinacy} first proves nine rule determinism lemmas, one for each rule of the operational semantics. Intuitively, these lemmas state that if $s \to s'$ and $s$ matches the source state of a rule $R$, then $s'$ matches the target state of $R$.
The lemma for \formatrule{apply} (lemma \texttt{app\_deterministic}), for instance, states that \[ \begin{array}{ll} s \ r = \langle \sigma, \tau, \mathcal{E}[ (\lambda x. e) \ v ] \rangle \Longrightarrow (s \to s') = \\ (s' = s(r \mapsto \langle \sigma, \tau, \mathcal{E}[ [v/x] e] \rangle)). \end{array} \] The lemmas for \formatrule{new} and \formatrule{fork} express determinism up to naming only. For instance, the lemma for \formatrule{new} (\texttt{new\_pseudo\allowbreak deter\allowbreak ministic}) states that \[ \begin{array}{ll} s \ r = \langle \sigma, \tau, \mathcal{E}[ \mathsf{ref} \ v]\rangle \Longrightarrow (s \to s') = \\ (\exists l\ldotp l \notin \textit{LID} \ s \land s' = s(r \mapsto \langle \sigma, \tau(l \mapsto v), \mathcal{E}[ l ] \rangle))\text{.} \end{array} \] The proofs of these lemmas follow easily from the unique decomposition lemma. The lemmas are declared as simplification rules, and are useful in the proof of local determinism. \begin{lemma}[\texttt{local\_determinism}]\label{lem:local_determinism} $s_2 \leftarrow_r s_1 \rightarrow_r s_2' \Longrightarrow s_2 \approx s_2'$. \end{lemma} \begin{proof} By a case analysis on the left step $s_2 \leftarrow_r s_1$. In every case other than (\textit{new}) and (\textit{fork}), we obtain $s_2' = s_2$ by rule determinism: a case distinction on the right step is not necessary. In case (\textit{new}), we are given that $s_2 = s(r \mapsto \langle \sigma, \tau(l \mapsto v), \mathcal{E}[l]\rangle)$ (for $l \notin \textit{LID} \ s$), and by rule determinism, $s_2' = s(r \mapsto \langle \sigma, \tau(l' \mapsto v), \mathcal{E}[l']\rangle)$ (for $l' \notin \textit{LID} \ s$). Define $\alpha = \mathit{id}$ and the swap $\beta = \mathit{id}(l := l', l' := l)$. It suffices to prove $\mathcal{R} \ \alpha \ \beta \ s_2 = s_2'$, which is derived using \textit{auto} roughly as follows. The distributive laws for renaming push the renaming inwards. The conclusions of the swap rules are then matched.
The assumptions of the swap rules are derived from $l \notin \textit{LID} \ s$, $l' \notin \textit{LID} \ s$ and the simplification rules for occurrences, canceling out all redundant renamings. The argument for case~(\textit{fork}) is analogous to case (\textit{new}). \end{proof} Our statement of strong local confluence is as follows. \begin{theorem}[\texttt{strong\_local\_confluence}]\label{lem:slc} Assume that $s_1$ is reachable and that \textit{Rid} and \textit{Lid} are infinite. Then $s_2 \leftarrow_r s_1 \rightarrow_{r'} s_2' \Longrightarrow \exists s_3 \ s_3' \ldotp s_2 \rightarrow_{r'}^= s_3 \approx s_3' \leftarrow_r^= s_2'$. \end{theorem} The case $r = r'$ follows from Lemma \ref{lem:local_determinism}. For the $r \neq r'$ case, we conceptually follow the original proof in that we proceed by a double case analysis on the assumption $s_2 \leftarrow_r s_1 \to_{r'} s_2'$. This generates 81 cases, many of which are highly similar. We manage this explosion of proof obligations as follows. First, we prove the following lemma which helps deal with the 36 symmetric cases: \begin{lemma}[\texttt{SLC\_sym}]\label{lem:slc_sym} $\exists s_3 \ s_3' \ldotp s_2' \rightarrow_{r}^= s_3 \approx s_3' \leftarrow_{r'}^= s_2 \Longrightarrow \exists s_3 \ s_3' \ldotp s_2 \rightarrow_{r'}^= s_3 \approx s_3' \leftarrow_r^= s_2'$. \end{lemma} When applied in a proof context for a case (\textit{rule})/(\textit{rule}$'$), \texttt{SLC\_sym} transforms the conclusion into its symmetric version, which at that point already has a proof. Second, in many cases the steps commute directly. In these cases, the following lemma is used as an introduction rule: \begin{lemma}[\texttt{SLC\_commute}] $s_2 \to_{r'} s_3 = s_3' \leftarrow_r s_2' \Longrightarrow s_2 \rightarrow_{r'}^= s_3 \approx s_3' \leftarrow_r^= s_2'$. \end{lemma} By applying the rule, the proof obligation is refined, which helps guide \textit{auto} and leads to understandable Isar proofs.
Lemmas \texttt{join\_and\_local\_commute}, \texttt{local\_steps\_\allowbreak commute} and \texttt{local\_and\_{\allowbreak}rfork\_commute} have similar roles, refining the proof obligation even further for the commuting pairs (\textit{join})/(\textit{local}), (\textit{local})/(\textit{local}) and (\textit{local})/(\textit{fork}), respectively. Finally, in the Isabelle proof of Theorem~\ref{lem:slc} we only perform a case analysis on the left step $s_2 \leftarrow_r s_1$. Each of the nine cases is established by a separate lemma named \texttt{SLC\_}\textit{rule}, with \textit{rule} one of the nine rule names. These nine lemmas are proven in the order of the following proof sketch. \begin{proof}[Proof of Theorem~\ref{lem:slc}] The case distinction on the left step \mbox{$s_2 \leftarrow_r s_1$} generates nine cases that are proven in the following order. We use commuting diagrams to visually summarize proofs. \begin{enumerate} \item (\textit{join}$_\epsilon$): Suppose revision $r$ joins a nonexistent revision $r''$ in the left step.
$s_1 \to_{r'} s_2'$ is either a (\textit{join}$_\epsilon$) step (joining some $r'''$) or not (denoted by $\overline{\textit{join}_\epsilon}$): \begin{center} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \text{join}_\epsilon (r''')"] \arrow[dd, "r \colon \text{join}_\epsilon (r'')"', swap] & & s_2' \arrow[dd, equals] \\ & & \\ s_2 \arrow[r, equals] & \epsilon \arrow[r, equals] & \epsilon \end{tikzcd} \end{minipage} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \overline{\text{join$_\epsilon$}}"] \arrow[dd, "r \colon \text{join}_\epsilon (r'')"', swap] & & s_2' \arrow[dd, "r \colon \text{join}_\epsilon(r'')", swap] \\ & & \\ s_2 \arrow[r, equals] & \epsilon \arrow[r, equals] & \epsilon \end{tikzcd} \end{minipage} \end{center} Observe that the right diagram would fail for the case $\overline{\textit{join}_\epsilon} =$ \formatrule{fork} if side condition \eqref{eq:fork_wrong} were used. \item (\textit{join}): Suppose revision $r$ successfully joins a revision $r''$ in the left step. 
$s_1 \to_{r'} s_2'$ either also successfully joins $r''$ or not (denoted by $\overline{\textit{join}(r'')}$): \begin{center} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \text{join} (r'')"] \arrow[dd, "r \colon \text{join} (r'')"', swap] & & s_2' \arrow[dd, "r \colon \text{join}_\epsilon(r'')", swap] \\ & & \\ s_2 \arrow[r, "r' \colon \text{join}_\epsilon(r'')"'] & \epsilon \arrow[r, equals] & \epsilon \end{tikzcd} \end{minipage} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \overline{\text{join} (r'')}"] \arrow[dd, "r \colon \text{join} (r'')"', swap] & & s_2' \arrow[dd, "r \colon \text{join}(r'')", swap] \\ & & \\ s_2 \arrow[r, "r' \colon \overline{\text{join}(r'')}"'] & s_3 \arrow[r, equals] & s_3' \end{tikzcd} \end{minipage} \end{center} \item (\textit{local}): By a (\textit{local}) step we mean any step that is an \formatrule{apply}, \formatrule{ifTrue}, \formatrule{ifFalse}, \formatrule{get} or \formatrule{set} step. The right step (labeled $*$ below) is a (\textit{local}), (\textit{new}) or (\textit{fork}) step: \begin{center} \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon *"] \arrow[dd, "r \colon \text{local}"] & & s_2' \arrow[dd, "r \colon \text{local}", swap] \\ & & \\ s_2 \arrow[r, "r' \colon *", swap] & s_3 \arrow[r, equals] & s_3' \end{tikzcd} \end{center} \item \formatrule{new}: Suppose the left step allocates a location identifier $l$.
Either the right step also allocates $l$ or it does not (i.e., it allocates some $l' \neq l$ or is some (\textit{fork}) step): \vspace{1.5mm} \begin{center} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \text{new}(l)"] \arrow[dd, "r \colon \text{new}(l)"] & & s_2' \arrow[dd, "r \colon \text{new}(l'')", swap] \\ & & \\ s_2 \arrow[r, "r' \colon \text{new}(l'')", swap] & s_3 \arrow[r, "\approx" description, no head] & s_3' \end{tikzcd} \end{minipage} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \overline{\text{new}(l)}"] \arrow[dd, "r \colon \text{new}(l)"] & & s_2' \arrow[dd, "r \colon \text{new}(l)", swap] \\ & & \\ s_2 \arrow[r, "r' \colon \overline{\text{new}(l)}", swap] & s_3 \arrow[r, equals] & s_3' \end{tikzcd} \end{minipage} \end{center} \item \formatrule{fork}: Finally, we consider the case where the left step is a (\textit{fork}) step. The right step is a \formatrule{fork} step as well. Both steps either fork the same revision identifier $r''$ or not ($r''' \neq r''$): \begin{center} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \text{fork}(r'')"] \arrow[dd, "r \colon \text{fork}(r'')"] & & s_2' \arrow[dd, "r \colon \text{fork}(r''')", swap] \\ & & \\ s_2 \arrow[r, "r' \colon \text{fork}(r''')", swap] & s_3 \arrow[r, "\approx" description, no head] & s_3' \end{tikzcd} \end{minipage} \begin{minipage}{.211\textwidth} \centering \begin{tikzcd}[row sep=0.6em, column sep=3.1em] s_1 \arrow[rr, "r' \colon \text{fork}(r''')"] \arrow[dd, "r \colon \text{fork}(r'')"] & & s_2' \arrow[dd, "r \colon \text{fork}(r'')", swap] \\ & & \\ s_2 \arrow[r, "r' \colon \text{fork}(r''')"'] & s_3 \arrow[r, equals] & s_3' \end{tikzcd} \end{minipage} \end{center} \end{enumerate} The following table summarizes which case is addressed by which item in the given enumeration. 
The values for symmetric cases are grayed out and solved using Lemma~\ref{lem:slc_sym}. \vspace{1mm} \begin{tabular}{l|lllllllll} & \rotatebox{90}{$\formatrule{join$_\epsilon$}$} & \rotatebox{90}{$\formatrule{join}$} & \rotatebox{90}{$\formatrule{apply}$} & \rotatebox{90}{$\formatrule{ifTrue}$} & \rotatebox{90}{$\formatrule{ifFalse}$} & \rotatebox{90}{$\formatrule{get}$} & \rotatebox{90}{$\formatrule{set}$} & \rotatebox{90}{$\formatrule{new}$} & \rotatebox{90}{$\formatrule{fork}$} \\ \hline $\formatrule{join$_\epsilon$}$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ $\formatrule{join}$ & {\color[HTML]{9B9B9B} 1} & 2 & 2 & 2 & 2 & 2 & 2 & 2 & 2 \\ $\formatrule{apply}$ & {\color[HTML]{9B9B9B} 1} & {\color[HTML]{9B9B9B} 2} & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\ $\formatrule{ifTrue}$ & {\color[HTML]{9B9B9B} 1} & {\color[HTML]{9B9B9B} 2} & {\color[HTML]{9B9B9B} 3} & 3 & 3 & 3 & 3 & 3 & 3 \\ $\formatrule{ifFalse}$ & {\color[HTML]{9B9B9B} 1} & {\color[HTML]{9B9B9B} 2} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & 3 & 3 & 3 & 3 & 3 \\ $\formatrule{get}$ & {\color[HTML]{9B9B9B} 1} & {\color[HTML]{9B9B9B} 2} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & 3 & 3 & 3 & 3 \\ $\formatrule{set}$ & {\color[HTML]{9B9B9B} 1} & {\color[HTML]{9B9B9B} 2} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & 3 & 3 & 3 \\ $\formatrule{new}$ & {\color[HTML]{9B9B9B} 1} & {\color[HTML]{9B9B9B} 2} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & 4 & 4 \\ $\formatrule{fork}$ & {\color[HTML]{9B9B9B} 1} & {\color[HTML]{9B9B9B} 2} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 3} & {\color[HTML]{9B9B9B} 4} & 5 \\ \end{tabular} \qedhere \end{proof} From Theorem \ref{lem:slc}, we obtain the following lemma as a corollary: 
\begin{lemma}[\texttt{SLC\_top\_relaxed}] Assume that $s_1$ is reachable and that \textit{Rid} and \textit{Lid} are infinite. Then $s_2 \leftarrow s_1 \rightarrow^= s_2' \Longrightarrow \exists s_3 \ s_3' \ldotp \ s_2 \rightarrow^= s_3 \approx s_3' \leftarrow^= s_2'$. \end{lemma} This version of strong local confluence is used in the diagram tiling proofs. In the visualizations of the proofs, we will label its diagram representation with the name $\text{SLC}^=$. To establish the mimicking property, we first prove a series of lemmas of the form $ (\alpha \ r \in \textit{RID} \ (\mathcal{R} \ \alpha \ \beta \ S)) = (r \in \textit{RID} \ S) $ and $ (\beta \ l \in \textit{LID} \ (\mathcal{R} \ \alpha \ \beta \ S)) = (l \in \textit{LID} \ S)$ for each of the structures $S$, with $\alpha$ and $\beta$ bijections. These lemmas imply that the allocation of a fresh identifier $r$ or $l$ can be directly mimicked by allocating $\alpha \ r$ or $\beta \ l$, respectively. This fact is used in the proof of the lemma below. \begin{lemma}[\texttt{mimicking}]\label{lem:mimicking} If $s \to_r s'$, then $\mathcal{R} \ \alpha \ \beta \ s \to_{\alpha \ r} \mathcal{R} \ \alpha \ \beta \ s'$ for bijections $\alpha$ and $\beta$. \end{lemma} From Lemma \ref{lem:mimicking} we derive the following transitive variant, which is the version used in the diagram tiling proofs (we label its diagram representation with $\mathcal{M}^*$): \begin{lemma}[\texttt{mimic\_trans}]\label{lem:mimic_trans} $s_2 \leftarrow^* s_1 \approx s_1' \Longrightarrow \exists s_2' \ldotp \ s_1' \to^* s_2' \approx s_2$. \end{lemma} Now that we have the two necessary diagrams, we follow the original account by establishing confluence modulo $\approx$ in two steps: \begin{lemma}[\texttt{strip\_lemma}] Assume that $s_1$ is reachable and that \textit{Rid} and \textit{Lid} are infinite. Then $s_2 \leftarrow^* s_1 \to^= s_2' \Longrightarrow \exists s_3 \ s_3' \ldotp \ s_2 \to^* s_3 \approx s_3' \leftarrow^* s_2'$.
\end{lemma} \begin{proof} By induction on the length $n$ of $s_2 \leftarrow^n s_1$. The Isabelle proof of the inductive step is visualized by the following diagram, in which $\twoheadrightarrow$ depicts $\to^*$: \begin{center} \begin{tikzcd}[row sep=0.01em, column sep=1em] s_1 \arrow[dd] \arrow[rrrr, "="] & & & & s_2' \arrow[dd] \\ & & \text{SLC}^= & & \\ a \arrow[dd, "n"'] \arrow[rr, "="] & & b \arrow[dd, two heads] \arrow[rr, "\approx" description, no head] & & c \arrow[dd, two heads] \\ & \text{IH} & & \mathcal{M}^* & \\ s_2 \arrow[r, two heads] & s_3 \arrow[r, "\approx" description, no head] & d \arrow[rr, "\approx" description, no head] & & s_3' \end{tikzcd} \end{center} \vspace{-6.1mm} \qedhere \end{proof} \begin{lemma}[\texttt{confluence\_modulo\_equivalence}] Assume that $s_1$ is reachable and that \textit{Rid} and \textit{Lid} are infinite. Then $ s_2 \leftarrow^* s_1 \approx s_1' \to^* s_2' \Longrightarrow \exists s_3 \ s_3' \ldotp \ s_2 \to^* s_3 \approx s_3' \leftarrow^* s_2' $. \end{lemma} \begin{proof} By induction on the length $n$ of $s_1' \to^n s_2'$. 
The Isabelle proof of the inductive step is visualized by the diagram below, in which STRIP denotes the strip lemma: \begin{center} \begin{tikzcd}[row sep=0.01em, column sep=1em] s_1 \arrow[dd, "n"'] \arrow[r, "\approx" description, no head] & s_1' \arrow[rrr, two heads] & & & s_2' \arrow[dd, two heads] \\ & & \text{IH} & & \\ a \arrow[dd] \arrow[rr, two heads] & & b \arrow[dd, two heads] \arrow[rr, "\approx" description, no head] & & c \arrow[dd, two heads] \\ & \text{STRIP} & & \mathcal{M}^* & \\ s_2 \arrow[r, two heads] & s_3 \arrow[r, "\approx" description, no head] & d \arrow[rr, "\approx" description, no head] & & s_3' \end{tikzcd} \end{center} \vspace{-6.1mm} \qedhere \end{proof} Finally, determinacy is obtained as a corollary, by the same proof as in the original account: \begin{theorem}[\texttt{determinacy}] Assume that $e$ is a program expression and that \textit{Rid} and \textit{Lid} are infinite. Then $e \downarrow s$ and $e \downarrow s'$ imply $s \approx s'$. \end{theorem} \section{Discussion} \label{sec:discussion-and-related-work} Our formalization contributes to the metatheory of concurrent revisions in two ways. First, it demonstrates that interpreting the \formatrule{fork} side condition as \eqref{eq:fork_wrong} leads to nondeterminacy (Section \ref{sec:fork_side}). Second, it shows that the side condition on \formatrule{new} admits a weaker formulation, and that the side conditions on \formatrule{get} and \formatrule{set} are redundant (Section~\ref{sec:new_side}). More pragmatically, what are the implications of our findings for the existing C\# \cite{burckhardt2010concurrent} and Haskell \cite{leijen2011prettier} implementations of CR? It does not seem that our counterexample in Section~\ref{sec:fork_side} is reproducible in either language. Based on the provided C\# fragments and explanations~\cite{burckhardt2010concurrent}, a ``revision identifier'' is simply a reference to an object instance of a \texttt{Revision} class.
Thus, when a revision has a join pending on some object, it cannot be garbage collected, and a concurrent fork cannot replace it. Experiments in an official online environment\footnote{\url{https://rise4fun.com/Revisions}} are consistent with this analysis: join operations do not affect the hash code of a revision object $r$, and subsequent joins on $r$ return an exception. The Haskell implementation has similar characteristics, and the authors explain that a revision's data is replaced with an exception when it is joined. Our tiling proof for determinacy clarifies that determinacy relies not only on strong local confluence, but also on the mimicking property. While we think that one could reasonably argue that the mimicking property is too minor to mention in a paper proof, we nonetheless contend that it is valuable to have made the dependence explicit, especially if model extensions (such as a generalization of \formatrule{join}'s merge policy) are to be considered. We see at least three ways in which future work could meaningfully extend the formalization presented in this paper. First, the other results in the original account could also be formalized. In particular, we think that the theorem asserting the existence of a unique greatest common ancestor (gca) for every pair of states in a revision diagram would be interesting to formalize, since the property is important, and its paper proof is relatively involved. Second, rule \formatrule{join} could be generalized to support custom merge functions. Third, the calculus could be extended with features that are part of the concurrent revisions project, but not yet formalized, such as support for incremental computation \cite{burckhardt2011two}. We think such extensions can leverage our formalization in two ways.
First, all of the elementary definitions and the associated results can be directly reused, such as the unique decomposition lemma, the result that $\approx$ is an equivalence, and the lemmas required for reasoning about occurrences and renamings. Such reuse would eliminate much of the tedium from the formalization effort. Second, since most of our proofs are written using the structured Isar proof language, it should be quite easy to modify these proofs when, for instance, additional rules are added to the calculus: any newly generated cases can be straightforwardly integrated into the existing proofs. We consider this high degree of maintainability a great advantage of using Isabelle\slash HOL. \subparagraph{Related Work} Manovit et al. \cite{manovit2006testing} developed a formal axiomatic framework and pseudorandom testing methodology for TM systems, and used it to uncover bugs in the relatively well-known Transactional memory Coherence and Consistency (TCC) \cite{hammond2004transactional} system. Cohen et al.\ \cite{cohen2008mechanical} and Doherty et al.~\cite{doherty2013towards} both developed frameworks for the formal verification of TM implementations, using the interactive theorem prover PVS. Doherty et al.~\cite{doherty2017proving} presented the first formal verification of a pessimistic (i.e., non-aborting) software transactional memory (STM) algorithm using Isabelle/HOL, extending a refinement strategy pursued in~\cite{doherty2013towards}. Abadi et al.~\cite{abadi2008semantics} developed a formal semantics for the transactional Automatic Mutual Exclusion model, and used it to study design trade-offs and errors that occur in known STM implementations. \section{Introduction} \label{sec:introduction} \emph{Concurrency control models} provide abstractions that simplify the task of writing concurrent software.
Such abstractions may assure the programmer, for instance, that intermediate program states of a process are not visible to other processes (\emph{isolation}), or that blocks of instructions execute as a single indivisible unit (\emph{atomicity}). These assumptions simplify reasoning about a program's behavior and prevent undesirable interactions between processes. \emph{Concurrent revisions} (\emph{CR}) is a concurrency control model originally published by Burckhardt et al.\ in 2010~\cite{burckhardt2010concurrent}. Unlike the relatively established family of \emph{transactional memory} (\emph{TM})~\cite{herlihy1993transactional,shavit1995software} models, which take inspiration from database transactions, the design of CR is modeled after branching version control systems such as Git. This unorthodox starting point gives rise to some distinguishing features, including: \begin{itemize} \item \emph{Non-linear program state history}. In traditional concurrent programming models, it makes sense to speak of `the' state of shared data. Any local views that processes have of this state may be considered deviating, e.g., because they are stale, or because an update is being prepared locally. By contrast, in CR there is no such singular shared state: there exists only the collection of local views on shared data. \item \emph{Deterministic conflict resolution}. Processes must sometimes converge, while their local views may conflict. Rather than issuing rollbacks in the event of conflict (as in TM), in CR the conflict is resolved at run time using \emph{deterministic merge functions}. Which merge function to apply is context-dependent, and is declaratively defined by the programmer using semantic type annotations. \item \emph{Determinacy}. A concurrency control model is \emph{determinate} if the outcome of programs is guaranteed to be uniquely determined~\cite{karp1966properties}. Most models are not determinate, since scheduling may influence a program's outcome. 
For instance, the outcome of a lock-based approach may depend on which thread first acquires a particular lock. For TM, the outcome may depend on which transaction is successfully committed first. By contrast, any CR program (satisfying some simple conditions) is determinate, regardless of asynchronous execution and scheduling. This simplifies the life of the programmer, who no longer needs to reason about the timing of events. \end{itemize} CR has been implemented in C\# by Burckhardt et al.~\cite{burckhardt2010concurrent}. This implementation is accompanied by a case study in the form of a game implementation, for which a considerable speedup is observed relative to a sequential version, and the corresponding code is arguably easy to reason about. A Haskell implementation followed later by Leijen et al.~\cite{leijen2011prettier}. The implementations are supported by a formal operational semantics by Burckhardt and Leijen \cite{burckhardt2011semantics} (supplemented with a relevant technical report \cite{burckhardt2010semanticstech}), which contains a proof of determinacy as one of its central results. Concurrency control models, being intricate pieces of concurrent software, are generally interesting targets for formal specification and verification. There are numerous formal approaches to the family of TM models~\cite{harris2005composable, abadi2008semantics, cohen2008mechanical, doherty2013towards, doherty2017proving}, for instance, and some of these efforts uncovered bugs in popular models that led to fixes in existing software libraries~\cite{manovit2006testing}. The operational semantics of concurrent revisions, however, has not yet been formalized. This paper contributes the first step towards the formal verification of CR, using the formal operational semantics of \cite{burckhardt2011semantics} (henceforth referred to as the ``original account'') as our basis.
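To give a flavor of the deterministic conflict resolution mentioned above, the following Python sketch shows one possible merge policy for a cumulative integer. This is a hypothetical illustration in the spirit of CR's semantic type annotations, not code from the actual C\# or Haskell implementations: conflicting updates are reconciled relative to the value at the fork point, so the outcome is independent of scheduling.

```python
# Hypothetical sketch of a deterministic merge function for a cumulative
# integer type: the forked revision's increment relative to the common
# ancestor (the fork point) is replayed on top of the main revision's
# value, so the merged outcome does not depend on timing.

def merge_cumulative(ancestor, main, forked):
    """Resolve a write-write conflict deterministically."""
    return main + (forked - ancestor)

# Fork at value 10; main adds 5, the forked revision adds 3.
assert merge_cumulative(10, 15, 13) == 18
# The result is the same regardless of which revision 'wins' the race:
assert merge_cumulative(10, 13, 15) == 18
```

The key design point is that the merge function is a pure function of the two conflicting views and their common ancestor, which is what makes the overall outcome schedule-independent.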
The formalization was performed using the proof assistant \mbox{Isabelle\slash HOL} \cite{nipkow2002isabelle}. Our main results are \begin{itemize} \item the identification and resolution of subtle ambiguities in the side conditions of the rules of the operational semantics, resulting in the strengthening of a side condition and the elimination of three redundant side conditions; and \item the mechanization and simplification of the proof of determinacy, in which we show that the proof relies on a property not mentioned in the original account. \end{itemize} The verification of an orthogonal desired property, namely, the existence of unique greatest common ancestors in revision diagrams \cite{burckhardt2011semantics} (the meaning of which will become clearer in Section 2), is left for future work. The formalization artifact is available at the Archive of Formal Proofs \cite{overbeek2018formalization} and consists of about 3000 lines of Isabelle code. More details can be found in the author's master's thesis~\cite{overbeek2018thesis}. In the remainder of this paper, we first provide an overview of CR and describe its formal semantics (Section~\ref{sec:concurrent-revisions}). Then, we explain the formalization in three parts, covering respectively preliminary aspects (Section~\ref{sec:formalization-preliminaries}), the operational semantics (Section~\ref{sec:operational-semantics}) and the proof of determinacy (Section~\ref{sec:determinacy}). Finally, we discuss the significance of our findings, in part by considering CR implementations and related work (Section~\ref{sec:discussion-and-related-work}). \section{Operational Semantics} \label{sec:operational-semantics} We are now ready to formalize the operational semantics. 
Recall from Section \ref{sec:formal-semantics} that we have to choose between the \formatrule{fork} side conditions \eqref{eq:fork_wrong} and $r \notin \textit{RID} \ s$, and between the \formatrule{new} side conditions \eqref{eq:new_wrong} and $l \notin \textit{LID} \ s$. In this section, we first show that \eqref{eq:fork_wrong} is too weak, since it leads to an indeterminate calculus (Section \ref{sec:fork_side}). We then argue that the side conditions \eqref{eq:new_wrong} and $l \notin \textit{LID} \ s$ are equivalent, and that an even weaker formulation of this side condition is possible. The core of the argument is in Section \ref{sec:new_side}, in which we also describe its formalization in \theory{OperationalSemantics}. The argument is concluded in Section \ref{sec:executions}, in which we describe \theory{Executions}, the formalization of executions. \subsection{Side Condition for Rule \formatrule{fork}}\label{sec:fork_side} Can a revision identifier $r$ be safely allocated if one uses side condition \eqref{eq:fork_wrong}? The answer is no: this would result in indeterminacy, irrespective of the side condition on rule~\formatrule{new}. What follows is a counterexample to determinacy. As a visual aid, we underline redexes $r$ of expressions $e = \mathcal{E}[r]$. Define the program expression \[ P = \big( \lambda x \ldotp \mathsf{rfork} \ (\mathsf{rjoin} \ x) \bullet ( \mathsf{rjoin} \ x \bullet \mathsf{rfork} \ \mathsf{unit} ) \big) \bullet \underline{\mathsf{rfork} \ \mathsf{unit}} \] and consider an initial state $\{ r_1 \mapsto \langle \epsilon, \epsilon, P \rangle \}$. In what follows, we will omit the stores, because they will remain empty. 
\newcommand{\hspace{-15mm}}{\hspace{-15mm}} Consider the following execution trace: \small \[ \begin{array}{lllll} & \hspace{-2mm} \{ r_1 \mapsto P \} \\ \to_{r_1} & \hspace{-2mm} \{ r_1 \mapsto \underline{\big( \lambda x \ldotp \mathsf{rfork} \ (\mathsf{rjoin} \ x) \bullet ( \mathsf{rjoin} \ x \bullet \mathsf{rfork} \ \mathsf{unit} ) \big) \bullet r_2}, \\ & \hspace{-2mm} \ r_2 \mapsto \mathsf{unit} \} \\ \to_{r_1} & \hspace{-2mm} \{ r_1 \mapsto \underline{\mathsf{rfork} \ (\mathsf{rjoin} \ r_2)} \bullet ( \mathsf{rjoin} \ r_2 \bullet \mathsf{rfork} \ \mathsf{unit} ), \\ & \hspace{-2mm} \ r_2 \mapsto \mathsf{unit} \} \\ \to_{r_1} & \hspace{-2mm} \{ r_1 \mapsto r_3 \bullet (\underline{\mathsf{rjoin} \ r_2} \bullet \mathsf{rfork} \ \mathsf{unit} ), r_2 \mapsto \mathsf{unit}, \\ & \hspace{-2mm} \ r_3 \mapsto \underline{\mathsf{rjoin} \ r_2} \} \\ \to_{r_1} & \hspace{-2mm} \{ r_1 \mapsto r_3 \bullet ( \mathsf{unit} \bullet \underline{\mathsf{rfork} \ \mathsf{unit}} ), r_3 \mapsto \underline{\mathsf{rjoin} \ r_2} \} \\ \to_{r_1} & \hspace{-2mm} \{ r_1 \mapsto r_3 \bullet ( \mathsf{unit} \bullet r_4 ), r_3 \mapsto \underline{\mathsf{rjoin} \ r_2}, r_4 \mapsto \mathsf{unit} \} \end{array} \] \normalsize By \eqref{eq:fork_wrong}, $r_1$, $r_2$ and $r_3$ are pairwise distinct, and so are $r_1$, $r_3$ and $r_4$. But $r_2$ and $r_4$ may be equal, since $r_2$ occurred only in an expression when $r_4$ was forked. If $r_2 = r_4$, then $r_3$ performs a \formatrule{join} step resulting in the terminal global state $ s = \{ r_1 \mapsto r_3 \bullet ( \mathsf{unit} \bullet r_4), \ r_3 \mapsto \mathsf{unit} \} $. If $r_2 \neq r_4$, however, $r_3$ performs a \formatrule{join$_\epsilon$} step, collapsing the global state to $\epsilon$ $\not\approx s$. Thus, the revision calculus is nondeterminate if \eqref{eq:fork_wrong} is used. 
Using the side condition $r \notin \textit{RID} \ s$ invalidates the counterexample, and we will see in Section \ref{sec:determinacy} that it suffices for establishing determinacy. The proof that \eqref{eq:fork_wrong} does not suffice as the side condition for \formatrule{fork} is the only proof not part of the Isabelle formalization. To formalize it, a number of operational assumptions on \texttt{subst} are needed that allow it to distribute over the constructor symbols in the second reduction step. \subsection{Side Condition for Rule \formatrule{new}}\label{sec:new_side} Can a location identifier $l$ be safely allocated if one uses side condition \eqref{eq:new_wrong}? The answer is yes. In fact, the side conditions \[ \label{eq:new_side2} l \notin \bigcup \{ \mathsf{doms} \ L \mid L \in \mathsf{ran} \ s \} \tag{$SC'_\textit{new}$} \] \eqref{eq:new_wrong} and $l \notin \textit{LID} \ s$ all turn out to be equivalent. This is because $\textit{LID} \ L = \mathsf{doms} \ L$ for every $L \in \mathsf{ran} \ s$ is an execution invariant. This finding also implies that the side conditions for \formatrule{get} and \formatrule{set} are redundant. To prove our finding, our first step is to formalize the operational semantics assuming the conservative formulation $l \notin \textit{LID} \ s$. Its formalization is the inductive relation \texttt{revision\_step} in theory \theory{OperationalSemantics}. The notation $s \to_r s'$ henceforth corresponds to \texttt{revision\_step r s s'}. We introduce the following definition (formalized by the two Isabelle definitions \texttt{domains\_subsume} and \texttt{domains\_\allowbreak subsume\_\allowbreak globally}): \begin{definition}[Subsumption] The domains of a local state $L$ \emph{subsume} its location identifiers, denoted $\mathcal{S} \ L$, when $\textit{LID} \ L \subseteq \mathsf{doms} \ L$. We write $\mathcal{S}_G \ s$ for a global state $s$ when $\mathcal{S} \ L$ for all local states $L \in \mathsf{ran} \ s$. 
\end{definition} Our claim is thus that $\mathcal{S}_G$ is an execution invariant for global states $s$. (The direction $\mathsf{doms} \ L \subseteq \textit{LID} \ L$ is trivial.) We prove this by means of an inductive invariant. $\mathcal{S}_G$ is not an inductive invariant itself. The reason is rule \formatrule{join}: \[ \begin{array}{lll} s\llbracket r \mapsto \langle \sigma, \tau, \mathcal{E}[\mathsf{rjoin} \ r'] \rangle, r' \mapsto \langle \sigma', \tau', v \rangle \rrbracket \to_r \\ s(r \mapsto \langle \sigma, \tau \mathsf{::} \tau', \mathcal{E}[\mathsf{unit}]\rangle, r' \mapsto \bot) \end{array} \] The two inductive assumptions $ \mathcal{S} \ \langle \sigma, \tau, \mathcal{E}[\mathsf{rjoin} \ r'] \rangle $ and $ \mathcal{S} \ \langle \sigma', \tau', v \rangle $ are not strong enough to prove the obligation $ \mathcal{S} \ \langle \sigma, \tau \mathsf{::} \tau', \mathcal{E}[\mathsf{unit}]\rangle $. Namely, the case in which $\tau'$ maps to a value containing some $l \in \textit{Lid}$ that is subsumed \emph{only} by $\mathsf{dom} \ \sigma'$ cannot be proven. To take care of rule \formatrule{join}, the following property is needed as well (formalized by definitions \texttt{subsumes\_accessible} and \texttt{subsumes\_accessible\_globally}): \begin{definition} Let $s$ be a global state with $r, r' \in \mathsf{dom} \ s$. We write $\mathcal{A} \ r \ r' \ s$ if $r' \in \textit{RID} \ (s \ r)$ implies $ \textit{LID} \ (s \ r')_\sigma \subseteq \mathsf{doms} \ (s \ r)$. If $\mathcal{A} \ r \ r' \ s$ for all $r, r' \in \mathsf{dom} \ s$, then we write $\mathcal{A}_G \ s$. \end{definition} We show that $\mathcal{S}_G \land \mathcal{A}_G$ is preserved under $\to$ steps. We do not yet show that it is an inductive invariant, since that requires the formalization of notions related to executions, such as the definition of an initial state.
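To make the subsumption property concrete, the following Python sketch models $\mathcal{S}$ and $\mathcal{S}_G$ on a deliberately simplified toy encoding (a hypothetical illustration, not the Isabelle representation): a local state is a triple of two stores and a set of expression location identifiers, and a stored value is reduced to the set of location identifiers it mentions.

```python
# Toy encoding (hypothetical, for illustration only): a local state is
# L = (sigma, tau, expr_lids), where sigma and tau map location ids to
# the set of location ids occurring in the stored value, and expr_lids
# is the set of location ids occurring in the expression.

def doms(L):
    sigma, tau, _ = L
    return set(sigma) | set(tau)          # doms L = dom(sigma) ∪ dom(tau)

def lids(L):
    sigma, tau, expr_lids = L
    occurring = set(expr_lids)
    for store in (sigma, tau):
        occurring |= set(store)           # ids used as store locations ...
        for v in store.values():
            occurring |= v                # ... and ids inside stored values
    return occurring

def subsumes(L):                          # S L:  LID L ⊆ doms L
    return lids(L) <= doms(L)

def subsumes_globally(s):                 # S_G s, over all local states
    return all(subsumes(L) for L in s.values())

# Location 2 occurs inside a stored value, but is allocated in sigma:
good = ({1: {2}, 2: set()}, {}, {1})
assert subsumes(good)
# Location 3 occurs in the expression without being allocated anywhere:
assert not subsumes(({1: set()}, {}, {3}))
assert subsumes_globally({'r1': good})
```

The sketch covers only $\mathcal{S}$ and $\mathcal{S}_G$; the accessibility property $\mathcal{A}$ additionally relates pairs of revisions and is omitted here.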
Since the proof is a contribution of this paper, we provide a proof sketch that also serves as a high-level overview of the proof in the Isabelle formalization. \begin{lemma}[\texttt{step\_preserves\_$\mathcal{S}_G$\_and\_$\mathcal{A}_G$}] \label{lem:inductive_inv} Assume that $s \to_r s'$, $\mathcal{S}_G \ s$ and $\mathcal{A}_G \ s$. Then $\mathcal{S}_G \ s'$ and $\mathcal{A}_G \ s'$. \end{lemma} \begin{proof} We first establish $\mathcal{S}_G \ s'$ by a case distinction on the step $s \to_r s'$. It suffices to show $\mathcal{S} \ (s' \ r'')$ for indices $r''$ that have been updated, i.e., for which\ $s \ r'' \neq s' \ r''$. Cases \formatrule{apply}, \formatrule{ifTrue}, \formatrule{ifFalse}, \formatrule{new}, \formatrule{get} and \formatrule{set} modify only revision $r$, and case \formatrule{fork} in addition modifies revision $r'$. In each case, the goal is shown using calculational reasoning, requiring only the assumption $\mathcal{S} \ (s \ r)$. Case \formatrule{join} is proven similarly, but in addition requires the assumption $\mathcal{A}_G \ s$. Case \formatrule{join$_\epsilon$} is vacuous since $s' = \epsilon$. To show $\mathcal{A}_G \ s'$, we make two observations. First, $\mathcal{A} \ r \ r \ s'$ for all $r \in \mathsf{dom} \ s'$ follows from $\mathcal{S}_G \ s'$ (encoded by lemma \texttt{$\mathcal{S}_G$\_imp\_$\mathcal{A}$\_refl}). Second, if $s' \ r_1 \ {=} \ s \ r_1$ and $s' \ r_2 \ {=} \ s \ r_2$, then $\mathcal{A} \ r_1 \ r_2 \ s'$ follows directly from $\mathcal{A} \ r_1 \ r_2 \ s$. Hence, it suffices to show that $\mathcal{A} \ r_1 \ r_2 \ s'$ for all \emph{distinct} $r_1, r_2 \in \mathsf{dom} \ s'$ with $s' \ r_1 \ {\neq} \ s \ r_1$ or $s' \ r_2 \ {\neq} \ s \ r_2$.
We again proceed by case analysis on the step $s \to_r s'$: \begin{itemize} \item For each of the six local rules that modify \emph{only} the revision $r$, one must show $\mathcal{A} \ r \ r'' \ s'$ and $\mathcal{A} \ r'' \ r \ s'$ for arbitrary $r'' \in \mathsf{dom} \ s'$ with $r'' \neq r$. The reasoning in each of these six cases is very similar. \item Case \formatrule{join} is like the above, except that a case distinction on $r'' \in \textit{RID} \ \tau'$ is required for showing $\mathcal{A} \ r \ r'' \ s'$. \item Case \formatrule{fork} creates two new local states at $r$ and $r'$. This yields proof obligations for six properties, namely, $\mathcal{A} \ r_1 \ r_2 \ s'$ for distinct $r_1, r_2 \in \{ r, r', r'' \}$, where $r''$ is some arbitrary unchanged revision. \item Case \formatrule{join$_\epsilon$}, finally, again holds vacuously. \qedhere \end{itemize} \end{proof} Theory \theory{OperationalSemantics} ends with the definition of \texttt{revision\_step\_relaxed}. This inductive relation is identical to \texttt{revision\_step}, except that the side condition for \formatrule{new} is \eqref{eq:new_side2}, and the side conditions for \formatrule{get} and \formatrule{set} are omitted. Here, we will write $s \to_r' s'$ for the relation \texttt{revision\_step\_relaxed r s s'}. The proof that $\to_r$ and $\to_r'$ characterize the same transition system (given the definition of an initial state) is formalized in \theory{Executions}. \subsection{Executions} \label{sec:executions} Theory \theory{Executions} formalizes all of the notions related to executions, described in Section \ref{sec:formal-semantics}. The set \texttt{steps} encodes the abstracted relation $\to$. To avoid confusion with the HOL symbol for logical implication, we write $s \leadsto s'$ for $s \to s'$ in the Isabelle formalization.
The closure operations are defined using definitions from the Isabelle library \texttt{Transitive\_Closure}, which also liberates us from having to prove many standard (but indispensable) closure laws, such as $(x,y) \in R^* \iff \exists n. \ (x,y) \in R^n$ and $R^* \circ R^* = R^*$. The theory proves that every inductive invariant is an execution invariant (Isabelle lemma \texttt{inductive\allowbreak\_invariant\_\allowbreak is\_\allowbreak execution\_invariant}), and that the property \[ \lambda s \ldotp \ \mathcal{S}_G \ s \land \mathcal{A}_G \ s \] is an inductive invariant (\texttt{nice\_ind\_inv\_is\_\allowbreak inductive\_\allowbreak invariant}). This lemma is used to prove that $(s \to_r s') = (s \to_r' s')$ for reachable states $s$ (\texttt{transition\_\allowbreak relations\_\allowbreak equivalent}), concluding the argument started in Section~\ref{sec:new_side}. In addition, inductive invariance is used to show that reachability of $s$ implies that the sets $\textit{RID} \ s$ and $\textit{LID} \ s$ are finite (lemma \texttt{reachable\_imp\_identifiers\_finite}). Its proof requires similar lemmas for all the remaining structures. The result implies that a fresh identifier can always be allocated, on the assumption that \textit{Lid} and \textit{Rid} are infinite sets (lemma \texttt{reachable\_imp\_identifiers\_\allowbreak available}). While this fact is understandably not mentioned in the original account, it is required for formally establishing determinacy. The theory ends with a proof that reachability is closed under execution, i.e., that $s \to s'$ and reachability of $s$ imply that $s'$ is reachable (\texttt{reachability\_closed\_under\_\allowbreak execution}). This lemma is a technicality required in the proof of determinacy.
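The interplay between inductive and execution invariants can be illustrated independently of the revision calculus. The following Python sketch uses a toy transition system (purely illustrative, unrelated to the formalization) to demonstrate the lemma's content: a property that holds in the initial state and is preserved by every step holds in every reachable state.

```python
# A property that holds in the initial state and is preserved by every
# step (an inductive invariant) holds in all reachable states (an
# execution invariant). Toy transition system for illustration.

def reachable(initial, step):
    """Compute all states reachable from the initial state."""
    seen, frontier = {initial}, [initial]
    while frontier:
        s = frontier.pop()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Example: a counter that repeatedly increments by 2, capped at 10.
step = lambda s: [s + 2] if s < 10 else []
even = lambda s: s % 2 == 0               # the candidate invariant

# 'even' holds initially and is preserved by every step ...
assert even(0) and all(even(s + 2) for s in range(0, 10) if even(s))
# ... hence it holds in every reachable state:
assert all(even(s) for s in reachable(0, step))
```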
\section{Formalization Preliminaries} \label{sec:formalization-preliminaries} \begin{figure*} $ \begin{array}{llll} \texttt{top\_redex}: & {\textit{redex} \ e} \Longrightarrow {e \rhd (\square, e)} \\ \texttt{lapply}: & \lnot \> \textit{redex} \ (e_1 \ e_2) \Longrightarrow e_1 \rhd (\mathcal{E}, r) \Longrightarrow {e_1 \ e_2 \rhd (\mathcal{E} \ e_2, r)} \\ \texttt{rapply} : & \lnot \> \textit{redex} \ (v \ e_2) \Longrightarrow e_2 \rhd (\mathcal{E}, r) \Longrightarrow {v \ e_2 \rhd (v \ \mathcal{E}, r)} \\ \texttt{ite} : & {\lnot \> \textit{redex} \ (e_1 \ \mathsf{?} \ e_2 \ \mathsf{:} \ e_3) \Longrightarrow e_1 \rhd (\mathcal{E}, r)} \Longrightarrow {e_1 \ \mathsf{?} \ e_2 \ \mathsf{:} \ e_3 \rhd (\mathcal{E} \ \mathsf{?} \ e_2 \ \mathsf{:} \ e_3, r)} \\ \texttt{ref} : & {\lnot \> \textit{redex} \ (\mathsf{ref} \ e) \Longrightarrow e \rhd (\mathcal{E}, r)} \Longrightarrow {\mathsf{ref} \ e \rhd (\mathsf{ref} \ \mathcal{E}, r)} \\ \texttt{read} : & {\lnot \> \textit{redex} \ (\mathsf{!} e) \Longrightarrow e \rhd (\mathcal{E}, r)} \Longrightarrow {\mathsf{!} e \rhd (\mathsf{!} \mathcal{E}, r)} \\ \texttt{lassign} : & \lnot \> \textit{redex} \ (e_1 := e_2) \Longrightarrow \ e_1 \rhd (\mathcal{E}, r) \Longrightarrow {e_1 := e_2 \rhd (\mathcal{E} := e_2, r)} \\ \texttt{rassign} : & \lnot \> \textit{redex} \ (l := e_2) \Longrightarrow e_2 \rhd (\mathcal{E}, r) \Longrightarrow {l := e_2 \rhd (l := \mathcal{E}, r)} \\ \texttt{rjoin} : & {\lnot \> \textit{redex} \ (\mathsf{rjoin} \ e) \Longrightarrow e \rhd (\mathcal{E}, r)} \Longrightarrow {\mathsf{rjoin} \ e \rhd (\mathsf{rjoin} \ \mathcal{E}, r) } \end{array} $ \caption{Predicate \texttt{decompose}, which asserts how expressions can be decomposed.} \end{figure*} We briefly describe the formalization of all aspects of the semantics that are preliminary to the mechanization of the operational semantics. 
These aspects are defined in the Isabelle theories \theory{Data}, \theory{Occurrences}, \theory{Renaming} and \theory{Substitution}. Theory \theory{Data} imports \texttt{Main}, meaning that it depends only on a standard assortment of Isabelle libraries. \subparagraph{Data} Theory \theory{Data} defines the inductive data types \texttt{const}, \texttt{('r,'l,'v) val}, \texttt{('r,'l,'v) expr} and \texttt{('r,'l,'v) cntxt} required for formalizing expressions (Section \ref{sec:formal-semantics}). In the latter three definitions, \texttt{'r}, \texttt{'l} and \texttt{'v} are type parameters for respectively the types of revision identifiers \textit{Rid}, location identifiers \textit{Lid} and variables \textit{Var}. The theory also defines the notions of stores and states, and some of the related notations and operations, such as projection functions for local states. In Isabelle, partial functions $\alpha \rightharpoonup \beta$ are modeled using option types, i.e., as total functions $\alpha \to \beta \ \texttt{option}$. Theory \theory{Data} also contains all definitions related to plugging and decomposing. Most notably, it contains the proof of the unique decomposition lemma (formalized as lemma \texttt{completion\_eq}) mentioned in Section \ref{sec:formal-semantics}. The proof for this lemma has the following structure. First, a particular decomposition for terms containing redexes is defined, given in Figure~2, and formalized as inductive predicate \texttt{decompose}. Intuitively, $e \rhd (\mathcal{E}, r)$ is meant to assert that expression $e$ decomposes into context $\mathcal{E}$ and redex $r$. The decomposition is shown to be valid and unique, respectively: \begin{lemma}[\texttt{plug\_decomposition\_equivalence}] For redexes $r$, \hspace{0.5mm} $e \rhd (\mathcal{E},r) \iff \mathcal{E}[r] = e$. \end{lemma} \begin{proof} Direction $\Longrightarrow$ follows by rule induction on $e \rhd (\mathcal{E}, r)$. 
Direction $\Longleftarrow$ is shown by structural induction on $\mathcal{E}$. \end{proof} \begin{lemma}[\texttt{unique\_decomposition}] If $e \rhd (\mathcal{E}_1, r_1)$ and $e \rhd (\mathcal{E}_2, r_2)$, then $\mathcal{E}_1 = \mathcal{E}_2$ and $r_1 = r_2$. \end{lemma} \begin{proof} By rule induction on $e \rhd (\mathcal{E}_1, r_1)$. \end{proof} Proofs of unique decomposition lemmas have a reputation for being tediously routine and error-prone \cite{xiao2001from}. This is also our experience, and we think the many inductive cases provide some indication as to why. Isabelle's \textit{auto} proof method, however, is able to solve all of these cases automatically once configured with the supporting lemma below and (automatically generated) introduction and elimination rules for \texttt{decompose}. \begin{lemma}[\texttt{plugged\_redex\_not\_val}] If $r$ is a redex, then $\mathcal{E}[r] \notin \textit{Val}$. \end{lemma} \subparagraph{Occurrences} Theory \theory{Occurrences} defines the $\textit{RID}$ and $\textit{LID}$ functions for stores, local states and global states. (The $\textit{RID}$ and $\textit{LID}$ definitions for values, expressions and contexts are automatically introduced with the data type declarations in \theory{Data}.) The theory also proves lemmas that are useful for reasoning about occurrences of location and revision identifiers. For instance, suppose we wish to prove $\textit{RID} \ v \subseteq \textit{RID} \ s(r \mapsto \langle \sigma, \tau(l \mapsto v), \mathcal{E}[e] \rangle)$. Ideally, we would like to automate the proofs of such obvious lemmas as much as possible.
To this end, we prove a number of simplification rules that flatten complex expressions such as $\textit{RID} \ s(r \mapsto \langle \sigma, \tau(l \mapsto v), \mathcal{E}[e] \rangle)$ into simpler ones such as \[ \begin{array}{lll} \textit{RID} \ s(r \mapsto \bot) \cup \{r\} \cup \textit{RID} \ \sigma \cup \textit{RID} \ \tau(l \mapsto \bot) \ \cup \\ \textit{RID} \ v \cup \textit{RID} \ \mathcal{E} \cup \textit{RID} \ e\text{,} \end{array} \] since Isabelle's automation tools can easily reason about sets. Similarly, we declare a number of introduction and elimination rules for expressions that cannot be flattened. An example is the introduction rule \[ r \in \textit{RID} \ (\sigma :: \tau) \Longrightarrow r \notin \textit{RID} \ \sigma \Longrightarrow r \in \textit{RID} \ \tau \] named \texttt{ID\_combination\_subset\_union(1)} in the Isabelle formalization. \subparagraph{Renaming} Theory \theory{Renaming} contains all of the definitions and laws related to renaming. Like the $\textit{RID}$ and $\textit{LID}$ functions, the various renaming functions are discriminated using subscripts in Isabelle, which we omit in this paper. For values $v$, the renaming $\mathcal{R} \ \alpha \ \beta \ v$ is defined as an abbreviation for $\texttt{map\_val} \ \alpha \ \beta \ \mathit{id} \ v$, where \texttt{map\_val} is a function automatically generated by the data type declaration of \texttt{val}. Here, $\texttt{map\_val} \ \alpha \ \beta \ \mathit{id} \ v$ is the value obtained by renaming location identifiers, revision identifiers and variables according to $\alpha$, $\beta$ and the identity function, respectively. Abbreviations are analogously defined for the renaming of expressions and contexts. 
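As a concrete illustration of identifier renaming, the Python sketch below applies bijections $\alpha$ and $\beta$ to a toy value, modeled (hypothetically, only for this illustration) as a pair of identifier sets; the helper \texttt{swap} builds bijections of the form $\mathit{id}(x := y, y := x)$.

```python
# Toy renaming: a value is modeled as a pair (rids, lids) of identifier
# sets; R α β v renames revision ids along α and location ids along β.
# (Hypothetical encoding for illustration; the Isabelle functions operate
# on the actual data types.)

def rename(alpha, beta, v):
    rids, lids = v
    return (frozenset(alpha(r) for r in rids),
            frozenset(beta(l) for l in lids))

def swap(x, y):
    """The bijection id(x := y, y := x)."""
    return lambda z: y if z == x else (x if z == y else z)

ident = lambda z: z
v = (frozenset({1}), frozenset({7, 9}))

# A swap of two location identifiers, one of which occurs in v:
assert rename(ident, swap(7, 8), v) == (frozenset({1}), frozenset({8, 9}))
# Swaps are self-inverse, so renaming twice is the identity:
assert rename(ident, swap(7, 8), rename(ident, swap(7, 8), v)) == v
# A swap of identifiers not occurring in v leaves it unchanged:
assert rename(ident, swap(3, 4), v) == v
```

The last assertion mirrors the intuition behind eliminating redundant swaps: a swap of identifiers that do not occur in a structure acts as the identity on it.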
The renaming of a store $\sigma$, $\mathcal{R} \ \alpha \ \beta \ \sigma$, is formalized as the function \[ (\mathcal{R} \ \alpha \ \beta \ \sigma) \ l = \sigma \ (\beta^{-1} \ l) \ \text{>>=} \ (\lambda v \ldotp \mathcal{R} \ \alpha \ \beta \ v)\text{,} \] where \text{>>=} is the \emph{bind operator} satisfying $(\texttt{None} \ \text{>>=} \ f) = \texttt{None}$ and $(\texttt{Some} \ x \ \text{>>=} \ f) = \texttt{Some} \ (f \ x)$ for option types. We show that the renaming is well defined for bijections $\beta$ (lemma \texttt{$\mathcal{R}_S$\_implements\_renaming}). The renaming of a global state is defined in a similar fashion, and the renaming of a local state is straightforwardly defined as a renaming of its components. The relation $\approx$ is defined and established to be an equivalence (lemmas $\alpha\beta\texttt{\_refl}$, $\alpha\beta\texttt{\_sym}$ and $\alpha\beta\texttt{\_trans}$). This requires proving several identity, composition and inverse laws for each of the renaming functions. We prove several distributive laws that serve as simplification rules for renamings. For instance, the term $\mathcal{R} \ \alpha \ \beta \ (s(r \mapsto \langle \sigma, \tau(l \mapsto v), \mathcal{E}[e] \rangle))$ is configured to simplify to \[ \begin{array}{lll} \mathcal{R} \ \alpha \ \beta \ s(\alpha \ r \mapsto \langle \mathcal{R} \ \alpha \ \beta \ \sigma, \mathcal{R} \ \alpha \ \beta \ \tau(\beta \ l \mapsto \mathcal{R} \ \alpha \ \beta \ v), \\ \hspace{23.2mm} \mathcal{R} \ \alpha \ \beta \ \mathcal{E}[ \mathcal{R} \ \alpha \ \beta \ e] \rangle)\text{.} \end{array} \] We distinguish a special class of bijective renamings of the form $\mathit{id}(x := y, y := x)$ that we call \emph{swaps}. All renamings used in proofs are swaps. Several rules are proven that help eliminate ``redundant'' swaps. 
An example of such a rule states that if $l \notin \textit{LID} \ v$ and $l' \notin \textit{LID} \ v$, then $\mathcal{R} \ \mathit{id} \ \mathit{id}(l := l', l' := l) \ v = v$ (lemma \texttt{eliminate\_swap\_val(2)}). The swap rules are declared as both simplification and introduction rules. \subparagraph{Substitution} As observed in Section~\ref{sec:formal-semantics}, rule \formatrule{apply} presupposes a notion of substitution, but the original account does not specify which one. For this reason, we also do not fix a particular notion of substitution. Instead, theory \theory{Substitution} defines a locale called \texttt{substitution}. The locale fixes a constant \texttt{subst}, and introduces three assumptions: \begin{enumerate} \item \texttt{renaming\_distr\_subst}: \\ $\mathcal{R} \ \alpha \ \beta \ (\texttt{subst} \ e \ x \ e') = \texttt{subst} \ (\mathcal{R} \ \alpha \ \beta \ e) \ x \ (\mathcal{R} \ \alpha \ \beta \ e')$; \item \texttt{subst\_introduces\_no\_rids}: \\ $\textit{RID} \ (\texttt{subst} \ e \ x \ e') \subseteq \textit{RID} \ e \cup \textit{RID} \ e'$; and \item \texttt{subst\_introduces\_no\_lids}: \\ $\textit{LID} \ (\texttt{subst} \ e \ x \ e') \subseteq \textit{LID} \ e \cup \textit{LID} \ e'$. \end{enumerate} We found that these assumptions were sufficient for proving determinacy. We provide two models for \texttt{substitution} that demonstrate that the assumptions are satisfiable. The first is a trivial model, in which \texttt{subst} is interpreted as a constant function that maps to $\mathsf{unit}$: $\texttt{constant\_function} \ e \ x \ e' = \mathsf{unit}$. The fact that this constant function is a model (proven in lemma \texttt{constant\_function\_models\_substitution}) indicates that the assumptions on \texttt{subst} are weak. The second model, function \texttt{nat\_subst$_\texttt{E}$}, is a more faithful instance of a deterministic substitution function in which natural numbers are used as variables. 
It is mutually recursively defined with \texttt{nat\_subst$_\texttt{V}$}, which implements substitution for values. Let $\mathcal{V} \ e$ denote the set of (free and bound) variables that occur in the expression $e$, and let $e_{x \mapsto y}$ denote the expression obtained by renaming \emph{every} variable $x$ in $e$ to $y$. The following case of the definition illustrates how deterministic capture-avoiding substitution is implemented: \[ \begin{array}{lll} \texttt{nat\_subst}_\texttt{V} \ e \ x \ (\lambda y\ldotp e') = \\ \hspace{10mm} \begin{cases} \lambda y\ldotp e' & \text{if $x = y$}\\ \lambda z \ldotp \texttt{nat\_subst$_\texttt{E}$} \ e \ x \ e'_{y \mapsto z} & \text{otherwise} \end{cases} \end{array} \] where $z = \textit{max}(\mathcal{V} \ e \ \cup \ \mathcal{V} \ e') + 1$. For further technical details, such as why bound variables are also renamed, we refer to the author's master's thesis~\cite{overbeek2018thesis}.
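As an informal illustration of this case, deterministic capture-avoiding substitution can be sketched in Python over a toy term type. The nested-tuple encoding and function names here are our own, not the formalization's; for safety, this sketch also excludes $x$ itself when picking the fresh variable:

```python
# Toy lambda terms: ('var', n), ('lam', y, body), ('app', f, a),
# with natural numbers as variables, as in nat_subst.

def variables(e):
    """All (free and bound) variables occurring in e."""
    tag = e[0]
    if tag == 'var':
        return {e[1]}
    if tag == 'lam':
        return {e[1]} | variables(e[2])
    return variables(e[1]) | variables(e[2])     # 'app'

def rename_all(e, x, y):
    """e with *every* occurrence of variable x renamed to y."""
    tag = e[0]
    if tag == 'var':
        return ('var', y) if e[1] == x else e
    if tag == 'lam':
        return ('lam', y if e[1] == x else e[1], rename_all(e[2], x, y))
    return ('app', rename_all(e[1], x, y), rename_all(e[2], x, y))

def subst(e, x, term):
    """term[x := e], capture-avoiding; mirrors the lambda case above."""
    tag = term[0]
    if tag == 'var':
        return e if term[1] == x else term
    if tag == 'lam':
        y, inner = term[1], term[2]
        if x == y:
            return term                          # x is shadowed
        # fresh variable; {x} is added here as an extra safety margin
        z = max(variables(e) | variables(inner) | {x}) + 1
        return ('lam', z, subst(e, x, rename_all(inner, y, z)))
    return ('app', subst(e, x, term[1]), subst(e, x, term[2]))
```

For example, substituting variable $1$ for $0$ under a binder for $1$ renames the binder to a fresh variable, avoiding capture.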
\section{The Role of Individuals in Usage-Based Grammar Induction} This paper experiments with the interaction between the amount of exposure (the size of a training corpus) and the number of representations learned (the size of the grammar and lexicon) under perception-based vs production-based grammar induction. The basic idea behind these experiments is to test the degree to which computational construction grammar \cite{Alishahi2008, Wible2010, Forsberg2014, d17, Barak2017, Barak2017a} satisfies the expectations of the usage-based paradigm \cite{g06a, Goldberg2011, Goldberg2016}. The input for language learning, \textit{exposure}, is essential from a usage-based perspective. Does usage-based grammar induction maintain a distinction between different types of exposure? A first preliminary question is whether the grammar grows at the same rate as the lexicon when exposed to increasing amounts of data. While the growth curve of the lexicon is well-documented \cite{Zipf1935, Heaps1978, Gelbukh2001,Baayen2001}, less is known about changes in construction grammars when exposed to increasing amounts of training data. Construction Grammar argues that both words and constructions are \textit{symbols}. However, because these two types of representations operate at different levels of complexity, it is possible that they grow at different rates. We thus experiment with the growth of a computational construction grammar \cite{d18, Dunn2019} across data drawn from six different registers: news articles, Wikipedia articles, web pages, tweets, academic papers, and published books. These experiments are needed to establish a baseline relationship between the grammar and the lexicon for the experiments to follow. The second question is whether a difference between perception and production influences the growth curves of the grammar and the lexicon. Most corpora used for experiments in grammar induction are aggregations of many unknown individuals. 
From the perspective of language learning or acquisition, these corpora represent a \textit{perception-based} approach: the model is exposed to snippets of language use from many different sources in the same way that an individual is exposed to many different speakers. Language perception is the process of hearing, reading, and seeing language use (being exposed to someone else's production). These models simulate perception-based grammar induction in the sense that the input is a selection of many different individuals, each with their own grammar. This is contrasted with a \textit{production-based} approach in which each training corpus represents a single individual: the model is exposed only to the language production observed from that one individual. Language production is the process of speaking, writing, and signing (creating new language use). From the perspective of language acquisition, a purely production-based situation does not exist: an individual needs to learn a grammar before that grammar is able to produce any output. But, within the current context of grammar induction, the question is whether a corpus from just a single individual produces a different type of grammar than a corpus from many different individuals. This is important because most computational models of language learning operate on a corpus drawn from many unknown individuals (perception-based, in these terms) without evaluating whether this distinction influences the grammar learning process. We conduct experiments across two registers that simulate either production-based grammar induction (one single individual) or perception-based grammar induction (many different individuals). The question is whether the mode of observation influences the resulting grammar's growth curve. These conditions are paired across two registers and contrasted with the background registers in order to avoid interpreting other sources of variation to be a result of these different exposure conditions. 
The third question is whether individuality is an important factor to take into account in induction. On the one hand, perception-based models will be exposed to language use by many different individuals, potentially causing individual models to \textit{converge} onto a shared grammar. On the other hand, production-based models will be exposed to the language use of only one individual, potentially causing individual models to \textit{diverge} in a manner that highlights individual differences. We test this by learning grammars from 20 distinct corpora for each condition for each register. We then compute the pairwise similarities between representations, creating a population of perception-based vs production-based models. Do the models exposed to individuals differ from models exposed to aggregations of individuals? The primary contribution of this paper is to establish the influence that individual production has on usage-based grammar induction. The role of individual-specific usage is of special importance to construction grammar: How much does a person's grammar actually depend on observed usage? The computational experiments in this paper establish that production-based models show more individual differences than comparable perception-based models. This is indicated by both (i) a significantly increased growth curve and (ii) greater pairwise distances between learned grammars. \section{Methods: Computational CxG} The grammar induction experiments in this paper draw on computational construction grammar \cite{d17, d18b, d18}. In the Construction Grammar paradigm, a grammar is modelled as an inventory of symbols of varying complexity: from parts of words (morphemes) to lexical items (words) up to abstract patterns (\textsc{np -> det n}). Construction Grammar thus rejects the notion that the lexicon and grammatical rules are two separate entities, instead suggesting that both are similar symbols with different levels of abstraction. 
In the same way as other symbols, the units of grammar in this paradigm consist of a \textit{form} combined with a \textit{meaning}. This is most evident in the case of lexical items, but also applies to grammatical constructions. For example, the abstract structure \textsc{np vp np np}, with the right constraints, conveys a meaning of transfer (e.g. \textit{Kim gave Alex the book}). In order to extract a grammar of this kind computationally, an algorithm must focus on the form of the constructions. For example, computational construction grammars are different from other types of grammar because they allow lexical and semantic representations in addition to syntactic representations. On the one hand, this leads to constructions capturing item-specific slot-constraints that are an important part of usage-based grammar. On the other hand, this means that the hypothesis space of potential grammars is much larger. Representing the \textit{meaning} of these constructional forms is a separate problem from finding the forms themselves.

~

\hspace{1mm}(a) \textsc{np}-Simple -> \textsc{det} \textsc{adj} \textsc{n}

\hspace{1mm}(b) \textsc{np}-Construction -> \textsc{det} \textsc{adj} \textsc{[sem=335]}

\hspace{1mm}(c) ``the developing countries''

\hspace{1mm}(d) ``a vertical organization''

\hspace{1mm}(e) ``this total world''

~

For example, a simple phrase structure grammar might define just one version of a noun phrase as in (a), using syntactic representations. But a construction grammar could also define the distinct \textsc{np}-construction in (b), further constraining the semantic domain. Thus, the utterances in (c) through (e) are noun phrases that belong to this more constrained \textsc{np}-based construction (where the semantic constraint is represented as \textsc{sem=335}). The grammar induction algorithm used here employs an association-based beam search to identify the best sequences of slot-constraints \cite{Dunn2019}.
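To make the mixed representation types concrete, matching a construction of this kind can be sketched in Python. The encoding below (slot-constraints as tagged pairs, analysed words as triples) is a toy illustration of the idea, not the induction algorithm's internal format:

```python
# Each slot-constraint is (kind, value) with kind in {'lex', 'syn', 'sem'};
# each analysed word is (form, pos_tag, semantic_domain).

def slot_matches(constraint, word):
    form, pos, domain = word
    kind, value = constraint
    if kind == 'lex':
        return form == value       # item-specific lexical constraint
    if kind == 'syn':
        return pos == value        # syntactic (part-of-speech) constraint
    return domain == value         # semantic domain constraint

def construction_matches(construction, words):
    return len(construction) == len(words) and all(
        slot_matches(c, w) for c, w in zip(construction, words))
```

Under this encoding, the \textsc{np}-construction in (b) would be \texttt{[('syn', 'DET'), ('syn', 'ADJ'), ('sem', 335)]}, which accepts phrases like (c) through (e) but rejects otherwise well-formed noun phrases outside that semantic domain.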
While a grammar formalism like dependency grammar \cite{Nivre2008, Zhang2012} must identify the head and attachment type for each word, a construction grammar must identify the representation type for each slot-constraint. This leads to a larger number of potential representations, and the beam search has been used to explore this space efficiently. Previous work has used the Minimum Description Length (MDL) paradigm \cite{Goldsmith2001, Goldsmith2006} to describe the fit between a grammar and a corpus as an optimization function during training. With the exception of the use of semantic representations for slot-constraints, the meaning of constructions is not taken into account here. This is a necessary simplification. Nonetheless, it is important to remember that -- to the extent that these patterns are strong manifestations of association across slots -- it is likely that they each possess a distinct meaning as well as a distinct form. The experiments in this paper are centered on sub-sets of corpora containing 100k words. This is significantly less data than in previous work \citep{d18}. The idea is to measure the degree to which the grammar itself changes when the induction algorithm is exposed to a more realistic amount of linguistic usage. Because the impact of training size on the MDL metric is not clear, the grammars in this paper are based on the beam search together with an MDL-based metric for choosing the optimum threshold for the $\Delta P$ association measure \citep{DunnIJCL} used in the beam search. But a final MDL-based selection stage is not employed. Previous work represented semantic domains using word embeddings clustered into discrete categories. To provide better representations for less common vocabulary items, the embeddings here are derived from fastText \cite{Grave2019}, using k-means (the number of clusters is set to 1 per 1,000 words). Thus, the assumption is that each lexical item belongs to a single domain.
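As a hedged illustration of this domain assignment (pure Python on toy two-dimensional vectors; the actual models use fastText embeddings and one cluster per 1,000 words), the clustering step amounts to a standard k-means loop:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means: returns a cluster (semantic domain) id per vector."""
    random.seed(seed)
    centroids = [list(v) for v in random.sample(vectors, k)]
    assign = [0] * len(vectors)
    for _ in range(iters):
        # assign each vector to its nearest centroid (squared distance)
        for i, v in enumerate(vectors):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))
        # recompute each centroid as the mean of its members
        for c in range(k):
            members = [vectors[i] for i, a in enumerate(assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign
```

Each lexical item then carries the id of the single cluster its embedding falls into (e.g. \textsc{sem=335}).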
Drawing on the universal part-of-speech tag-set \cite{pdm12, nn}, semantic domains are only applied to open-class lexical items, on the assumption that more functional words do not carry domain-specific information. The codebase for grammar induction is open source.\footnote{\href{https://github.com/jonathandunn/c2xg/tree/v0.03}{https://github.com/jonathandunn/c2xg}} \section{Data and Experimental Design} \begin{table} \centering \begin{tabular}{|c|l|c|} \hline \textbf{ID} & \textbf{Data Source} & \textbf{Condition} \\ \hline \textsc{ac-ind} & Academic Articles & Production \\ \textsc{pg-ind} & Published Books & Production \\ \hline \textsc{ac-agg} & Academic Papers & Perception \\ \textsc{pg-agg} & Published Books & Perception \\ \hline \textsc{tw-agg} & Tweets & Background \\ \textsc{cc-agg} & Web Crawled & Background \\ \textsc{wi-agg} & Wikipedia Articles & Background \\ \textsc{nw-agg} & News Articles & Background \\ \hline \end{tabular} \caption{Sources of Language Data} \label{tab:1} \end{table} The basic experimental framework in this paper is to apply grammar induction to independent sub-sets of corpora drawn from different registers. We find the \textit{growth curve} of grammars and lexicons by measuring the increase in representations as these individual subsets are combined. In this case, we examine the representations learned from between 100k and 2 million words in increments of 100k, for a total of 20 observations per condition. Further, we measure the \textit{convergence} of grammars by quantifying pairwise similarities within each condition. In this framework, a \textit{condition} is defined by the data used for learning representations. For example, we examine the convergence of grammars learned from news articles by measuring pairwise similarity across 200 randomly selected combinations of unique sub-sets of the corpus of news articles. 
Because of variation in registers, or varieties associated with the context of production \cite{Biber2009}, some grammatical constructions are incredibly rare in one type of corpus but quite common in another type \cite{FodorCrowther+2002+105+145, Sampson2002}. Along these same lines, some registers have more technical terms and thus a larger lexicon with more rare words. Both of these factors mean that the relationship between grammar and the lexicon could be an artifact of one particular register. To control for this possibility, the experiments in this paper are replicated across six registers, as shown in Table 1. First, corpora representing unique individuals are taken from academic articles and from Project Gutenberg. In this condition, each additional increment of data represents a new speaker (e.g. Dickens, followed by Austen, followed by James). Second, corpora representing aggregations of individuals are taken from the same registers; the difference here is that each additional increment of data does not represent a unique new speaker, only an increased amount of language use. Third, background corpora representing other aggregations of individuals are taken from tweets, web pages, Wikipedia articles, and news articles. These background corpora provide a baseline against which we compare variation in production-based vs perception-based models. Does any observed difference between the \textit{production} and \textit{perception} conditions fall within the expected range observed within this baseline? In the first condition, \textit{production}, each increment of data (100k words) represents the production of a single individual. In other words, a model trained on this sub-set of the corpus is a representation of only that one individual's production. A corpus of academic articles is drawn from the field of history \cite{Daltrey2020}. 
This corpus represents the \textsc{ac-ind} condition, meaning the \textit{Academic} register representing \textit{Individuals}. A corpus of books from Project Gutenberg is drawn from 20th century authors. This corpus represents the \textsc{pg-ind} condition, meaning the \textit{Project Gutenberg} data organized by \textit{Individuals}. Each grammar and lexicon in this condition is trained on the production of a single speaker. In the second condition, \textit{perception}, these production-based corpora are contrasted with data from the same registers in which each increment of 100k words represents many unknown individuals aggregated together. In other words, a model trained on this sub-set of the corpus reflects the perception of a single individual exposed to many other speakers. The academic register is represented by the British Academic Written English Corpus \cite{Alsop2009}, drawn from proficient student writing. This provides the \textsc{ac-agg} condition, representing the \textit{Academic} register but with each increment an \textit{Aggregation} of many unknown individuals. The register of books is drawn from the same Project Gutenberg corpus, this time with at most 500 words in each increment representing a single author. This ensures that there is little individual-specific information present in the corpus. This variant provides the \textsc{pg-agg} condition, representing \textit{Project Gutenberg} data as an \textit{Aggregation} of many individuals. To provide a baseline, these paired corpora are contrasted with four further sources which represent an aggregation of many unknown individuals: social media data from tweets (\textsc{tw-agg}), web data from the Common Crawl (\textsc{cc-agg}), Wikipedia articles (\textsc{wi-agg}), and news articles, with no more than 10 articles from the same publication per increment (\textsc{nw-agg}). This range of sources ensures that the experiments do not depend on the idiosyncratic properties of a single register. 
Each \textit{ID} in Table 1 represents 2 million words, divided into increments of 100k words. Representations are learned independently on each increment in isolation. In other words, the grammar induction algorithm is applied to each increment of 100k words, with no influence from the other sections of the overall corpus. Thus, each grammar simulates the representations learned from exposure to a fixed amount of language data. The \textit{amount} of exposure is held constant (at 100k words per grammar), allowing us to measure the influence of individuals (production) vs. aggregations of individuals (perception). The growth of grammars and lexicons is simulated by creating the union of these independent sub-sets: for example, the grammar from Dickens plus the grammar from Austen plus the grammar from James. This means that, after observing 2 million words, the production-based condition has observed the union of 20 different individuals. This design is required to represent the production-based condition because of the difficulty of finding 2 million words for many different individuals. This means that the perception-based condition at 2 million words samples from potentially tens of thousands of speakers while the production-based condition samples from just 20 speakers. Thus, the growth curves potentially depend on the order in which different samples are observed. In other words, there is a chance that differences between growth curves are artifacts of particular orders of observation and not actual differences between corpora. To test this possibility, we simulate growth curves from 100 random samples for each condition. For each sample, we calculate the coefficient of the regression between the amount of the data and the number of representations, a measure of the growth curve. This provides a population of growth curves for each condition. We then use a t-test to determine whether this sample of growth curves represents a single population. 
In every case, there is no difference. This gives us confidence that the order of observations has no influence on the final results; the curves reported here are averaged across these 100 samples. \section{Measuring Growth Curves and Grammatical Overlap} \begin{figure*}[t] \centering \includegraphics[width=475 pt]{Figure_1_LexvsCxg} \caption{Growth Curve of the Lexicon Contrasted with the Grammar} \label{fig:lex_size} \end{figure*} \begin{table*} \centering \begin{tabular}{|ccccc|ccccc|} \hline \textbf{Lexicon} & ~ & ~ & ~ & ~ & \textbf{Grammar} & ~ & ~ & ~ & ~ \\ \textit{Condition} & \textbf{$\alpha$} & \textbf{[0.025} & \textbf{0.975]} & \textbf{Max $N$} & \textit{Condition} & \textbf{$\alpha$} & \textbf{[0.025} & \textbf{0.975]} & \textbf{Max $N$}\\ \hline \textsc{ac-agg} & 0.776 & [0.772 & 0.782] & 67.4k & \textsc{ac-agg} & 0.660 & [0.657 & 0.664] & 16.2k \\ \textsc{pg-agg} & 0.771 & [0.764 & 0.780] & 56.3k & \textsc{pg-agg} & 0.652 & [0.652 & 0.654] & 13.3k \\ \textsc{cc-agg} & 0.782 & [0.775 & 0.790] & 67.2k & \textsc{cc-agg} & 0.649 & [0.648 & 0.651] & 12.7k \\ \textsc{nw-agg} & 0.788 & [0.782 & 0.795] & 76.2k & \textsc{nw-agg} & \textbf{0.721} & [\textbf{0.718} & \textbf{0.724}] & \textbf{37.7k} \\ \textsc{tw-agg} & 0.793 & [0.787 & 0.799] & 82.9k & \textsc{tw-agg} & 0.678 & [0.676 & 0.680] & 19.8k \\ \textsc{wi-agg} & 0.797 & [0.793 & 0.803] & 91.1k & \textsc{wi-agg} & 0.657 & [0.654 & 0.660] & 15.2k \\ \hline \end{tabular} \caption{$\alpha$ Parameters and Confidence Intervals for Growth Curve Estimation by Register} \label{tab:2} \end{table*} The growth of the lexicon is expected to take a power law distribution in which the number of lexical items is proportional to the total number of words in the corpus, as shown in (1). The challenge in understanding the rate of growth, then, is to estimate the parameter $\alpha$. 
The simplest method is to undertake a least-squares regression using the log of the corpus size and the log of the number of representations, as shown in (2). On some data sets, this method is potentially problematic because fluctuations in the most infrequent representations can lead to a poor fit at certain portions of the curve \cite{Clauset2009}. We validated the experiments in this paper by conducting comparisons between estimated $\alpha$ parameters and synthesized data following Heaps' law. These comparisons confirm that the traditional least-squares regression method provides an accurate measure of the growth curve. \begin{equation} p(x) \propto x^{\alpha} \end{equation} \begin{equation} \log p(x) = \alpha \log x + c \end{equation} The first question is the degree to which there is variation in the $\alpha$ parameter across representation type (grammar vs lexicon) or condition (production vs perception). For each case, such as perception-based grammar induction from news articles, we calculate the growth curve as described above using least-squares regression on the mean growth curve. We then report both the estimated $\alpha$ and the confidence interval for determining whether differences in the parameter values are significant. \begin{equation} d_{J}(A,B)= 1 - \frac{\left | A \cap B \right |}{\left | A \cup B \right |} \end{equation} The second question is the degree to which the representations from individual sub-sets of a corpus agree with one another. To measure this, we use the Jaccard distance between grammars, shown in (3). To calculate the Jaccard distance, we first form the union of the two grammars being compared and, second, create a vector for each with binary values indicating whether a particular item is present or not present. The Jaccard distance then measures the difference between these binary vectors, with higher values indicating that there is more distance between grammars and lower values indicating that the grammars are more similar.
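Both measures can be sketched directly. The following is a minimal pure-Python illustration of the growth-curve regression in (2) and the Jaccard distance in (3), not the evaluation code used for the experiments:

```python
import math

def estimate_alpha(corpus_sizes, type_counts):
    """Least-squares slope of log(count) against log(size), as in (2)."""
    xs = [math.log(n) for n in corpus_sizes]
    ys = [math.log(v) for v in type_counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def jaccard_distance(grammar_a, grammar_b):
    """1 - |intersection| / |union| over grammars as sets, as in (3)."""
    a, b = set(grammar_a), set(grammar_b)
    return 1 - len(a & b) / len(a | b)
```

On synthesized data that grows exactly as a power law, \texttt{estimate\_alpha} recovers the generating exponent, which mirrors the validation against synthesized data described above.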
\section{Experiment 1. Growth Curves Across Grammar and the Lexicon} \begin{figure*}[t] \centering \includegraphics[width=475 pt]{Figure_2_INDvsAGG} \caption{Growth Curves for the Production and Perception Conditions} \label{fig:cxg_size} \end{figure*} \begin{table*} \centering \begin{tabular}{|ccccc|ccccc|} \hline \textbf{Lexicon} & ~ & ~ & ~ & ~ & \textbf{Grammar} & ~ & ~ & ~ & ~ \\ \textit{Condition} & \textbf{$\alpha$} & \textbf{[0.025} & \textbf{0.975]} & \textbf{Max $N$} & \textit{Condition} & \textbf{$\alpha$} & \textbf{[0.025} & \textbf{0.975]} & \textbf{Max $N$}\\ \hline \textsc{ac-agg} & 0.776 & [0.772 & 0.782] & 67.4k & \textsc{ac-agg} & 0.660 & [0.657 & 0.664] & 16.2k \\ \textsc{ac-ind} & 0.788 & [0.784 & 0.792] & 79.1k & \textsc{ac-ind} & \textbf{0.691} & [0.686 & 0.697] & \textbf{25.7k} \\ \hline \textsc{pg-agg} & 0.771 & [0.764 & 0.780] & 56.3k & \textsc{pg-agg} & 0.652 & [0.652 & 0.654] & 13.3k \\ \textsc{pg-ind} & 0.757 & [0.751 & 0.764] & 47.5k & \textsc{pg-ind} & \textbf{0.716} & [0.714 & 0.719] & \textbf{34.0k} \\ \hline \end{tabular} \caption{$\alpha$ Parameters and Confidence Intervals for Growth Curve Estimation by Condition} \label{tab:3} \end{table*} We begin by measuring the difference between growth curves for the lexicon and for grammars. Here we compare each of the six perception-based conditions, to see the range of behaviours across registers. This is shown in Figure 1, with the x axis showing the increasing amount of data (from 100k words to 2 million words) and the y axis showing the increasing number of representations (to a max of 80k lexical items). The red line represents the grammar and the blue line represents the lexicon. Each of the perception-based conditions (i.e., each register) is represented by a separate plot. This figure shows that the lexicon grows much more quickly than the grammar. 
This is somewhat expected because, even though both of them are symbols in the Construction Grammar paradigm, they are symbols of different complexity and may have different behaviors. The other important observation is that lexical items can only be terminal units in the slots of grammatical constructions, which again suggests that the number of different terminal units should be larger than the number of grammatical constructions. The growth of both lexicon and grammar is visualized by the slope of the lines, with a steeper curve showing quicker growth. Further, the grammar generally levels off, with the rate of growth slowing more quickly as the amount of data increases. In other words, as we observe new data, we are less likely to encounter new constructions than new lexical items. There is general agreement across registers, except that the corpus of news articles shows a grammar that grows much more quickly, reaching a total of 37k constructions. This grammar is significantly larger than that of any other register. We also see variation in the lexicon, with the vocabulary on Wikipedia growing at the quickest rate. Which of these differences are significant? We examine this in Table 2 by looking at the coefficient of a least-squares linear regression to estimate the $\alpha$ parameter, as discussed above. Each $\alpha$ is also shown with its confidence interval, outside of which the difference is taken to be significant. These regression results formalize what is visually clear from the figure: the difference between grammar and lexicon is quite significant. Because the $r^2$ values of the regression are so high \cite{Clauset2009}, it is also the case that there is a significant but less meaningful difference across registers in both types of representation.
The clearest of these register-specific outliers are Wikipedia (for the lexicon) and news articles (for the grammar); only the second of these is significantly different from all other registers. \section{Experiment 2. Perception vs Production in Growth Curves} Our next experiment takes a closer look at the difference in the growth curves under our two conditions, production (structured around individuals) and perception (structured around aggregations of individuals). The results are shown in Figure 2, again with the growth in number of representations (types) on the y axis and the amount of data observed (tokens) on the x axis. The top row presents the lexicon and the bottom row the grammar. Finally, the blue line represents the perception condition while the red line represents the production or individual condition. \begin{figure*}[t] \centering \includegraphics[width=475 pt]{Figure_4_GrammarOverlap} \caption{Distribution of Grammar Differences using Jaccard Distance} \label{fig:overlap} \end{figure*} The growth of the lexicon does not show any striking differences. In the academic register (\textsc{ac}), the perception condition shows a faster growth rate; but in the book register (\textsc{pg}) the reverse is true. But the growth of the grammar shows a marked difference: the production-based grammar (in red) grows more quickly in both registers. This is formalized in Table 3, showing the estimated $\alpha$ parameters together with their confidence intervals for testing significance. The lexical growth rates, confirming what we see visually, are not significantly different in either register (i.e., the confidence intervals overlap, or very nearly do). So the difference between production and perception has no influence on the growth of the lexicon. And yet the growth of the grammar across these two conditions is significantly different in both registers, with an especially large difference in the register of published books (\textsc{pg}).
This significance is shown by the confidence intervals on the estimation of the $\alpha$ parameter; but it is also shown in the final size of the grammars: 16.2 and 13.3k (\textsc{agg}) vs 25.7k and 34.0k (\textsc{ind}). In other words, given access to data from just one individual, the grammar contains more constructions than an equal amount of data from an aggregation of individuals. It is important to remember that the grammar induction algorithm is applied independently to each sub-set of the data. What this result shows, then, is that there are considerable individual differences or idiosyncrasies in the grammar but not in the lexicon. In both registers, grammar induction based on the production of individuals acquires more constructions given the same amount of exposure. This is important because most computational approaches to language learning assume that speakers generalize toward a single shared grammar. This implies, incorrectly, that the presence of many speakers in the training corpora is irrelevant, perhaps with the further constraint that each training corpus should represent a single community and register (like written British English). \section{Experiment 3. Perception vs Production in Grammar Similarity} The previous experiments have focused on the \textit{size} and growth of the grammars without focusing on the presence of individual representations (i.e., constructions). To what degree do the grammars from each sub-set of a corpus overlap? Is there a significant difference between the overlap of perception-based and production-based representations? The basic idea in this experiment is to take a closer look at the higher growth curve in production-based grammars identified in the previous experiment: it is possible that a few of the grammars are unique, thus contributing to a higher growth curve, without a pervasive uniqueness distributed across all of the production-based grammars. 
This experiment consists of creating pairs of grammars under the two conditions. First, we sample 200 pairs drawn from each condition/register: for example, a pair from different sub-sets of the corpus of news articles. Second, we use Jaccard distance to measure the similarity of each pair. Each comparison is made within a single register, thus controlling for the possibility of register variation. This provides a broader population of pairwise similarities, allowing us to measure the uniqueness of individual grammars in each condition. We visualize the distribution of grammar similarities using a violin plot in Figure 3. The distance measure ranges from 1 (no overlap) to 0 (complete overlap). The violin plot here shows the distributions, with width representing the density for a particular value and height representing the range of values. This shows, for example, that the \textsc{ac-ind} condition is not normally distributed. Rather, it has a large range of values with two slight peaks. The \textsc{ac-agg} condition, however, is normally distributed, with a large peak at its mean (shown here by the dotted line in the center). The values for the Jaccard distances show that, independently of condition, these pairs of grammars are relatively dissimilar. There are many reasons why this is the case, ranging from the amount of data used to train each grammar to the possibility that constructional representations overlap with slightly different slot-constraints. Putting aside the baseline similarity that is observed using this particular measure, the larger point is that Figure 3 shows a clear distinction between the production-based (\textsc{ind}) and perception-based (\textsc{agg}) conditions. The grammars learned from individuals vary widely among themselves: some pairs have a high overlap but others a low overlap.
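Treating each induced grammar as a set of construction types, the pairwise measure used in this experiment is the Jaccard distance. A minimal sketch (the toy construction inventories below are invented for illustration and are not actual induced constructions):

```python
def jaccard_distance(grammar_a, grammar_b):
    """Jaccard distance between two grammars represented as sets of
    construction types: 0 = complete overlap, 1 = no overlap."""
    a, b = set(grammar_a), set(grammar_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Toy grammars sharing three of five constructions in total.
g1 = {"NP -> DET N", "VP -> V NP", "S -> NP VP", "PP -> P NP"}
g2 = {"NP -> DET N", "VP -> V NP", "S -> NP VP", "VP -> V PP"}
# Intersection has 3 members, union has 5: distance = 1 - 3/5 = 0.4
```

Sampling 200 such pairs per condition and register then yields the distributions visualized in Figure 3.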
Furthermore, the most similar pairs in the individual conditions are as similar or less similar than the average pair for the aggregated condition. This indicates that there are individual differences in these grammars, the same phenomenon that resulted in the higher growth curves identified in the second experiment above. The perception-based grammars, however, have a low degree of variation: the similarity measures are centered densely around the mean because most grammars have the same degree of similarity. This means that the aggregated or perception-based condition is forcing the induction algorithm to converge onto more stable representations by exposing it to many individuals. The inverse of this generalization is that individuals have unique or idiosyncratic constructions which are only revealed when the training corpus is centered around that individual. This finding fits well with studies in variation \cite{Dunn2019, Dunn2019a} which reveal the high degree of syntactic differences across speech communities. We also notice in Figure 3 that the news register, although part of the perception-based condition, is not as densely centered as the other background registers. This shows the importance of including many registers in a study like this. The likely reason is that different publications enforce their own stylistic conventions. This data set is balanced to ensure that no single publication venue accounts for more than 10 of the articles in any sub-set of the corpus. It remains the case, however, that the presence of a publication-specific style may simulate a different distribution of grammar overlap. We formalize this violin plot in Table 4 using Bayesian estimates of the mean and variance for each condition at a 99\% confidence interval. Because the Jaccard distance is between 0 and 1, we multiply each value by 100 to make the values easier to read. 
First, the mean distance in the production-based condition is significantly higher in each case; further, the production-based conditions have a higher mean than any of the background conditions. Second and more importantly, the variance for the production-based conditions is greater by an order of magnitude than all other conditions. Only the news register is close; and this is still more similar to the other background data sets than to the individual data sets. The variance is important because it represents the range of overlap caused by individual differences in the grammars. These Bayesian estimates reinforce the visualization and show that there is more variance and thus more individual differences within grammars that are trained from the production of a single individual. This experiment thus confirms what is suggested by the increased growth curves seen in the second experiment: production-based grammars diverge into more individual-specific representations. \begin{table} \centering \begin{tabular}{|ccc|} \hline \textbf{Condition} & \textbf{Mean} & \textbf{Variance}\\ \hline \textsc{ac-ind} & 91.35 & 0.053 \\ \textsc{pg-ind} & 87.79 & 0.045 \\ \hline \textsc{ac-agg} & 85.08 & 0.009 \\ \textsc{pg-agg} & 79.01 & 0.006 \\ \hline \textsc{cc-agg} & 79.33 & 0.005 \\ \textsc{tw-agg} & 83.06 & 0.009 \\ \textsc{wi-agg} & 84.76 & 0.008 \\ \textsc{nw-agg} & 86.33 & 0.026 \\ \hline \end{tabular} \caption{Estimated Mean and Variation at Bayesian Confidence Interval of 99\% (Each *100 for readability)} \label{tab:4} \end{table} \section{Discussion and Conclusions} The three computational experiments in this paper have shown that there is a significant difference between perception-based and production-based grammar induction, even when these conditions are contrasted across many registers. Grammars based on individuals (i) have a significantly steeper growth curve and (ii) a significantly more long-tailed distribution of pairwise similarity. 
We have also seen that the growth curve of the grammar in general does not have the same $\alpha$ parameter as the lexicon, but does still conform to the generalizations provided by Heaps' law. This supports the idea of a continuum between grammar and the lexicon, with the symbolic representations in the grammar more complex and more abstract, thus showing a slower growth curve. Overall, the results of the three experiments reveal that, given a certain number of word tokens, the number of constructions extracted is higher if the sample is taken from one unique individual as opposed to a set of unknown individuals. For example, 100k words of data from academic prose written by the same individual contain 1845 construction types, while the same amount of data from a combination of individuals contains about 1512 construction types, a difference of 333. This is not a trivial result: as a counter-factual, it would also be plausible to expect that the aggregated data would contain a wider variety of constructions because it represents a wider variety of individuals. These results therefore suggest that the constructions that are normally observed in traditional (aggregated) corpora are just the tip of the iceberg: there are many individual-specific constructions that are never observed in aggregated production. In other words, the \textit{uniqueness} of individual construction grammars is disguised when observing the aggregated usage of many individuals. These findings are consistent with the usage-based proposal that the general grammatical representation of a language emerges as a complex-adaptive system \cite{Beckner2009}. The grammars learned in the perception-based condition contain fewer construction types and are relatively similar to each other. However, these seemingly homogeneous grammars are in fact formed from the shared usage across a number of different individuals.
And, as shown in the production-based condition, these aggregated individuals on their own are likely to use very different grammars.
\section{Introduction} The timescale of wavelength-dependent lags in AGNs has long been problematic since the delays are much longer than a dynamical timescale. When delays were first convincingly established in NGC~7469 \citep{wanders97,collier98} we interpreted them as a consequence of light-travel time in the external illumination of a disk with a $T \propto R^{-3/4}$ radial temperature structure. This gives delays, $\tau \propto \lambda^{4/3}$ \citep{collier98, kriss00}. From broad-band optical photometry, \citet{sergeev05} have found many more wavelength-dependent lags and made the important discovery that the lags are luminosity-dependent with $\tau \propto L^{1/2}$. To explain this they postulate that the height of the external illumination source depends on the square-root of the luminosity. \section{Problems with Lamp Posts} Although the external-illumination (``lamp-post'') model can readily reproduce the wavelength dependence of the lags in NGC~7469, I believe it has enormous problems. The first major problem is that the ``lamp'' is not seen at {\it any} wavelength. It shines on the disk but never in the direction of the observer. In the terminology of the {\it International Dark Sky Association} the lamp is a ``fully-shielded'' fixture! While observational optical astronomers consider this to be highly desirable for all fixtures on terrestrial lamp posts, this is impossible for the putative external sources of illumination above AGN accretion disks -- no shield could survive in the harsh conditions near the hypothetical energy source. A second major problem is that after the correct subtraction of the host galaxy light, the UV and optical variability amplitude is large -- an order of magnitude is not unusual. To explain the lags with external illumination requires that this order-of-magnitude variability be due to the external illumination. 
If this is so, the luminosity of the ``lamp'' exceeds that of the accretion disk, the disk is irrelevant, and the lamp-post model is inconsistent, since it requires the disk radial temperature dependence to be determined by the accretion disk. In Fig.~\ref{fig1} I show the mean lags relative to the B band ($\lambda$4400) for the 14 AGNs measured by \citet{sergeev05}. The lags were determined with the cross-correlation function (CCF) technique of \cite{gaskell86}. Since the lags are luminosity dependent, I have normalized the lags from the centroids of the CCFs of the $\lambda$8000 and $\lambda$9000 bands to the corresponding \citet{sergeev05} centroid lags for NGC~5548. The error bars give errors in the means. It can be seen that there is a clear wavelength dependence and that this can easily be fit by $\tau \propto \lambda^{4/3}$. However, this fit (the solid line in Fig.~\ref{fig1}) predicts a far larger UV-to-optical lag than is observed. The dotted line shows a $\tau \propto \lambda^{4/3}$ fit to the actual $\lambda$1350--$\lambda$5100 delay we reported for NGC~5548 \citep{korista95}. \begin{figure}[t!] \vskip 0.2cm \centering \epsfxsize=10cm \epsfbox{gaskell_fig1.eps} \caption{Normalized wavelength dependent delays scaled to the size of NGC~5548. Filled circles are from the centroids of the cross-correlation functions; open circles are from the peaks of the cross-correlation functions. The solid line is a fit of $\tau \propto \lambda^{4/3}$ through the mean centroid delays observed at $\lambda\lambda$ 4400, 5500, 8000, and 9000. The dotted line is a $\tau \propto \lambda^{4/3}$ relationship fit to the observed delay between $\lambda$1350 and $\lambda$5100 in NGC~5548.} \label{fig1} \end{figure} \section{The Effect of Optical Emission from the Dusty Torus} The sharp increase in lag at long optical wavelengths in Fig.~\ref{fig1} has no explanation in the lamp-post model. 
I propose instead that it is due to contamination from optical emission from the hot dust in the inner torus. Although the dust emission peaks in the IR, the high dust sublimation temperature ($\sim 1500$\,K) means that there is substantial {\it optical} emission as well\footnote{A candle flame is at the graphite/PAH condensation temperature and a candle emits in the optical!}. It is well known that the variability of the IR dust emission lags the optical variability (see Fig.~\ref{fig2}), and IR lags have been determined for many objects \citep[see also these proceedings]{suganuma06}. The contaminating dust emission flux at shorter wavelengths will thus also lag the direct emission from the AGN. If two time series are cross correlated, the effect of contaminating one series with a third series with a different lag is to shift the peak in the CCF. This was modeled in a different context by \citet{gaskell87}. In the model proposed here, the flux in the R band, say, is the sum of the intrinsically-varying continuum (assumed to be varying coherently) and a small, much delayed contribution from the Wien tail of the hot dust. \begin{figure}[t!] \vskip 0.2cm \centering \epsfxsize=10cm \epsfbox{gaskell_fig2.eps} \caption{The K-band flux lagging the V-band flux in NGC~4151. The vertical arrows show how the (V-K) colors are substantially different at different epochs because of the lag of the K-band flux. Adapted from \citet{minezaki06}} \label{fig2} \end{figure} The size of the lag primarily depends linearly on two things: (a) the ratio of the contaminating flux to the intrinsic flux, and (b) the inner radius of the dusty torus. These dependencies give us two predictions: first, the optical lag will increase with increasing wavelength (because the flux from the dust increases with wavelength), as is shown to be the case in Fig.~\ref{fig1}. 
The second prediction is that because the inner radius of the torus (which is determined by the dust sublimation radius) increases as $L^{1/2}$ \citep{suganuma06}, the relative lags will also increase as $L^{1/2}$, as has been observed already by \citet{sergeev05}. In Fig.~\ref{fig1} the R band ($\lambda$7000) lag lies significantly above the line fit to the other points. This is to be expected because the strong H$\alpha$ emission line falls within the R passband and so introduces additional delayed contamination. \citet{korista01} have also pointed out that broad lines will produce diffuse continuum emission. This will be another source of lagged contamination. Because the dust emission comes from an extended region, its variability is smeared out as well as delayed. Rapid variations are washed out. This gives two additional predictions. The first is that the CCF will be asymmetric and the centroid of the CCF, which is less sensitive to the variability power spectrum (see Koratkar \& Gaskell 1991), will show a larger delay than the peak of the CCF. It can be seen in Fig.~\ref{fig1} that this is indeed the case. The second prediction is that the lag given by the peak of the CCF will be smaller when the variability is more rapid. Thus different lags will be measured at different times. A final prediction is that there will be {\it hysteresis} in the color-magnitude diagram. It is obvious in Fig.~\ref{fig2} that the (V-K) colors are substantially different at two different epochs with similar V flux levels. In the model I have proposed, optical to IR colors can be predicted from the V-band light curve alone. For example, the hysteresis found by \citet{bachev04} in their (V-I) versus V diagram for Mrk 279 is quantitatively reproduced. The contamination model proposed here can be applied to other wavelength regions (e.g., cross-correlation analyses of X-ray variability).
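The shift of the CCF peak by a small, lagged contaminant can be illustrated with a toy simulation. The sketch below assumes a sinusoidal driving light curve, models the dust as a delayed copy contributing 20\% of the flux, and uses a plain discrete correlation coefficient rather than the interpolation CCF method cited above; the delay, flux ratio, and waveform are purely illustrative assumptions.

```python
import math

def ccf_peak_lag(x, y, max_lag):
    """Lag (in samples) maximizing the correlation of x[i] with
    y[i + lag]; positive means y lags behind x."""
    def corr(lag):
        pairs = [(x[i], y[i + lag]) for i in range(len(x) - lag)]
        n = len(pairs)
        mx = sum(a for a, _ in pairs) / n
        my = sum(b for _, b in pairs) / n
        cov = sum((a - mx) * (b - my) for a, b in pairs)
        var_x = sum((a - mx) ** 2 for a, _ in pairs)
        var_y = sum((b - my) ** 2 for _, b in pairs)
        return cov / math.sqrt(var_x * var_y)
    return max(range(max_lag + 1), key=corr)

n, delay = 400, 30
base = [math.sin(0.05 * t) for t in range(n + delay)]
driver = base[delay:]                       # intrinsic continuum
# Contaminated band: intrinsic flux plus 20% of the same signal
# delayed by 30 samples (the "dust" component).
contaminated = [base[delay + t] + 0.2 * base[t] for t in range(n)]
lag_clean = ccf_peak_lag(driver, driver, 60)
lag_shifted = ccf_peak_lag(driver, contaminated, 60)
# lag_clean is 0; lag_shifted is a small positive value, well below
# the 30-sample delay of the contaminant itself.
```

The peak shifts by only a few samples even though the contaminant is delayed by 30, consistent with the measured lag scaling with the ratio of contaminating to intrinsic flux.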
Because of the effects of contamination on cross-correlation analyses it is important to note that the lag given by a CCF often does {\it not} correspond to a physical scale. In AGNs there must also be substantial contamination from {\it scattered} radiation. This could be an explanation of the general smoothness of optical light curves. Since the albedo of scatterers is largely wavelength independent, this will not cause wavelength-dependent lags, but the contamination can be detected through lags in the polarized flux \citep{shoji05}. \acknowledgments This research has been supported by the National Science Foundation through grant AST 03-07912 and by the Space Telescope Science Institute through grant AR-09926.01.
\section{Additional Definitions} \new{Let us fix ${\cal{K}}=({\cal{T}}, {\cal{A}})$.} A \emph{homomorphism} from interpretation ${\cal{I}}$ to interpretation ${\cal{J}}$, written as $h : {\cal{I}} \to {\cal{J}}$, is a function $h : \Delta^{\cal{I}} \to \Delta^{\cal{J}}$ that preserves roles, concepts, and individual names: that is, for all $r \in \ensuremath{\mn{N_{\mn{R}}}}$, $(h(d), h(e)) \in r^{\cal{J}}$ whenever $(d,e) \in r^{\cal{I}}$, for all $A \in \ensuremath{\mn{N_{\mn{C}}}}$, $h(d) \in A^{\cal{J}}$ whenever $d \in A^{\cal{I}}$, and $h(a)= a$ for all \new{$a \in \mn{ind}({\cal{A}})$}. \section{Proofs} \lemabox* \begin{proof} Let ${\cal{I}}$ be a finite model of ${\cal{K}} = ({\cal{T}}, {\cal{A}})$ such that ${\cal{I}}\not\models\Phi$. We can think of ${\cal{A}}$ as an interpretation with domain \new{$\mn{ind}({\cal{A}})$}. Then, ${\cal{I}}$ contains a subinterpretation ${\cal{I}}'$ that is an isomorphic copy of ${\cal{A}}$, except that the extension of concepts over \new{$\mn{ind}({\cal{A}})$} is kept as in ${\cal{I}}$. Let ${\cal{J}}$ be the interpretation obtained by starting from ${\cal{I}}'$ and for each \new{$a \in \mn{ind}({\cal{A}})$}, adding an isomorphic copy ${\cal{I}}_a$ of ${\cal{I}}$ sharing only $a$ with ${\cal{I}}'$. Clearly, ${\cal{J}}$ is a model of ${\cal{K}}$ and ${\cal{J}} \not\models\Phi$, because ${\cal{I}}$ is a homomorphic image of ${\cal{J}}$. Note also that ${\cal{I}}_a\models {\cal{T}}$ for all \new{$a\in\mn{ind}({\cal{A}})$}. This shows that it suffices to look for countermodels that are unions of a \emph{core} interpretation ${\cal{J}}'$ that is a copy of ${\cal{A}}$ up to the interpretations of concept names, and a collection of disjoint \emph{peripheric} models ${\cal{J}}_a$ of ${\cal{T}}$ such that $\Delta^{{\cal{J}}_a} \cap \Delta^{{\cal{J}}'} = \{a\}$ for \new{$a \in \mn{ind}({\cal{A}})$}. The algorithm iterates through all possible core models ${\cal{J}}'$.
For each ${\cal{J}}'$ it needs to decide if there exist peripheric models ${\cal{J}}_a$ for \new{$a\in\mn{ind}({\cal{A}})$} such that no partial match $\pi$ of $\varphi \in \Phi$ in ${\cal{J}}'$ can be extended to a full match of $\varphi$ in the whole ${\cal{J}}$. For this it is enough to know if $({\cal{T}},{\cal{A}}')\models_{\mathsf{fin}} \Phi'$ where ${\cal{A}}'$ ranges over trivial ABoxes using a fixed individual $a$ and concept names from $\mn{CN}({\cal{K}})$, and $\Phi'$ ranges over sets of CRPQs $\varphi_U$ for $\varphi\in\Phi$ and $U \subseteq \textit{var}(\varphi)$ defined as follows. The CRPQ $\varphi_U$ is obtained from $\varphi$ by \begin{itemize} \item dropping all atoms that involve no variable from $U$, as well as all edge atoms involving a variable not in $U$; \item replacing each ${\cal{B}}_{q,q'}(x,x')$ such that $x\in U$ and $x'\notin U$ with ${\cal{B}}_{q,p}(x,a)$ for some $p$, and each ${\cal{B}}_{q,q'}(x,x')$ such that $x\notin U$ and $x' \in U$ with ${\cal{B}}_{p,q'}(a,x')$ for some $p$. \end{itemize} Note that $|\Phi'| = 2^{\mathrm{poly}(\|\Phi\|)}$ but all CRPQs in $\Phi'$ have size bounded by $\max_{\varphi \in\Phi} |\varphi|$ and the underlying semiautomaton ${\cal{B}}$ is not altered. The number of possible choices of ${\cal{A}}'$ and $\Phi'$ is \[2^{|\mn{CN}({\cal{K}})|} \cdot 2^{2^{\mathrm{poly}(\|\Phi\|)}}\,.\] The number of distinguishable choices for each peripheric model ${\cal{J}}_a$ is \[2^{2^{\mathrm{poly}(\|\Phi\|)}}\,.\] This gives up to \[2^{|\mn{CN}({\cal{K}})|\cdot|\new{\mn{ind}({\cal{A}})}|}\cdot 2^{2^{\mathrm{poly}(\|\Phi\|)}\cdot|\new{\mn{ind}({\cal{A}})}|} = 2^{\mathrm{poly}(\|{\cal{K}}\|)\cdot 2^{\mathrm{poly}(\|\Phi\|)}}\] choices for the algorithm. For each choice there are $|\Phi|\cdot|\new{\mn{ind}({\cal{A}})}|^{O(m)}$ partial matches to consider.
The cost of verifying a single match is polynomial in the size of ${\cal{J}}'$ and the size of a single $\Phi'$; that is, $\mathrm{poly}(\|{\cal{K}}\|,2^{\mathrm{poly}(\|\Phi\|)})$. Overall, the complexity of the algorithm is \[2^{2^{\mathrm{poly}(\|\Phi\|)}\cdot\mathrm{poly}(\|{\cal{K}}\|)}\,.\qedhere\] \end{proof} \lemlevels* \begin{proof} ($\Rightarrow$) The run of ${\cal{B}}$ from $q$ to $q'$ induces a thread in the run of $\widehat{\cal{B}}$. We can split the thread into segments that stay at the same level, giving levels $\ell_1, \ell_2, \dots, \ell_k$, separated by transitions that decrease the level. Clearly, $1 \leq k \leq n$. The last positions on the subsequent levels give $j_1, j_2, \dots, j_k$. It is easy to check that the corresponding states satisfy the conditions specified in the lemma. ($\Leftarrow$) The first and third conditions, combined with the fact that threads are non-increasing, imply that between indexes $j_i+1$ and $j_{i+1}$ the thread---from $\delta(q_i, w[j_i+1])$ in $\mathbf{p}_{j_i+1}$ to $q_{i+1}$ (or $q'$ for $i=k-1$) in $\mathbf{p}_{j_{i+1}}$---stays at the same level; similarly for the prefix and suffix. Combined with the transitions mentioned in the second condition, they give a single thread witnessing a run of ${\cal{B}}$ from $q$ to $q'$ on $w$. \end{proof} \lemaxioma* \begin{proof} It is straightforward to express the condition that no element has incoming edges over different roles from $\mn{rol}({\cal{K}})$. Pick a fresh concept name $A_r$ for each $r\in\mn{rol}({\cal{K}})$ and include axioms $\top \sqsubseteq \forall r.A_r$ and $A_r \sqcap A_s \sqsubseteq \bot$ for all $r,s\in\mn{rol}({\cal{K}})$ with $r\neq s$. We next provide an alternative axiomatization of \begin{eqnarray} C_\mathbf{p} &\sqsubseteq &\forall r.
C_{\hat \delta(\mathbf{p},r)} \label{eq:trans}\\ C_\mathbf{p} \sqcap C_{\mathbf{p}'} &\sqsubseteq &\bot \label{eq:consistent} \\ \top &\sqsubseteq &\bigsqcup_{\mathbf{p}\in \widehat Q} C_{\mathbf{p}} \label{eq:states} \end{eqnarray} To encode condition \eqref{eq:states} we include the following axioms for every $\ell \in\{1,\dots,n\}$. \[\top \sqsubseteq \bigsqcup_{q\in Q} C_{q,\ell}\] and \[C_{q,\ell} \sqcap C_{q',\ell} \sqsubseteq \bot\] with $q,q' \in Q$ such that $q\neq q'$. These, together with the following, will enforce condition \eqref{eq:consistent}. \[C_{q,\ell} \sqcap C_{q,\ell'} \sqsubseteq \bot\] for each pair $\ell,\ell'$ with $\ell \neq \ell'$ and each $q \in Q$. To ensure that the transitions of $\widehat {\cal{B}}$ are faithfully represented, we will use auxiliary concepts $A^r_{i,j}, D^r_{i,j}, B^r_i$ with $i,j\in\{1, \dots ,n\}$, and $r$ a role name. Let $\mathbf{p}=(p_1, \dots, p_n) \in \widehat Q$, and let $\hat\delta(\mathbf{p},r)= (p'_1, \dots, p'_n)$, for some arbitrary (but fixed) role name $r$. We will use concept $A^r_{i,j}$ to indicate that $\delta(p_i,r) = p'_k$ for some $k \in \{1,2,\dots, j\}$. Further, $D^r_{i,j}$ will indicate that $\delta(p_i,r)= p'_j$. Finally, $B^r_\ell$ will help to indicate that the level of $\mathbf{p}'$ is equal to $\ell$. More precisely, if an element $d$ in the domain encodes the state $\mathbf{p}$, and the level of $\hat\delta(\mathbf{p},r)$ is $\ell$, then every $r$-successor of $d$ must satisfy $B^r_\ell$ (see \eqref{eq:levels} below). We have the following axioms: \begin{align} \top &\sqsubseteq A^r_{1,1} \label{eq:ini} \end{align} For every triple $q,\ell,\ell'$ with $q\in Q$ and $\ell,\ell' \in \{1,\dots, n\}$ \begin{align} C_{q,\ell} \sqcap A^r_{\ell,\ell'} &\sqsubseteq \forall r.
(\bigsqcup_{1\leq k \leq \ell'}C_{\delta(q,r),k}) \label{eq:t1} \\ C_{q,\ell} \sqcap \exists r.C_{\delta(q,r),\ell'} &\sqsubseteq D^r_{\ell,\ell'} \label{eq:rec1} \end{align} For every $\ell,k$ with $1\leq \ell, k < n$: \begin{align} D^r_{\ell,k} \sqcap A^r_{\ell,k} &\sqsubseteq A^r_{\ell+1,k+1} \label{eq:updt1} \\ D^r_{\ell,k'} \sqcap A^r_{\ell, k} &\sqsubseteq A^r_{\ell+1,k} \label{eq:updt12} \quad \text{ for every } k' < k \end{align} For every $k \in \{1, \dots, n\}$, and every $\ell < k$, \begin{align} D^r_{n,\ell} \sqcap A^r_{n,k} &\sqsubseteq \forall r. B^r_k \label{eq:levels} \end{align} And for every $\ell < n$, \begin{align} B^r_\ell &\sqsubseteq (\bigsqcup_{i<j} (C_{q_i,\ell} \sqcap C_{q_j,\ell+1})) \label{eq:fill} \end{align} Finally, we require that for every $\ell,k,k' \in \{1, \dots , n\}$ such that $k\neq k'$: \begin{align} A^r_{\ell,k} \sqcap A^r_{\ell,k'} &\sqsubseteq \bot \label{eq:ic1}\\ D^r_{\ell,k} \sqcap D^r_{\ell,k'} &\sqsubseteq \bot \\ B^r_{k} \sqcap B^r_{k'} & \sqsubseteq \bot \label{eq:ic2} \end{align} Intuitively, the axioms encode the listing order of $(\delta(p_1,r), \dots, \delta(p_n,r))$ in $\hat \delta(\mathbf{p},r)= (p'_1, \dots, p'_n)$ as follows. \eqref{eq:ini} encodes that the first position of the tuple is the only one available for $\delta(p_1,r)$. Further, by \eqref{eq:t1} we have that if $q=p_\ell$ and the next available position for $p_\ell$ is $\ell'$, then $\delta(p_\ell,r) = p'_{k}$ for some $1\leq k \leq \ell'$, which means in particular that $\delta(p_1,r) = p'_1$. As mentioned above, \eqref{eq:rec1} is used for ``recording'' the level of $\delta(p_\ell,r)$ using the concept $D^r_{\ell, \ell'}$. That is, $D^r_{\ell, \ell'}$ holds whenever $\delta(p_\ell,r)= p'_{\ell'}$. Clearly, if $\delta(p_\ell,r)= p'_k$, and its next available position was $k$, then the next available position for $\delta(p_{\ell+1},r)$ is $k+1$. This situation is captured by \eqref{eq:updt1}.
On the other hand, if $\delta(p_\ell,r)$ does not take position $k$ (which is only possible if $\delta(p_\ell,r) = p'_{k'}$, with $k' < k$) then $k$ is available for $\delta(p_{\ell+1},r)$, as captured by \eqref{eq:updt12}. Now, we need to account for the positions not taken by any $\delta(p_i,r)$. By the way the positions are taken, it is enough to record the smallest position unused. This information is encoded using the concept $B^r_k$. Thus, if $A^r_{n,k}$ and $D^r_{n,\ell}$, for some $\ell < k$, are both satisfied then $k$ is the next available position for listing the remaining states ordered as in $Q = q_1, \dots, q_n$. This is captured by~\eqref{eq:levels} and \eqref{eq:fill}. Finally, the role of CIs \eqref{eq:ic1}--\eqref{eq:ic2} is to ensure the consistency of the information encoded. With this intuition in mind, it is not difficult to see that \eqref{eq:trans} is faithfully encoded. \end{proof} \lemreach* \begin{proof} ($\Rightarrow$) This is obvious because the definition of ${\cal{B}}^{\cal{I}}_{q,q'}$ requires a path from $e$ to $e'$ in ${\cal{I}}$. ($\Leftarrow$) Consider a path from $e$ to $e'$ in ${\cal{I}}$ and the corresponding run of the automaton $\widehat{\cal{B}}$. We will focus on the thread starting in the state $q$. It starts on level $\ell$ because $e \in C^{\cal{I}}_{q,\ell}$ and cannot drop below level $\ell$ because ${\cal{I}}$ is a level-$\ell$ interpretation. Hence, the thread ends in $e'$ on level $\ell$. But the state in $e'$ on level $\ell$ is $q'$. Thus, this thread corresponds to a correct run of ${\cal{B}}$ from $q$ to $q'$. \end{proof} \lemdecoratemodel* \begin{proof} ${\cal{I}} \times \widehat{\cal{B}}$ is a $\widehat{\cal{B}}$-decorated interpretation by construction. The mapping $(e, r, \mathbf{p}) \mapsto e$ from ${\cal{I}} \times \widehat{\cal{B}}$ to ${\cal{I}}$ is a homomorphism.
Should some $\phi\in\Phi$ be matched in ${\cal{I}} \times \widehat{\cal{B}}$, one could compose the match with the homomorphism above to obtain a match in ${\cal{I}}$. Assume that ${\cal{I}} \models {\cal{K}}$. Satisfaction of the ABox transfers directly to ${\cal{I}} \times \widehat{\cal{B}}$. Let us see that ${\cal{I}} \times \widehat{\cal{B}}$ is a model of the TBox of ${\cal{K}}$. For CIs of the forms $\bigcap_i A_i \sqsubseteq \bigsqcup_j B_j$ and $A \sqsubseteq \forall r.B$ this follows from the existence of a homomorphic mapping from ${\cal{I}} \times \widehat{\cal{B}}$ to ${\cal{I}}$, described above. For CIs of the form $A \sqsubseteq \exists r.B$ the reason is that the transition function of $\widehat {\cal{B}}$ is defined for each state $\mathbf{p}$ of $\widehat{\cal{B}}$ and each $r\in\mn{rol}({\cal{K}})$, thus each $r$-edge originating in $e$ will have its counterpart originating in $(e,s,\mathbf{p})$ for each $s$ and $\mathbf{p}$. \end{proof} \lemconsistent* \begin{proof} By contradiction, suppose that ${\cal{I}}\models\Phi$. Then there exists a match $\pi$ of some $\phi \in \tilde \Phi$ in ${\cal{I}}$. Take any element $e$ in the image of $\pi$. The definition of consistency applied for the trivial partition with $\varphi = \varphi'$ implies that $e \in A^{\cal{I}}_{\varphi,V}$ for $V=\pi^{-1}(a)$. \end{proof} \lemconsistentcomp* \begin{proof} The left-to-right implication is obvious.
For the right-to-left implication, assume the contrary: all bags and edge-bags are $\ell$-consistent, but there is: \begin{itemize} \item a fragment $\varphi$ of $\Phi$, \item a partition of $\varphi$ into a CRPQ $\varphi'$ of level $\ell$ and fragments $\varphi_1, \varphi_2, \dots, \varphi_k$, and sets $V$, $V_1, V_2, \dots, V_k$ such that: \begin{itemize} \item $\textit{var}(\varphi_i)\cap \textit{var}(\varphi_j) = \emptyset$ for $i\neq j$, \item $V_i = \textit{var}(\varphi_i) \cap \textit{var}(\varphi')$, \item $\emptyset \neq V\subseteq \textit{var}(\varphi) \cap \textit{var}(\varphi')$; \end{itemize} \item a match $\pi$ for $\varphi'$ in ${\cal{I}}$ and functions $\kappa$, $\kappa_1, \dots, \kappa_k$ such that: \begin{itemize} \item $\pi(V_i) = \{e_i\} \subseteq \big(A_{\varphi_i, V_i}^{\kappa_i}\big)^{{\cal{I}}}$ for all $i$, \item $\pi(V) = \{e\} \not\subseteq (A_{\varphi, V}^\kappa)^{{\cal{I}}}$, \item $\kappa_i(x) \leq \ell$ for all $x \in \textit{var}(\varphi_i)\,$, \item $\kappa(x) = \kappa_i(x)$ for all $x \in \textit{var}(\varphi_i) \setminus V_i\,$, \item $\kappa (x) = \ell$ for all $x \in \textit{var}(\varphi) \cap \textit{var}(\varphi')\,$. \end{itemize} \end{itemize} For each RPQ atom ${\cal{B}}_{q,q'}(x, y)$ in $\varphi'$, choose a path from $\pi(x)$ to $\pi(y)$ witnessing that the atom is satisfied. Pick the parameters above and the witnessing paths for which $\pi$ spans through the smallest number of bags and edge-bags (we count a bag if some edge atom in $\varphi'$ is mapped by $\pi$ to an edge of this bag, or if some witnessing path shares an edge with this bag). Note that the match of $\varphi'$ given by $\pi$ and the witnessing paths is connected. This is because the whole query $\varphi$ is connected and because $\pi(V_i)$ consists of just one element for each $i$ -- the query $\varphi'$ itself might not be connected, although it would be if we equated all variables in each $V_i$.
The number of bags $\pi$ spans through must be at least two: were it contained in one bag, this bag would be inconsistent. Essentially, we will show that we can derive the fact that $e \in (A_{\varphi, V}^\kappa)^{{\cal{I}}}$ from $\ell$-consistency conditions for some matches spanning through smaller number of bags. Let $b$ be the bag of $e$ (not edge-bag, so it is unique). The match $\pi$ necessarily spans through the bag $b$: otherwise no edge or RPQ is matched inside $b$, so $\varphi'$ consists only of unary atoms, which means that $k = 1$, $\varphi_1 = \varphi$ and $e = e_1$, which easily leads to contradiction. Let $\psi', \psi_1, \dots, \psi_m$ be a partition of $\varphi$ taking into account the bag $b$, match $\pi$ and the chosen witnessing paths, where $\psi_1, \dots, \psi_m$ are fragments. That is, in the definition of a partition: \begin{itemize} \item the initial set $X'$ is the set of variables of $\varphi$ which are mapped by $\pi$ to the bag $b$ (note that $\pi$ is defined only on $\textit{var}(\varphi')$); \item each RPQ is split (or not) in an appropriate way, depending on whether the corresponding witnessing path has zero, one, or two endpoints in the bag $b$, and the fresh variables are assigned level and state according to the last (or first) elements in $b$ on the corresponding witnessing paths; \item the sets $X_i$ are chosen in the way which results in $\psi_i$ being fragments. \end{itemize} Let $U_i = \textit{var}(\psi_i) \cap \textit{var}(\psi')$ and $\pi'$ be a match agreeing with $\pi$ on $\textit{var}(\psi')\cap\textit{var}(\varphi')$, extended with fresh variables mapped to the appropriate elements on witnessing paths. The set $\pi'(U_i)$ consists of exactly one element, call it $e'_i$. 
Indeed, for some $i$ there is $j$ such that $\psi_i = \varphi_j$ and $U_i = V_j$, and so $\pi'(U_i) = \pi(V_j) = \{e_j\}$; and for other $i$, $\psi_i$ consists of some part of $\varphi'$, possibly merged with some $\varphi_j$, with all the variables shared with $\psi'$ being matched to an endpoint of an edge leaving or entering bag $b$. We claim that for each $i$, $e'_i \in A^{\lambda_i}_{\psi_i, U_i}$ for some appropriate $\lambda_i$, such that we can use $\ell$-consistency for $b$ to show that $e \in (A_{\varphi, V}^\kappa)^{{\cal{I}}}$. To show that, we need to relate all $\varphi_j$ to some $\psi_i$. Specifically, for each $i \in \{1, \dots, m\}$, consider all $j \in \{1, \dots, k\}$ such that $\textit{var}(\psi_i) \cap \textit{var}(\varphi_j) \neq \emptyset$. Note that variables in all $\psi', \psi_1, \dots, \psi_m$ are exactly the variables of $\varphi$, along with some fresh variables splitting some RPQs, and analogously for $\varphi', \varphi_1, \dots, \varphi_k$ (and each of the fragments $\varphi_j$ shares at least one variable with $\varphi$). Since $\psi'$ can be seen as a part of $\varphi'$ \big(in particular, $\textit{var}(\varphi)\cap\textit{var}(\psi') \subseteq \textit{var}(\varphi)\cap\textit{var}(\varphi')$\big), it is easy to see that each $j$ will be assigned to some $i$. For each $i$, $\psi_i$ can be (yet again) partitioned into $\psi'_i$ (intuitively being the common part of $\varphi'$ and $\psi_i$) and $\varphi_j$ for all $j$ assigned to this $i$. There is also a match $\pi_i$ agreeing with $\pi$ on $\textit{var}(\psi_i')\cap\textit{var}(\varphi')$, as usual, extended with fresh variables from $U_i$ mapped to the appropriate elements on witnessing paths. 
Since this match spans through fewer bags than $\pi$ (it does not span through the bag $b$, as all edges and parts of RPQs that were mapped inside $b$ are in $\psi'$), the $\ell$-consistency condition must be satisfied for this choice of parameters; since $\pi(V_j) = \{e_j\} \subseteq \big(A_{\varphi_j, V_j}^{\kappa_j}\big)^{{\cal{I}}}$ for all $j$, we get that $e'_i \in A^{\lambda_i}_{\psi_i, U_i}$ for $\lambda_i$ which agrees with $\kappa_j$ on all $\textit{var}(\varphi_j) \setminus V_j$, and is equal to $\ell$ on all other variables. Using this fact for all $i$ and using $\ell$-consistency in the bag $b$ for the match $\pi'$, we get that $e \in (A_{\varphi, V}^\kappa)^{{\cal{I}}}$ for $\kappa$ which agrees with each $\kappa_i$ on all variables from $\varphi_i$ apart from $V_i$ and is equal to $\ell$ on all other variables, which is exactly what needed to be shown. \end{proof} \lemunravellingconnected* \begin{proof} This is proved by routine unravelling. Let ${\cal{I}}$ be a finite $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$. For each element $d$ in ${\cal{I}}$ define ${\cal{I}}_d$ as the subinterpretation of ${\cal{I}}$ obtained by restricting the domain of ${\cal{I}}$ to the elements in the maximal strongly connected subset of $\Delta^{\cal{I}}$ that contains $d$. \new{Recall that the ABox ${\cal{A}}$ of ${\cal{K}}$ is trivial. Let $a$ be the unique element of $\mn{ind}({\cal{A}})$.} Begin the construction of a tree-like model ${\cal{J}}$ by taking a copy of ${\cal{I}}_a$. Then, as long as there exists an element $e$ in ${\cal{J}}$ and a CI $A \sqsubseteq \exists r. B$ in the TBox of ${\cal{K}}$ such that $e\in A^{\cal{J}}$ but there is yet no $e'\in B^{\cal{J}}$ such that $(e,e')\in r^{\cal{J}}$, find the original $d$ of $e$ in ${\cal{I}}$ and an element $d'\in B^{\cal{I}}$ such that $(d,d')\in r^{\cal{I}}$. Add to ${\cal{J}}$ a copy of ${\cal{I}}_{d'}$ as a new bag, with an $r$-edge from $e$ to the copy of $d'$. 
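The unravelling just described is effectively a worklist algorithm. A minimal sketch under an illustrative encoding of ${\cal{I}}$ (concept extensions as sets, role extensions as sets of pairs, and an assumed SCC oracle `scc_of`); none of these names are part of the formal development:

```python
from collections import deque

def unravel(domain, concepts, roles, scc_of, cis, root):
    """Sketch of the unravelling: start from the SCC of the root and keep
    adding copies of SCCs of I as new bags whenever an existential witness
    for a CI  A <= exists r. B  is missing.  Elements of the tree-like
    model J are pairs (bag_id, original element of I)."""
    bags = {0: set(scc_of(root))}          # bag_id -> set of originals
    cross_edges = []                       # edges between bags
    next_bag = 1
    queue = deque((0, d) for d in bags[0])
    while queue:
        bag, d = queue.popleft()
        for (A, r, B) in cis:              # CI: A <= exists r. B
            if d not in concepts[A]:
                continue
            # already witnessed inside the current bag?
            if any((d, d2) in roles[r] and d2 in concepts[B]
                   for d2 in bags[bag]):
                continue
            # a witness exists in I because I is a model
            d2 = next(x for x in domain
                      if (d, x) in roles[r] and x in concepts[B])
            bags[next_bag] = set(scc_of(d2))   # copy its SCC as a new bag
            cross_edges.append(((bag, d), r, (next_bag, d2)))
            queue.extend((next_bag, x) for x in bags[next_bag])
            next_bag += 1
    return bags, cross_edges
```

The loop terminates on actual models because a missing witness always lies in a strictly lower strongly connected component, mirroring the height bound used in the proof.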
This construction gives a finite interpretation: the height of the tree of bags associated to ${\cal{J}}$ is bounded by the height of the DAG of strongly connected components of ${\cal{I}}$. It is straightforward to check that ${\cal{J}}$ is a level-$\ell$ model of ${\cal{K}}$ modulo ${\cal{E}}$. It is also clear that ${\cal{J}}$ can be mapped homomorphically to ${\cal{I}}$ by mapping each element of ${\cal{J}}$ to its original in ${\cal{I}}$. Because $\ell'$-consistency is defined in terms of forbidden matches, it follows immediately that $\ell'$-consistency of ${\cal{I}}$ implies $\ell'$-consistency of ${\cal{J}}$. \end{proof} \crpqSCCreducts* \begin{proof} Use Lemma~\ref{lem:reach} and the definition of reducts. \end{proof} \lemenvreduct* \begin{proof} The proof will use constructions that are very similar to the ones used in the proof of Lemma~\ref{lem:consistentcomp}. Let us start with the right-to-left implication. Let ${\cal{I}}$ be some $(\ell+1)$-consistent model of ${\cal{K}}$ modulo ${\cal{E}}'$ for some $(\ell+1)$-reduct ${\cal{E}}'$ of ${\cal{E}}$. We will show that the exact same model ${\cal{I}}$ is a strongly $\ell$-consistent model of ${\cal{K}}$ modulo ${\cal{E}}$ (note that strong $\ell$-consistency does not mention concept names from $\mn{CN}^\Phi_{\ell+1}$, so adjusting them is not necessary for this implication).
Assume the contrary; so there is: \begin{itemize} \item a fragment $\varphi$, \item a partition of $\varphi$ into a CRPQ $\varphi'$ of level $\ell$ and fragments $\varphi_1, \varphi_2, \dots, \varphi_k$, and sets $V$, $V_1, V_2, \dots, V_k$ such that: \begin{itemize} \item $\textit{var}(\varphi_i)\cap \textit{var}(\varphi_j) = \emptyset$ for $i\neq j$, \item $V_i = \textit{var}(\varphi_i) \cap \textit{var}(\varphi')$, \item $\emptyset \neq V\subseteq \textit{var}(\varphi) \cap \textit{var}(\varphi')$; \end{itemize} \item a match $\pi$ for some $(\ell+1)$-reduct $\psi'$ of $\varphi'$ in ${\cal{I}}$ and functions $\kappa$, $\kappa_1, \dots, \kappa_k$ such that: \begin{itemize} \item $\pi(V_i) = \{e_i\} \subseteq \big(A_{\varphi_i, V_i}^{\kappa_i}\big)^{{\cal{I}}}$ for all $i$, \item $\pi(V) = \{e\} \not\subseteq (A_{\varphi, V}^\kappa)^{{\cal{I}}}$, \item $\kappa_i(x) \leq \ell$ for all $x \in \textit{var}(\varphi_i)\,$, \item $\kappa(x) = \kappa_i(x)$ for all $x \in \textit{var}(\varphi_i) \setminus V_i\,$, \item $\kappa (x) = \ell$ for all $x \in \textit{var}(\varphi) \cap \textit{var}(\varphi')\,$. \end{itemize} \end{itemize} For each RPQ atom ${\cal{B}}_{q,q'}(x, y)$ in $\psi'$, choose a path from $\pi(x)$ to $\pi(y)$ witnessing that the atom is satisfied. 
When constructing ${\cal{E}}'$, the exact same parameters were considered ($\varphi$, its partition into $\varphi'$, $\varphi_1, \varphi_2, \dots, \varphi_k$, $(\ell+1)$-reduct $\psi'$ of $\varphi'$, the set $V$ and the function $\kappa$), sets $U_1, \dots, U_m$ and fragments $\psi_1, \dots, \psi_m$ were defined (note that for a fixed match $\pi$ and witnessing paths, partial matches and partial witnessing paths for $\psi_i$ can also be obtained, for the parts common with $\psi'$), and a choice was made: \begin{itemize} \item either pick $i$ such that $U_i = \emptyset$ and remove all unary types that contain any $A^{\lambda_i}_{\psi_i,W_i}$ with $W_i\subseteq \textit{var}(\psi_i) \cap \textit{var}(\psi')$, $\lambda_i\big(\textit{var}(\psi_i) \cap \textit{var}(\psi')\big)=\{\ell+1\}$, and $\lambda_i(x)=\kappa(x)$ for all $x \in \textit{var}(\psi_i)\setminus \textit{var}(\psi')$ ; \item or remove all unary types that contain some $A^{\lambda_i}_{\psi_i,U_i}$ with $\lambda_i\big(\textit{var}(\psi_i) \cap \textit{var}(\psi')\big)=\{\ell+1\}$ and $\kappa(x)=\lambda_i(x)$ for all $x \in \textit{var}(\psi_i)\setminus \textit{var}(\psi')$, for each $i$ such that $U_i \neq \emptyset$, but do not contain $A^\kappa_{\varphi,V}$. \end{itemize} For each $i \in \{1, \dots, m\}$ consider a partition of $\psi_i$ into $\psi'_i$ (intuitively being the common part of $\psi_i$ and $\psi'$) and fragments $\varphi_j$ for all $j$ such that $\textit{var}(\varphi_j)\cap\textit{var}(\psi_i) \neq \emptyset$, and a match $\pi_i$ of $\psi'_i$ agreeing with $\pi$ and the choice of witnessing paths. If the first choice was made, take the chosen $i$, choose any element $e'$ in the image of $\textit{var}(\psi_i)\cap\textit{var}(\psi')$ under $\pi_i$, and let $W_i$ be the preimage of $e'$ under $\pi_i$. 
By $(\ell+1)$-consistency, we see that $e' \in A^{\lambda_i}_{\psi_i, W_i}$, where $\lambda_i$ agrees with $\kappa_j$ (and $\kappa$) on $\textit{var}(\varphi_j)\setminus V_j$ for all $j$ such that $\textit{var}(\varphi_j)\cap\textit{var}(\psi_i) \neq \emptyset$, and is equal to $\ell+1$ on all other variables. However, all unary types containing this concept were forbidden in ${\cal{E}}'$, contradicting the assumption that ${\cal{I}}$ is a model of ${\cal{K}}$ modulo ${\cal{E}}'$. If the second choice was made, use $(\ell+1)$-consistency for each $\psi_i$ with $U_i \neq \emptyset$ and its partition as described above, which proves that $e \in A^{\lambda_i}_{\psi_i, U_i}$ for some $\lambda_i$ agreeing with $\kappa_j$ (and $\kappa$) on $\textit{var}(\varphi_j)\setminus V_j$ for all $j$ such that $\textit{var}(\varphi_j)\cap\textit{var}(\psi_i) \neq \emptyset$, and equal to $\ell+1$ on all other variables. Thus, because of the environment ${\cal{E}}'$, $A^\kappa_{\varphi, V}$ must also be satisfied. (Note that the union of $\textit{var}(\psi_i) \setminus \textit{var}(\psi')$ for all $i$ with $U_i \neq \emptyset$ is equal to the union of $\textit{var}(\varphi_j) \setminus V_j$ for all $\varphi_j$ which are not disjoint with all $\psi_i$ considered here). Now we will prove the left-to-right implication. Assume that ${\cal{I}}$ is a strongly $\ell$-consistent model of ${\cal{K}}$ modulo ${\cal{E}}$. Let ${\cal{I}}'$ be an interpretation that agrees with ${\cal{I}}$ over all role and concept names except $\mn{CN}_{\ell+1}^\Phi$, and in which the interpretation of these concept names is \emph{correct} in the following sense. First, for each $e\in \left( A^{\kappa}_{\varphi,V}\right)^{{\cal{I}}'}$ with $A^{\kappa}_{\varphi,V}\in\mn{CN}_{\ell}^\Phi$ we let $e\in \left( A^{\kappa'}_{\varphi,V}\right)^{{\cal{I}}'}$ where $\kappa'(x)=\ell+1$ for $x\in V$ and $\kappa'(x)=\kappa(x)$ for $x \in \textit{var}(\varphi)\setminus V$.
Then, we add element $e$ to $(A^\kappa_{\varphi, V})^{{\cal{I}}'}$ with $A^\kappa_{\varphi, V}\in\mn{CN}^{\Phi}_{\ell+1}$ if and only if there exists some partition of $\varphi$ into $\varphi', \varphi_1, \dots, \varphi_k$, sets $V_1, \dots, V_k$ and a match $\pi$ for $\varphi'$ in ${\cal{I}}'$ with all requirements exactly as in the definition of $(\ell+1)$-consistency, in which additionally $\kappa_i(x) \leq \ell$ for all $x \in \textit{var}(\varphi_i)\setminus V_i$ and $\kappa_i(x) = \ell+1$ for all $x \in V_i$, for $i \in \{1, \dots, k\}$. Note that this does not leave many choices regarding the partition: all variables $x$ such that $\kappa(x) = \ell+1$ must be in $\varphi'$, all other variables of $\varphi$ must be outside $\varphi'$, so it is even known which RPQ atoms are split; only the levels and states of the fresh variables might differ between different partitions. The interpretation ${\cal{I}}'$ is $(\ell+1)$-consistent, since any partition of some $\varphi$ and a corresponding match (as in the definition of $(\ell+1)$-consistency) for which $\kappa_i(V_i) = \{\ell+1\}$ for some $i$, can be ``unwrapped'' to ones satisfying the correctness condition, as follows. If $e_i \in A^{\kappa_i}_{\varphi_i, V_i}$ and $\kappa_i(V_i) = \{\ell+1\}$, by the correctness condition, there is a partition of $\varphi_i$ and an appropriate match witnessing that. This partition and match of $\varphi_i$ can be ``merged'' into the partition and match of $\varphi$. Applying this procedure for all $i$ such that $\kappa_i(V_i) = \{\ell+1\}$ results in a partition and a match of $\varphi$ to which the correctness condition can be applied. Now we will construct an appropriate $(\ell+1)$-reduct ${\cal{E}}'$ of ${\cal{E}}$.
Consider some parameters $\kappa$, $\varphi, \varphi', \varphi_1, \dots, \varphi_k, \psi'$, $V$, define $\psi_1, \dots, \psi_m, U_1, \dots, U_m$ as when constructing an $(\ell+1)$-reduct of the environment; recall that $\kappa$ uses levels at most $\ell$, and $\kappa\big(\textit{var}(\varphi)\cap\textit{var}(\varphi')\big) = \{\ell\}$. We need to choose one of the two options mentioned in the construction. If for some $i$ such that $U_i = \emptyset$, the interpretation ${\cal{I}}'$ does not violate the restrictions imposed by the first choice (that is, there are no elements in concept $A^{\kappa_i}_{\psi_i,W_i}$ in ${\cal{I}}'$ for all $W_i$ and $\kappa_i$ as defined during the construction of a reduct), choose this option and this $i$. Otherwise, choose the second option, knowing that in ${\cal{I}}'$ for each $i$ such that $U_i = \emptyset$ there is some element $e_i \in A^{\kappa_i}_{\psi_i,W_i}$ for some $W_i$ and $\kappa_i$ such that $W_i\subseteq \textit{var}(\psi_i) \cap \textit{var}(\psi')$, $\kappa_i\big(\textit{var}(\psi_i) \cap \textit{var}(\psi')\big)=\{\ell+1\}$, and $\kappa_i(x)=\kappa(x)$ for all $x \in \textit{var}(\psi_i)\setminus \textit{var}(\psi')$. We claim that ${\cal{I}}'$ is a model of ${\cal{K}}$ modulo the environment ${\cal{E}}'$ constructed as above. Suppose this is not the case. By the construction of ${\cal{E}}'$, the interpretation ${\cal{I}}'$ satisfies all the requirements imposed by the first choice. Hence, it must be the case that ${\cal{I}}'$ does not satisfy the requirements imposed by the second choice; that is, in ${\cal{I}}'$ there is an element $e$ in concepts $A^{\kappa_i}_{\psi_i,U_i}$ where $\kappa_i\big(\textit{var}(\psi_i) \cap \textit{var}(\psi')\big)=\{\ell+1\}$ and $\kappa(x)=\kappa_i(x)$ for all $x \in \textit{var}(\psi_i)\setminus \textit{var}(\psi')$, for all $i$ such that $U_i \neq \emptyset$, but not in $A^\kappa_{\varphi,V}$. 
For each $i \in \{1, \dots, m\}$ and the witnessing element $e_i \in \left(A^{\kappa_i}_{\psi_i,W_i}\right)^{{\cal{I}}'}$ (if $U_i \neq \emptyset$, we let $e_i = e$ and $W_i = U_i$), take the partition and match $\pi_i$ guaranteed by the correctness condition; all variables from $\textit{var}(\psi_i) \cap \textit{var}(\psi')$ are guaranteed to be in the domain of $\pi_i$. Merge all these matches together into one match $\pi'$. It is nearly a match of $\psi'$; since $\kappa_i$ determines for each $i$ which RPQ atoms in $\psi_i$ will be split in the partition, and it agrees with $\kappa$ on this matter, we can identify the splitting variables in $\psi'$ and in the domain of $\pi_i$; but these variables \big($\textit{var}(\psi')\setminus \bigcup_i\textit{var}(\psi_i)$\big) might have different states and levels assigned in $\pi'$; since ${\cal{I}}$ is a level-$\ell$ interpretation, the assigned levels are at most $\ell$. Let $\tilde\psi'$ be an $(\ell+1)$-reduct of $\psi'$ with states and levels of these variables adjusted to match the ones in $\pi'$. Let $\tilde\pi$ be $\pi'$ adjusted to $\tilde\psi'$ (it is easy to modify a match for a CRPQ to a match of its $(\ell+1)$-reduct). We get that there is a partition of $\varphi$ into $\tilde\varphi'$, $\tilde\varphi_1, \dots, \tilde\varphi_h$, sets $\tilde{V_1}, \dots, \tilde{V_h}$ and the match $\tilde\pi$ of an $(\ell+1)$-reduct $\tilde\psi'$ of $\tilde\varphi'$ to ${\cal{I}}'$, where $\tilde\pi(\tilde{V_i}) = \{\tilde{e_i}\}$, $\tilde{e_i} \in \left(A^{\lambda_i}_{\tilde{\varphi_i}, \tilde{V_i}}\right)^{{\cal{I}}'}$, and $\lambda_i$ do not use level $\ell+1$. Using strong $\ell$-consistency of ${\cal{I}}$ with these parameters gives us that $e \in \left(A^\lambda_{\varphi,V}\right)^{\cal{I}}$ for some $\lambda$. One just needs to show that $\lambda(x) = \kappa(x)$ for all $x \in \textit{var}(\varphi)$ to arrive at a contradiction. 
The matches $\pi_i$ obtained from the correctness condition for $e_i \in \left(A^{\kappa_i}_{\psi_i, W_i}\right)^{{\cal{I}}'}$ guarantee that $\kappa_i(x) = \kappa(x)$ for all $x \in \textit{var}(\psi_i) \setminus \textit{var}(\psi')$. We also know that all $x \in \textit{var}(\psi_i) \cap \textit{var}(\psi')$ are in the domain of $\pi_i$, so (by the above use of strong $\ell$-consistency) $\lambda(x) = \ell$ for $x \in \textit{var}(\psi_i) \cap \textit{var}(\psi')$, and $\lambda(x) = \kappa(x)$ for all other variables of $\varphi$, which is exactly what is needed. \end{proof} \lemalgoconnected* \begin{proof} By Lemma~\ref{lem:strongly-consistent} it suffices to decide if there is a finite tree-like level-$\ell$ model of ${\cal{K}}=({\cal{T}},{\cal{A}})$ modulo ${\cal{E}}=(\Theta,\varepsilon)$ whose edge-bags are $\ell$-consistent and bags are strongly $\ell$-consistent. Our algorithm will compute the set of unary types that are realizable in such interpretations of increasing height. Here, by the height of a tree-like interpretation we mean the number of edges on the longest path from the root bag to a leaf bag. The algorithm begins from the empty set of types $\Phi_{0} = \emptyset$. In round $h=1, 2, \dots$, based on the set $\Phi_{h-1}$ it computes the set $\Phi_{h}$ of types that can be realized in models of height $h-1$. The type $\tau$ is added to $\Phi_h$ iff there exists a finite strongly $\ell$-consistent level-$\ell$ model of ${\cal{K}}_\tau=({\cal{T}},{\cal{A}}_\tau)$ modulo ${\cal{E}}_h$ where \[{\cal{A}}_\tau = \left\{A(a) \bigm| A\in\tau\right\}\] for a designated $a\in\mn{N_I}$ and ${\cal{E}}_h$ is defined based on ${\cal{E}}$ and $\Phi_{h-1}$ as explained below; for the existence test we use Lemma~\ref{lem:envreduct}. For unary types $\tau_1, \tau_2$ and a role name $r$ let ${\cal{J}}_{(\tau_1,r,\tau_2)}$ be the edge-bag built from an element $e_1$ of type $\tau_1$ and an element $e_2$ of type $\tau_2$ connected by an $r$-edge.
We let ${\cal{E}}_h=(\Theta,\varepsilon_h)$ and include $(r,B)$ in $\varepsilon_h(\tau_1)$ for $\tau_1\in\Theta$ iff either $(r,B)\in\varepsilon(\tau_1)$ or there exists $\tau_2\in\Phi_{h-1}$ such that $B\in\tau_2$ and ${\cal{J}}_{(\tau_1,r,\tau_2)}$ is an $(\ell,\ell)$-interpretation and satisfies all CIs of the form $A'\sqsubseteq \forall r.B'$ in ${\cal{T}}$. Recall that the first condition amounts to checking that ${\cal{J}}_{(\tau_1,r,\tau_2)}$ is ${\cal{B}}$-decorated (${\cal{J}}_{(\tau_1,r,\tau_2)}\models\widehat{\cal{T}}_{\cal{B}}$), level-$\ell$, and $\ell$-consistent. Because the computed sets satisfy \[\Phi_0 \subseteq \Phi_1 \subseteq \dots \subseteq \Phi_{h-1} \subseteq \Phi_{h} \subseteq \dots ,\] after at most $2^{|\mn{CN}({\cal{K}})|+2^{\mathrm{poly}(\|{\cal{K}}\|)}}$ rounds the sets $\Phi_h$ stabilize. The algorithm should return yes iff the last $\Phi_h$ contains a unary type compatible with the assertions made by the ABox ${\cal{A}}$ of ${\cal{K}}$ about the unique individual it mentions. It is not hard to check that each round can be performed in time $2^{O(\|{\cal{K}}\|)+2^{\mathrm{poly}(\|{\cal{K}}\|)}}$, yielding the desired complexity upper bound. \end{proof} \lemaldiscrete* \begin{proof} Let us first see when a discrete interpretation is $(n+1)$-consistent. Consider a partition of a fragment $\varphi$ into $\varphi', \varphi_1,\dots, \varphi_k$ like in the definition of $(n+1)$-consistency. Because $\varphi'$ has level $n+1$, it must be a UCQ. If $\varphi'$ contains a binary atom, it cannot be matched in a discrete interpretation. Hence, we can assume that $\varphi'$ contains no binary atoms. Fragments $\varphi_1,\dots, \varphi_k$ share no variables, but $\varphi$ is connected, so $k=1$. It follows that $\varphi_1 = \varphi$ and $V \subseteq \textit{var}(\varphi') \subseteq V_1$.
Then, $(n+1)$-consistency reduces to the condition $A^\kappa_{\varphi,V_1} \sqsubseteq A^\kappa_{\varphi,V}$ for all $\kappa$ such that $\kappa(x) = n+1$ for all $x \in V_1$. We can capture $(n+1)$-consistency of the model by replacing ${\cal{E}}$ with the environment ${\cal{E}}'=(\Theta',\varepsilon')$ obtained from ${\cal{E}}$ by filtering out unary types that violate this condition. This can be done in time polynomial in the size of ${\cal{E}}$. It remains to decide if there is a discrete model of ${\cal{K}}$ modulo ${\cal{E}}'$. This is the case iff for the individual $a$ mentioned in ${\cal{A}}$ there is a type $\tau \in \Theta'$ compatible with the assertions on $a$ in ${\cal{A}}$ such that for each concept inclusion $A \sqsubseteq \exists r. B$ in ${\cal{T}}$, if $A \in \tau$ then $(r,B) \in \varepsilon'(\tau)$. This can be checked in time polynomial in the size of ${\cal{K}}$ and ${\cal{E}}'$. Overall, the existence of an $(n+1,n+1)$-model of ${\cal{K}}$ modulo ${\cal{E}}$ can be decided in time polynomial in the size of ${\cal{K}}$ and ${\cal{E}}$; that is, in time $2^{O(\|{\cal{K}}\|)+2^{\mathrm{poly}(\|\Phi\|)}}$. \end{proof} \lemunravellingflat* \begin{proof} This is also proved by routine unravelling, much like Lemma~\ref{lem:unravelling-connected}. The difference is that this time for ${\cal{I}}_d$ we take the interpretation ${\cal{I}}$ with all edges of level strictly below $\ell'$ removed. This unravelling procedure may pass through the same element multiple times on the same branch, so the resulting tree-like structure ${\cal{J}}$ may be infinite. Because new bags are added to ${\cal{J}}$ only when a witness is missing in the parent bag, it follows that all edges between bags have level strictly below $\ell'$ (all edges of level at least $\ell'$ are already copied in the parent bag, together with their targets). Hence, ${\cal{J}}$ is $\ell'$-flat.
The size of each bag is equal to the size of ${\cal{I}}$ and the degree within each bag is bounded by the maximal degree in ${\cal{I}}$. The number of child bags connected to the same element in the parent bag is bounded by the size of the TBox. Hence, the degree in ${\cal{J}}$ is bounded. Checking that ${\cal{J}}$ is an $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ is straightforward, just like in Lemma~\ref{lem:unravelling-connected}. \end{proof} \lemboundedpaths* \begin{proof} Suppose a $\widehat{\cal{B}}$-decorated CRPQ $\varphi$ is matched in a $\widehat{\cal{B}}$-decorated interpretation ${\cal{J}}$. Each path in ${\cal{J}}$ corresponds to the run $\rho$ of $\widehat{\cal{B}}$ obtained by reading the states decorating the elements on the path. Such a path witnesses an RPQ atom ${\cal{B}}_{q,q'}(x,x')$ iff the thread of $\rho$ beginning in $q$ ends in $q'$. If the atom has end level at least $\ell'$, then the level of $q'$ in the last state of $\rho$ must be at least $\ell'$. Observe however that each edge of level strictly below $\ell'$ brings all threads from levels $\ell'$ and higher at least one level down. Consequently, the path may use at most $n - \ell'$ edges of level strictly below $\ell'$. \end{proof} \lemboundedcrpq* \begin{proof} Consider a level-$\ell'$ CRPQ $\varphi$ matched in ${\cal{J}}$. Consider a witnessing path $e_0, e_1, \dots, e_k$ in ${\cal{J}}$. Let $\mathbf{p}_0, \mathbf{p}_1, \dots, \mathbf{p}_k$ be the run of $\widehat {\cal{B}}$ corresponding to the witnessing path and let $\ell_0, \ell_1, \dots, \ell_k$ be the levels of the thread in this run that corresponds to the witnessing run $q_0,q_1,\dots, q_k$ of ${\cal{B}}$. We have $\ell_0 \geq \ell_1 \geq \dots \geq \ell_k \geq \ell'$. Suppose that for some $i < j$ we have $e_i = e_j$ and $\ell_i = \ell_j$. It follows immediately that $\mathbf{p}_i = \mathbf{p}_j$ and $q_i = q_j$. Thus, we can choose a shorter witnessing path by skipping $e_{i+1}, e_{i+2}, \dots, e_{j}$.
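The shortening step is a standard pumping argument on pairs $(e_i, \ell_i)$; a minimal illustrative sketch (the list encoding of paths and all names are ours, not part of the formal development):

```python
def shorten_witnessing_path(path, levels):
    """Whenever the same element appears twice with the same level, the
    segment between the two occurrences can be cut out, since element and
    level determine the automaton state.  Repeating the cut yields a path
    visiting each element at most once per level."""
    seen = {}                    # (element, level) -> position in output
    out_path, out_levels = [], []
    for elem, lvl in zip(path, levels):
        key = (elem, lvl)
        if key in seen:
            # cut back to the first occurrence of (element, level)
            j = seen[key]
            out_path, out_levels = out_path[:j + 1], out_levels[:j + 1]
            seen = {(out_path[k], out_levels[k]): k
                    for k in range(len(out_path))}
        else:
            seen[key] = len(out_path)
            out_path.append(elem)
            out_levels.append(lvl)
    return out_path, out_levels
```

Since on such a path the levels range over $\{\ell', \dots, n\}$, each element is visited at most $n - \ell' + 1$ times after all cuts.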
Consequently, it is enough to look at witnessing paths that visit each element at most $(n-\ell'+1)$ times. From the assumption on the structure of ${\cal{J}}$ and from Lemma~\ref{lem:bounded-paths} it follows that every such witnessing path has length at most $M(n-\ell'+1)^2$. \end{proof} \lemalgoflat* \begin{proof} By Lemma~\ref{lem:unravelling-flat} it suffices to decide if there exists an $\ell'$-flat $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ with bounded degree and bag size. Using Lemma~\ref{lem:consistentcomp} and the definition of $\ell'$-flatness this amounts to deciding if there exists a (possibly infinite) tree-like model of ${\cal{K}}$ modulo ${\cal{E}}$ such that each bag is a finite $(\ell',\ell')$-interpretation, each edge-bag is an $(\ell,\ell')$-interpretation but not level-$\ell'$, and the size of bags and the degree of elements is bounded. The algorithm is similar to the one in Lemma~\ref{lem:algo-connected}. The main difference is that the model can now be infinite. Suppose for a moment, however, that we are interested in computing only finite models. Then we can proceed just like in Lemma~\ref{lem:algo-connected}, computing sets \[\emptyset= \Phi_0 \subseteq \Phi_1 \subseteq \Phi_2 \subseteq \dots\] but as we are after $\ell'$-consistent bags, rather than strongly $\ell'$-consistent, we reduce directly to the $(\ell',\ell')$-model problem for ${\cal{K}}_\tau$ defined like before, and ${\cal{E}}_h$ defined almost as before, the difference being that we additionally require that the edge in ${\cal{J}}_{(\tau_1,r,\tau_2)}$ has level strictly below $\ell'$. We do not need to do anything about the size of the bags and the degree, because in a finite interpretation these are always bounded. In order to take into account also infinite models, we replace induction by co-induction.
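At this level of abstraction, the inductive rounds and the co-inductive variant differ only in the starting set and the direction of convergence. A minimal sketch, with `realizable` an assumed oracle standing in for the per-round model-existence test (all names are illustrative):

```python
def lfp_types(all_types, realizable):
    """Inductive rounds (finite models): start empty and grow until
    the sets stabilize; monotonicity is enforced by the union."""
    phi = set()
    while True:
        nxt = phi | {t for t in all_types if realizable(t, phi)}
        if nxt == phi:
            return phi
        phi = nxt

def gfp_types(all_types, realizable):
    """Co-inductive variant (infinite models): start from ALL types and
    repeatedly remove those not realizable with children drawn from the
    current set."""
    phi = set(all_types)
    while True:
        nxt = {t for t in phi if realizable(t, phi)}
        if nxt == phi:
            return phi
        phi = nxt
```

Types that only support each other cyclically survive the greatest fixpoint but not the least one, which is why co-induction captures infinite tree-like models.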
The algorithm proceeds just as described above, but it starts from $\Phi_0$ containing all unary types over $\mn{CN}({\cal{K}})$ and concepts $C_{q,k}$ and $A_{\psi,V}^\kappa$. It follows that \[\Phi_0 \supseteq \Phi_1 \supseteq \dots \supseteq \Phi_{h-1} \supseteq \Phi_h \supseteq \dots\,.\] Like before, the sequence must stabilize after at most $2^{|\mn{CN}({\cal{K}})|+2^{\mathrm{poly}(\|{\cal{K}}\|)}}$ steps. We claim that the algorithm answers yes iff the last computed $\Phi_h$ contains a type compatible with the ABox of ${\cal{K}}$. This is because one can build the potentially infinite model top down, plugging in as bags the models witnessing the inclusion of $\tau$ in $\Phi_h$ in the last round. Importantly, the number of these witnesses is finite, because the number of invoked instances of the $(\ell',\ell')$-model problem is finite. Consequently, the size of bags in the constructed model is bounded. The number of child bags attached to each element is bounded by the number of existential restrictions in the TBox of ${\cal{K}}$, so the degree in the constructed model is also bounded. The complexity bound follows like in Lemma~\ref{lem:algo-connected}. \end{proof} \section{Looking Forward (and Back)} \label{sec:conclusions} This paper provides the first positive results on finite entailment of navigational queries over DL ontologies. The main technical contribution is an optimal automata-based \textsc{2ExpTime} upper bound for finite entailment of UCRPQs in $\ensuremath{{\cal{ALC}}\xspace}$. Let us take a look back at our journey. We devised an expansion of the semiautomaton used to represent UCRPQs to keep track of its runs that begin in all possible states, on all infixes of the input word. By making interpretations and CRPQs knowledgeable of the runs of this expansion, we are able to associate levels to them as dictated by the transitions of the expansion.
To solve the entailment problem, we use a recursive method eliminating the lowest level from the query and from the interpretation, and then solving the simpler problem. In particular, we look at the problem of finding $(\ell, \ell')$-models, and solve it by recursively increasing $\ell$ and $\ell'$ in an alternating way, until both reach the maximum level: Sections~\ref{ssec:queries} and~\ref{ssec:interpretations} respectively address the increment of the query level and of the model level. We finally showed what to do when $\ell =\ell' = n+1$, which, as argued, is enough to solve the original finite entailment problem. As for future work, the first immediate step is to extend our method to deal with test atoms of the form $A?$, which are usually available in UCRPQs. For the ontology language, we believe our method can be adapted to allow inverses, nominals or counting. Regarding more expressive query languages, the natural next step is to consider \emph{two-way} CRPQs. Our current approach relies on the fact that information only flows forward, and it is not clear whether it can be adapted to deal with queries that can go back. \section{Introduction} \label{sec:introduction} At the intersection of knowledge representation and database theory lies the fundamental problem of \emph{ontology-mediated query entailment (OMQE)}, where the background knowledge provided by an ontology is used to enrich the answers to queries posed to databases. In this context, description logics (DLs) are a widely accepted family of logics used to formulate ontologies. By now, the OMQE problem under the unrestricted semantics (reasoning over arbitrary models) is well understood for various query languages and DLs~\cite{DBLP:journals/ki/SchneiderS20a}. In contrast, for the \emph{finite} OMQE problem, where one is interested in reasoning over finite models only, the overall landscape is rather incomplete.
However, in recent years, the study of finite OMQE has been gaining traction, considering both lightweight and expressive DLs and (mostly) unions of conjunctive queries~\cite{DBLP:conf/esws/Rosati08,DBLP:conf/kr/GarciaLS14,DBLP:conf/kr/Rudolph16,GogaczIM18,DBLP:conf/ijcai/GogaczGIJM19,DBLP:conf/mfcs/DanielskiK19,GogaczGGIM20,DBLP:conf/dlog/BednarczykK21}. In this paper we consider the problem of finite OMQE with unions of conjunctive regular path queries (UCRPQs) as the query language. UCRPQs~\cite{DBLP:conf/pods/FlorescuLS98,DBLP:conf/kr/CalvaneseGLV00} are a powerful navigational query language for graph databases in which one can express that two entities are related by a path of edges that can be specified by a regular language over binary relations. So, UCRPQs extend unions of conjunctive queries (UCQs) with atoms that might contain regular expressions that traverse the edges of the database. Indeed, path navigation is included in the query language XPath 2.0 for XML data, and it is also present in the SPARQL 1.1 query language for RDF data through the property path feature. Given the resemblance of instance data stored in ABoxes in DLs to graph-like data, several investigations on unrestricted entailment of various types of navigational query languages mediated by DL ontologies have been carried out~\cite{DBLP:journals/jair/StefanoniMKR14,DBLP:journals/iandc/CalvaneseEO14,DBLP:journals/jair/BienvenuOS15,DBLP:conf/aaai/Gutierrez-Basulto18,DBLP:conf/ijcai/GogaczGIJM19,DBLP:conf/ijcai/BednarczykR19}, yielding algorithmic approaches and optimal complexity bounds. For finite entailment of regular path queries mediated by DL ontologies, there are only undecidability results available~\cite{DBLP:conf/kr/Rudolph16}. 
The most relevant positive results are the decidability and computational complexity results by \citeauthor{DBLP:conf/mfcs/DanielskiK19}~(\citeyear{DBLP:conf/mfcs/DanielskiK19}) and~\citeauthor{GogaczGGIM20}~(\citeyear{GogaczGGIM20}) on finite entailment of conjunctive queries with transitive closure over roles mediated by expressive DL ontologies. We focus on ontologies formulated using the description logic \ensuremath{{\cal{ALC}}\xspace}{}. Note that entailment of UCRPQs over \ensuremath{{\cal{ALC}}\xspace}{} ontologies is not \emph{finitely controllable}, i.e.\ finite and unrestricted entailment do not coincide: it is \emph{not} the case that for any \ensuremath{{\cal{ALC}}\xspace}{} knowledge base ${\cal{K}}$ and any UCRPQ $\varphi$, it holds that ${\cal{K}}$ entails $\varphi$ over all (unrestricted) models iff ${\cal{K}}$ entails $\varphi$ over all finite models. By assuming that the represented world is finite, we can therefore not reuse existing complexity bounds or algorithmic approaches to UCRPQ entailment. From a usability perspective, the suitability of this assumption depends on the potential applications. A particular interest for navigational queries comes from bioinformatics and cheminformatics \cite{Lysenko2016,Bio1,bio2,DBLP:conf/semweb/HuQD15,doi:10.1177/0165551519865495,bio3}. For instance, experts often need to find associations between entities in protein, cellular, drug, and disease networks (represented as graph databases), so that e.g.\ gene-disease-drug associations (corresponding to paths in the database) can be discovered for developing new treatment methods. In this type of application, databases and the models they represent are clearly meant to be finite. Importantly, biochemical networks contain complex motifs involving e.g.\ \emph{cycles} or cliques.
Such patterns can be described using UCRPQs; however, without the finiteness assumption they could be disregarded, as the associated query might not be entailed when reasoning over all models (including infinite ones). \subsection*{Contribution} The main technical contribution of our investigation is the development of a dedicated automata-based method for entailment of UCRPQs over \ensuremath{{\cal{ALC}}\xspace}{} ontologies, providing an optimal upper bound. More precisely, we obtain the following result, where the matching lower bound is inherited from~\cite{DBLP:conf/rr/OrtizS14}. \begin{theorem}~\label{thm:mainresult} Finite entailment of UCRPQs over \ensuremath{{\cal{ALC}}\xspace}{} ontologies is \textsc{2ExpTime}-complete. \end{theorem} In prior work, \citeauthor{DBLP:conf/kr/Rudolph16}~(\citeyear{DBLP:conf/kr/Rudolph16}) showed that finite entailment of 2RPQs in $\mathcal{ALCIO\hspace{-1pt}F}$ is undecidable. Theorem~\ref{thm:mainresult} thus provides a key step towards delimiting the decidability boundary of finite OMQE with navigational queries. \smallskip At the heart of our approach to finite entailment of UCRPQs in \ensuremath{{\cal{ALC}}\xspace}{} there is a stratification of interpretations induced by the deterministic finite automaton underlying the UCRPQ. This stratification builds upon the so-called \emph{tape construction}, previously used to efficiently evaluate queries in the extension of XPath 1.0 where arbitrary regular expressions may appear as path expressions~\cite{DBLP:journals/jacm/BojanczykP11}. To realize the tape construction, our method represents UCRPQs by means of a semiautomaton ${\cal{B}}$~\cite{AlgebraicAutomata} and defines an expansion of ${\cal{B}}$, allowing us to trace runs of ${\cal{B}}$ that begin in all possible states, on all infixes of the input word. 
We make interpretations ${\cal{I}}$ aware of the expansion by enriching paths of ${\cal{I}}$ with its possible runs and by associating edges of ${\cal{I}}$ with levels $\ell$ induced by the transitions of the expansion. In a similar fashion, we also make CRPQs aware of levels. With this at hand, we tackle finite entailment by eliminating the lowest level from a query and from an interpretation, and then recursively solving the simpler problem. At each step of this process, we must be able to arrange solutions to simpler problems in a hierarchical way so that we can reason over them. To this end, we consider a variant of entailment that includes an \emph{environment}, which provides the necessary information to position the arranged solutions to simpler problems in the context of larger interpretations. To better keep track of the complexity of our recursive method, we introduce a modification of the entailment problem modulo environment in which we look at a particular type of finite models: \emph{$(\ell,\ell')$-models}, which are models with edges of levels $\ell$ or higher that are `consistent' w.r.t.\ queries referring to edges of level $\ell'$ or higher. We solve the problem of finding $(\ell,\ell')$-models recursively by increasing $\ell$ and $\ell'$ in an alternating way, until both reach the maximum level $n+1$, with $n$ the number of states of ${\cal{B}}$. This amounts to solving finite entailment modulo environment, and thus standard finite entailment as well. \subsection*{Related Work} We next discuss some existing work relevant to our study. \smallskip \noindent \textbf{OMQE of Navigational Queries. }As previously discussed, there exist various works on unrestricted entailment of navigational query languages mediated by DL ontologies. 
Most of them concentrate on extensions of regular path queries (RPQs), such as UCRPQs, and consider both Horn~\cite{DBLP:journals/jair/BienvenuOS15} and expressive DLs~\cite{DBLP:journals/iandc/CalvaneseEO14,DBLP:conf/aaai/Gutierrez-Basulto18,DBLP:conf/ijcai/GogaczGIJM19,DBLP:conf/ijcai/BednarczykR19}. There have also been some studies on entailment of graph XPath queries~\cite{DBLP:journals/jair/StefanoniMKR14,DBLP:conf/kr/BienvenuCOS14,DBLP:conf/dlog/KostylevRV14}. \smallskip \noindent \textbf{Finite OMQE. } There exist various decidability results and optimal complexity bounds for finite entailment of unions of conjunctive queries in Horn DLs~\cite{DBLP:conf/esws/Rosati08,DBLP:conf/kr/GarciaLS14} and in expressive DLs from the $\mathcal S$ family~\cite{GogaczIM18,DBLP:conf/ijcai/GogaczGIJM19,DBLP:conf/mfcs/DanielskiK19}. In most cases, the computational complexity coincides with that of the unrestricted case, but the algorithmic approaches are completely different. On the negative side, undecidability of finite entailment of UCQs in the more expressive DL $\mathcal{SHOIF}$ was shown in~\cite{DBLP:conf/kr/Rudolph16}, together with the undecidability of finite entailment of 2RPQs in $\mathcal{ALCIOF}$. Closer to our work are the positive results on finite entailment of UCQs with transitive closure over roles in expressive DLs allowing for transitivity or transitive closure over roles~\cite{DBLP:conf/mfcs/DanielskiK19,GogaczGGIM20}. These results approach the undecidability frontier for finite entailment from a different angle, by considering ontology languages more expressive than \ensuremath{{\cal{ALC}}\xspace}{} but a subclass of UCRPQs as the query language. 
In the context of database theory research, finite OMQE (also called open-world query entailment) has also been investigated; for instance,~\citeauthor{DBLP:journals/tocl/AmarilliB20}~(\citeyear{DBLP:journals/tocl/AmarilliB20}) study finite OMQE for inclusion dependencies and functional dependencies over relations of arbitrary arity, and \citeauthor{DBLP:journals/iandc/Pratt-Hartmann09}~(\citeyear{DBLP:journals/iandc/Pratt-Hartmann09}) looks at finite OMQE in the two-variable fragment of FOL with counting quantifiers. \smallskip \noindent \textbf{Finite Controllability. } There have also been a few works on finite controllability in the context of DLs. For instance, \citeauthor{DBLP:conf/dlog/BednarczykK21}~(\citeyear{DBLP:conf/dlog/BednarczykK21}) recently showed that the $\mathcal{ZOI}$ and $\mathcal{ZOQ}$ members of the $\mathcal{Z}$ family are finitely controllable for UCQs. Beyond DLs, there have been several works on UCQ-finite controllability: for the guarded fragment of FOL~\cite{DBLP:journals/corr/BaranyGO13} or for various fragments of existential rules~\cite{DBLP:conf/datalog/CiviliR12,DBLP:conf/lics/GogaczM13,BAGET20111620,DBLP:conf/ijcai/AmendolaLM18,DBLP:conf/ijcai/GottlobMP18}. Closer to our study is the work by \citeauthor{DBLP:conf/kr/FigueiraFB20}~(\citeyear{DBLP:conf/kr/FigueiraFB20}) on the classification of finitely and non-finitely controllable subclasses of CRPQs over ontologies formulated in the guarded-negation fragment of FOL or in the frontier fragment of existential rules. However, no complexity results or algorithms for finite entailment are provided for the non-finitely controllable cases. \section{Expansion and Decorations} \label{ssec:expansion} In order to handle UCRPQs expressed by means of a semiautomaton ${\cal{B}}$, we need to be able to trace runs of ${\cal{B}}$ that begin in all possible states, on all infixes of the input word. We achieve this using the following construction. 
Let us fix an arbitrary linear order on the set $Q$ of the states of ${\cal{B}}$. The \emph{expansion} of ${\cal{B}}$ is a semiautomaton $\widehat {\cal{B}}$ whose set of states is the set $\widehat Q$ of all permutations of $Q$. Thus, an element of $\widehat Q$ can be seen as a tuple $\mathbf{p}=(p_1, p_2, \dots, p_n)$ such that $p_i$ is the image of the $i$th state of ${\cal{B}}$ under the respective permutation. We refer to positions in this tuple as \emph{levels}. In particular, the \emph{level of $q\in Q$ in $\mathbf{p}$} is the unique $i$ such that $q = p_i$. Assuming $\delta:Q \times \mn{rol}({\cal{K}}) \to Q$ is the transition function of ${\cal{B}}$, we define the transition function \[\hat \delta : \widehat Q \times \mn{rol}({\cal{K}}) \to \widehat Q\] of $\widehat {\cal{B}}$ by letting $\hat \delta \big(\mathbf{p}, r\big)$ be the permutation $\mathbf{p}'$ obtained by listing all states appearing in the sequence \[\delta(\mathbf{p},r) = \big(\delta(p_1,r),\delta(p_2,r),\dots, \delta(p_n,r)\big)\] in the order of their first appearances, followed by all remaining states of ${\cal{B}}$ ordered as in $Q$. Note that the level of $\delta(p_i,r)$ in $\mathbf{p}'$ is at most $i$. Consider the set $P \subseteq \{1,2,\dots,n\}$ of levels $i$ such that the level of $\delta(p_i,r)$ in $\mathbf{p}'$ is equal to $i$. It follows from the definition of $\mathbf{p}'$ that $P = \{1, 2, \dots, \ell\}$ for some $\ell \in \{1, 2, \dots, n\}$. We call this number $\ell$ the \emph{level of transition $\mathbf{p} \stackrel{r}{\longrightarrow} \mathbf{p}'$}. From each run of $\widehat{\cal{B}}$ on a word $w$ we can reconstruct all runs of ${\cal{B}}$ on $w$. Let $\mathbf{p}_0,\mathbf{p}_1,\dots, \mathbf{p}_m$ be a run of $\widehat {\cal{B}}$ on $w$. Consider a run $q_0, q_1, \dots, q_m$ of ${\cal{B}}$ on $w$. For $i=0, 1, \dots, m$, let $\ell_i$ be the level of $q_i$ in $\mathbf{p}_i$. 
Any sequence $\ell_0, \ell_1, \dots, \ell_m$ associated like this with a run of ${\cal{B}}$ will be called a \emph{thread} in the run of $\widehat {\cal{B}}$ (see Fig.~\ref{fig:thread}). Notice that two threads that begin at different levels can meet at the same level somewhere along the run; if this happens they remain equal until the end of the run. Also, threads can be born in the middle of a run of $\widehat{\cal{B}}$, but they never disappear. A crucial property of threads is that they are non-increasing sequences: the level of $q_{i+1}$ in $\mathbf{p}_{i+1}$ is bounded by the level of $q_{i}$ in $\mathbf{p}_{i}$. \begin{figure*} \centering \includegraphics[scale=0.6]{Fig1.pdf} \caption{A thread in a run of the expansion of a semiautomaton.} \label{fig:thread} \end{figure*} \begin{restatable}{lemma}{lemlevels} \label{lem:levels} Let $\mathbf{p}_0,\mathbf{p}_1,\dots, \mathbf{p}_m$ be a run of $\widehat {\cal{B}}$ on $w$, and let $q,q'$ be states of ${\cal{B}}$. There is a run of ${\cal{B}}$ on $w$ from $q$ to $q'$ iff there exist positions $0 \leq j_1 < j_2 < \dots < j_k = m$, levels $n \geq \ell_1 > \ell_2 > \dots > \ell_k \geq 1$, and states $q_0, q_1, \dots, q_{k}$ with $1\leq k\leq n$ such that $q_0=q$, $q_k=q'$, and \begin{itemize} \item the level of $q_0$ in $\mathbf{p}_0$ is $\ell_1$ and the level of $q_k$ in $\mathbf{p}_m$ is $\ell_k$; \item for all $i \in \{1, 2, \dots, k-1\}$, the level of $q_i$ in $\mathbf{p}_{j_i}$ is $\ell_i$ and the level of $\delta\big(q_i, w[j_i+1]\big)$ in $\mathbf{p}_{j_i+1}$ is $\ell_{i+1}$; \item for all $i \in \{1, 2, \dots, k\}$, each transition taken in the run up to $\mathbf{p}_{j_i}$ has level at least $\ell_i$. \end{itemize} \end{restatable} As an illustration of Lemma~\ref{lem:levels}, consider the run of the expanded semiautomaton shown in Fig.~\ref{fig:thread}. 
Tracing the run of the original semiautomaton on the same word, starting in state $q_4$, we discover the positions $j_1=3$ and $j_2=7$ where the corresponding thread drops to a lower level. Between these positions, the thread stays at the same level, beginning with $\ell_1=4$ (taking transitions of levels $5, 4, 5 \geq \ell_1$), followed by $\ell_2=3$ (taking transitions of levels $5, 3, 5\geq \ell_2$), and $\ell_3=1$ (taking a transition of level $5 \geq \ell_3$). The next step is to account for the possible runs of $\widehat {\cal{B}}$ over paths in the interpretation. Towards this goal, we decorate elements of the interpretation with states of $\widehat {\cal{B}}$. To avoid additional blow-up, we represent states of $\widehat {\cal{B}}$ using combinations of fresh concept names $C_{q,\ell}$ where $q$ is a state of ${\cal{B}}$ and $\ell\in \{1, 2, \dots, n\}$ is a level; we write $\ensuremath{\mn{CN}(\widehat\Bb)}$ for the set of all $C_{q,\ell}$. For a state $\mathbf{p} = (p_1, p_2, \dots, p_n)$ of $\widehat {\cal{B}}$, by $C_\mathbf{p}$ we mean the concept $C_{p_1,1} \sqcap C_{p_2,2} \sqcap \dots \sqcap C_{p_n,n}$. We say that an element $e\in\Delta^{\cal{I}}$ is \emph{decorated} with state $\mathbf{p}$ if $e\in C_\mathbf{p}^{\cal{I}}$. An interpretation ${\cal{I}}$ is \emph{$\widehat{\cal{B}}$-decorated} if no element has incoming edges over different roles from $\mn{rol}({\cal{K}})$ and ${\cal{I}}$ satisfies the CIs \[C_\mathbf{p} \sqsubseteq \forall r. C_{\hat \delta(\mathbf{p},r)}\,, \quad C_\mathbf{p} \sqcap C_{\mathbf{p}'} \sqsubseteq \bot\,, \quad \top \sqsubseteq \bigsqcup_{\mathbf{p}\in \widehat Q} C_{\mathbf{p}}\] for all states $\mathbf{p},\mathbf{p}'$ of $\widehat{\cal{B}}$ such that $\mathbf{p}\neq \mathbf{p}'$. The axiomatization above is exponential in the size of ${\cal{B}}$, but we can do better. 
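To make the expansion construction concrete, the transition function $\hat\delta$ and the level of a transition can be computed directly from the definitions above. The following is a minimal Python sketch; the two-state semiautomaton in the usage example is a made-up illustration, not one from the paper:

```python
def expand_step(p, delta, r, states_order):
    """One transition of the expansion: given a state p of the
    expansion (a permutation of the original states, as a tuple),
    the transition function delta of the original semiautomaton
    (a dict mapping (state, role) to a state), and a role r,
    return (p_new, level): the successor permutation and the
    level of the transition p --r--> p_new."""
    images = [delta[(q, r)] for q in p]
    # List the images in order of their first appearances ...
    seen, p_new = set(), []
    for q in images:
        if q not in seen:
            seen.add(q)
            p_new.append(q)
    # ... followed by the remaining states, in the fixed base order.
    for q in states_order:
        if q not in seen:
            seen.add(q)
            p_new.append(q)
    # The level of the transition is the largest l such that, for every
    # i <= l, the image of the state at level i stays at level i
    # (such positions always form a prefix {1, ..., l}).
    level = 0
    for i, q_img in enumerate(images):
        if p_new.index(q_img) == i:
            level = i + 1
        else:
            break
    return tuple(p_new), level


# Usage on a made-up two-state semiautomaton over a single role 'r':
states = ('q0', 'q1')
delta = {('q0', 'r'): 'q1', ('q1', 'r'): 'q1'}
print(expand_step(('q0', 'q1'), delta, 'r', states))  # (('q1', 'q0'), 1)
```

Note that the computed level is always at least 1, since the image of the level-1 state is listed first; this matches the observation that $P = \{1, 2, \dots, \ell\}$ for some $\ell \in \{1, 2, \dots, n\}$.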
\begin{restatable}{lemma}{lemaxioma}~\label{lem:axioma} Given ${\cal{B}}$ one can compute in polynomial time a TBox $\widehat {\cal{T}}_{{\cal{B}}}$ such that ${\cal{I}}\models \widehat {\cal{T}}_{{\cal{B}}}$ iff ${\cal{I}}$ is $\widehat {\cal{B}}$-decorated. \end{restatable} To every edge in a $\widehat{\cal{B}}$-decorated interpretation ${\cal{I}}$ we can assign a level as follows. Consider elements $e,e' \in \Delta^{\cal{I}}$ such that $(e,e') \in r^{\cal{I}}$ for some $r\in\mn{rol}({\cal{K}})$. Note that $(e,e')\notin s^{\cal{I}}$ for every $s \in \mn{rol}({\cal{K}}) \setminus \{r\}$. Let $\mathbf{p}$ and $\mathbf{p}'$ be the states decorating $e$ and $e'$, respectively. It holds that $\mathbf{p} \stackrel{r}{\longrightarrow} \mathbf{p}'$. By \emph{the level of the edge} $(e,e')$ we shall understand the level of this transition. A \emph{level-$\ell$ interpretation} is a $\widehat {\cal{B}}$-decorated interpretation that does not contain edges of level strictly below $\ell$; if $\ell>n$, this means that there are no edges at all. The following lemma is the key to our algorithm. \begin{restatable}{lemma}{lemreach} \label{lem:reach} Consider a level-$\ell$ interpretation ${\cal{I}}$ and elements $e \in C_{q,\ell}^{\cal{I}}$ and $e'\in C_{q',\ell}^{\cal{I}}$. Then, $(e,e') \in {\cal{B}}_{q,q'}^{\cal{I}}$ iff there is a path from $e$ to $e'$ in ${\cal{I}}$. \end{restatable} We make use of Lemma~\ref{lem:reach} by decomposing RPQs into segments corresponding to different levels, as was done for the runs of $\widehat{\cal{B}}$ in Lemma~\ref{lem:levels}. To facilitate this, we make our queries aware of levels. A \emph{$\widehat{\cal{B}}$-decorated CRPQ} is a CRPQ $\varphi$, represented by means of the semiautomaton ${\cal{B}}$, that contains exactly one atom of the form $C_{q,\ell}(x)$ and exactly one atom of the form $C_{q',\ell'}(x')$ for each atom ${\cal{B}}_{q,q'}(x,x')$ in $\varphi$. 
We call $\ell$ and $\ell'$ the \emph{begin level} and the \emph{end level} of atom ${\cal{B}}_{q,q'}(x,x')$ in $\varphi$, respectively. Because levels never increase in a thread of a run of $\widehat {\cal{B}}$, we can assume without loss of generality that $\ell \geq \ell'$ always holds. A \emph{level-$\ell$ CRPQ} is a $\widehat {\cal{B}}$-decorated CRPQ that contains no RPQ atoms of end level strictly below $\ell$. As all end levels are at most $n$, a level-$\ell$ CRPQ for $\ell>n$ contains no RPQ atoms; that is, it is a CQ. To \emph{complete} a CRPQ $\varphi$ means to turn it into a $\widehat{\cal{B}}$-decorated CRPQ $\varphi'$ by adding unary atoms over concepts $C_{q,\ell}$ in an arbitrary minimal way. Each resulting $\varphi'$ is called a \emph{completion} of $\varphi$. Over $\widehat{\cal{B}}$-decorated interpretations, $\varphi$ is equivalent to the union of its completions. The \emph{completion of a UCRPQ} $\Phi$ is the union of all completions of all CRPQs in $\Phi$. We conclude this section by showing how to turn any counterexample to ${\cal{K}} \models_{\mathsf{fin}}^{\cal{E}} \Phi$ into a $\widehat{\cal{B}}$-decorated one. Let ${\cal{I}}$ be an interpretation over $\mn{CN}({\cal{K}}) \cup \mn{rol}({\cal{K}})$. 
The \emph{product} of ${\cal{I}}$ and $\widehat {\cal{B}}$ is the interpretation ${\cal{I}} \times \widehat {\cal{B}}$ over $\ensuremath{\mn{CN}(\widehat\Bb)}\cup \mn{CN}({\cal{K}}) \cup \mn{rol}({\cal{K}})$ such that \begin{itemize} \item $\Delta^{{\cal{I}}\times\widehat{\cal{B}}} = \Delta^{\cal{I}} \times \mn{rol}({\cal{K}}) \times \widehat Q$, \item $C^{{\cal{I}}\times\widehat{\cal{B}}} = C^{{\cal{I}}} \times \mn{rol}({\cal{K}}) \times \widehat Q$ for all $C \in \mn{CN}({\cal{K}})$, \item $C_{q,\ell}^{{\cal{I}}\times\widehat{\cal{B}}} = \Delta^{{\cal{I}}} \times \mn{rol}({\cal{K}}) \times \{(p_1, p_2, \ldots, p_n) \in \widehat Q : p_\ell = q\}$ for all $q\in Q$ and $\ell \in \{1,2,\dots, n\}$, \item $r^{{\cal{I}}\times\widehat{\cal{B}}} = \big\{\big((e,s,\mathbf{p}), (e',r,\mathbf{p}')\big): (e,e') \in r^{{\cal{I}}}, \mathbf{p} \stackrel{r}{\longrightarrow} \mathbf{p}',\linebreak s \in \mn{rol}({\cal{K}})\big\}$ for $r \in \mn{rol}({\cal{K}})$. \end{itemize} Note that if ${\cal{I}}$ is finite, so is ${\cal{I}}\times\widehat{\cal{B}}$. \begin{restatable}{lemma}{lemdecoratemodel}\label{lem:decorate_model} Let $\Phi$ be a UCRPQ, ${\cal{K}}$ an $\ensuremath{{\cal{ALC}}\xspace}$ KB with a trivial ABox, and ${\cal{E}}$ an environment. \begin{itemize} \item ${\cal{I}} \times \widehat{\cal{B}}$ is a $\widehat{\cal{B}}$-decorated interpretation. \item If ${\cal{I}} \not\models \Phi$ then ${\cal{I}} \times \widehat{\cal{B}} \not\models \Phi$. \item If ${\cal{I}} \models^{\cal{E}} {\cal{K}}$ then ${\cal{I}}\times\widehat{\cal{B}} \models^{\cal{E}} {\cal{K}}$ up to identifying the unique individual $a$ in ${\cal{K}}$ with some $(a, r, \mathbf{p}) \in\Delta^{{\cal{I}}\times\widehat{\cal{B}}}$. \end{itemize} \end{restatable} \section{Core Computational Problem} To solve the entailment problem we eliminate the lowest level from the query and from the interpretation, and solve the problem with fewer levels recursively. 
Eliminating each level will involve interpretations built from pieces that are solutions for the simplified problem. Evaluating CRPQs over such interpretations requires breaking them down into fragments and accommodating single RPQs witnessed across multiple pieces. For a UCRPQ $\Phi$ let $\tilde\Phi$ be the completion of an equivalent UCRPQ represented by means of a semiautomaton ${\cal{B}}$. A~\emph{fragment} of $\varphi\in\tilde\Phi$ is either of the following: \begin{itemize} \item a $\widehat {\cal{B}}$-decorated CRPQ of the form $C_{q_1,\ell_1}(y_1) \land {\cal{B}}_{q_1,q_2}(y_1,y_2) \land C_{q_2,\ell_2}(y_2)$ or $C_{q_1,\ell_1}(y_1) \land {\cal{B}}_{q_1,q_2}(y_1,y_2) \land C_{q_2,\ell_2}(y_2) \land r(y_2,y_3) \land C_{q_3,\ell_3}(y_3)$ where $y_1,y_2,y_3$ are fresh variables and $r\in\mn{rol}({\cal{K}})$, \item a connected $\widehat {\cal{B}}$-decorated CRPQ that can be obtained from $\varphi$ by dropping selected atoms, replacing selected RPQ atoms ${\cal{B}}_{q,q'}(x,x')$ by a subset of ${\cal{B}}_{q,q_1}(x,y_1)$, $r(y_1,y_2)$, ${\cal{B}}_{q_3,q'}(y_3,x')$ for some fresh variables $y_1,y_2,y_3$ and \mbox{$r\in\mn{rol}({\cal{K}})$}, and completing the resulting CRPQ. \end{itemize} A fragment of $\Phi$ is a fragment of any of the CRPQs in $\tilde\Phi$. Importantly, a fragment of a fragment of $\Phi$ is also a fragment of $\Phi$, and each $\varphi\in\tilde\Phi$ is a fragment of $\Phi$. Up to renaming fresh variables, $\Phi$ has $2^{\mathrm{poly}(\|\Phi\|)}$ different fragments, despite ${\cal{B}}$ being exponential in $\|\Phi\|$. We now enrich interpretations again by including information about matched fragments of $\Phi$. For each fragment $\varphi$ of $\Phi$ and each $\emptyset \neq V \subseteq \textit{var}(\varphi)$ we choose a fresh concept name $A_{\varphi, V}$. We call an interpretation ${\cal{I}}$ \emph{correct (wrt.~$\Phi$)} if $e \in A_{\varphi, V}^{{\cal{I}}}$ iff $\pi(V) = \{e\}$ for some match $\pi$ for $\varphi$ in ${\cal{I}}$. 
Assuming ${\cal{I}}$ is correct, ${\cal{I}} \models \Phi$ iff $A_{\varphi, V}^{{\cal{I}}} \neq \emptyset$ for some $\varphi\in \tilde\Phi$ and $\emptyset \neq V \subseteq \textit{var}(\varphi)$. Correctness is not compositional: the union of two correct interpretations sharing a single element need not be correct. As our method of eliminating levels relies on such decompositions of interpretations, we replace correctness with a notion that is weaker, but compositional. We first abstract the decomposition of a $\widehat{\cal{B}}$-decorated CRPQ induced by a match in a union of disjoint `peripheric' interpretations, each sharing a single element with a single `core' interpretation (Fig.~\ref{fig:partition} shows three `peripheric' interpretations connected to the `core' by single edges, included in the `peripheric' interpretations). A \emph{partition} of a $\widehat{\cal{B}}$-decorated CRPQ $\varphi$ into $\varphi', \varphi_1,\dots, \varphi_k$ is obtained as follows. Choose $X', X_1, \dots, X_k \subseteq \textit{var}(\varphi)$ such that \begin{itemize} \item $X_i \cap X_j = \emptyset$ for all $i \neq j$; \item for each atom of the form $r(x,x')$ in $\varphi$ there exists $i$ such that either $\{x,x'\} \subseteq X_i$ or $\{x,x'\} \subseteq X'$. \end{itemize} Based on $X', X_1, \dots, X_k$ define $\varphi', \varphi_1, \dots, \varphi_k$ as follows. For each atom of the form $r(x,x')$ in $\varphi$: if $\{x,x'\} \subseteq X_i$ for some $i$, add $r(x,x')$ to $\varphi_i$; otherwise $\{x,x'\}\subseteq X'$, and we add $r(x,x')$ to $\varphi'$. 
For each RPQ atom ${\cal{B}}_{q,q'}(x,x')$ of begin level $\ell$ and end level $\ell'$ in $\varphi$ do one of the following: \begin{itemize} \item provided that $\{x,x'\} \subseteq X'$, add ${\cal{B}}_{q,q'}(x,x')$ to $\varphi'$; \item choose $i$ such that $\{x,x'\} \subseteq X_i$ but $\{x,x'\} \not\subseteq X'$, and add ${\cal{B}}_{q,q'}(x,x')$ to $\varphi_i$ (light green RPQ in Fig.~\ref{fig:partition}); \item choose $i$ such that $x \in X'\setminus X_i$ and $x'\in X_i \setminus X'$, a level $m$ such that $\ell \geq m \geq \ell'$, a state $p$ of ${\cal{B}}$, and a fresh variable $y$, and add ${\cal{B}}_{q,p}(x,y) \land C_{p,m}(y)$ to $\varphi'$ and $C_{p,m}(y) \land {\cal{B}}_{p,q'}(y,x')$ to $\varphi_i$ (blue and orange in Fig.~\ref{fig:partition}); \item choose $i$ such that $x \in X_i \setminus X'$ and $x'\in X' \setminus X_i$, a level $m$ such that $\ell \geq m \geq \ell'$, a state $p$ of ${\cal{B}}$, and a fresh variable $y$, and add ${\cal{B}}_{q,p}(x,y) \land C_{p,m}(y)$ to $\varphi_i$ and $C_{p,m}(y) \land {\cal{B}}_{p,q'}(y,x')$ to $\varphi'$ (dark green in Fig.~\ref{fig:partition}); \item choose $i\neq j$ such that $x \in X_i \setminus X'$ and $x'\in X_j \setminus X'$, levels $m, m'$ such that $\ell \geq m \geq m'\geq \ell'$, states $p,p'$ of ${\cal{B}}$, and fresh variables $y,y'$, add ${\cal{B}}_{q,p}(x,y) \land C_{p,m}(y)$ to $\varphi_i$, $C_{p,m}(y) \land {\cal{B}}_{p,p'}(y,y') \land C_{p',m'}(y')$ to $\varphi'$, and $C_{p',m'}(y') \land {\cal{B}}_{p',q'}(y',x')$ to $\varphi_j$ (purple in Fig.~\ref{fig:partition}). \end{itemize} Note that for each ${\cal{B}}_{q,q'}(x,x')$ exactly one of the above actions can be performed and the choice of $i$ and $j$ is unique. To complete the construction, add to $\varphi'$ all unary atoms of $\varphi$ over variables already used in $\varphi'$, and similarly for each $\varphi_i$. 
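The side conditions on the variable sets $X', X_1, \dots, X_k$ in the definition of a partition can be checked mechanically. Below is a small Python sketch; the variable names and role atoms in the usage example are hypothetical, and role atoms are modelled simply as pairs of variables:

```python
def valid_partition_sets(role_atoms, x_prime, xs):
    """Check the two conditions on X', X_1, ..., X_k from the
    definition of a partition: the X_i are pairwise disjoint, and
    every role atom r(x, x') lies entirely inside some X_i or
    entirely inside X'.  (X' may overlap the X_i: the shared
    variables are exactly the interface sets V_i used later.)"""
    # The X_i must be pairwise disjoint.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] & xs[j]:
                return False
    # Every role atom must be confined to one set.
    for (x, y) in role_atoms:
        pair = {x, y}
        if not (pair <= x_prime or any(pair <= xi for xi in xs)):
            return False
    return True


# A hypothetical CRPQ with role atoms r(x, y) and r(y, z):
atoms = [('x', 'y'), ('y', 'z')]
print(valid_partition_sets(atoms, {'x', 'y'}, [{'y', 'z'}]))  # True
print(valid_partition_sets(atoms, {'x'}, [{'y', 'z'}]))       # False
```

RPQ atoms are deliberately absent from the check: by the case analysis above they may cross set boundaries, which is precisely what the splitting with fresh variables accounts for.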
Observe that for each $X' \subseteq \textit{var}(\varphi)$ there is exactly one choice of $X_1, X_2, \dots, X_k$ (up to a permutation) such that the resulting $\varphi_1, \varphi_2, \dots, \varphi_k$ are connected (regardless of the choice of $p,p'$ and $m,m'$). Assuming that $\varphi$ is a fragment of $\Phi$, it then holds that so are $\varphi_1, \varphi_2, \dots, \varphi_k$. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig2.jpg} \caption{CRPQ $\varphi$ is distributed over the bags constituting ${\cal{I}}$.} \label{fig:partition} \end{figure} We call ${\cal{I}}$ \emph{consistent (wrt.~$\Phi$)} if for each partition of a fragment $\varphi$ of $\Phi$ into a CRPQ $\varphi'$ and fragments $\varphi_1, \varphi_2, \dots, \varphi_k$ with $\textit{var}(\varphi_i)\cap \textit{var}(\varphi_j) = \emptyset$ for $i\neq j$, $V_i = \textit{var}(\varphi_i) \cap \textit{var}(\varphi')$, and $\emptyset \neq V\subseteq \textit{var}(\varphi) \cap \textit{var}(\varphi')$, there is no match $\pi$ for $\varphi'$ in ${\cal{I}}$ such that $\pi(V_i) = \{e_i\} \subseteq \big(A_{\varphi_i, V_i}\big)^{{\cal{I}}}$ for all $i$ but $\pi(V) = \{e\} \not\subseteq \big(A_{\varphi, V}\big)^{{\cal{I}}}$. Clearly, all correct interpretations are consistent. The converse is not true in general, but the following key property is preserved. \begin{restatable}{lemma}{lemconsistent} \label{lem:consistent} For every UCRPQ $\Phi$ and every consistent \mbox{$\widehat{\cal{B}}$-decorated} interpretation ${\cal{I}}$, if $A_{\varphi, V}^{{\cal{I}}} = \emptyset$ for each $\varphi \in \tilde\Phi$ and $\emptyset \neq V \subseteq \textit{var}(\varphi)$, then ${\cal{I}} \not\models \Phi$. \end{restatable} Consistency is sufficient to express entailment, but it does not lend itself well to the recursive elimination of levels. We generalize it by refining the information about matched fragments of $\Phi$. 
We introduce fresh concepts $A_{\varphi,V}^\kappa$ where $\varphi$ is a fragment of $\Phi$, $\emptyset \neq V \subseteq \textit{var}(\varphi)$, \[\kappa:\textit{var}(\varphi) \to \{1, 2, \dots, \ell\}\,,\] and $\kappa(V) = \{\ell\}$ for some $\ell\in\{1, 2, \dots, n+1\}$. We write $\mn{CN}_{\ell}^\Phi$ for the set of $A_{\psi,V}^\kappa$ such that $\kappa(V) = \{\ell\}$. Intuitively, $\kappa$ is a synopsis of when specific parts of the fragment were matched during the recursive search for the model. Specifically, $\kappa(x) = \ell$ indicates that $x$ was matched after all levels strictly below $\ell$ had been eliminated from the query, but while level $\ell$ was still present. Accordingly, $\ell$-consistency, defined below, ensures that the synopses built so far are consistently updated while level $\ell$ is being handled. We call ${\cal{I}}$ \emph{$\ell$-consistent (wrt.~$\Phi$)} if for each partition of a fragment $\varphi$ of $\Phi$ into a CRPQ $\varphi'$ of level $\ell$ and fragments $\varphi_1, \varphi_2, \dots, \varphi_k$ with $\textit{var}(\varphi_i)\cap \textit{var}(\varphi_j) = \emptyset$ for $i\neq j$, $V_i = \textit{var}(\varphi_i) \cap \textit{var}(\varphi')$, and $\emptyset \neq V\subseteq \textit{var}(\varphi) \cap \textit{var}(\varphi')$, there is no match $\pi$ for $\varphi'$ in ${\cal{I}}$ such that $\pi(V_i) = \{e_i\} \subseteq \big(A_{\varphi_i, V_i}^{\kappa_i}\big)^{{\cal{I}}}$ for all $i$ but $\pi(V) = \{e\} \not\subseteq (A_{\varphi, V}^\kappa)^{{\cal{I}}}$ where \begin{itemize} \item $\kappa_i(x) \leq \ell$ for all $x \in \textit{var}(\varphi_i)\,$, \item $\kappa(x) = \kappa_i(x)$ for all $x \in \textit{var}(\varphi_i) \setminus V_i\,$, \item $\kappa (x) = \ell$ for all $x \in \textit{var}(\varphi) \cap \textit{var}(\varphi')\,$. \end{itemize} We stress that while $\varphi'$ has level $\ell$, fragments $\varphi, \varphi_1, \dots, \varphi_k$ can have any level. 
Note also that $\ell$-consistency speaks only of concept names in $\mn{CN}_1^\Phi \cup\mn{CN}_2^\Phi \cup \dots \cup \mn{CN}_\ell^\Phi$. Identifying $A_{\varphi,V}$ with $A_{\varphi,V}^\kappa$ for $\kappa$ constantly equal to 1, we get that consistency and $1$-consistency are equivalent. In what follows, by an \emph{$(\ell,\ell')$-interpretation} we mean an $\ell'$-consistent level-$\ell$ interpretation. By an \emph{$(\ell,\ell')$-model} of ${\cal{K}}$ modulo ${\cal{E}}$ we mean an $(\ell,\ell')$-interpretation that is a model of ${\cal{K}}$ modulo ${\cal{E}}$. The actual problem we will be solving is the following \emph{$(\ell, \ell')$-model problem} for $\ell\leq \ell'$: Given a KB ${\cal{K}}$ with a trivial ABox, an environment ${\cal{E}}$, and a UCRPQ $\Phi$ decide if there exists a finite $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$. By Lemma~\ref{lem:decorate_model}, entailment modulo environment (with trivial ABox) can be reduced to the $(1,1)$-model problem by modifying the environment to forbid all unary types containing $A^\kappa_{\varphi, V}$ for any $\varphi \in \tilde\Phi$, $\emptyset \neq V \subseteq \textit{var}(\varphi)$, and $\kappa$ constantly equal to 1. Note that the reduction does not affect the query $\Phi$, nor the KB ${\cal{K}}$. However, it introduces up to $2^{\mathrm{poly}(\|\Phi\|)}$ new concept names $A_{\varphi,V}$ and $C_{q,\ell}$. Consequently, the number of unary types is at most $2^{|\mn{CN}({\cal{K}})|+2^{\mathrm{poly}(\|\Phi\|)}}$. It follows that the size of the environment is bounded by $2^{\|{\cal{K}}\|+2^{\mathrm{poly}(\|\Phi\|)}}$. To solve the $(1,1)$-model problem we will proceed recursively, incrementing $\ell$ and $\ell'$ in an alternating fashion, until $\ell=\ell'=n+1$. At each level of the recursion we will be making multiple recursive calls. During the recursion the UCRPQ $\Phi$ and the TBox ${\cal{T}}$ will remain unchanged, but the ABox and the environment will evolve. 
Importantly, we will not introduce any new concepts, so the size of the environment will always be bounded by $2^{\|{\cal{K}}\|+2^{\mathrm{poly}(\|\Phi\|)}}$. The size of the ABox will be bounded by $\|{\cal{K}}\| + 2^{\mathrm{poly}(\|\Phi\|)}$ and the number of individuals will never grow. In consequence, the total cost of the algorithm can be computed as the cost of a single recursion step times the number of steps. In the following sections we will show that each recursion step can be carried out in time $2^{O(\|{\cal{K}}\|) + 2^{\mathrm{poly}(\|\Phi\|)}}$, excluding the cost of the recursive calls. The depth of the recursion is $O(n) = 2^{\mathrm{poly}(\|\Phi\|)}$. The number of recursive calls within a single recursion step is also bounded by $2^{O(\|{\cal{K}}\|) + 2^{\mathrm{poly}(\|\Phi\|)}}$. This means that the total number of recursion steps is $2^{\|{\cal{K}}\|\cdot 2^{\mathrm{poly}(\|\Phi\|)}}$ and so is the overall complexity of the recursive algorithm for the $(1,1)$-model problem. \section{Incrementing the Level of Queries} \label{ssec:queries} The main goal of this section is to solve the $(\ell,\ell)$-model problem by reduction to multiple instances of the $(\ell, \ell+1)$-model problem for $\ell\leq n$. The $(n+1,n+1)$-model problem is discussed briefly at the end of the section. As a first step, we observe that it is enough to consider interpretations whose DAG of strongly connected components is a tree. For this purpose we define \emph{tree-like} interpretations as those that can be decomposed into multiple finite subinterpretations, called \emph{bags}, arranged into a (possibly infinite) tree such that: (1) all bags are pairwise disjoint; (2) between each parent and child bag there is a single edge, pointing from an element of the parent bag to an element of the child bag; (3) all other edges are between elements of the same bag. 
We think of edges between bags as 2-element interpretations, called \emph{edge-bags}, sharing the origin with the parent bag and the target with the child bag. A tree-like interpretation is then a union of all its bags and edge-bags. Fig.~\ref{fig:partition} shows a tree-like interpretation with 4 bags and 3 edge-bags. In tree-like interpretations $\ell$-consistency is a local property. \begin{restatable}{lemma}{lemconsistentcomp} \label{lem:consistentcomp} A tree-like interpretation is $\ell$-consistent iff each of its bags and edge-bags is $\ell$-consistent. \end{restatable} \begin{restatable}{lemma}{lemunravellingconnected} \label{lem:unravelling-connected} There is a finite $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ iff there is a finite tree-like $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ whose bags are strongly connected. \end{restatable} The next step is to eliminate the lowest level from the queries. An \emph{$(\ell+1)$-reduct} of a level-$\ell$ CRPQ $\varphi$ is any CRPQ that can be obtained from $\varphi$ by first splitting each RPQ atom ${\cal{B}}_{q_1,q_2}(x_1,x_2)$ of begin level $\ell_1 > \ell$ and end level $\ell$ into ${\cal{B}}_{q_1,q'_1}(x_1,x'_1) \land C_{q'_1, \ell'_1}(x'_1) \land r(x'_1,x'_2) \land C_{q'_2,\ell}(x'_2) \land {\cal{B}}_{q'_2,q_2}(x'_2,x_2)$ where $\ell_1 \geq \ell'_1 \geq \ell+1$, and then dropping from the resulting CRPQ all atoms whose begin and end level is $\ell$ (all unary atoms are kept). Note that each $(\ell+1)$-reduct $\varphi'$ of $\varphi$ is a conjunction of at most $|\varphi|$ disjoint fragments of $\varphi$ and that $\textit{var}(\varphi) \subseteq \textit{var}(\varphi')$. \begin{restatable}{lemma}{crpqSCCreducts} \label{lem:reducts} Over $\widehat{\cal{B}}$-decorated interpretations, each level-$\ell$ CRPQ implies the union of its $(\ell+1)$-reducts. Over strongly-connected level-$\ell$ interpretations, each level-$\ell$ CRPQ is equivalent to the union of its $(\ell+1)$-reducts. 
\end{restatable} Because Lemma~\ref{lem:unravelling-connected} guarantees tree-like solutions with strongly connected bags, we can replace $\ell$-consistency with \emph{strong $\ell$-consistency}: the only difference is that $\pi$ ranges over matches of all possible $(\ell+1)$-reducts of $\varphi'$, rather than over matches of $\varphi'$ itself. We restate Lemma~\ref{lem:unravelling-connected} as follows. \begin{restatable}{lemma}{lemstronglyconsistent} \label{lem:strongly-consistent} There is a finite $(\ell,\ell)$-model of ${\cal{K}}$ modulo ${\cal{E}}$ iff there is a finite tree-like level-$\ell$ model of ${\cal{K}}$ modulo ${\cal{E}}$ whose edge-bags are $\ell$-consistent and whose bags are strongly $\ell$-consistent. \end{restatable} It remains to show how to find models of the latter form. Let us first see how to find one consisting of a single bag; that is, how to find a finite strongly $\ell$-consistent level-$\ell$ model of ${\cal{K}}$ modulo ${\cal{E}}$. We will show that this amounts to finding a finite $(\ell, \ell+1)$-model of ${\cal{K}}$ modulo ${\cal{E}}'$ for one of the $(\ell+1)$-reducts ${\cal{E}}'$ of ${\cal{E}}$ described below. Consider a fragment $\varphi$, a non-empty set $V\subseteq \textit{var}(\varphi)$, a partition of $\varphi$ into a CRPQ $\varphi'$ of level $\ell$, and fragments $\varphi_1, \varphi_2, \dots, \varphi_k$, as in the definition of (strong) $\ell$-consistency. Let $\kappa :\textit{var}(\varphi) \to \{1,2, \dots, \ell\}$ be such that $\kappa\big(\textit{var}(\varphi)\cap \textit{var}(\varphi')\big) = \{\ell\}$. Let $\psi'$ be an $(\ell+1)$-reduct of $\varphi'$. Consider CRPQs $\psi$ with $\textit{var}(\varphi) \subseteq \textit{var}(\psi)$ that can be partitioned into $\psi'$ and $\varphi_1, \varphi_2, \dots, \varphi_k$. Choose the one with minimal $\textit{var}(\psi)$. This amounts to merging back all RPQ atoms split during the partition of $\varphi$, provided that their segments were not affected by replacing $\varphi'$ with $\psi'$.
The CRPQ $\psi$ is not a fragment, because it need not be connected: Figure~\ref{fig:partition} (right) illustrates passing from $\varphi$ to $\psi$ consisting of two disconnected fragments. Let $\psi_1, \psi_2, \dots, \psi_m$ be the fragments constituting $\psi$ and let $U_i = V \cap \textit{var}(\psi_i)$. An \emph{$(\ell+1)$-reduct} ${\cal{E}}'$ of ${\cal{E}}$ is constructed by iterating over all possible choices of $\varphi$, $V$, $\varphi'$, $\varphi_1, \varphi_2, \dots, \varphi_k$, $\psi'$, $\kappa$, as above, and pruning ${\cal{E}}$ for each choice in one of the following ways: \begin{itemize} \item either pick $i$ such that $U_i = \emptyset$ and remove all unary types that contain any $A^{\kappa_i}_{\psi_i,W_i}$ with $W_i\subseteq \textit{var}(\psi_i) \cap \textit{var}(\psi')$, $\kappa_i\big(\textit{var}(\psi_i) \cap \textit{var}(\psi')\big)=\{\ell+1\}$, and $\kappa_i(x)=\kappa(x)$ for all $x \in \textit{var}(\psi_i)\setminus \textit{var}(\psi')$; \item or remove all unary types that contain some $A^{\kappa_i}_{\psi_i,U_i}$ with $\kappa_i\big(\textit{var}(\psi_i) \cap \textit{var}(\psi')\big)=\{\ell+1\}$ and $\kappa(x)=\kappa_i(x)$ for all $x \in \textit{var}(\psi_i)\setminus \textit{var}(\psi')$, for each $i$ such that $U_i \neq \emptyset$, but do not contain $A^\kappa_{\varphi,V}$. \end{itemize} \begin{restatable}{lemma}{lemenvreduct} \label{lem:envreduct} ${\cal{I}}$ is a strongly $\ell$-consistent level-$\ell$ model of ${\cal{K}}$ modulo ${\cal{E}}$ iff some interpretation that agrees with ${\cal{I}}$ over all role names and all concept names except $\mn{CN}_{\ell+1}^\Phi$ is an $(\ell+1)$-consistent level-$\ell$ model of ${\cal{K}}$ modulo ${\cal{E}}'$ for some $(\ell+1)$-reduct ${\cal{E}}'$ of ${\cal{E}}$. \end{restatable} Finite models consisting of multiple bags can be constructed bottom-up by a least fixed point procedure, using Lemma~\ref{lem:envreduct} to find each bag.
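The bottom-up construction just described is a standard least-fixed-point (worklist) saturation. The following Python sketch is our own illustration, not the authors' pseudocode: the candidates stand for finitely many hypothetical bag descriptions, and `derivable` abstracts the check of Lemma~\ref{lem:envreduct} that a single bag can be built on top of children already known to be realizable.

```python
def least_fixed_point(candidates, derivable):
    """Generic bottom-up saturation: start from the empty set and keep
    adding candidates derivable from what has been found so far."""
    found = set()
    changed = True
    while changed:
        changed = False
        for c in candidates:
            if c not in found and derivable(c, found):
                found.add(c)
                changed = True
    return found

# Illustrative dependencies between hypothetical bag descriptions:
# a bag is derivable once all bags it needs as children are available.
deps = {'leaf': set(), 'mid': {'leaf'}, 'root': {'mid'}, 'orphan': {'missing'}}
buildable = least_fixed_point(deps, lambda c, found: deps[c] <= found)
```

The loop adds `leaf`, then `mid`, then `root`, and never adds `orphan`, whose requirement can never be met; termination is guaranteed because the set of candidates is finite and only grows.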
\begin{restatable}{lemma}{lemalgoconnected} \label{lem:algo-connected} The $(\ell,\ell)$-model problem for an $\ensuremath{{\cal{ALC}}\xspace}$ KB ${\cal{K}}$, a UCRPQ $\Phi$, and an environment ${\cal{E}}$ can be solved in time \[2^{O(\|{\cal{K}}\|)+ 2^{\mathrm{poly}(\|\Phi\|)}}\] given an oracle for the $(\ell, \ell+1)$-model problem (with the same UCRPQ and TBox). \end{restatable} At the bottom of the recursion we need to check if there exists an $(n+1, n+1)$-model for ${\cal{K}}$ modulo ${\cal{E}}$. Now, a $\widehat{\cal{B}}$-decorated interpretation is level-$(n+1)$ iff it is \emph{discrete}; that is, it has no edges at all. This allows solving the problem by direct inspection. Because the ABox is trivial and $\ell$-consistency is preserved under restrictions of the domain, it is enough to go through all singleton interpretations. \begin{restatable}{lemma}{lemaldiscrete} \label{lem:algo-discrete} The $(n+1,n+1)$-model problem for an $\ensuremath{{\cal{ALC}}\xspace}$ KB ${\cal{K}}$, a UCRPQ $\Phi$, and an environment ${\cal{E}}$ can be solved in time $2^{O(\|{\cal{K}}\|)+ 2^{\mathrm{poly}(\|\Phi\|)}}$. \end{restatable} \section{Incrementing the Level of Models} \label{ssec:interpretations} In this section we solve the $(\ell,\ell')$-model problem by reduction to multiple instances of the $(\ell',\ell')$-model problem for $\ell<\ell'$; that is, we eliminate level-$\ell$ edges from the interpretations. As in Section~\ref{ssec:queries}, we rely on tree-like models of a special form; this time, however, they may be infinite and an additional step is needed to turn them into finite ones. A $\widehat{\cal{B}}$-decorated interpretation is \emph{$\ell'$-flat} if it is a tree-like interpretation where all edges between bags have level strictly below $\ell'$, whereas all edges inside bags have level at least $\ell'$.
\begin{restatable}{lemma}{lemunravellingflat} \label{lem:unravelling-flat} If there exists a finite $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ then there exists an $\ell'$-flat $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ with bounded degree and bag size. \end{restatable} In contrast to Lemma~\ref{lem:unravelling-connected}, the above only shows that the reformulated condition is necessary. We show that it is also sufficient by turning an arbitrary $\ell'$-flat $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ with bounded degree and bag size into a finite $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$. For this we use \emph{coloured blocking}. For $d \in\Delta^{\cal{I}}$, the \emph{$m$-neighbourhood $N_m^{{\cal{I}}}(d)$ of $d$} is the interpretation obtained by restricting ${\cal{I}}$ to elements $e \in \Delta^{\cal{I}}$ within distance $m$ from $d$ in ${\cal{I}}$, enriched with a fresh concept interpreted as $\{d\}$. A \emph{colouring of ${\cal{I}}$ with $k$ colours} is an extension ${\cal{I}}'$ of ${\cal{I}}$ to $k$ fresh concept names $B_1, \dots, B_k$ such that $B_1^{{\cal{I}}'}, \dots, B_k^{{\cal{I}}'}$ is a partition of $\Delta^{{\cal{I}}'} = \Delta^{{\cal{I}}}$. We say that $d \in B_i^{{\cal{I}}'}$ has colour $B_i$. We call ${\cal{I}}'$ \emph{$m$-proper} if for each $d \in \Delta^{{\cal{I}}'}$ all elements of $N_m^{{\cal{I}}'}(d)$ have different colours. \begin{fact}[\protect\citeauthor{GogaczIM18} \protect\citeyear{GogaczIM18}] \label{fact:coloured-blocking} If ${\cal{I}}$ has bounded degree, then for all $m\geq 0$ there exists an $m$-proper colouring ${\cal{I}}'$ of ${\cal{I}}$ with finitely many colours. Consider an interpretation ${\cal{J}}$ obtained from ${\cal{I}}'$ by redirecting some edges such that the old target and the new target have isomorphic $m$-neighbourhoods in ${\cal{I}}'$.
Then, for each conjunctive query $\varphi$ with at most $\sqrt{m}$ binary atoms, if ${\cal{I}}\not\models \varphi$, then ${\cal{J}}\not\models \varphi$. \end{fact} Let ${\cal{I}}$ be an $\ell'$-flat $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ of bounded degree with bags of size at most $M$. In order to make Fact~\ref{fact:coloured-blocking} applicable, we need to express the $\ell'$-consistency condition over ${\cal{I}}$ by means of a finite set of conjunctive queries, rather than CRPQs. Towards this end, we show that over ${\cal{I}}$ each level-$\ell'$ CRPQ is equivalent to a UCQ. We rely on the following observation. \begin{restatable}{lemma}{lemboundedpaths} \label{lem:bounded-paths} In a match of a $\widehat{\cal{B}}$-decorated CRPQ in a $\widehat{\cal{B}}$-decorated interpretation, each path witnessing an RPQ atom of end level at least $\ell'$ uses at most $n - \ell'$ edges of level strictly below $\ell'$. \end{restatable} We say that a CRPQ $\varphi$ is \emph{bounded by $K$} over an interpretation ${\cal{J}}$ if for each match of $\varphi$ in ${\cal{J}}$ each RPQ atom of $\varphi$ can be witnessed by a path of length at most $K$. \begin{restatable}{lemma}{lemboundedcrpq} \label{lem:bounded-crpq} Let ${\cal{J}}$ be a $\widehat{\cal{B}}$-decorated interpretation made up of disjoint level-$\ell'$ interpretations of size at most $M$ connected by edges of level strictly below $\ell'$. Assuming $\ell' \leq n$, each level-$\ell'$ CRPQ is bounded by $M (n-\ell'+1)^2$ over ${\cal{J}}$.
\end{restatable} For a ${\cal{B}}$-decorated CRPQ $\varphi$, let $\varphi^{(K)}$ be the UCQ obtained by taking the union of all CQs that can be obtained from $\varphi$ by eliminating each RPQ atom ${\cal{B}}_{q,q'}(x, x')$ in one of the following ways: either remove the atom and equate variables $x$ and $x'$, or replace the atom with a CQ of the form \[ r_1(x, y_1) \land r_2(y_1,y_2)\land \dots \land r_N(y_{N-1},x') \] where $N \leq K$, $y_1, \dots, y_{N-1}$ are fresh variables, and there is a run of ${\cal{B}}$ on $r_1\dots r_N$ that begins in $q$ and ends in $q'$. \begin{fact} \label{fact:bounded} If a ${\cal{B}}$-decorated CRPQ $\varphi$ is bounded by $K$ on an interpretation ${\cal{J}}$, then ${\cal{J}} \models \varphi$ iff ${\cal{J}} \models \varphi^{(K)}$. \end{fact} The final step before we can apply Fact~\ref{fact:coloured-blocking} is to express $\ell'$-consistency as query evaluation. Consider a partition of a fragment $\varphi$ of $\Phi$ into a CRPQ $\varphi'$ of level $\ell'$ and fragments $\varphi_1, \varphi_2, \dots, \varphi_k$ with $\textit{var}(\varphi_i)\cap \textit{var}(\varphi_j) = \emptyset$ for $i\neq j$, $V_i = \textit{var}(\varphi_i) \cap \textit{var}(\varphi')$, and $\emptyset \neq V\subseteq \textit{var}(\varphi) \cap \textit{var}(\varphi')$. Let $\psi$ be the CRPQ obtained from $\varphi'$ as follows. Begin with a copy of $\varphi'$. For each $i \in \{1, \dots, k\}$, add to $\psi$ an atom $A_{\varphi_i, V_i}^{\kappa_i}(u)$ for some $\kappa_i$ satisfying $\kappa_i (x) \leq \ell$ for all $x \in \textit{var}(\varphi_i)$ and $\kappa_i(x)=\ell$ for all $x \in V_i$, and some variable $u$ in $V_i$ ($V_i$ is nonempty, because $\varphi$ is connected), and equate all variables in $V_i$.
Similarly, add to $\psi$ the atom $\bar{A}^{\kappa}_{\varphi', V}(u)$ for some $\kappa$ satisfying $\kappa (x) = \ell$ for all $x \in \textit{var}(\varphi) \cap \textit{var}(\varphi')$ and $\kappa(x) = \kappa_i(x)$ for all $x \in \textit{var}(\varphi) \cap \textit{var}(\varphi_i)$, and some $u\in V$, and equate all variables in $V$. Let $\Phi_{\ell'}$ be the union of all CRPQs $\psi$ obtained as above for different choices of $\varphi$, $\varphi'$, $\varphi_1, \varphi_2, \dots, \varphi_k$, $V$, and $\kappa_1, \kappa_2, \dots, \kappa_k$. Note that $\Phi_{\ell'}$ is a union of level-$\ell'$ CRPQs. If $\ell'>n$, $\Phi_{\ell'}$ is a UCQ. \begin{restatable}{lemma}{lemconsistencyaseval} \label{lem:consistency-as-evaluation} If ${\cal{J}}$ is a $\widehat {\cal{B}}$-decorated interpretation, then ${\cal{J}}$ is $\ell'$-consistent iff ${\cal{J}} \not\models \Phi_{\ell'}$. \end{restatable} Let $K=M(n-\ell'+1)^2$. Let $t$ be the maximal number of binary atoms in one CQ in $\Phi_{\ell'}^{(K)}$. (Note that if $\ell'>n$, the query $\Phi_{\ell'}$ is a UCQ and $\Phi_{\ell'}^{(K)}$ coincides with $\Phi_{\ell'}$.) Fix $m = t^2$ and let ${\cal{I}}'$ be an $m$-proper colouring of ${\cal{I}}$. On each infinite branch, select the first bag ${\cal{M}}$ such that for some bag ${\cal{M}}'$ higher on this branch, the $m$-neighbourhood of the target element $e$ of the edge from the parent of ${\cal{M}}$ to ${\cal{M}}$ is isomorphic to the $m$-neighbourhood of the target $e'$ of the edge from the parent of ${\cal{M}}'$ to ${\cal{M}}'$. Because the number of non-isomorphic $m$-neighbourhoods in a structure of bounded degree is bounded, the depth of the selected bags in the tree of bags is also bounded. The set of selected bags is finite and forms a maximal antichain. Let ${\cal{F}}$ be the interpretation obtained by taking the union of all strict ancestors of the selected bags, and, for each element $e$ as above, redirecting the edge coming from the parent of ${\cal{M}}$ to $e'$.
Clearly, ${\cal{F}}$ is a finite level-$\ell$ interpretation. It is routine to check that ${\cal{F}}\models^{\cal{E}} {\cal{K}}$. It remains to prove that ${\cal{F}}$ is $\ell'$-consistent. We know that ${\cal{I}}$ is $\ell'$-consistent. By Lemma~\ref{lem:consistency-as-evaluation}, ${\cal{I}} \not \models \Phi_{\ell'}$. By Lemma~\ref{lem:bounded-crpq} and Fact~\ref{fact:bounded}, ${\cal{I}} \not \models \Phi_{\ell'}^{(K)}$. By Fact~\ref{fact:coloured-blocking}, ${\cal{F}} \not\models \Phi_{\ell'}^{(K)}$. By construction, ${\cal{F}}$ satisfies the assumptions of Lemma~\ref{lem:bounded-crpq}. Hence, by Lemma~\ref{lem:bounded-crpq} and Fact~\ref{fact:bounded}, ${\cal{F}} \not\models \Phi_{\ell'}$. We conclude that ${\cal{F}}$ is $\ell'$-consistent using Lemma~\ref{lem:consistency-as-evaluation}. Thus we have proved the converse of Lemma~\ref{lem:unravelling-flat}. \begin{restatable}{lemma}{lemfoldingback} \label{lem:folding-back} If there exists an $\ell'$-flat $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ with bounded degree and bag size then there exists a finite $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$. \end{restatable} Combining Lemmas~\ref{lem:consistent},~\ref{lem:unravelling-flat}, and~\ref{lem:folding-back}, we get that there is a finite $(\ell,\ell')$-model of ${\cal{K}}$ modulo ${\cal{E}}$ iff there is a bounded-degree $\ell'$-flat model of ${\cal{K}}$ modulo ${\cal{E}}$ whose bags are $\ell'$-consistent and have bounded size. Since in an $\ell'$-flat model each bag is a level-$\ell'$ interpretation, restricting our search to one-bag models makes the problem an instance of the $(\ell', \ell')$-model problem. Models consisting of multiple bags can be built coinductively top-down by means of a greatest fixed point algorithm (similar to type elimination), using the $(\ell', \ell')$-model problem to check if each bag exists.
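The coinductive top-down construction is a greatest fixed point, dual to the bottom-up saturation used in the previous section. The sketch below is again only our illustration of type elimination, not the authors' procedure: `supported` abstracts the check (which would invoke the $(\ell',\ell')$-model oracle) that a candidate bag description can still be realized given the surviving candidates.

```python
def greatest_fixed_point(candidates, supported):
    """Type elimination: start from all candidates and repeatedly discard
    those no longer supported by the surviving set, until stable."""
    alive = set(candidates)
    changed = True
    while changed:
        changed = False
        for c in list(alive):
            if not supported(c, alive):
                alive.discard(c)
                changed = True
    return alive

# Illustrative support relation between hypothetical bag descriptions:
# a bag survives only if every child it requires is still realizable.
req = {'t1': {'t2'}, 't2': {'t3'}, 't3': set(), 't4': {'t5'}}
viable = greatest_fixed_point(req, lambda c, alive: req[c] <= alive)
```

Here `t4` is eliminated because its required child `t5` is not among the candidates, while `t1`, `t2`, `t3` survive; the result is the largest set closed under the support condition, matching the coinductive reading.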
\begin{restatable}{lemma}{lemalgoflat} \label{lem:algo-flat} The $(\ell,\ell')$-model problem for an $\ensuremath{{\cal{ALC}}\xspace}$ KB ${\cal{K}}$, a UCRPQ $\Phi$, and an environment ${\cal{E}}$ can be solved in time \[2^{O(\|{\cal{K}}\|)+ 2^{\mathrm{poly}(\|\Phi\|)}}\] given an oracle for the $(\ell', \ell')$-model problem (with the same UCRPQ and TBox). \end{restatable} \section*{Acknowledgments} This work was supported by Poland's National Science Centre grant 2018/30/E/ST6/00042. It also benefited from inspiring discussions with Charles Paperman. \bibliographystyle{kr} \section{Preliminaries} \label{sec:preliminaries} \subsection{Description Logics} We consider a vocabulary consisting of countably infinite disjoint sets of \emph{concept names} $\ensuremath{\mn{N_{\mn{C}}}}$, \emph{role names} $\ensuremath{\mn{N_{\mn{R}}}}$, and \emph{individual names} $\mn{N_I}$. \emph{$\ensuremath{{\cal{ALC}}\xspace}$-concepts $C,D$} are defined by the grammar \[ C,D ::= A \mid \neg C \mid C \sqcap D \mid \exists r. C \] where $A \in \mn{N_C}$ and $r \in \mn{N_R}$. We use standard abbreviations $\bot$, $\top$, $C\sqcup D$ and $\forall r.C$. An \emph{$\ensuremath{{\cal{ALC}}\xspace}$-TBox ${\cal{T}}$} is a finite set of \emph{concept inclusions (CIs)} $C\sqsubseteq D$, where $C,D$ are $\ensuremath{{\cal{ALC}}\xspace}$-concepts. An \emph{ABox} ${\cal{A}}$ is a finite non-empty set of \emph{concept} and \emph{role assertions} of the form $A(a)$, $r(a,b)$, where $A \in \mn{N_C}$, $r \in \mn{N_R}$ and $\{a,b\} \subseteq \mn{N_I}$. \new{We write $\mn{ind}({\cal{A}})$ for the \emph{set of individual names} occurring in ${\cal{A}}$.} A \emph{knowledge base (KB)} is a pair ${\cal{K}}=({\cal{T}}, {\cal{A}})$. \new{We write $\mn{CN}({\cal{K}})$ and $\mn{rol}({\cal{K}})$ for the \emph{sets of all concept and role names} occurring in ${\cal{K}}$.} We let $\|{\cal{K}}\|$ be the total size of the representation of ${\cal{K}}$. 
Without loss of generality, we assume throughout the paper that all CIs are in one of the following \emph{normal forms}: \[\bigsqcap_i A_i \sqsubseteq \bigsqcup_j B_j, \quad A \sqsubseteq \exists r.B, \quad A \sqsubseteq \forall r.B, \] where $A,A_i,B,B_j \in \mn{N_C}$, $r\in \mn{N_R}$, and empty disjunction and conjunction are equivalent to $\bot$ and $\top$, respectively. Additionally, for each $A \in \mn{CN}({\cal{K}})$ there is a complementary $\bar A \in \mn{CN}({\cal{K}})$ axiomatized with $\top \sqsubseteq A \sqcup \bar A$ and $A \sqcap \bar A \sqsubseteq \bot$. \subsection{Interpretations} The semantics is given as usual via \emph{interpretations} ${\cal{I}}= (\Delta^{\cal{I}}, \cdot^{\cal{I}})$ consisting of a non-empty \emph{domain} $\Delta^{\cal{I}}$ and an \emph{interpretation function $\cdot^{\cal{I}}$} mapping concept names to subsets of the domain and role names to binary relations over the domain, \new{and individual names to elements of the domain.} The interpretation of complex concepts $C$ is defined in the usual way~\cite{DLBook}. An interpretation ${\cal{I}}$ is a \emph{model of a TBox ${\cal{T}}$}, written ${\cal{I}} \models{\cal{T}}$ if $C^{\cal{I}} \subseteq D^{\cal{I}}$ for all CIs $C\sqsubseteq D\in {\cal{T}}$. \new{It is a \emph{model of an ABox ${\cal{A}}$}, written ${\cal{I}}\models {\cal{A}}$, if $\mathsf{ind}({\cal{A}}) \subseteq \Delta^{\cal{I}}$, $a^{\cal{I}} = a$ for each $a\in\mn{ind}({\cal{A}})$, $(a,b)\in r^{\cal{I}}$ for all $r(a,b)\in {\cal{A}}$, and $a\in A^{\cal{I}}$ for all $A(a)\in {\cal{A}}$.} \new{The first two conditions constitute the so-called \emph{standard name assumption.}} Finally, ${\cal{I}}$ is a \emph{model of a KB ${\cal{K}}=({\cal{T}}, {\cal{A}})$}, written ${\cal{I}} \models {\cal{K}}$, if ${\cal{I}}\models {\cal{T}}$ and ${\cal{I}} \models {\cal{A}}$. An interpretation ${\cal{I}}$ is \emph{finite} if $\Delta^{\cal{I}}$ is finite. 
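To make the normal-form semantics concrete, here is a small Python sketch (our illustration, with illustrative names) that computes the extensions of $\exists r.B$ and $\forall r.B$ in a finite interpretation given by its concept and role extensions, and checks a CI of the form $A \sqsubseteq \exists r.B$.

```python
def ext_exists(r, B, roles, concepts):
    """Extension of the concept  exists r.B : elements with an r-edge into B."""
    return {d for (d, e) in roles.get(r, set())
            if e in concepts.get(B, set())}

def ext_forall(r, B, domain, roles, concepts):
    """Extension of  forall r.B : elements all of whose r-successors lie in B."""
    return {d for d in domain
            if all(e in concepts.get(B, set())
                   for (x, e) in roles.get(r, set()) if x == d)}

def satisfies_ci_exists(A, r, B, roles, concepts):
    """Check the normal-form CI  A <= exists r.B  in the interpretation."""
    return concepts.get(A, set()) <= ext_exists(r, B, roles, concepts)

# A toy finite interpretation with domain {1, 2, 3}.
domain = {1, 2, 3}
concepts = {'A': {1}, 'B': {2}}
roles = {'r': {(1, 2), (3, 3)}}
```

In this interpretation only element 1 has an $r$-edge into $B$, element 2 satisfies $\forall r.B$ vacuously (it has no $r$-successors), and the CI $A \sqsubseteq \exists r.B$ holds.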
An interpretation ${\cal{I}}'$ is a \emph{sub-interpretation} of ${\cal{I}}$, written as ${\cal{I}}'\subseteq {\cal{I}}$, if $\Delta^{{\cal{I}}'}\subseteq \Delta^{\cal{I}}$, $A^{{\cal{I}}'}\subseteq A^{\cal{I}}$, and $r^{{\cal{I}}'}\subseteq r^{{\cal{I}}}$ for all $A\in\mn{N_C}$ and $r\in \mn{N_R}$. For $\Sigma \subseteq \ensuremath{\mn{N_{\mn{C}}}} \cup \ensuremath{\mn{N_{\mn{R}}}}$, ${\cal{I}}$ is an interpretation \emph{over signature} $\Sigma$ if $A^{\cal{I}}=\emptyset$ and $r^{\cal{I}}=\emptyset$ for all $A\in \ensuremath{\mn{N_{\mn{C}}}} \setminus \Sigma$ and $r\in \ensuremath{\mn{N_{\mn{R}}}}\setminus\Sigma$. The union ${\cal{I}} \cup {\cal{J}}$ of ${\cal{I}}$ and ${\cal{J}}$ is an interpretation such that $\Delta^{{\cal{I}} \cup {\cal{J}}} = \Delta^{{\cal{I}}} \cup \Delta^{{\cal{J}}}$, $A^{{\cal{I}} \cup {\cal{J}}} = A^{{\cal{I}}} \cup A^{{\cal{J}}}$, and $r^{{\cal{I}} \cup {\cal{J}}} = r^{{\cal{I}}} \cup r^{{\cal{J}}}$ for all $A\in\mn{N_C}$ and $r\in \mn{N_R}$. A \emph{unary ${\cal{K}}$-type} is a subset of $\mn{CN}({\cal{K}})$ including either $A$ or $\bar A$ for each $A \in \mn{CN}({\cal{K}})$. For an interpretation ${\cal{I}}$ and an element $d \in \Delta^{\cal{I}}$, the \emph{unary ${\cal{K}}$-type of $d$ in ${\cal{I}}$} is $\mn{tp}^{\cal{I}}(d) = \left \{ A \in \mn{CN}({\cal{K}}) \bigm | d \in A^{\cal{I}}\right\}$. We say that ${\cal{I}}$ \emph{realizes} a unary ${\cal{K}}$-type $\tau$ if $\tau = \mn{tp}^{\cal{I}}(d)$ for some $d \in \Delta^{\cal{I}}$. \subsection{Queries and Finite Entailment} We next introduce the query language. We concentrate on Boolean queries, that is, queries without answer variables. The extension to queries with answer variables is standard; see, for example,~\cite{GlimmLHS08}. 
A \emph{conjunctive regular path query (CRPQ)} is a first-order formula \[\varphi=\exists\mathbf x\, \psi(\mathbf x)\] such that $\psi(\mathbf x)$ is constructed using $\wedge$ over atoms of the form $A(t)$ or ${\cal{E}}(t,t')$ where $A \in \mathsf{N_C}$, $t,t'$ are variables from $\mathbf x$ or individual names from $\mn{N_I}$, and ${\cal{E}}$ is a \emph{path expression} defined by the grammar $${\cal{E}},{\cal{E}}' ::= r \mid {\cal{E}}^* \mid {\cal{E}} \cup {\cal{E}}' \mid {\cal{E}}\circ{\cal{E}}'$$ where $r\in \ensuremath{\mn{N_{\mn{R}}}}$. Thus, ${\cal{E}}$ is essentially a regular expression over the (infinite) alphabet $\{r\mid r\in \ensuremath{\mn{N_{\mn{R}}}} \}$. The set of individual names in $\varphi$ is denoted with $\mn{ind}(\varphi)$. A \emph{conjunctive query (CQ)} is a CRPQ that does not use the operators $*,\cup$ and $\circ$ in path expressions, and a \emph{regular path query (RPQ)} consists of a single atom of the form ${\cal{E}}(t,t')$. The semantics of CRPQs is defined via matches. Let us fix a CRPQ $\varphi=\exists \mathbf x \,\psi(\mathbf x)$ and an interpretation ${\cal{I}}$. A \emph{match for $\varphi$ in ${\cal{I}}$} is a function \[\pi:\mathbf x \cup \mathsf{ind}(\varphi)\to \Delta^{\cal{I}} \] such that $\pi(a)=a$, for all $a\in \mn{ind}(\varphi)$, and ${\cal{I}},\pi\models\psi(\mathbf x)$ under the standard semantics of first-order logic extended with a rule for atoms of the form ${\cal{E}}(t,t')$. 
More formally, we define: \begin{itemize} \item ${\cal{I}},\pi\models \psi_1\wedge\psi_2$ iff ${\cal{I}},\pi\models \psi_1$ and ${\cal{I}},\pi\models \psi_2$; \item ${\cal{I}},\pi\models A(t)$ iff $\pi(t) \in A^{\cal{I}}$; \item ${\cal{I}},\pi\models {\cal{E}}(t,t')$ iff $(\pi(t),\pi(t'))\in{\cal{E}}^{\cal{I}}$, where ${\cal{E}}^{\cal{I}}$ is defined inductively as $({\cal{E}}^*)^{\cal{I}} = ({\cal{E}}^{\cal{I}})^*$, $({\cal{E}}_1\cup {\cal{E}}_2)^{\cal{I}} = {\cal{E}}_1^{\cal{I}}\cup {\cal{E}}_2^{\cal{I}}$, $({\cal{E}}_1\circ {\cal{E}}_2)^{\cal{I}} = {\cal{E}}_1^{\cal{I}}\circ {\cal{E}}_2^{\cal{I}}$. \end{itemize} An interpretation ${\cal{I}}$ \emph{satisfies} $\varphi$, written ${\cal{I}}\models \varphi$, if there exists a match for $\varphi$ in ${\cal{I}}$. A \emph{union of CRPQs (UCRPQ)} is a finite set of CRPQs and a \emph{union of CQs (UCQ)} is a finite set of CQs. An interpretation ${\cal{I}}$ satisfies a UCRPQ $\Phi$, written as ${\cal{I}}\models \Phi$, if ${\cal{I}} \models \varphi$ for some $\varphi \in \Phi$. We say that ${\cal{K}}$ \emph{finitely entails} $\Phi$, written ${\cal{K}}\models_{\mathsf{fin}} \Phi$, if each finite model of ${\cal{K}}$ satisfies $\Phi$. A model of ${\cal{K}}$ that does not satisfy $\Phi$ is a \emph{counter-model}. The \emph{finite entailment problem} asks if a given KB ${\cal{K}}$ finitely entails a given query $\Phi$. \subsection{UCRPQs via Semiautomata} We work with UCRPQs represented by means of a \emph{semiautomaton} \cite{AlgebraicAutomata} ${\cal{B}} = (Q, \Gamma, \delta)$ where $Q$ is a finite set of states, $\Gamma \subseteq \{r\mid r \in \mathsf{N_R}\}$ is a finite alphabet (throughout the paper we assume $\Gamma = \mn{rol}({\cal{K}})$), and $\delta:Q\times\Gamma \to Q$ is the transition function.
A semiautomaton is essentially a deterministic finite automaton without initial and final states; a run of a semiautomaton ${\cal{B}}$ over a word $w$ is defined just like for a finite automaton, except that it can begin in any state and there is no notion of accepting runs. Under this representation, an RPQ is an atom over a binary predicate of the form ${\cal{B}}_{q,q'}$ where $q,q' \in Q$ are states of ${\cal{B}}$. We let ${\cal{I}},\pi\models {\cal{B}}_{q,q'}(t,t')$ iff $(\pi(t),\pi(t')) \in {\cal{B}}_{q,q'}^{\cal{I}}$ where ${\cal{B}}_{q,q'}^{\cal{I}}$ is the set of pairs $(e,e')$ such that for some $n\in \mathbb{N}$ there exist $r_1,\ldots,r_{n} \in \Gamma$ and $e_0,\ldots,e_n\in \Delta^{\cal{I}}$ such that \begin{itemize} \item $e_0=e$ and $e_n=e'$; \item $(e_{i-1},e_i) \in (r_i)^{\cal{I}}$ for all $i\in \{1,\ldots,n\}$; \item there exists a run of ${\cal{B}}$ on the word $r_1\ldots r_{n}$ that begins in state $q$ and ends in state $q'$. \end{itemize} We also allow \emph{edge atoms} of the form $r(x,x')$ for $r\in\Gamma$. Each UCRPQ $\Phi$ can be effectively rewritten into a UCRPQ $\Phi'$ expressed by means of a semiautomaton ${\cal{B}}$ of size $k\cdot 2^{O(m)}$ where $k$ is the number of path expressions in $\Phi$ and $m$ is their maximal size. The size of CRPQs in $\Phi'$ is bounded by the size of CRPQs in $\Phi$ and $|\Phi'| = 2^{\mathrm{poly}(\|\Phi\|)}$, where $\|\Phi\|$ is the total size of $\Phi$. For simplicity we work with KBs ${\cal{K}}=({\cal{T}},{\cal{A}})$ where the ABox ${\cal{A}}$ is \emph{trivial}; that is, \new{$\mn{ind}({\cal{A}}) = \{a\}$} for some $a\in\mn{N_I}$ and ${\cal{A}}$ contains only concept assertions. The general finite entailment problem can be reduced to this special case using the following lemma.
\begin{restatable}{lemma}{lemabox} \label{lem:abox} Given an oracle for finite entailment for trivial ABoxes, the general finite entailment problem $({\cal{T}},{\cal{A}})\models_{\mathsf{fin}} \Phi$ can be decided in time $2^{\mathrm{poly}(\|({\cal{T}},{\cal{A}})\|)\cdot 2^{\mathrm{poly}(\|\Phi\|)}}$ using calls to the oracle for ${\cal{K}}'=({\cal{T}},{\cal{A}}')$ and $\Phi'$ consisting of $2^{\mathrm{poly}(\|\Phi\|)}$ CRPQs of linear size over the same semiautomaton as $\Phi$. \end{restatable} \subsection{Entailment Modulo Environment} We solve the entailment problem using a divide-and-conquer approach in which counter-models are decomposed into simpler ones, whose existence is easier to decide. Each level of this recursive procedure will involve certain modifications to the TBox. For complexity reasons we need to pay close attention to these changes, making sure that no blow-up is involved. To make this easier, we generalize the entailment problem by turning the modifications into a separate part of the input, which allows fixing the TBox for the duration of the whole procedure. At every level of the recursion, we will need to reason `externally' about the way simpler pieces are put together to form the larger counter-model, and `internally' about how to specify the required properties of a piece depending on what is happening outside. We will think of the models as induced subinterpretations of a larger interpretation. Dually, the remaining part of the larger interpretation can be seen as an external context, in which our models live. The relevant features of this context will be represented by environments, which we now define. An \emph{environment} ${\cal{E}} = (\Theta, \varepsilon)$ consists of a set $\Theta$ of unary types and a function $\varepsilon: \Theta \to 2^{\mn{rol}({\cal{K}}) \times \mn{CN}({\cal{K}})}$.
The intended meaning is that only types from $\Theta$ are allowed and each element of an allowed unary type $\tau$ has an $r$-edge to an element in the extension of $B$ in the external context for each $(r,B)\in \varepsilon(\tau)$. Accordingly, we say that ${\cal{I}}$ is a \emph{model of ${\cal{K}}$ modulo ${\cal{E}}$} and write ${{\cal{I}} \models^{\cal{E}} {\cal{K}}}$ if ${\cal{I}}$ realizes only unary types from $\Theta$ and it is a model of ${\cal{K}}$ under the following \emph{relaxed semantics of existential restrictions}: \begin{itemize} \item for every existential restriction $\exists r. B$ in ${\cal{K}}$ and every element $d \in \Delta^{\cal{I}}$, $d \in (\exists r. B)^{\cal{I}}$ iff either there is an $r$-edge in ${\cal{I}}$ from $d$ to an element $e \in B^{\cal{I}}$ or $(r,B) \in \varepsilon\left(\mathsf{tp}^{\cal{I}}(d)\right)$. \end{itemize} (The semantics of universal restrictions is not altered and it is the environment's responsibility to account for them.) Correspondingly, a query $\Phi$ is \emph{finitely entailed by ${\cal{K}}$ modulo ${\cal{E}}$}, written ${\cal{K}} \models_{\mathsf{fin}}^{\cal{E}} \Phi$, if for each finite interpretation ${\cal{I}}$, if ${\cal{I}} \models^{\cal{E}} {\cal{K}}$ then ${\cal{I}} \models \Phi$. The problem of \emph{finite entailment modulo environment} is to decide for a given KB ${\cal{K}}$, environment ${\cal{E}}$, and query $\Phi$ if ${\cal{K}} \models_{\mathsf{fin}}^{\cal{E}} \Phi$. Note that finite entailment modulo environment and ordinary finite entailment are interreducible. In one direction, it is enough to take the set of all unary ${\cal{K}}$-types for $\Theta$ and set $\varepsilon(\tau) = \emptyset$ for all $\tau \in \Theta$. In the other direction, the conditions imposed on unary types and the relaxed semantics of existential restrictions can be expressed easily in the TBox. The latter reduction, however, might significantly increase the size of the TBox.
It is easier to control the size of the input at different levels of the recursion when these conditions are explicitly represented in the environment.
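As an illustration of the relaxed semantics (our own sketch, with illustrative names), checking whether an element satisfies $\exists r.B$ modulo an environment ${\cal{E}} = (\Theta, \varepsilon)$ amounts to looking for an internal witness or an environment promise for the element's unary type:

```python
def satisfies_exists_modulo_env(d, r, B, tp, roles, concepts, eps):
    """Relaxed semantics of  exists r.B  modulo an environment:
    d satisfies it iff it has an internal r-edge into B's extension,
    or the environment promises such an edge for d's unary type."""
    internal = any(e in concepts.get(B, set())
                   for (x, e) in roles.get(r, set()) if x == d)
    return internal or (r, B) in eps.get(tp[d], set())

# Toy data: element 1 relies on the environment, element 2 on an internal edge.
tp = {1: 'tau1', 2: 'tau2'}
roles = {'r': {(2, 1)}}
concepts = {'B': {1}}
eps = {'tau1': {('r', 'B')}, 'tau2': set()}
```

Element 1 has no outgoing $r$-edge but its type carries the promise $(r,B)$, while element 2 has a genuine internal witness; neither would satisfy $\exists r.A$ for a concept $A$ with empty extension and no promise.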
\section{Introductory Facts}\label{hb1} The Hahn-Banach theorem has many applications in different fields of analysis and has attracted the attention of several authors, such as Vincent-Smith \cite{Vic} and Turan \cite{Tu}. In the present paper, we give an extension of the Hahn-Banach theorem to lattice normed $f$-algebras, together with some applications. The one-step extension in our theorem is not similar to that in other Hahn-Banach theorems. Vector lattices (i.e., Riesz spaces) are ordered vector spaces that have many applications in measure theory and operator theory, as well as in economics. We assume that the reader is familiar with the elementary theory of vector lattices, and we refer to \cite{ABPO,LZ,Za} for information on vector lattices and as sources of unexplained terminology. Besides, all vector lattices are assumed to be real and Archimedean. A vector lattice $E$ is a {\em lattice-ordered algebra} (briefly, {\em $l$-algebra}) if $E$ is an associative algebra whose positive cone $E_+$ is closed under the algebra multiplication. A Riesz algebra $E$ is called an \textit{$f$-algebra} if $E$ has the additional property that $x\wedge y=0$ implies $(x\cdot z)\wedge y=(z\cdot x)\wedge y=0$ for all $z\in E_+$. For an order complete (i.e., Dedekind complete) vector lattice $E$, the set $L_b(E)$ of all order bounded operators on $E$ is an example of a lattice-ordered algebra, as is the set $C(X)$ of all real valued continuous functions on a topological space $X$. However, $L_b(E)$ is not an $f$-algebra: it is an Archimedean vector lattice but not commutative, whereas every Archimedean $f$-algebra is commutative; see for example \cite[Theorem 140.10.]{Za}. Consider $Orth(E):=\{T\in L_b(E):x\perp y\ \text{implies}\ Tx\perp y\}$, the set of orthomorphisms on a vector lattice $E$. Then the space $Orth(E)$ is not only a vector lattice but also an $f$-algebra.
On the other hand, a sublattice $A$ of an $f$-algebra $E$ is called an $f$-subalgebra of $E$ whenever it is also an $f$-algebra under the multiplication operation in $E$. In this paper, we assume that if a positive element has an inverse then the inverse is also positive. We refer the reader to \cite{ABPO,Ay1,Ay2,Hu,P,Za} for much more information on $f$-algebras. Also, for more detailed information on the following example, we refer the reader to \cite[p.13]{BGKKKM}. \begin{exam}\label{example of orh} Let $E$ be a vector lattice. An order bounded band preserving operator $T:D\to E$ on an order dense ideal $D\subseteq E$ is called an extended orthomorphism. Let $Orth^\infty(E)$ denote the set of all extended orthomorphisms: denote by $\mathcal{M}$ the collection of all pairs $(D;\pi)$, where $D$ is an order dense ideal in $E$ and $\pi\in Orth(D,E)$. Then the space $Orth^\infty(E)$ is an $f$-algebra. Moreover, $Orth(E)$ is an $f$-subalgebra of $Orth^\infty(E)$. On the other hand, $\mathcal{L}(E)$ stands for the order ideal generated by the identity operator $I_E$ in $Orth(E)$. Then $\mathcal{L}(E)$ is an $f$-subalgebra of $Orth(E)$. \end{exam} Recall that a net $(x_\alpha)_{\alpha\in A}$ in a vector lattice $X$ is called \textit{order convergent} (or shortly, \textit{$o$-convergent}) to $x\in X$ if there exists another net $(y_\beta)_{\beta\in B}$ satisfying $y_\beta \downarrow 0$ (i.e., $y_\beta \downarrow$ and $\inf(y_\beta)=0$) such that, for any $\beta\in B$, there exists $\alpha_\beta\in A$ with $|x_\alpha-x|\leq y_\beta$ for all $\alpha\geq\alpha_\beta$. In this case, we write $x_\alpha\xrightarrow{o} x$. On the other hand, for a given positive element $u$ in a vector lattice $E$, a net $(x_\alpha)$ in $E$ is said to converge $u$-uniformly to the element $x\in E$ whenever, for every $\varepsilon>0$, there exists an index $\alpha_0$ such that $\lvert x_\alpha-x\rvert<\varepsilon u$ for every $\alpha\geq\alpha_0$.
Moreover, $E$ is said to be $u$-uniformly complete if every $u$-uniform Cauchy net has a $u$-uniform limit; see \cite{LZ}. Let $X$ be a vector space, $E$ be a vector lattice, and $p:X \to E_+$ be a vector norm (i.e., $p(x)=0\Leftrightarrow x=0$, $p(\lambda x)=|\lambda|p(x)$ for all $\lambda\in\mathbb{R}$, $x\in X$, and $p(x+y)\leq p(x)+p(y)$ for all $x,y\in X$); then the triple $(X,p,E)$ is called a {\em lattice-normed space}, abbreviated as $LNS$. A subset $Y$ of $X$ is called $p$-closed whenever, for every net $(y_\alpha)$ in $Y$, $p(y_\alpha-y)\xrightarrow{o} 0$ implies $y\in Y$. Let $(X,p,E)$ and $(Y,q,F)$ be two $LNS$s. Then an operator $T:X\to Y$ is called a {\em dominated operator} if there is a positive operator $S:E\to F$ such that $q(T(x))\leq S(p(x))$ for all $x\in X$; in this case, $S$ is called a dominant of $T$. Denote by $maj(T)$ the set of all dominants of the operator $T$. If there is a least element in $maj(T)$ then it is called the {\em exact dominant} of $T$ and denoted by $[T]$; for much more information see \cite{BGKKKM,Ku}. If $X$ is a decomposable space and $F$ is order complete then the exact dominant exists; see \cite[Theorem 4.1.2.]{Ku}. Consider an $LNS$ $(X,p,E)$. If $X$ and $E$ are $f$-algebras, and the vector norm $p$ is monotone (i.e., $|x|\leq |y|\Rightarrow p(x)\leq p(y)$), then the triple $(X,p,E)$ is said to be a {\em lattice normed $f$-algebra}, abbreviated as $LNFA$. \begin{defn} Let $(X,p,E)$ be an $LNFA$ and $Y$ be an $f$-subalgebra of $X$. If $p(x\cdot y)=y\cdot p(x)$ holds for all $x\in X$ and $y\in Y$ then $p$ is said to be {\em $f$-subalgebraic-linear}. Also, we say that $(X,p,E)$ has the {\em $f$-subalgebraic-linear property}. \end{defn} Recall that an element $x$ in a Riesz algebra is called \textit{nilpotent} if $x^n=0$ for some $n\in \mathbb{N}$. Moreover, an algebra $E$ is called \textit{semiprime} if the only nilpotent element in $E$ is zero.
\begin{lem}\label{inequality semiprime} Let $E$ be a semiprime $f$-algebra. Then $x\leq y$ and $x\leq z$ imply $x^2\leq y\cdot z$ for all $x,\ y,\ z\in E_+$. \end{lem} \begin{proof} Suppose $x,\ y,\ z$ are positive elements in $E$ such that $x\leq y$ and $x\leq z$. It follows from \cite[Theorem 3.2.(ii)]{P} that $x^2\leq y\cdot z$. \end{proof} \begin{exam} Let $E$ be a vector lattice such that $x^2=x$ for all $x\in E_+$ and let $p:\mathcal{L}(E)\to Orth(E)$ be the map defined by $T\to p(T)=\lvert T\rvert$. Then one can see that $p$ is a vector norm and $\big(\mathcal{L}(E),p, Orth(E)\big)$ is an $LNS$. Moreover, since $\mathcal{L}(E)$ and $Orth(E)$ are $f$-algebras and $\lvert\cdot\rvert$ is monotone, $\big(\mathcal{L}(E),p, Orth(E)\big)$ is an $LNFA$. Take arbitrary $T,\ S\in \mathcal{L}(E)$. Then there exist positive scalars $\lambda_T$ and $\lambda_S$ such that $\lvert T\rvert\leq \lambda_T I$ and $\lvert S\rvert\leq \lambda_S I$ because $\mathcal{L}(E)$ is the order ideal generated by the identity operator $I_E$. So, by using \cite[Theorem 2.40.]{ABPO}, we have $$ p(S(T))=\lvert S(T)\rvert=\lvert S\rvert\big(\lvert T\rvert\big)\leq \lambda_S I\big(\lvert T\rvert\big)=\lambda_S \lvert T\rvert $$ and also $$ p(S(T))=\lvert S(T)\rvert= \lvert S\rvert\big(\lvert T\rvert\big)\leq \lvert S\rvert\big(\lambda_T\lvert I\rvert\big)=\lambda_T \lvert S\rvert. $$ So, it follows from Lemma \ref{inequality semiprime} and our assumption that $p(S(T))=\big[p(S(T))\big]^2\leq \lambda_S\lambda_T\lvert S\rvert \cdot \lvert T\rvert=\lambda_S\lambda_T\lvert S\rvert \cdot p(T)$ holds true because $Orth(E)$ is semiprime; see \cite[Theorem 142.5.]{Za}. Next, consider a new $LNFA$ $\big(\mathcal{L}(E)_+,q, Orth(E)\big)$, where $q(T)=\frac{1}{\lambda_T}p(T)$ for all $T\in \mathcal{L}(E)_+$. Then it follows from the above observation that the $LNFA$ space $\big(\mathcal{L}(E)_+,q,Orth(E)\big)$ has the $f$-subalgebraic-linear property.
\end{exam} For the following example, we consider \cite[Theorem 2.62.]{ABPO}. \begin{exam} Let $E$ be an $f$-algebra. Then we define a map $p$ from $E$ to $Orth(E)$ by $u\to p(u)=p_u$ such that $p_u(x)=\lvert u\cdot x\rvert$ for each $x\in E$. So, by using \cite[Theorem 142.1.(ii)]{Za}, it is easy to see that $(E,p,Orth(E))$ is an $LNFA$ with the $f$-subalgebraic-linear property. \end{exam} In this article, unless otherwise stated, all lattice normed $f$-algebras are assumed to have the $f$-subalgebraic-linear property. \section{Main Results} We begin the section with the following definition. \begin{defn} Let $(X,p,E)$ be an $LNS$. Then an operator $T:X\to E$ is said to be {\em $E$-dominated} if it is dominated by $p$ on $E$; that is, $$ \lvert T(x)\rvert\leq p(x) $$ for all $x\in X$. \end{defn} It can be seen that every dominated operator on $LNS$s is $E$-dominated because dominants are positive operators. \begin{lem}\label{f algebra subspace} Let $X$ be an $f$-algebra and $Y$ be an $f$-subalgebra of $X$. Then, for any $w\in X_+$, the set $A=\{u+v\cdot w:u,v\in Y\}$ is also an $f$-subalgebra of $X$. \end{lem} \begin{proof} Firstly, we show that $A$ is a sublattice of $X$. Take an arbitrary $u+v\cdot w\in A$. Then we have $\lvert u+v\cdot w\rvert=\lvert u\rvert+\lvert v\rvert\cdot \lvert w\rvert=\lvert u\rvert+\lvert v\rvert\cdot w\in A$ because $\lvert u\rvert,\lvert v\rvert \in Y$, and so we get the desired result. Next, we show that $A$ is an $f$-subalgebra of $X$. For any positive elements $y_1+u_1\cdot w,\ y_2+u_2\cdot w\in A_+$, we have $$ (y_1+u_1\cdot w)\cdot(y_2+u_2\cdot w)=y_1\cdot y_2+(y_1\cdot u_2+y_2\cdot u_1+u_1\cdot u_2\cdot w)w\in A_+ $$ because $y_1\cdot y_2\in Y$, $y_1\cdot u_2+y_2\cdot u_1+u_1\cdot u_2\cdot w\in E$, $A\subseteq X$ and $X$ is an $f$-algebra. Thus, $A$ is an $l$-algebra. On the other hand, assume $(y_1+u_1\cdot w)\wedge(y_2+u_2\cdot w)=0$ for arbitrary $y_1+u_1\cdot w,\ y_2+u_2\cdot w\in A$.
Then we have $[(y+u\cdot w)\cdot(y_1+u_1\cdot w)]\wedge(y_2+u_2\cdot w)=0$ for all $y+u\cdot w\in A_+$ because $A_+\subseteq X_+$ and $X$ is an $f$-algebra. Therefore, we obtain that $A$ is an $f$-subalgebra of $X$. \end{proof} \begin{prop}\label{f algebra order complete} Let $X$ be an $f$-algebra and $Y$ be a $u$-uniformly complete $f$-subalgebra of $X$. Then, for any $w\in X_+$, the set $A=\{u+v\cdot w:u,v\in Y_+\}$ is also a $u$-uniformly complete $f$-subalgebra. \end{prop} \begin{proof} Suppose $Y$ is an $f$-subalgebra of $X$. Then, by applying Lemma \ref{f algebra subspace}, we see that $A$ is an $f$-subalgebra of $X$. On the other hand, take a $u$-uniform Cauchy net $(x_\alpha)$ in $A$. Then there exist two $u$-uniform Cauchy nets $(y_\alpha)$ and $(z_\alpha)$ in $Y_+$ with $x_\alpha=y_\alpha+z_\alpha\cdot w$ because $y_\alpha\leq x_\alpha$ and $z_\alpha\leq x_\alpha$. So, there are $y, \ z\in Y$ such that $y_\alpha\xrightarrow{u} y$ and $z_\alpha\xrightarrow{u}z$ because $Y$ is $u$-uniformly complete. Therefore, we get $x_\alpha=y_\alpha+z_\alpha\cdot w\xrightarrow{u}y+z\cdot w$. As a result, $A$ is also $u$-uniformly complete. \end{proof} \begin{thm}\label{basic theorem} Let $(X,p,E)$ be an $LNFA$ with $X$ being an $f$-subalgebra of an order complete $f$-algebra $E$, and let $G$ be a unital $f$-subalgebra of $X$. If $T:G\to E$ is an $E$-dominated operator and $G$ is $e$-uniformly complete then there exists another $E$-dominated operator $\hat{T}:X\to E$ such that $\hat{T}(g)=T(g)$ for all $g\in G$. \end{thm} \begin{proof} First of all, if $T=0$ or $X=G$ then the proof is obvious. Suppose $G$ is a proper subspace of $X$ and $T\neq 0$. So, there is a vector $w$ in $X$ that is not in $G$. Without loss of generality, we assume $w\in X_+$. Then we consider the set $G_1=\{u+v\cdot w:u,v\in G\}$. Thus, by Lemma \ref{f algebra subspace}, we get that $G_1$ is also an $f$-subalgebra of $X$.
Also, by iterating this extension, we can arrive at $X$ because $G$ is an $f$-subalgebra with a multiplicative unit. The one-step extension is not similar to that of the other Hahn-Banach theorems. It can be observed that $v\cdot w$ can be in $G$ for some $v\in G$; thus, the representation of elements of $G_1$ may not be unique, which causes difficulties in obtaining the one-step extension. Once this is done, by using Zorn's lemma and applying Proposition \ref{f algebra order complete}, we can extend $\hat{T}$ to $X$. Now, consider elements $u,v \in G$. Since $T$ is an $E$-dominated operator, we have $$ T(u)+T(v)=T(u+v)\leq p(u-w+w+v)\leq p(u-w)+p(w+v). $$ Hence, we get $T(u)-p(u-w)\leq p(w+v)-T(v)$. From there, by applying the order completeness of $E$, both $$ s=\sup\{T(u)-p(u-w):u\in G\} $$ and $$ r=\inf\{p(v+w)-T(v):v\in G\} $$ exist in $E$. So, it is also clear that $s\leq r$. Next, let's take any element $z\in E$ such that $s\leq z\leq r$ (for example, we can take $z=s$). Now, we define a map \begin{align*} \hat{T}:G_1&\to E \\(u+v\cdot w)&\to\hat{T}(u+v\cdot w)=T(u)+v\cdot z. \end{align*} We need to show that $\hat{T}$ is a well defined operator. To prove that, we first prove the $E$-dominatedness of $\hat{T}$. Let's apply the $e$-uniform completeness of $G$. Then we have that $(v+e)^{-1}$ exists for any positive element $v\in G_+$; see \cite[Theorem 146.3.]{Za}. Next, by using \cite[Theorem 11.1.]{P}, the inverse element $(v+\frac{1}{n}e)^{-1}$ exists in $G_+$ for all $n\in\mathbb{N}_+$. Then, for each $u\in G_+$ and $n\in\mathbb{N}$, we have $$ z\leq r\leq p(u\cdot(v+\frac{1}{n}e)^{-1}+w)-T(u\cdot (v+\frac{1}{n}e)^{-1}) $$ and so, by using the $f$-subalgebraic-linear property of $p$, we get $$ T(u)+(v+\frac{1}{n}e)\cdot z\leq p(u+w\cdot(v+\frac{1}{n}e))\leq p(u+w\cdot v)+\frac{1}{n}p(w). $$ Thus, we have $\hat{T}(u+v\cdot w)=T(u)+v\cdot z\leq p(u+v\cdot w)$ for any $u,v\in G_+$ because $E$ is an Archimedean vector lattice.
Thus, $\hat{T}$ is $E$-dominated for arbitrary $u,v\in G_+$. Now, we show this for arbitrary $v\in G$. We can write $v=v^+-v^-$. By using the first observation, we can write \begin{equation} \hat{T}(u+v^+\cdot w)=T(u)+v^+\cdot z\leq p(u+v^+\cdot w). \end{equation} For the band $B_{v^+}$ generated by $v^+$, we consider the band projection $q:G\to B_{v^+}$. Then $q$ satisfies $q(v)=v^+$ and $q=q^2$, and it is a positive orthomorphism on $G$ because every order projection is a positive orthomorphism on vector lattices. By using \cite[Theorem 141.1.]{Za}, we can choose a positive element $t\in G_+$ such that $q(x)=x\cdot t$ for all $x\in G$. Thus we have a positive vector $t\in G_+$ so that $v^+=q(v)=v\cdot t$, and $t=e\cdot t=q(e)=q(q(e))=t^2$, and $v^+=q(v^+)=v^+\cdot t$, and $0=q(v^-)=v^-\cdot t$. Also, the equality $v^+=q(v)=v\cdot t$ implies $v^-+v=v^+=v\cdot t$, and so we get $v^-=v\cdot(t-e)$. Thus, we obtain the following two equalities \begin{equation} t\cdot(v^+\cdot z)=(t\cdot v^+)\cdot z=v^+\cdot z \end{equation} and \begin{equation} t\cdot(v^+\cdot w)=t\cdot v^+\cdot w=t\cdot(v\cdot t)\cdot w=t^2\cdot v\cdot w=t\cdot v\cdot w. \end{equation} It follows from $(1),\ (2)$ and $(3)$ and the $f$-subalgebraic-linear property of $p$ that \begin{eqnarray} t\cdot\big(T(u)+v^+\cdot z\big)\leq t\cdot p(u+v^+\cdot w)=p(t\cdot u+t\cdot v^+\cdot w)=t\cdot p(u+v\cdot w). \end{eqnarray} Repeating the same argument and using $s\leq z$, one can see the following inequality \begin{equation} (e-t)\cdot\big(T(u)-v^-\cdot z\big)\leq (e-t)\cdot p(u+v\cdot w). \end{equation} Therefore, by summing up the inequalities $(4)$ and $(5)$, we get \begin{equation} T(u)+v\cdot z\leq p(u+v\cdot w) \end{equation} for arbitrary $v\in G$ and $u\in G_+$. Lastly, one can show the inequality for an arbitrary element $u\in G$. Therefore, we get that $\hat{T}$ is $E$-dominated. Now, we show that $\hat{T}$ is well defined.
Let's take arbitrary elements $u_1,\ u_2,\ v_1,\ v_2\in G$ such that $u_1+v_1\cdot w=u_2+v_2\cdot w$. It follows from $(6)$ that $T(u_1-u_2)+(v_1-v_2)\cdot z\leq p\big((u_1-u_2)+(v_1-v_2)\cdot w\big)=p(0)=0$ and $T(u_2-u_1)+(v_2-v_1)\cdot z\leq p\big((u_2-u_1)+(v_2-v_1)\cdot w\big)=p(0)=0$. As a result, we get $\hat{T}(u_1+v_1\cdot w)=\hat{T}(u_2+v_2\cdot w)$. Therefore, we have obtained that the map $\hat{T}$ is well defined. On the other hand, by using the linearity of $T$, one can show that $\hat{T}$ is a linear map (or, operator) from $G_1$ to $E$. Expressly, $\hat{T}$ is an $E$-dominated operator, dominated by the $f$-subalgebraic-linear map $p$. By applying Zorn's lemma under the desired conditions, we obtain the extension of $\hat{T}$ to all of $X$. \end{proof} Under the conditions of Theorem \ref{basic theorem}, we have the following results. \begin{cor} If $(X,p,E)$ is a decomposable $LNFA$ then we have $[\hat{T}]=[T]$. \end{cor} \begin{proof} Since $T$ is an $E$-dominated operator, it is dominated. Indeed, since $\lvert T(g)\rvert\leq p(g)$, we have $p(T(g))\leq p(p(g))$ (for example, we can take the dominant $S=p$). Also, it follows from \cite[Theorem 4.1.2.]{Ku} that $T$ has the exact dominant $[T]$. Now, consider the $f$-subalgebra $G_1$ of $X$ in the proof of Theorem \ref{basic theorem}. For $v=0$, the additive unit, and $u\in G$, we have $$ \hat{T}(u)=T(u)\leq \lvert T(u)\rvert\leq S(p(u)) $$ and also $$ -\hat{T}(u)=-T(u)\leq \lvert T(u)\rvert\leq S(p(u)). $$ Therefore, we get $\lvert \hat{T}(u)\rvert\leq S(p(u))$ for each $u\in G$. Hence, $\hat{T}$ is also dominated by $S$, and so we get $[\hat{T}]\leq [T]$. On the other hand, by considering $maj(T)$ and $maj(\hat{T})$, we have $[T]\leq [\hat{T}]$. As a result, we get the desired result. \end{proof} \begin{cor} Let $Y$ be a unital and $e$-uniformly complete $p$-closed $f$-subalgebra of $X$.
If every non-zero positive element has an inverse in $Y$ then, for each $y_0\notin Y$, we have a map $F:X\to E$ such that $F(Y)=0$ and $F(y_0)>0$. \end{cor} \begin{proof} Let's take the set $Y_1=\{u+v\cdot y_0:u,v\in Y\}$ and put $w=\inf\{p(y+ y_0):y\in Y\}\geq0$, and consider the set $A=\{p(y+y_0):y\in Y_+\}$. Then we show $w\neq0$. Assume this does not hold, i.e., $w=0$. For any $a_1,\ a_2\in A$, it is enough to show that $a_1\wedge a_2\in A$. To prove that, we consider \cite[Theorem 2.1.2]{Ku} and take the band $B$ generated by $a_1-a_1\vee a_2$. Then there is a band projection $\pi_B:E\to B$. Also, we have another projection $\pi'_B$ on $X$ such that $\pi_B\big(p(x)\big)=p\big(\pi'_B(x)\big)$. So, we have \begin{eqnarray*} \pi_B(a_1)+\pi^d_B(a_1)&=&\pi_B(a_1\vee a_2+a_1\wedge a_2-a_2)+\pi^d_B(a_1\vee a_2+a_1\wedge a_2-a_2)\\&=& \pi_B(a_1\wedge a_2)+\pi^d_B(a_1\wedge a_2)\\&=&a_1\wedge a_2. \end{eqnarray*} Now, take $y_1,y_2\in Y_+$ so that $a_1=p(y_1+y_0)$ and $a_2=p(y_2+y_0)$. Thus, we can get \begin{eqnarray*} a_1\wedge a_2=\pi_B(a_1)+\pi^d_B(a_1)&=&\pi_B\big(p(y_1+y_0)\big)+\pi^d_B\big(p(y_2+y_0)\big)\\&=&p\big(\pi'_B(y_1+y_0)\big)+p\big(\pi'^d_B(y_2+y_0)\big)\\&=& p\big(\pi'_B(y_1+y_0)+\pi'^d_B(y_2+y_0)\big)\\&=& p\big(y_0+\pi'_B(y_1)+\pi'^d_B(y_2)\big). \end{eqnarray*} Therefore, we can see $a_1\wedge a_2\in A$. Thus, one can see $a_1\wedge a_2\leq a_1$ and $a_1\wedge a_2\leq a_2$. So, $A$ is a downward directed set, and therefore we can take $A$ as a net $\big(p(y_\alpha+y_0)\big)$ in $E$. Since $p(y_\alpha-(-y_0))=p(y_\alpha+y_0)\downarrow 0$, we have $y_\alpha\xrightarrow{p}-y_0$. Thus, we get $y_0\in Y$ because $Y$ is a $p$-closed subspace. This contradicts $y_0\notin Y$, and so we have $w>0$. Next, we define a map $T:Y_1\to E$ by $T(u+v\cdot y_0)=v\cdot w$. Then $T$ is linear and $T(Y)=0$. Moreover, $T$ is also $E$-dominated. Indeed, we can write $p(u+v\cdot y_0)=v\cdot p(v^{-1}\cdot u+y_0)\geq v\cdot w=T(u+v\cdot y_0)$.
It follows from Theorem \ref{basic theorem} that there exists a map from $X$ to $E$ satisfying the desired result. \end{proof} For the next result, we consider the $f$-algebras $\mathcal{L}(E)\subseteq Orth(E)\subseteq Orth^\infty(E)$ in Example \ref{example of orh}. \begin{cor} Let $E$ be an order complete vector lattice. Then $\big(Orth(E),\lvert\cdot\rvert,Orth^\infty(E)\big)$ is an $LNFA$. Moreover, if $T:\mathcal{L}(E)\to Orth^\infty(E)$ is an $E$-dominated operator then it has an extension to $Orth(E)$. \end{cor} \begin{proof} Since $E$ is an order complete vector lattice, we see that $Orth^\infty(E)$ is an order complete $f$-algebra; see \cite[p.14]{BGKKKM}. Moreover, we can say that $\big(Orth(E),\lvert\cdot\rvert,Orth^\infty(E)\big)$ is an $LNFA$ because $Orth(E)$ is an $f$-subalgebra of $Orth^\infty(E)$ and $\lvert\cdot\rvert$ has the $f$-subalgebraic-linear property. By applying \cite[Theorem 3.1.]{WE}, we can see that $\mathcal{L}(E)$ is order complete because $E$ is order complete. Moreover, by using \cite[Theorem 42.6.]{LZ}, we also get that $\mathcal{L}(E)$ is $e$-uniformly complete because $\mathcal{L}(E)$ has the unit $I_E$. Then we have an $E$-dominated extension of $T$ to $Orth(E)$. \end{proof}
\section{Introduction} A dialog system should correctly understand speakers' utterances and respond in natural language. Dialog act recognition (DAR) and sentiment classification are two correlated tasks that serve the former goal. The goal of DAR is to attach semantic labels to each utterance in a dialog and identify the underlying intentions \cite{kim2011review}. Meanwhile, sentiment classification can detect the sentiments implied in utterances and can help to capture speakers' intentions \cite{kim2018integrated}. \begin{figure}[t] \centering \includegraphics[scale=0.33]{5400-example.pdf} \caption{\label{corpus-examples} A snippet of a dialog sample from the Mastodon Corpus; each utterance has a corresponding DA label and a sentiment label. (DA represents Dialog Act) } \label{example1} \end{figure} Intuitively, the two tasks are closely related and the information of one task can be utilized in the other task. For example, as illustrated in Figure~\ref{corpus-examples}, when predicting the sentiment label of \texttt{User B}, the label is more likely to be \texttt{Negative} when the \texttt{Agreement} DA label is known, since \texttt{Agreement} means that the current utterance agrees with the previous \texttt{User A} utterance and hence the sentiment label of \texttt{User B} tends to be the same as the \texttt{User A} response sentiment \texttt{Negative}. Similarly, knowing the sentiment information also contributes to the current DA prediction. Hence, it is promising to take the cross-impact between the two tasks into account. In recent years, \newcite{mastodon} explored a multi-task framework to model the correlation between sentiment classification and dialog act recognition. Unfortunately, their work does not achieve promising performance, and it even underperforms some works which consider them as separate tasks.
In this paper, we argue that this modeling method, with no explicit interaction between the two tasks, is not effective enough for transferring knowledge across the two tasks and has the following weaknesses: (1) A simple multi-task learning framework only implicitly considers the mutual connection between the two tasks by sharing latent representations, which cannot achieve desirable results \cite{ijcai2019-296}. (2) With shared latent representations, it is hard to explicitly control knowledge transfer for both tasks, resulting in a lack of interpretability. To address the aforementioned issues, we propose a {\bf{D}}eep \textbf{C}o-Interactive \textbf{R}elation \textbf{N}etwork (\textbf{DCR-Net}) for joint dialog act recognition and sentiment classification, which can explicitly model the relation and interaction between the two tasks with a \textit{co-interactive relation layer}. In practice, we first adopt a shared hierarchical encoder with an utterance-level self-attention mechanism to obtain the shared dialog act and sentiment representations of the utterances. The shared representations are then fed into the \textit{co-interactive relation layer} to obtain a fusion of dialog act and sentiment representations, and we refer to this fusion process as one step of interaction. With the \textit{co-interactive relation layer}, we can directly control knowledge transfer for both tasks, which makes our framework more interpretable. Besides, the \textit{relation layer} can be stacked to form a hierarchy that enables multi-step interactions between the two tasks, which can further capture mutual knowledge. The underlying motivation is that if a model extracts mutual knowledge in one step of interaction, then by stacking multiple such steps, the model can gradually accumulate useful information and finally capture the semantic relation between the two tasks \cite{tao-etal-2019-one}.
Specifically, we explore several \textit{relation layers}, including: 1) \textit{Concatenation}, which concatenates the representations of dialog act and sentiment. 2) \textit{Multilayer Perceptron (MLP)}, which uses an \textit{MLP} to learn a rich representation that contains both dialog act and sentiment information. 3) \textit{Co-Attention}, which uses the co-attention mechanism \cite{xiong2016dynamic} to capture mutually important information contributing to the two tasks (sentiment to act and act to sentiment). Finally, the integrated outputs are fed to separate decoders for dialog act and sentiment prediction, respectively. We conduct experiments on two real-world benchmarks, the Mastodon dataset \cite{mastodon} and the Dailydialog dataset \cite{li-etal-2017-dailydialog}. The experimental results show that our system achieves significant and consistent improvements compared to all baseline methods and achieves state-of-the-art performance. Finally, Bidirectional Encoder Representations from Transformers (\citep{devlin-etal-2019-bert}, BERT), a pre-trained model, is used to further boost the performance. To summarize, the contributions of this work are as follows: \begin{itemize} \item We propose a deep co-interactive relation network for joint dialog act recognition and sentiment classification, which can explicitly control the cross-task knowledge transfer for both tasks and makes our framework more interpretable. \item Our \textit{relation layer} can be stacked to form a hierarchy for multi-step interactions between the two tasks, which can gradually capture the mutual relation and better transfer knowledge. \item We thoroughly study different relation layers and present extensive experiments demonstrating the benefit of our proposed framework. Experiments on two publicly available datasets show substantial improvement and our framework achieves state-of-the-art performance. \item Finally, we analyze the effect of incorporating BERT in our framework.
With BERT, our framework reaches a new state-of-the-art level. \end{itemize} \begin{figure*} [t] \centering \includegraphics[scale=0.9]{5400-framework.pdf} \caption{The top part (a) illustrates the overview of our framework and the bottom part (b) represents different relation layers. } \label{fig:framework} \end{figure*} \section{Problem Formulation} In this section, we give the formal definitions of dialog act recognition and sentiment classification in dialog. \begin{itemize} \item \textbf{Dialog Act Recognition} Given a dialog $C$ = $(u_{1},u_{2}...,u_{T})$ consisting of a sequence of $T$ utterances, dialog act recognition can be seen as an utterance-level sequence classification problem that decides the corresponding utterance dialog act label ($y_{1}^{d},y_{2}^{d},...,y_{T}^{d}$) for each utterance in the dialog. \item \textbf{Sentiment Classification in Dialog} Sentiment classification in dialog can also be treated as an utterance-level sequence classification task that maps the utterance sequence $(u_{1},u_{2}...,u_{T})$ to the corresponding utterance-level sentiment label sequence ($y_{1}^{s},y_{2}^{s},...,y_{T}^{s}$). \end{itemize} \section{Our Approach} In this section, we describe the architecture of DCR-Net; see the top part (a) of Figure~\ref{fig:framework} for an overview. DCR-Net mainly consists of three components: a shared hierarchical encoder, a stack of \textit{co-interactive relation layers} that repeatedly fuse dialog act and sentiment representations to explicitly model the relation and interaction between the two tasks, and two separate decoders for dialog act and sentiment prediction. In the following sections, the details of our framework are given.
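To make the problem formulation concrete, the plain-Python sketch below shows the parallel label sequences that the two tasks predict; the utterances and label names are illustrative placeholders, not taken from the datasets.

```python
# A dialog is a sequence of T utterances; both tasks are utterance-level
# sequence classification, producing one dialog act label y^d and one
# sentiment label y^s per utterance. All values here are illustrative.

dialog = [
    "oh why are so many people depressed",
    "it is hard to find a good job these days",
    "i agree with you",
]
dialog_acts = ["question", "statement", "agreement"]   # (y_1^d, ..., y_T^d)
sentiments = ["negative", "negative", "negative"]      # (y_1^s, ..., y_T^s)

# The joint model maps the T utterances to the two label sequences at once.
T = len(dialog)
assert len(dialog_acts) == T and len(sentiments) == T
```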
\subsection{Hierarchical Encoder} In our framework, dialog act recognition and sentiment classification share one hierarchical encoder that consists of a bidirectional LSTM (BiLSTM) \cite{hochreiter1997long}, which captures temporal relationships within the words, followed by an utterance-level self-attention layer that considers the dialog contextual information. \subsubsection{Utterance Encoder with BiLSTM} Given a dialog $C$ = $(u_{1},...,u_{T})$ consisting of a sequence of $T$ utterances, where $u_{t}$ = $(w_{t}^{1},..., w_{t}^{K_{t}})$ consists of a sequence of $K_{t}$ words, we first adopt the BiLSTM to encode each utterance $u_{t}$$\in$$C$ to produce a series of hidden states (${ \mathbf { h } } _ t^1,..., { \mathbf { h } } _ t^{K_{t}}$), where ${\bf{h}}_{t}^{i}$ is defined as follows: \begin{equation} \mathbf { h }_t^i = \mathop{\rm concat}{ \left( \overrightarrow { \mathbf { h } } _ t^i , \overleftarrow { \mathbf { h } } _ t^i \right)}, \end{equation} where $\mathop{\rm concat}(\cdot,\cdot)$ is an operation for concatenating two vectors, and $\overrightarrow { \mathbf { h } } _ t^i$ and $\overleftarrow { \mathbf { h } } _ t^i$ are the $i$-th hidden states of the forward LSTM and backward LSTM for $w_t^i$, respectively. Then, we regard the last hidden state $ \mathbf{ h }_t^{K_{t}}$ as the representation of utterance $u_{t}$. Hence, the sequence of $T$ utterances in $C$ can be represented as $\textbf{H}$ = ( $ \mathbf{ h }_1^{K_{1}}$, \dots, $ \mathbf{ h }_T^{K_{T}}$). \subsubsection{Utterance-Level Self-Attention} Self-attention is an effective method of leveraging context-aware features over variable-length sequences for natural language processing tasks \cite{yin2017chinese,tan2018deep}. In our case, we use the self-attention mechanism to capture dialog-level contextual information for each utterance. In this paper, we adopt the self-attention formulation of \newcite{NIPS2017_7181}.
We first map the matrix of input vectors $\textbf{H}$ $\in$ $\mathbb{R}^{T\times d}$ ($d$ represents the mapped dimension) to queries (${\textbf{Q}}$), keys (${\textbf{K}}$) and values (${\textbf{V}}$) matrices by different linear projections: \begin{equation} \left[ \begin{array} { c } { \textbf{K} } \\ { \textbf{Q} } \\ {\textbf{V} } \end{array} \right] = \left[ \begin{array} { c } { \textbf{W} _ { k } \textbf{H} } \\ { \textbf{W} _ { q } \textbf{H} } \\ { \textbf{W} _ { v } \textbf{H} } \end{array} \right]. \end{equation} The attention weights are then computed by the dot product between $\textbf{Q}$ and $\textbf{K}$, and the self-attention output {\bf{C}} $\in$ $\mathbb{R}^{T\times d}$ is a weighted sum of the values $\textbf{V}$: \begin{equation} {{\textbf{C}}} = \text { Attention } ( \textbf{Q} ,\textbf{ K} , \textbf{V }) = \operatorname { Softmax } \left( \frac { \textbf{Q K }^ { T } } { \sqrt { d _ { k } } } \right) \textbf{V}, \end{equation} where ${\textbf{C}}$ = $(\mathbf { c }_1, ..., \mathbf { c }_T)$ can be seen as the sequence of utterance representations, each of which captures the whole dialog history information, and $d_{k}$ represents the dimension of the keys. Now, we obtain the initial shared dialog act representations ${\textbf{D}}$ = $( { \textbf{c} }_1, ..., { \textbf{c} }_T)$ and sentiment representations $\bf{S}$ = $({ \textbf{c} }_1, ..., { \textbf{c} }_T)$ for the utterance sequence. \subsection{Stacked Co-Interactive Relation Layer} We now describe the proposed \textit{co-interactive relation layer}; see the bottom part (b) of Figure~\ref{fig:framework}. In our paper, we use the \textit{co-interactive relation layer} to explicitly model the relation and interaction between dialog act recognition and sentiment classification. It takes the dialog act representations ${\textbf{D}}$ and sentiment representations ${\textbf{S}}$ as inputs and then outputs their updated versions, which take the cross-impact between the two tasks into account.
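As a rough illustration, the utterance-level self-attention step can be sketched in NumPy as follows; the dimensions, random initialization, and single attention head are illustrative choices, and in the model the projection matrices $\textbf{W}_q$, $\textbf{W}_k$, $\textbf{W}_v$ are learned.

```python
import numpy as np

def self_attention(H, Wq, Wk, Wv):
    """Scaled dot-product self-attention over T utterance vectors.

    H: (T, d) matrix of utterance representations.
    Wq, Wk, Wv: (d, d_k) projection matrices (learned in the model;
    random here purely for illustration).
    Returns C: (T, d_k), one dialog-context-aware vector per utterance.
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # (T, T) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # weighted sum of values

rng = np.random.default_rng(0)
T, d = 4, 8                       # 4 utterances, dimension 8 (illustrative)
H = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
C = self_attention(H, Wq, Wk, Wv)
assert C.shape == (T, d)          # one context vector per utterance
```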
In particular, it can be stacked to perform multi-step interaction for better capturing mutual knowledge and relations. In our framework, we explore several types of relation layers, which can either be used individually or combined together. Formally, given the $l^\text{th}$ layer inputs $\textbf{D}^{l}$ = ($\textbf{d}_{1}^{l}$, ...,$\textbf{d}_{T}^{l}$) $\in$ $\mathbb{R}^{T\times d}$ and $\textbf{S}^{l}$ = ($\textbf{s}_{1}^{l}$, ...,$\textbf{s}_{T}^{l}$) $\in$ $\mathbb{R}^{T\times d}$, we can adopt the following strategies to integrate the mutual knowledge between the two tasks. Before fusing information, we first apply a BiLSTM and an MLP over the act information and sentiment information separately to make them more task-specific, which can be written as $\textbf{S}^{l^{'}}$ = MLP ($\textbf{S}^{l}$ ) and $\textbf{D}^{l^{'}}$ = BiLSTM ($\textbf{D}^{l}$ ). {\bf{{Concatenation}}} Concatenation is a simple and effective method for combining two sources of information \cite{wu2018improving}. Hence, we concatenate the $l^\text{th}$ layer dialog act and sentiment representations as the updated representations: \begin{equation} { \textbf{D}}^{l+1} = \mathop{\rm Concat} ({ \textbf{S}}^{l^{'}},{ \textbf{D}}^{l^{'}}), \end{equation} \begin{equation} { \textbf{S}}^{l+1} = \mathop{\rm Concat} ({ \textbf{S}}^{l^{'}}, { \textbf{D}}^{l^{'}}). \end{equation} {\bf{{MLP}}} A \textit{Multilayer Perceptron (MLP)} can automatically abstract the integrated representation \cite{nguyen2018improved}.
Here, we add an MLP layer on the concatenation output to further learn the relation between the two tasks and capture the mutual information, which can be formulated as follows: \begin{equation} { \textbf{D}}^{l+1} = \mathop{\rm MLP} (\mathop{\rm Concat} ({ \textbf{S}}^{l^{'}},{ \textbf{D}}^{l^{'}})) , \end{equation} \begin{equation} { \textbf{S}}^{l+1} = \mathop{\rm MLP} (\mathop{\rm Concat} ({ \textbf{S}}^{l^{'}},{ \textbf{D}}^{l^{'}})). \end{equation} {\bf{{Co-Attention}}} Co-attention is a very effective method to grasp the mutually important information of two correlated tasks \cite{xiong2016dynamic}. Here, we extend the basic co-attention mechanism to utterance-level co-attention. It produces updated dialog act representations that consider sentiment information, and updated sentiment representations that incorporate act knowledge. By doing this, we can transfer mutually relevant knowledge between the two tasks. The process can be defined as follows: \begin{equation} { \textbf{D}}^{l+1}= { \textbf{D}}^{l^{'}} + \mathop{\rm Softmax} ({ \textbf{D}}^{l^{'}}( ({ \textbf{S}}^{l^{'}}) ^\top)) { \textbf{S}}^{l^{'}}, \end{equation} \begin{equation} { \textbf{S}}^{l+1} = { \textbf{S}}^{l^{'}} + \mathop{\rm Softmax} ({ \textbf{S}}^{l^{'}}( ({ \textbf{D}}^{l^{'}}) ^\top)){ \textbf{D}}^{l^{'}}, \end{equation} where ${\textbf{D}}^{l+1}$ = ($\textbf{d}_{1}^{l+1}$, ...,$\textbf{d}_{T}^{l+1}$) and ${\textbf{S}}^{l+1}$ = ($\textbf{s}_{1}^{l+1}$, ...,$\textbf{s}_{T}^{l+1}$) are the updated representations produced by the $l^\text{th}$ layer.
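A minimal NumPy sketch of the co-attention update is given below; for brevity it omits the task-specific BiLSTM and MLP transforms that produce $\textbf{D}^{l'}$ and $\textbf{S}^{l'}$, and the shapes and values are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable row-wise softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_step(D, S):
    """One co-interactive update between act (D) and sentiment (S)
    representations, both of shape (T, d):
      D_next = D + softmax(D S^T) S   (act side absorbs sentiment info)
      S_next = S + softmax(S D^T) D   (sentiment side absorbs act info)
    """
    D_next = D + softmax(D @ S.T) @ S
    S_next = S + softmax(S @ D.T) @ D
    return D_next, S_next

rng = np.random.default_rng(1)
T, d = 3, 6                       # 3 utterances, dimension 6 (illustrative)
D = rng.normal(size=(T, d))
S = rng.normal(size=(T, d))
# Stacking L relation layers corresponds to applying the update L times.
for _ in range(2):
    D, S = co_attention_step(D, S)
assert D.shape == (T, d) and S.shape == (T, d)
```

The residual additions keep each task's own representation intact while mixing in an attention-weighted summary of the other task's representation, which matches the multi-step accumulation motivation described above.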
\subsection{Decoder for Dialog Act Recognition and Sentiment Classification} After multi-step interaction with the stacked co-interactive relation layer, we get the outputs ${\textbf{D}}^{L}$ = ($\textbf{d}_{1}^{L}$, ...,$\textbf{d}_{T}^{L}$) and $\textbf{S}^{L}$ = ($\textbf{s}_{1}^{L}$, ...,$\textbf{s}_{T}^{L}$) of the last relation layer. We then adopt separate decoders to perform dialog act and sentiment prediction, which can be denoted as follows: \begin{equation} \textbf{y}_{t}^{d} = \mathop{\rm softmax} (\textbf{W}^{d}{ \textbf{d} }_{t}^{L} + \textbf{b}_{d}), \end{equation} \begin{equation} \textbf{y}_{t}^{s} = \mathop{\rm softmax} (\textbf{W}^{s}{\textbf{s} }_{t}^{L} + \textbf{b}_{s}), \end{equation} where $\textbf{y}_{t}^{d} $ and $\textbf{y}_{t}^{s}$ are the predicted distributions for dialog act and sentiment, respectively; $\textbf{W}^{d}$ and $\textbf{W}^{s}$ are transformation matrices; $\textbf{b}_{d}$ and $\textbf{b}_{s}$ are bias vectors; and $L$ is the number of stacked relation layers in our framework. \subsection{Joint Training} The dialog act recognition objective is formulated as: \begin{equation} \mathcal { L } _ { 1 } = - \sum _ { i = 1 } ^ { T} \hat { {\bf{y}} } _ { i } ^ { d } \log \left( {\bf{y}} _ { i} ^ { d } \right). \end{equation} Similarly, the sentiment classification objective is defined as: \begin{equation} \mathcal { L } _ { 2 } = - \sum _ { i = 1 } ^ { T} \hat { {\bf{y}} } _ { i } ^ { s } \log \left( {\bf{y}} _ { i} ^ { s} \right), \end{equation} where ${\hat { {\bf{y}} } _ { i } ^ { d } }$ and $ {\hat { {\bf{y}}} _ { i } ^ { s } }$ are the gold utterance act label and gold sentiment label, respectively. To train for dialog act recognition and sentiment classification jointly, we follow \newcite{qin-etal-2019-stack} and obtain the final joint objective: \begin{equation} \mathcal { L } _ { \theta } = \mathcal{ L }_{1} + \mathcal{ L }_{2}.
\end{equation} \section{Experiments} \subsection{Dataset} \label{sec:dataset} We evaluate the performance of our model on two publicly available dialogue datasets, Mastodon \cite{mastodon} and Dailydialog \cite{li-etal-2017-dailydialog}. {\bf{Mastodon}} The Mastodon dataset\footnote{https://github.com/cerisara/DialogSentimentMastodon} consists of 269 dialogues (1075 utterances) for training, while the test set consists of 266 dialogues (1142 utterances). The vocabulary size is 5330. We follow the same partition as \newcite{mastodon}. {\bf{DailyDialog}} For the Dailydialog dataset,\footnote{http://yanran.li/dailydialog} we adopt the standard split from the original dataset \cite{li-etal-2017-dailydialog}, employing 11,118 dialogues for training, 1,000 for validation, and 1,000 for testing. \begin{table*}[th] \small \centering \begin{adjustbox}{width=0.9\textwidth} \begin{tabular}{l|ccc|ccc|ccc|ccc} \hline \multirow{3}*{\textbf{Model}} & \multicolumn{6}{c}{\textbf{Mastodon}} & \multicolumn{6}{c}{\textbf{Dailydialog}} \\ \cline{2-13} ~ & \multicolumn{3}{c|}{SC} & \multicolumn{3}{c|}{DAR} & \multicolumn{3}{c|}{SC} & \multicolumn{3}{c}{DAR} \\ \cline{2-13} ~ & F1 (\%) & R (\%) & P (\%) & F1 (\%) & R (\%) & P (\%) & F1 (\%) & R (\%) & P (\%) & F1 (\%) & R (\%) & P (\%) \\ \hline HEC \cite{kumar2018dialogue} & - &- & -&56.1 &55.7 &56.5 &- &- &- &77.8 &76.5 &77.8 \\ CRF-ASN \cite{chen2018dialogue} & - & - &- &55.1 &53.9 &56.5 &- &- & -&76.0 &75.6 &78.2 \\ CASA \cite{raheja-tetreault-2019-dialogue} & - &- &- &56.4 &57.1 &55.7 & - &- &- &78.0 &76.5 &77.9 \\ \hline VDCNN \cite{conneau-etal-2017-deep} &39.6 & 31.6& 44.0 & - & - & - & 39.7 & 35.6 & 55.2 & - & - & - \\ Region.emb \cite{qiao2018new} &40.3 & 33.6 & 42.8 & - & - & - & 41.0 & 36.6 & 56.4 & - & - & -\\ DRNN \cite{wang-2018-disconnected} &37.9 & 34.3 & 39.7 & - & - & - & 41.1 & 37.0 & 56.4 & - & - & - \\ DialogueRNN \cite{majumder2019dialoguernn} &41.5 &42.8 & 40.5 & - & - & - & 40.3 &
37.7 & 44.5 & - & - & - \\ \hline JointDAS \cite{mastodon} & 37.6 & 41.6 & 36.1 & 53.2 & 51.9 & 55.6 & 31.2 & 28.8 & 35.4 & 75.1 & 74.5 & 76.2 \\ IIIM \cite{kim2018integrated} & 39.4 & 40.1 & 38.7 & 54.3 & 52.2 & 56.3 & 33.0 & 28.5 & 38.9 & 75.7 & 74.9 & 76.5 \\ \hline DCR-Net + Concat &42.1 & 41.3 & 42.9 & 57.1 & 56.9 & 57.2 & 41.2 & 37.4 & 57.4 & 78.2 & 77.6 & 78.7 \\ DCR-Net + MLP & 42.3 & 43.7 & 45.4 & 57.2 & 56.7 & 57.7 & 42.7 & 37.5 & \textbf{58.8} & 79.1 & 78.5 &{ 79.2} \\ DCR-Net + Co-Attention & \bf{*45.1} & \bf{*47.3} & \bf{*43.2} & \bf{*58.6} & \bf{*56.9} & \bf{*60.3} & \bf{*45.4} & \bf{*40.1} & 56.0 & \bf{*79.1} & \bf{*79.0} & \bf{*79.1} \\ \hdashline DCR-Net + Co-Attention + BERT & 55.1 & 56.5 & 56.5 & 67.1 & 65.2 & 69.2 & 48.9 & 46.9 & 63.2 & 80.0 & 79.9 & 80.2 \\ \hline \end{tabular} \end{adjustbox} \caption{Comparison of our model with baselines on the Mastodon and Dailydialog test datasets. SC represents Sentiment Classification and DAR represents Dialog Act Recognition. The numbers with * indicate that the improvement of our model over all baselines is statistically significant with $p<0.01$ under t-test.} \label{table:over_all} \end{table*} \subsection{Experimental Settings} In our experimental setting, the dimensionality of the embeddings and all hidden units is selected from $\{100, 128, 256, 512, 600, 700, 800, 1024\}$. We do not use any pre-trained embeddings, and all word embeddings are trained from scratch. The L2 regularization weight is $1\times 10^{-8}$, and the dropout ratio is selected from $\{0.1, 0.2, 0.25, 0.3, 0.4, 0.5\}$. In addition, we add residual connections in the self-attention and relation layers to reduce overfitting. We use Adam \cite{kingma-ba:2014:ICLR} to optimize the parameters of our model and adopt the suggested hyper-parameters for optimization. We set the number of stacked relation layers to 3. For all experiments, we pick the model that works best on the dev set and then evaluate it on the test set.
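For completeness, the per-utterance decoders and the joint objective $\mathcal{L}_{\theta}=\mathcal{L}_{1}+\mathcal{L}_{2}$ of the previous section can be sketched in plain Python (toy dimensions and hypothetical label indices only; a real implementation would use a deep-learning framework):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one logit vector.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def decode(H, W, b):
    """Linear decoder + softmax applied per utterance: y_t = softmax(W h_t + b).
    H is a T x d list of utterance representations, W a C x d weight matrix."""
    return [softmax([sum(w * h for w, h in zip(row, h_t)) + b_i
                     for row, b_i in zip(W, b)]) for h_t in H]

def joint_loss(act_probs, sent_probs, act_gold, sent_gold):
    """L = L1 + L2: summed cross-entropy over the T utterances of a dialog,
    with gold labels given as class indices."""
    l1 = -sum(math.log(p[g]) for p, g in zip(act_probs, act_gold))
    l2 = -sum(math.log(p[g]) for p, g in zip(sent_probs, sent_gold))
    return l1 + l2
```

The two decoders share nothing but their input representations; only the summed loss couples their training signals.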
\subsection{Baselines} We first compare with the state-of-the-art dialog act recognition models HEC, CRF-ASN and CASA; we then compare our model with the state-of-the-art sentiment classification models VDCNN, Region.emb, DRNN and DialogueRNN. Finally, we compare our framework with the existing state-of-the-art joint models, JointDAS and IIIM. We briefly describe these baseline models below: 1) \textbf{{HEC}} \cite{kumar2018dialogue}: This work uses a hierarchical Bi-LSTM-CRF (Bi-directional Long Short Term Memory with CRF) model for dialog act recognition, which captures both word-level and utterance-level dependencies. 2) \textbf{{CRF-ASN}} \cite{chen2018dialogue}: This model proposes a CRF-attentive structured network for dialog act recognition, which can dynamically separate the utterances into cliques. 3) \textbf{{CASA}} \cite{raheja-tetreault-2019-dialogue}: This work leverages a context-aware self-attention mechanism coupled with a hierarchical deep neural network and achieves state-of-the-art performance. 4) \textbf{{VDCNN}} \cite{conneau-etal-2017-deep}: This work proposes a deep CNN with 29 convolutional layers for text classification. 5) \textbf{{Region.emb}} \cite{qiao2018new}: This work proposes a new region-embedding method for text classification, which can effectively learn and utilize task-specific distributed representations of n-grams. 6) \textbf{{DRNN}} \cite{wang-2018-disconnected}: This work proposes a disconnected recurrent neural network for text classification, which incorporates position-invariance into the RNN. 7) \textbf{{DialogueRNN}} \cite{majumder2019dialoguernn}: DialogueRNN is an RNN-based neural architecture for emotion detection in a conversation, which keeps track of the individual party states throughout the conversation and uses this information for prediction.
8) \textbf{{JointDAS}} \cite{mastodon}: This model uses a multi-task framework for joint dialog act recognition and sentiment classification, which models the relation and interaction between the two tasks by sharing parameters. 9) \textbf{{IIIM}} \cite{kim2018integrated}: This work proposes an integrated neural network model which simultaneously identifies speech acts, predicators, and sentiments of dialogue utterances. For \textit{HEC, CRF-ASN} and \textit{CASA}, we re-implemented the models. For \textit{VDCNN, Region.emb}, \textit{DRNN} and \textit{DialogueRNN}, we adopted the open-sourced code\footnote{https://github.com/Tencent/NeuralNLP-NeuralClassifier and https://github.com/senticnet/conv-emotion} to obtain the results. For \textit{JointDAS}, we adopted the results reported by \newcite{mastodon} and ran their open-source code on the Dailydialog dataset to obtain results. For \textit{IIIM}, we re-implemented the model and obtained results on the same datasets.\footnote{All experiments are conducted on the public datasets provided by \newcite{mastodon}, and the dataset does not annotate the predicators. For direct comparison, we re-implemented the model without the predicator-prediction component and obtained the results on the same dataset.} For all BERT-based experiments, we simply replace our LSTM utterance encoder with the BERT-base model.\footnote{The BERT model is fine-tuned with our framework.} \subsection{Overall Results} On the Dailydialog dataset, following \newcite{kim2018integrated}, we adopt macro-averaged Precision, Recall and F1 for both sentiment classification and dialog act recognition. On the Mastodon dataset, following \newcite{mastodon}, we ignore the neutral sentiment label and, for DAR, adopt the average of the dialog-act-specific F1 scores weighted by the prevalence of each dialog act. The experimental results are shown in Table~\ref{table:over_all}.
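A minimal sketch of the macro-averaged precision, recall and F1 used on Dailydialog (illustrative gold/predicted labels only; real evaluation runs over the full label sets):

```python
def macro_prf(gold, pred):
    """Macro-averaged precision, recall and F1 over the label set:
    per-label scores are computed from one-vs-rest counts, then averaged
    with equal weight per label."""
    labels = sorted(set(gold) | set(pred))
    ps, rs, fs = [], [], []
    for lab in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        ps.append(prec); rs.append(rec); fs.append(f1)
    n = len(labels)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```

Macro averaging weights every label equally, so rare dialog acts and sentiments count as much as frequent ones.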
From the results, we can observe that: \begin{enumerate} \item We obtain large improvements over prior joint models. On the Mastodon dataset, compared with the \textit{IIIM} model, our framework with Co-Attention achieves a 5.7\% improvement in F1 score on the sentiment classification task and a 4.3\% improvement in F1 score on the dialog act recognition task. On the Dailydialog dataset, we achieve a 12.4\% improvement in F1 score on sentiment classification and a 3.4\% improvement in F1 score on dialog act recognition. It is worth noting that prior joint models modeled the relation between the two tasks only implicitly, by sharing parameters. This result demonstrates the effectiveness of explicitly modeling the interaction between the two tasks, from which both tasks can boost their performance. \item Our framework with Co-Attention outperforms the state-of-the-art dialog act recognition and sentiment classification models on all metrics on both datasets. This illustrates the advantages and effectiveness of our proposed joint model, where the information of one task can be effectively utilized by the other task. \item The \textit{MLP} relation layer outperforms \textit{concatenation}, which shows that the \textit{MLP} can further learn the deep implicit relation between the two tasks and improve the performance. In particular, the \textit{Co-Attention} relation layer attains the best F1 scores among the three relation layers on both datasets. We attribute this to the fact that the \textit{Co-Attention} operation can automatically detect the information that is mutually important to each task and better model the interaction between the two tasks. \item{From the last block of Table~\ref{table:over_all}, the BERT-based model performs remarkably well on both datasets and achieves a new state-of-the-art performance, which indicates the effectiveness of a strong pre-trained model on the two tasks.
We attribute this to the fact that pre-trained models provide rich semantic features, which help to improve the performance. } \end{enumerate} Unless otherwise stated, we apply only the Co-Attention relation layer in the following experiments. \subsection{Analysis} \label{sec:analysis} Having established the good performance of our model, we would like to understand the reasons for the improvement. In this section, we study our model from several directions. We first conduct several ablation experiments to analyze the effect of the different components of our framework. Next, we give a quantitative analysis to study how our proposed framework improves performance. Finally, we provide a co-attention visualization to better understand how the relation layer contributes to the performance. \subsubsection{Ablation} In this section, we perform several ablation experiments on the two datasets, with the results shown in Table~\ref{table:no_relation}. The results demonstrate the contribution of the different components of our framework to the final performance. We give a detailed analysis in the following: \begin{itemize} \item \textbf{w/o relation layer:} In this setting, we conduct experiments with a multi-task framework where dialog act recognition and sentiment classification promote each other only by sharing the parameters of the encoder, which is similar to \newcite{mastodon}. From the results, we observe a 4.8\% drop in F1 score on sentiment classification and a 2.6\% drop on dialog act recognition on the Mastodon dataset. On the Dailydialog dataset, we observe the same trend: the F1 scores drop considerably. This demonstrates that explicitly modeling the strong relations between the two tasks with the relation layer can benefit both of them effectively.
\begin{table}[t] \centering \begin{adjustbox}{width=0.45\textwidth} \begin{tabular}{l|cc|cc} \hline \multirow{2}*{Model} & \multicolumn{2}{c|}{Mastodon} & \multicolumn{2}{c}{Dailydialog} \\ \cline{2-5} ~ & SC (F1) & DAR (F1) & SC (F1) & DAR (F1) \\ \hline Full Model & 45.1 & 58.6 & 45.4 & 79.1 \\ \hline w/o relation layer & 40.3 & 55.2 & 38.0 & 78.4 \\ w/o stacked relation layer & 42.5 & 57.4 & 42.1 & 78.5 \\ w/o self-attention & 43.2 & 57.3 & 42.1 & 77.2 \\ \ \ \ \ +CNN & 43.9 & 58.2 & 43.1 & 78.4 \\ \hline \end{tabular} \end{adjustbox} \caption{Ablation study on the Mastodon and Dailydialog test datasets.} \label{table:no_relation} \end{table} \item \textbf{w/o stacked relation layer:} Here, we set the number of stacked relation layers to 1 in our framework. From the results, we can see that performance drops on all metrics. This indicates that the stacked structure with multiple steps of interaction better models the semantic relation between the two tasks. \item \textbf{w/o self-attention:} In this setting, we remove the self-attention layer, so there is no hierarchical architecture to capture dialog-level context information. The results show a significant drop in performance, indicating that capturing dialog-level context information with the hierarchical encoder is effective and important for dialog act recognition and sentiment classification. In addition, we replace the self-attention with a CNN \cite{kim-2014-convolutional}, which can also model the dialog context information. The result is shown in the last row of Table~\ref{table:no_relation}. We can see that the CNN outperforms the \textit{w/o self-attention} version and underperforms our full model, which further demonstrates the effectiveness of the dialog context information and the self-attention mechanism. \end{itemize} \subsubsection{Quantitative Analysis} In our DCR-Net model, we adopt the relation layer to model the interaction and relation between the two tasks explicitly.
To better understand our model, we compare the DA and sentiment performance of the DCR-Net model with the baseline without the relation layer, as shown in Figure~\ref{fig:DA} and Figure~\ref{fig:sentiment}. We show several DA types with a large performance boost in Figure~\ref{fig:DA}. From the results, we can see that our model yields significant improvements on the act types \texttt{Exclamation}, \texttt{Thanking}, \texttt{Agreement} and \texttt{Explicit Performative}. We attribute the improvements to the fact that those acts are strongly correlated with sentiment, and our model can provide sentiment information for DAR explicitly rather than implicitly by sharing parameters. Take the fourth utterance in Figure~\ref{fig:visual} for example: explicitly providing the \texttt{Negative} sentiment information of the current utterance and the \texttt{Negative} sentiment label of the previous utterance contributes to predicting the DA \texttt{Agreement}, which demonstrates the effectiveness of our proposed framework. In addition, from Figure~\ref{fig:sentiment}, we can observe that our model outperforms the baseline on both the positive and the negative sentiment labels. We think this is because our relation layer can explicitly capture DA information, which benefits the sentiment classification task. \begin{figure}[t] \centering \includegraphics[scale=0.5]{5400-act.pdf} \caption{Quantitative analysis on different types of DA for our model and the baseline.} \label{fig:DA} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.4]{5400-sentiment.pdf} \caption{Quantitative analysis on different types of sentiments for our model and the baseline.} \label{fig:sentiment} \end{figure} \begin{figure*}[t]\label{visualization} \centering \includegraphics[scale=0.5]{5400-visiulation.pdf} \caption{Co-Attention distribution scores from the fifth utterance to the whole dialog utterances. The top is the corresponding dialog context.
The bottom left is the act-to-sentiment attention and the right is the sentiment-to-act attention. } \label{fig:visual} \end{figure*} \subsection{Visualization of Co-Attention} In this section, to better understand what the model has learned, we visualize the co-attention distribution among utterances in dialogues. In particular, we visualize the attention distribution of the fifth utterance with respect to the other utterances, with the number of stacked relation layers varying from 1 to 6. From Figure~\ref{fig:visual}, we can observe: (1) The act-to-sentiment attention score on the fourth utterance is larger than on the other utterances. This is because the fifth utterance is most related to the fourth utterance, and the \texttt{Agreement} DA indicates that the current utterance agrees with the fourth utterance's statement. Similarly, the sentiment-to-act attention score on the fourth utterance is also the largest compared to the other utterances. These results demonstrate that our framework can correctly capture mutually important knowledge. (2) Using deeper layers generally leads to better performance, especially when the number of stacked layers is less than four. This is because the stacked relation layer can better model the relation between the two tasks and learn mutual knowledge. When the number of stacked layers exceeds three, the experimental performance degrades. We suggest that the reason might lie in vanishing gradients or overfitting as the whole network goes deeper. \section{Related Work} In this section, we introduce the related work on dialog act recognition, sentiment classification and joint models for the two tasks. \subsection{Dialog Act Recognition} Recently, more and more neural networks have been proposed for DAR. \newcite{kalchbrenner2013recurrent} propose a hierarchical CNN to model the utterance sequence for DA classification.
\newcite{lee2016sequential} propose a model based on CNNs and RNNs which incorporates the previous utterance as context to classify the current DA and shows promising performance. \newcite{ji2016latent} propose a latent variable recurrent neural network for jointly modeling sequences of words and discourse relations between adjacent sentences. Furthermore, many works \cite{liu2017using,kumar2018dialogue,chen2018dialogue} explore different architectures to incorporate context information for DAR. \newcite{raheja-tetreault-2019-dialogue} propose a token-level self-attention mechanism for DAR and achieve state-of-the-art performance. \subsection{Sentiment Classification} Sentiment classification in dialog systems can be seen as a sentence-level sequence classification problem. One line of work is based on CNNs \cite{zhang2015character,conneau-etal-2017-deep,johnson-zhang-2017-deep} to capture local correlation and position-invariance. Another line of work adopts RNN-based models \cite{tang-etal-2015-document,yang-etal-2016-hierarchical,xu-etal-2016-cached} to leverage temporal features and contextual information for sentence classification. Besides, some works \cite{xiao2016efficient,shi-etal-2016-deep,wang-2018-disconnected} attempt to combine the advantages of CNNs and RNNs for sentence classification. \subsection{Joint Model} Considering the correlation between dialog act recognition and sentiment classification, joint models have been proposed to solve the two tasks simultaneously in a unified framework. \newcite{mastodon} explore a multi-task framework to model the correlation between the two tasks. Compared with their model, we propose a relation layer to explicitly model the correlation between dialog act recognition and sentiment classification, whereas they model it implicitly, simply by sharing parameters. Moreover, our relation layer can be stacked to capture mutual knowledge sufficiently.
\newcite{kim2018integrated} propose an integrated neural network model for identifying dialog acts, predicators, and sentiments of dialogue utterances. Their framework classifies the current dialog act considering only the previous dialog act result, which cannot make full use of context information, whereas we adopt a hierarchical encoder with utterance-level self-attention to leverage context information. In addition, their model does not use sentiment information for dialog act recognition, while our framework considers the interaction and mutual relation between the two tasks. \section{Conclusion} This paper focuses on explicitly establishing bi-directional interrelated connections between dialog act recognition and sentiment classification. We propose a deep relation network to jointly model the interaction and relation between the two tasks, which adopts a stacked co-interactive relation layer to incorporate mutual knowledge explicitly. In addition, we explore three different relation layers and make a thorough study of their effects on the two tasks. Experiments on two datasets show the effectiveness of the proposed model, which achieves state-of-the-art performance. Extensive analysis further confirms the correlation between the two tasks and reveals that modeling the relation explicitly can boost their performance. Besides, we analyze the effect of incorporating the strong pre-trained BERT model into our joint model. With BERT, the results reach a new state-of-the-art level. \section{Acknowledgments} We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Natural Science Foundation of China (NSFC) via grants 61976072, 61632011 and 61772153.
\section{Introduction} A unified formulation of Einstein's theory of gravitation and the theory of electromagnetism in four-dimensional space-time was first proposed by Kaluza \cite{bb12}, who assumed a pure gravitational theory in five-dimensional space-time. The so-called cylinder condition was later explained by Klein when the extra dimension was compactified on a circle $S^1$ with a microscopic radius \cite{bb13}, so that the space-time becomes five-dimensional. The idea of introducing additional space-time dimensions has found application in quantum field theory, for instance, in string theory \cite{bb26}. There are studies of Kaluza-Klein theory with torsion \cite{bb31,bb32}, in the Grassmannian context \cite{ss1,ss2,ss3}, in K\"{a}hler fields \cite{ss4}, in the presence of fermions \cite{bb33,bb34,bb35}, and with Lorentz-symmetry violation (LSV) \cite{bb36, bb37,bb38}. There are also investigations of space-times with topological defects in the context of Kaluza-Klein theory, for example, the magnetic cosmic string \cite{bb14} (see also \cite{ss5}) and the magnetic chiral cosmic string \cite{bb28} in five dimensions. The Aharonov-Bohm effect \cite{bb39,bb40,bb50} is a quantum mechanical phenomenon that has been investigated in several branches of physics, such as graphene \cite{RJ}, Newtonian theory \cite{MAA}, bound states of massive fermions \cite{VRK}, scattering of dislocated wave-fronts \cite{CC}, the torsion effect on a relativistic position-dependent mass system \cite{ff3,AHEP}, and non-minimal Lorentz-violating coupling \cite{HB}. In addition, the Aharonov-Bohm effect has been investigated in the context of Kaluza-Klein theory by several authors \cite{bb28,bb15,bb16,aa6,EVBL,EVBL2,EPJC}, as has the geometric quantum phase in graphene \cite{KB}.
It is well known in condensed matter \cite{NB,WCT,LD,MBu,ACB} and in relativistic quantum systems \cite{Bakke,Bakke2} that when the energy eigenvalues depend on the geometric quantum phase \cite{bb50}, persistent currents arise in the system. Studies of persistent currents have explored systems that deal with the Berry phase \cite{DL,DL2}, the Aharonov-Anandan quantum phase \cite{XCG,TZQ}, and the Aharonov-Casher geometric quantum phase \cite{AVB,SO,HM2,HM3}. The magnetization and persistent currents of massless Dirac fermions confined in a quantum dot in a graphene layer with topological defects were studied in \cite{cc17}. The Klein-Gordon oscillator theory \cite{bb1,bb2} was inspired by the Dirac oscillator \cite{bb3}. This oscillator field has been used to study the spectral distribution of energy eigenvalues and eigenfunctions in a $1-d$ version of Minkowski space-time \cite{bb4}. The Klein-Gordon oscillator has been studied by several authors, for example, in the cosmic string space-time with external fields \cite{bb5}, with a Coulomb-type potential introduced in two ways: (i) by modifying the mass term $m \rightarrow m+S(r)$ \cite{bb6}, and (ii) via minimal coupling \cite{bb7}, in addition to a linear scalar potential, in the background space-time generated by a cosmic string \cite{bb8}, in G\"{o}del-type space-times under the influence of gravitational fields produced by topological defects \cite{bb9}, in the Som-Raychaudhuri space-time with a disclination parameter \cite{ff2}, in non-commutative (NC) phase space \cite{bb10}, in the $(1+2)$-dimensional G\"{u}rses space-time background \cite{ff4}, and in the $(1+2)$-dimensional G\"{u}rses space-time background subject to a Coulomb-type potential \cite{ff5}. The relativistic quantum effects on the oscillator field with a linear confining potential were investigated in \cite{ff6}. We consider a generalization of the Klein-Gordon oscillator as described in Refs. \cite{EPJC,ff5}.
This generalization is introduced through a generalized momentum operator in which the radial coordinate $r$ is replaced by a general function $f (r)$. To the authors' best knowledge, such a coupling was first introduced by K. Bakke {\it et al.} in Ref. \cite{Bakke}, where it led to a generalization of the Tan-Inkson model of a two-dimensional quantum ring for systems whose energy levels depend on the coupling's control parameters. Based on this, a generalized Dirac oscillator in the cosmic string space-time was studied by F. Deng {\it et al.} in Ref. \cite{cc9}, where the four-momentum $p_{\mu}$ is replaced with its alternative $p_{\mu}+m\,\omega\,\beta\,f_{\mu} ( x_{\mu} )$. In the literature, $f_{\mu} (x_{\mu})$ has been chosen similar to potentials encountered in quantum mechanics (Cornell-type, exponential-type, singular, Morse-type, Yukawa-like, etc.). A generalized Dirac oscillator in a $(2+1)$-dimensional world was studied in \cite{cc10}. Very recently, the generalized K-G oscillator in the cosmic string space-time \cite{FD} and non-inertial effects on a generalized DKP oscillator in the cosmic string space-time \cite{SZ} were studied. The relativistic quantum dynamics of a scalar particle of mass $m$ with a scalar potential $S (r)$ \cite{bb41,WG} is described by the following Klein-Gordon equation: \begin{equation} \left [\frac{1}{\sqrt{-g}}\,\partial_{\mu} (\sqrt{-g}\,g^{\mu\nu}\,\partial_{\nu})-(m + S)^2 \right]\,\Psi=0, \label{1} \end{equation} where $g$ is the determinant of the metric tensor and $g^{\mu\nu}$ is its inverse. To couple the Klein-Gordon field with the oscillator \cite{bb1,bb2}, the following change in the momentum operator is considered, as in \cite{dd2,bb5}: \begin{equation} \vec{p}\rightarrow \vec{p}+i\,m\,\omega\,\vec{r}, \label{2} \end{equation} where $\omega$ is the oscillator frequency of the particle and $\vec{r}=r\,\hat{r}$, with $r$ the distance from the particle to the string.
To generalize the Klein-Gordon oscillator, we adopt the idea considered in Refs. \cite{EPJC,ff5,cc9,FD,SZ} and replace $r \rightarrow f (r)$, that is, \begin{equation} X_{\mu}=(0, f(r), 0, 0, 0). \label{3} \end{equation} We can thus write $\vec{p}\rightarrow \vec{p}+i\,m\,\omega\,f (r)\,\hat{r}$, so that $p^2 \rightarrow (\vec{p}+i\,m\,\omega\, f (r)\,\hat{r})(\vec{p}-i\,m\,\omega\,f (r)\,\hat{r})$. Therefore, the generalized Klein-Gordon oscillator equation is \begin{equation} \left[\frac{1}{\sqrt{-g}}\,(\partial_{\mu}+m\,\omega\,X_{\mu})\{\sqrt{-g}\,g^{\mu\nu}\,(\partial_{\nu}-m\,\omega\,X_{\nu})\}-(m+ S)^2 \right]\,\Psi=0, \label{4} \end{equation} where $X_{\mu}$ is given by Eq. (\ref{3}). Various potentials have been used to investigate bound state solutions of relativistic wave equations. Among them, much attention has been given to the Coulomb-type potential. This kind of potential has been widely used to study various physical phenomena, such as the propagation of gravitational waves \cite{gg}, the confinement of quarks \cite{gg1}, molecular models \cite{gg2}, position-dependent mass systems \cite{gg3,gg4,gg5}, and relativistic quantum mechanics \cite{bb7,bb8,bb6}. The Coulomb-type potential is given by \begin{equation} S(r)=\frac{\eta_{c}}{r}, \label{5} \end{equation} where $\eta_{c}$ is the Coulombic confining parameter. Another potential of interest here is the Cornell-type potential. The Cornell potential, which consists of a linear potential plus a Coulomb potential, is a particular case of the quark-antiquark interaction potential, which may in addition contain a harmonic-type term \cite{gg6}. The Coulomb potential is responsible for the interaction at small distances, and the linear potential leads to the confinement. Recently, the Cornell potential has been studied in the ground state of three quarks \cite{CA}.
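For later convenience, we note (this identity can be checked by acting on a test function) that the radial part of the generalized coupling in Eq. (\ref{4}) expands, in the cylindrically symmetric case considered below, as
\begin{eqnarray}
\frac{1}{r}\left(\frac{d}{dr}+m\,\omega\,f(r)\right)\left(r\,\frac{d}{dr}-m\,\omega\,r\,f(r)\right)
&=&\frac{d^2}{dr^2}+\frac{1}{r}\,\frac{d}{dr}-m\,\omega\left(f'(r)+\frac{f(r)}{r}\right)\nonumber\\
&&-\,m^2\,\omega^2\,f^{2}(r),\nonumber
\end{eqnarray}
which is the source of the oscillator terms appearing in the radial wave equation.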
However, that potential is defined for spherical symmetry; in cylindrical symmetry, which is our case, the analogous potential is known as a Cornell-type potential \cite{bb9}. This type of interaction has been studied in \cite{bb9,ff6,bb41,RLLV}. Given this, let us consider the potential \begin{equation} S(r)=\eta_{L}\,r+\frac{\eta_c}{r}, \label{6} \end{equation} where $\eta_{L}, \eta_{c}$ are the confining potential parameters. The aim of the present work is to analyze a relativistic analogue of the Aharonov-Bohm effect for bound states \cite{bb39,bb40,bb50} for a relativistic scalar particle with a potential in the context of Kaluza-Klein theory. First, we study a relativistic scalar particle by solving the generalized Klein-Gordon oscillator with a Cornell-type potential in the five-dimensional cosmic string space-time. Secondly, by using Kaluza-Klein theory \cite{bb12,bb13,bb26}, a magnetic flux is introduced through the line element of the cosmic string space-time, and the generalized Klein-Gordon oscillator is then written in the five-dimensional space-time. In the latter case, a Coulomb-type potential is introduced by modifying the mass term $m \rightarrow m + S(r)$, which has not been studied earlier. We then show that relativistic bound state solutions can be achieved, where the relativistic energy eigenvalues depend on the geometric quantum phase \cite{bb50}. Due to this dependence of the relativistic energy eigenvalues on the geometric quantum phase, we calculate the persistent currents \cite{NB,WCT} that arise in the relativistic system.
This paper is organized as follows: in {\it section 2}, we study a generalized Klein-Gordon oscillator in the cosmic string background within the Kaluza-Klein theory with a Cornell-type scalar potential; in {\it section 3}, we study a generalized Klein-Gordon oscillator in the magnetic cosmic string background in the Kaluza-Klein theory subject to a Coulomb-type scalar potential and obtain the energy eigenvalues and eigenfunctions; and in {\it section 4}, we present our conclusions. \section{Generalized Klein-Gordon oscillator in cosmic string space-time with a Cornell-type potential in Kaluza-Klein theory} The purpose of this section is to study the Klein-Gordon equation in the cosmic string space-time within the Kaluza-Klein theory, with interactions. The first study of topological defects within the Kaluza-Klein theory was carried out in \cite{bb14}. The metric corresponding to this geometry can be written as \begin{equation} ds^2=-dt^2+dr^2+\alpha^2\,r^2\,d\phi^2+dz^2+dx^2, \label{7} \end{equation} where $t$ is the time coordinate, $x$ is the coordinate associated with the fifth additional dimension and $(r, \phi, z)$ are cylindrical coordinates. These coordinates assume the ranges $-\infty < (t, z) < \infty$, $0 \leq r < \infty$, $0 \leq \phi \leq 2\,\pi$, $0 < x < 2\,\pi\,a$, where $a$ is the radius of the compact dimension $x$. The parameter $\alpha$ characterizes the cosmic string and is given in terms of the mass density $\mu$ by $\alpha=1-4\,\mu$ \cite{bb21}. Cosmology and gravitation impose limits on the range of the parameter $\alpha$, which is restricted to $\alpha <1$ \cite{bb21}. By substituting the line element (\ref{7}) into Eq.
(\ref{4}), we obtain the following differential equation: \begin{eqnarray} &&[-\frac{\partial^2}{\partial t^2}+\frac{1}{r}\,\left(\frac{\partial}{\partial r} + m\,\omega\,f (r) \right)\,\left (r\,\frac{\partial}{\partial r}-m\,\omega\,r\,f (r) \right)+\frac{1}{\alpha^2\,r^2}\,\frac{\partial^2}{\partial \phi^2}+\frac{\partial^2}{\partial z^2}\nonumber\\&&+\frac{\partial^2}{\partial x^2} -(m+ S(r))^2]\,\Psi (t, r, \phi, z, x)=0,\nonumber\\ &&[-\frac{\partial^2}{\partial t^2}+\frac{\partial^2}{\partial r^2}+\frac{1}{r}\,\frac{\partial}{\partial r}-m\,\omega\,\left (f'(r)+\frac{f (r)}{r} \right)-m^2\,\omega^2\,f^{2} (r)+\frac{1}{\alpha^2\,r^2}\,\frac{\partial^2}{\partial \phi^2}\nonumber\\ &&+\frac{\partial^2}{\partial z^2}+\frac{\partial^2}{\partial x^2}-(m+ S(r))^2]\,\Psi (t, r, \phi, z, x)=0. \label{8} \end{eqnarray} Since the metric is independent of $t, \phi, z$ and $x$, one can choose the following ansatz for the function $\Psi$: \begin{equation} \Psi (t, r, \phi, z, x)=e^{i\,(-E\,t+l\,\phi+k\,z+q\,x)}\,\psi(r), \label{9} \end{equation} where $E$ is the total energy, $l=0,\pm\,1,\pm\,2,\ldots$, and $k, q$ are constants. Substituting the above ansatz into Eq. (\ref{8}), we get the following equation for $\psi (r)$: \begin{eqnarray} &&[ \frac{d^2}{dr^2} + \frac{1}{r}\,\frac{d}{dr} + E^2-k^2-q^2-\frac{l^2}{\alpha^2\,r^2}-m\,\omega\,\left (f'(r)+\frac{f (r)}{r} \right)\nonumber\\ &&-m^2\,\omega^2\,f^{2}(r)-\left(m+S(r) \right)^2]\,\psi(r)=0. \label{10} \end{eqnarray} We choose the function $f(r)$ to be of Cornell type, given by \cite{EPJC,ff5,cc9,SZ} \begin{equation} f(r)=a\,r+\frac{b}{r}\quad,\quad a, b>0. \label{11} \end{equation} Substituting the function (\ref{11}) and the Cornell potential (\ref{6}) into Eq.
(\ref{10}), we obtain the following equation: \begin{equation} \left [\frac{d^2}{dr^2} + \frac{1}{r}\,\frac{d}{dr} + \lambda-\Omega^2\,r^2-\frac{j^2}{r^2}-\frac{2\,m\,\eta_{c}}{r}-2\,m\,\eta_{L}\,r \right]\,\psi(r)=0, \label{12} \end{equation} where \begin{eqnarray} &&\lambda=E^2-k^2-q^2-m^2-2\,m\,\omega\,a-2\,m^2\,\omega^2\,a\,b-2\,\eta_{L}\,\eta_{c},\nonumber\\ &&\Omega=\sqrt{m^2\,\omega^2\,a^2+\eta^2_{L}},\nonumber\\ &&j=\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2\,b^2+\eta^{2}_{c}}. \label{13} \end{eqnarray} Transforming $\rho=\sqrt{\Omega}\,r$ in the equation (\ref{12}), we get \begin{equation} \left [\frac{d^2}{d\rho^2} + \frac{1}{\rho}\,\frac{d}{d\rho} + \zeta-\rho^2-\frac{j^2}{\rho^2}-\frac{\eta}{\rho}-\theta\,\rho \right]\,\psi (\rho)=0, \label{14} \end{equation} where \begin{equation} \zeta=\frac{\lambda}{\Omega}\quad,\quad \eta=\frac{2\,m\,\eta_c}{\sqrt{\Omega}}\quad,\quad \theta=\frac{2\,m\,\eta_L}{\Omega^{\frac{3}{2}}}. \label{15} \end{equation} Let us impose that $\psi (\rho) \rightarrow 0$ both for $\rho \rightarrow 0$ and $\rho \rightarrow \infty$. We assume a solution of Eq. (\ref{14}) of the form \begin{equation} \psi (\rho)=\rho^{j}\,e^{-\frac{1}{2}\,(\rho+\theta)\,\rho}\,H (\rho). \label{16} \end{equation} Substituting the solution Eq. (\ref{16}) into the Eq. (\ref{14}), we obtain \begin{equation} H''(\rho)+\left [\frac{\gamma}{\rho}-\theta-2\,\rho \right ]\,H'(\rho)+\left [-\frac{\beta}{\rho}+\Theta \right]\,H (\rho)=0, \label{17} \end{equation} where \begin{eqnarray} &&\gamma=1+2\,j,\nonumber\\ &&\Theta=\zeta+\frac{\theta^2}{4}-2\,(1+j),\nonumber\\ &&\beta=\eta+\frac{\theta}{2}\,(1+2\,j). \label{18} \end{eqnarray} Equation (\ref{17}) is the biconfluent Heun differential equation \cite{ff3,AHEP,bb15,bb16,aa6,EVBL,EVBL2,EPJC, bb7,bb8,bb9,ff2,ff5,ff6,bb41,bb42,bb46,bb47,dd51,dd52} and $H (\rho)$ is the biconfluent Heun function. The above equation (\ref{17}) can be solved by the Frobenius method.
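As a consistency check, the reduction of Eq. (\ref{14}) to the biconfluent Heun form (\ref{17}) with the parameters (\ref{18}) can be verified symbolically. The following minimal sketch using Python's sympy is our own addition (not part of the original derivation):

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
j, theta, zeta, eta = sp.symbols('j theta zeta eta', positive=True)
H = sp.Function('H')(rho)

# Ansatz (16): psi = rho^j * exp(-(rho + theta)*rho/2) * H(rho)
prefactor = rho**j * sp.exp(-(rho + theta)*rho/2)
psi = prefactor * H

# Left-hand side of the radial equation (14)
lhs = (sp.diff(psi, rho, 2) + sp.diff(psi, rho)/rho
       + (zeta - rho**2 - j**2/rho**2 - eta/rho - theta*rho)*psi)

# Biconfluent Heun form (17) with the parameter definitions (18)
gamma = 1 + 2*j
Theta = zeta + theta**2/4 - 2*(1 + j)
beta = eta + theta*(1 + 2*j)/2
heun = (sp.diff(H, rho, 2) + (gamma/rho - theta - 2*rho)*sp.diff(H, rho)
        + (-beta/rho + Theta)*H)

# After stripping the common prefactor the two expressions must agree
residual = sp.simplify(sp.expand(lhs/prefactor - heun))
print(residual)
```

The residual vanishes identically, confirming the parameter identifications in Eq. (\ref{18}).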
We consider the power series solution around the origin \cite{bb43} \begin{equation} H (\rho)=\sum_{i=0}^{\infty}\,c_{i}\,\rho^{i}. \label{19} \end{equation} Substituting the above power series solution into the Eq. (\ref{17}), we obtain the following recurrence relation for the coefficients: \begin{equation} c_{n+2}=\frac{1}{(n+2)(n+2+2\,j)}\,\left[\left\{\beta+\theta\,(n+1) \right\}\,c_{n+1}-(\Theta-2\,n)\,c_{n} \right]. \label{20} \end{equation} The first few coefficients are \begin{eqnarray} &&c_1=\left(\frac{\eta}{\gamma}-\frac{\theta}{2} \right)\,c_0,\nonumber\\ &&c_2=\frac{1}{4\,(1+j)}\,[\left(\beta+\theta \right)\,c_{1}-\Theta\,c_{0}]. \label{21} \end{eqnarray} Quantum theory requires that the wave function $\Psi$ be normalized. Bound state solutions $\psi (\rho)$ can be obtained because the wave function does not diverge at $\rho \rightarrow 0$ and $\rho \rightarrow \infty$. Since we have written the function $H (\rho)$ as a power series expansion around the origin in Eq. (\ref{19}), bound state solutions can be achieved by imposing that the power series expansion (\ref{19}) becomes a polynomial of degree $n$. Through the recurrence relation (\ref{20}), we can see that the power series expansion (\ref{19}) becomes a polynomial of degree $n$ by imposing two conditions \cite{ff3,AHEP,bb15,bb16,aa6,EVBL,EVBL2,EPJC, bb7,bb8,bb9,ff2,ff5,ff6,bb41,bb42,bb46,bb47}: \begin{eqnarray} \Theta&=&2\,n \quad (n=1,2,\ldots),\nonumber\\ c_{n+1}&=&0. \label{23} \end{eqnarray} By analyzing the condition $\Theta=2\,n$, we get the expression of the energy eigenvalues $E_{n,l}$: \begin{eqnarray} &&\frac{\lambda}{\Omega}+\frac{\theta^2}{4}-2\,(1+j)=2\,n\nonumber\\\Rightarrow &&E^{2}_{n,l}=k^2+q^2+m^2+2\,\Omega\,\left(n+1+\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2\,b^2+\eta^{2}_{c}} \right)\nonumber\\ &&+2\,m^2\,\omega^2\,a\,b+2\,m\,\omega\,a+2\,\eta_{L}\,\eta_c-\frac{m^2\,\eta^2_{L}}{\Omega^2}.
\label{24} \end{eqnarray} In figs. 1--5 we plot the above energy eigenvalue $E_{1,1}$ against the parameters $\eta_c$, $\eta_L$, $M$, $\omega$ and $\Omega$, respectively. Now we impose the additional condition $c_{n+1}=0$ to find the individual energy levels and corresponding wave functions one by one, as done in \cite{bb44,bb45}. As an example, for $n=1$ we have $\Theta=2$ and $c_2=0$, which implies \begin{eqnarray} &&c_1=\frac{2}{\beta+\theta}\,c_0\Rightarrow\left(\frac{\eta}{1+2\,j}-\frac{\theta}{2} \right)=\frac{2}{\beta+\theta}\nonumber\\ &&\Omega^3_{1,l}-\frac{\eta^2}{2\,(1+2\,j)}\Omega^2_{1,l}-\eta\,\theta\,(\frac{1+j}{1+2\,j})\,\Omega_{1,l}-\frac{\theta^2}{8}\,(3+2\,j)=0, \label{25} \end{eqnarray} a constraint on the parameter $\Omega_{1,l}$. The relation given in Eq. (\ref{25}) gives the possible values of the parameter $\Omega_{1,l}$ that permit us to construct a first degree polynomial $H(\rho)$ for $n=1$. Note that its value changes for each pair of quantum numbers $n$ and $l$, so we have labeled $\Omega \rightarrow \Omega_{n,l}$. Besides, since this parameter is determined by the frequency, the frequency $\omega_{1,l}$ is adjusted so that Eq. (\ref{25}) can be satisfied, where we have simplified our notation by labeling: \begin{equation} \omega_{1,l}=\frac{1}{m\,a}\sqrt{\Omega^2_{1,l}-\eta^2_{L}}. \label{26} \end{equation} It is noteworthy that the third-degree algebraic equation (\ref{25}) has at least one real solution, and it is exactly this solution that gives us the allowed values of the frequency for the lowest state of the system, which we do not write because its expression is very long. We can note, from Eq.
(\ref{25}), that the possible values of the frequency depend on the quantum numbers and the potential parameters. In addition, for each relativistic energy level, we have a different relation between the frequency associated with the Cornell-type potential and the quantum numbers of the system $\{l, n \}$. For this reason, we have labeled the parameters $\Omega$ and $\omega$ in Eqs. (\ref{25}) and (\ref{26}). Therefore, the ground state energy level and corresponding wave-function for $n=1$ are given by \begin{eqnarray} &&E^{2}_{1,l}=k^2+q^2+m^2+2\,\Omega_{1,l}\,\left(2+\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}} \right)\nonumber\\ &&+2\,m^2\,\omega^2_{1,l}\,a\,b+2\,m\,\omega_{1,l}\,a+2\,\eta_{L}\,\eta_c-\frac{m^2\,\eta^2_{L}}{\Omega^2_{1,l}},\nonumber\\ &&\psi_{1,l}=\rho^{\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}}}\,e^{-\frac{1}{2}\,\left (\frac{2\,m\,\eta_L}{\Omega^{\frac{3}{2}}_{1,l}}+\rho \right)\,\rho}\,\left(c_0+c_1\,\rho\right), \label{27} \end{eqnarray} where \begin{eqnarray} c_1&=&\frac{1}{\Omega^{\frac{1}{2}}_{1,l}}\,\left [\frac{2\,m\,\eta_c}{\left(1+2\,\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}} \right)}-\frac{m\,\eta_L}{\Omega_{1,l}} \right]\,c_0. \label{28} \end{eqnarray} Then, by substituting the real solution of Eq. (\ref{25}) into the Eqs. (\ref{27})--(\ref{28}), it is possible to obtain the allowed values of the relativistic energy for the radial mode $n=1$ of a position-dependent mass system. We can see that the lowest energy state, defined by the real solution of the algebraic equation (\ref{25}) together with the expression given in Eq. (\ref{27}), is defined by the radial mode $n=1$, instead of $n=0$. This effect arises due to the presence of the Cornell-type potential in the system.
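The closed-form spectrum (\ref{24}) is straightforward to evaluate numerically. The sketch below, with arbitrary illustrative parameter values of our own choosing (not taken from this work), confirms that the eigenvalue grows with the radial mode $n$ and with the conical deficit (smaller $\alpha$):

```python
import numpy as np

def energy(n, l, m=1.0, omega=1.0, a=1.0, b=0.5, eta_c=0.3, eta_L=0.2,
           k=0.0, q=0.0, alpha=0.8):
    """Evaluate the eigenvalue formula (24); all numbers are illustrative."""
    Omega = np.sqrt(m**2 * omega**2 * a**2 + eta_L**2)               # Eq. (13)
    j = np.sqrt(l**2 / alpha**2 + m**2 * omega**2 * b**2 + eta_c**2)  # Eq. (13)
    E2 = (k**2 + q**2 + m**2 + 2*Omega*(n + 1 + j)
          + 2*m**2*omega**2*a*b + 2*m*omega*a + 2*eta_L*eta_c
          - m**2 * eta_L**2 / Omega**2)
    return np.sqrt(E2)

# spectrum grows with the radial mode and with the conical deficit
print(energy(1, 1), energy(2, 1), energy(1, 1, alpha=1.0))
```

For these sample values, $E_{2,1} > E_{1,1}$, and the deficit $\alpha < 1$ raises the level relative to the Minkowski value, as expected from the $l^2/\alpha^2$ term.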
For $\alpha \rightarrow 1$, the relativistic energy eigenvalue (\ref{24}) becomes \begin{eqnarray} &&E^{2}_{n,l}=k^2+q^2+m^2+2\,\Omega\,\left(n+1+\sqrt{l^2+m^2\,\omega^2\,b^2+\eta^{2}_{c}} \right)\nonumber\\ &&+2\,m^2\,\omega^2\,a\,b+2\,m\,\omega\,a+2\,\eta_{L}\,\eta_c-\frac{m^2\,\eta^2_{L}}{\Omega^2}. \label{29} \end{eqnarray} Equation (\ref{29}) is the relativistic energy eigenvalue of a scalar particle via the generalized Klein-Gordon oscillator subject to a Cornell-type potential in the Minkowski space-time in the Kaluza-Klein theory. We discuss below a very special case of the above relativistic system. \vspace{0.3cm} {\bf Case A}: Considering $\eta_{L}=0$, that is, only a Coulomb-type potential $S(r)=\frac{\eta_c}{r}$. \vspace{0.3cm} We want to investigate the effect of a Coulomb-type potential on a scalar particle in the background of cosmic string space-time in the Kaluza-Klein theory. In that case, the radial wave-equation Eq. (\ref{12}) becomes \begin{equation} \left [\frac{d^2}{dr^2}+\frac{1}{r}\,\frac{d}{dr}+\lambda_0-m^2\,\omega^2\,a^2\,r^2-\frac{j^2}{r^2}-\frac{2\,m\,\eta_{c}}{r} \right]\,\psi(r)=0, \label{aa1} \end{equation} where \begin{equation} \lambda_0=E^2-k^2-q^2-m^2-2\,m\,\omega\,a-2\,m^2\,\omega^2\,a\,b. \label{aa2} \end{equation} Transforming $\rho=\sqrt{m\,\omega\,a}\,r$ in the Eq. (\ref{aa1}), we get \begin{equation} \left [\frac{d^2}{d\rho^2}+\frac{1}{\rho}\,\frac{d}{d\rho}+\frac{\lambda_0}{m\,\omega\,a}-\rho^2-\frac{j^2}{\rho^2}-\frac{2\,m\,\eta_c}{\sqrt{m\,\omega\,a}}\,\frac{1}{\rho} \right]\,\psi(\rho)=0. \label{aa6} \end{equation} We assume a solution of Eq. (\ref{aa6}) of the form \begin{equation} \psi (\rho)=\rho^{j}\,e^{-\frac{\rho^2}{2}}\,H (\rho). \label{aa7} \end{equation} Substituting the solution Eq. (\ref{aa7}) into the Eq.
(\ref{aa6}), we obtain \begin{equation} H ''(\rho)+\left [\frac{1+2\,j}{\rho}-2\,\rho \right]\, H' (\rho)+\left[-\frac{\tilde{\eta}}{\rho}+\frac{\lambda_0}{m\,\omega\,a}-2\,(1+j) \right]\, H (\rho)=0, \label{aa8} \end{equation} where $\tilde{\eta}=\frac{2\,m\,\eta_c}{\sqrt{m\,\omega\,a}}$. Equation (\ref{aa8}) is the biconfluent Heun differential equation \cite{ff3,AHEP,bb15,bb16,aa6,EVBL,EVBL2,EPJC,bb7,bb8,bb9,ff2,ff5,ff6,bb41,bb42,bb46,bb47,dd51,dd52}, where $H (\rho)$ is the Heun polynomial. Substituting the power series solution Eq. (\ref{19}) into the Eq. (\ref{aa8}), we obtain the following recurrence relation for the coefficients: \begin{equation} c_{n+2}=\frac{1}{(n+2)(n+2+2\,j)}\,\left [\tilde{\eta}\,c_{n+1}-\{\frac{\lambda_0}{m\,\omega\,a}-2\,(1+j)-2\,n \}\,c_n \right]. \label{aa9} \end{equation} The power series solution becomes a polynomial of degree $n$ provided \cite{ff3,AHEP,bb15,bb16,aa6,EVBL,EVBL2,EPJC, bb7,bb8,bb9,ff2,ff5,ff6,bb41,bb42,bb46,bb47} \begin{eqnarray} \frac{\lambda_0}{m\,\omega\,a}-2\,(1+j)&=&2\,n\quad (n=1,2,\ldots),\nonumber\\ c_{n+1}&=&0. \label{aa10} \end{eqnarray} Using the first condition, one obtains the following energy eigenvalues of the relativistic system: \begin{eqnarray} &&E_{n,l}=\pm\{k^2+q^2+m^2+2\,m\,\omega\,a\,\left(n+2+\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2\,b^2+\eta^{2}_c} \right)\nonumber\\ &&+2\,m^2\,\omega^2\,a\,b\}^{\frac{1}{2}}.
\label{aa3} \end{eqnarray} The ground state energy level and corresponding wave-function for $n=1$ are given by \begin{eqnarray} &&E_{1,l}=\pm\{k^2+q^2+m^2+2\,m\,\omega_{1,l}\,a\,\left(3+\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}} \right)\nonumber\\ &&+2\,m^2\,\omega^2_{1,l}\,a\,b \}^{\frac{1}{2}},\nonumber\\ &&\psi_{1,l} (\rho)=\rho^{\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}}}\,e^{-\frac{\rho^2}{2}}\,\left(c_0+c_1\,\rho \right), \label{aa4} \end{eqnarray} where \begin{eqnarray} c_1&=&\frac{2\,m\,\eta_{c}}{\sqrt{m\,\omega_{1,l}\,a}\,\left(1+2\,\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}}\right)}\,c_0\nonumber\\ &=&\left(\frac{2}{1+2\,\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}}}\right)^{\frac{1}{2}}\,c_0,\nonumber\\ \omega_{1,l}&=&\frac{2\,m\,\eta^2_{c}}{a\,\left(1+2\,\sqrt{\frac{l^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_c} \right)}, \label{aa5} \end{eqnarray} a constraint on the frequency parameter $\omega_{1,l}$. \vspace{0.3cm} {\bf Case B}: \vspace{0.3cm} We consider another case, corresponding to $a \rightarrow 0$, $b \rightarrow 0$ and $\eta_{L}=0$, that is, a scalar quantum particle in the cosmic string background subject to a Coulomb-type scalar potential within the Kaluza-Klein theory. In that case, from Eq. (\ref{12}) we obtain the following equation: \begin{equation} \psi''(r)+\frac{1}{r}\,\psi'(r)+[\tilde{\lambda}-\frac{\tilde{j}^2}{r^2}-\frac{2\,m\,\eta_{c}}{r}]\,\psi(r)=0. \label{bb1} \end{equation} Equation (\ref{bb1}) can be written as \begin{equation} \psi''(r)+\frac{1}{r}\,\psi'(r)+\frac{1}{r^2}\,(-\xi_1\,r^2+\xi_2\,r-\xi_3)\,\psi(r)=0, \label{bb2} \end{equation} where \begin{equation} \xi_1=-\tilde{\lambda}=-(E^2-k^2-q^2-m^2),\quad \xi_2=-2\,m\,\eta_{c},\quad \xi_3=\tilde{j}^2=\frac{l^2}{\alpha^2}+\eta^2_{c}.
\label{bb3} \end{equation} Comparing Eq. (\ref{bb2}) with Eq. (\ref{A.1}) in appendix A, we get \begin{eqnarray} &&\alpha_1=1,\quad \alpha_2=0,\quad \alpha_3=0,\quad \alpha_4=0,\quad \alpha_5=0,\quad \alpha_6=\xi_1,\nonumber\\ &&\alpha_7=-\xi_2,\quad \alpha_8=\xi_3,\quad \alpha_9=\xi_1,\quad \alpha_{10}=1+2\,\sqrt{\xi_3},\nonumber\\ &&\alpha_{11}=2\,\sqrt{\xi_1},\quad \alpha_{12}=\sqrt{\xi_3},\quad \alpha_{13}=-\sqrt{\xi_1}. \label{bb6} \end{eqnarray} Using Eqs. (\ref{bb3})--(\ref{bb6}) in Eq. (\ref{A.8}) of appendix A, the energy eigenvalues are given by \begin{equation} E_{n,l}=\pm\,m\,\sqrt{1-\frac{\eta^2_{c}}{(n+\sqrt{\frac{l^2}{\alpha^2}+\eta^{2}_{c}}+\frac{1}{2})^2}+\frac{k^2}{m^2}+\frac{q^2}{m^2}}, \label{bb4} \end{equation} where $n=0,1,2,\ldots$ is the quantum number associated with the radial modes, $l=0,\pm\,1,\pm\,2,\ldots$ is the quantum number associated with the angular momentum operator, and $k$ and $q$ are arbitrary constants. Equation (\ref{bb4}) corresponds to the relativistic energy eigenvalues of a free scalar particle subject to a Coulomb-type scalar potential in the background of the cosmic string within the Kaluza-Klein theory. The corresponding radial wave-function is given by \begin{eqnarray} \psi_{n,l} (r)&=&|N|\,r^{\frac{\tilde{j}}{2}}\,{\sf e}^{-\frac{r}{2}}\,{\sf L}^{(\tilde{j})}_{n} (r)\nonumber\\ &=&|N|\,r^{\frac{1}{2}\,\sqrt{\frac{l^2}{\alpha^2}+\eta^2_{c}}}\,{\sf e}^{-\frac{r}{2}}\,{\sf L}^{(\sqrt{\frac{l^2}{\alpha^2}+\eta^2_{c}})}_{n} (r). \label{bb7} \end{eqnarray} Here $|N|$ is the normalization constant and ${\sf L}^{(\sqrt{\frac{l^2}{\alpha^2}+\eta^2_{c}})}_{n} (r) $ is the generalized Laguerre polynomial. For $\alpha \rightarrow 1$, the relativistic energy eigenvalue Eq. (\ref{bb4}) becomes \begin{equation} E_{n,l}=\pm\,m\,\sqrt{1-\frac{\eta^2_{c}}{(n+\sqrt{l^2+\eta^{2}_{c}}+\frac{1}{2})^2}+\frac{k^2}{m^2}+\frac{q^2}{m^2}}.
\label{bb5} \end{equation} Equation (\ref{bb5}) corresponds to the relativistic energy eigenvalue of a scalar particle subject to a Coulomb-type scalar potential in the Minkowski space-time within the Kaluza-Klein theory. \section{Generalized Klein-Gordon oscillator in the magnetic cosmic string with a Coulomb-type potential in Kaluza-Klein theory} Let us consider the quantum dynamics of a particle moving in the magnetic cosmic string background. In the Kaluza-Klein theory \cite{bb12,bb13,bb28}, the corresponding metric with an Aharonov-Bohm magnetic flux $\Phi$ passing along the symmetry axis of the string assumes the following form \begin{equation} ds^2=-dt^2+dr^2+\alpha^2\,r^2\,d\phi^2+dz^2+(dx+\frac{\Phi}{2\,\pi}\,d\phi)^2, \label{30} \end{equation} where cylindrical coordinates are used. The quantum dynamics is described by the equation (\ref{4}) with the following change in the inverse metric tensor $g^{\mu\nu}$, \begin{equation} g^{\mu\nu}=\left (\begin{array}{lllll} -1 & 0 & \quad 0 & 0 & \quad 0 \\ \quad 0 & 1 & \quad 0 & 0 & \quad 0 \\ \quad 0 & 0 & \quad \frac{1}{\alpha^2\,r^2} & 0 & -\frac{\Phi}{2\,\pi\,\alpha^2\,r^2} \\ \quad 0 & 0 & \quad 0 & 1 & \quad 0 \\ \quad 0 & 0 & -\frac{\Phi}{2\,\pi\,\alpha^2\,r^2} & 0 & 1+\frac{\Phi^2}{4\,\pi^2\,\alpha^2\,r^2} \end{array} \right). \label{31} \end{equation} By considering the line element (\ref{30}) in Eq. (\ref{4}), we obtain the following differential equation: \begin{eqnarray} &&[-\partial_{t}^2+\partial_{r}^2+\frac{1}{r}\,\partial_{r}+\frac{1}{\alpha^2\,r^2}\,(\partial_{\phi}-\frac{\Phi}{2\,\pi}\,\partial_{x})^2+\partial_{z}^2+\partial_{x}^2\nonumber\\ &&-m\,\omega\,\left(f' (r)+\frac{f(r)}{r} \right)-m^2\,\omega^2\,f^{2}(r)-\left(m + S(r) \right)^2]\,\Psi=0. \label{32} \end{eqnarray} Since the space-time is independent of $t, \phi, z, x$, substituting the ansatz (\ref{9}) into the Eq.
(\ref{32}), we get the following equation: \begin{eqnarray} &&\psi ''(r)+\frac{1}{r}\,\psi'(r)+[E^2-k^2-q^2-\frac{l^2_{eff}}{r^2}-m\,\omega\,\left(f'(r)+\frac{f(r)}{r} \right)\nonumber\\ &-&m^2\,\omega^2\,f^{2}(r)-\left(m+S (r) \right)^2]\,\psi (r)=0, \label{33} \end{eqnarray} where the effective angular quantum number is \begin{equation} l_{eff}=\frac{1}{\alpha}\,(l-\frac{q\,\Phi}{2\,\pi}). \label{34} \end{equation} Substituting the function (\ref{11}) into the Eq. (\ref{33}) and using the Coulomb-type potential (\ref{5}), the radial wave-equation becomes \begin{equation} \left [\frac{d^2}{dr^2} + \frac{1}{r}\,\frac{d}{dr} + \lambda_0-m^2\,\omega^2\,a^2\,r^2-\frac{\chi^2}{r^2}-\frac{2\,m\,\eta_{c}}{r} \right]\,\psi(r)=0, \label{35} \end{equation} where \begin{eqnarray} &&\lambda_0=E^2-k^2-q^2-m^2-2\,m\,\omega\,a-2\,m^2\,\omega^2\,a\,b,\nonumber\\ &&\chi=\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2\,b^2+\eta^{2}_{c}}. \label{36} \end{eqnarray} Transforming $\rho=\sqrt{m\,\omega\,a}\,r$ in the equation (\ref{35}), we get \begin{equation} \left [\frac{d^2}{d\rho^2}+\frac{1}{\rho}\,\frac{d}{d\rho}+ \frac{\lambda_0}{m\,\omega\,a}-\rho^2-\frac{\chi^2}{\rho^2}-\frac{\tilde{\eta}}{\rho} \right]\,\psi (\rho)=0, \label{37} \end{equation} where $\tilde{\eta}=\frac{2\,m\,\eta_c}{\sqrt{m\,\omega\,a}}$. We assume a solution of Eq. (\ref{37}) of the form \begin{equation} \psi (\rho)=\rho^{\chi}\,e^{-\frac{\rho^2}{2}}\,H (\rho). \label{42} \end{equation} Substituting the solution Eq. (\ref{42}) into the Eq. (\ref{37}), we obtain \begin{equation} H'' (\rho)+\left[\frac{1+2\,\chi}{\rho}-2\,\rho \right]\,H' (\rho)+\left [-\frac{\tilde{\eta}}{\rho}+\frac{\lambda_0}{m\,\omega\,a}-2\,(1+\chi) \right]\,H (\rho)=0. \label{43} \end{equation} Equation (\ref{43}) is the biconfluent Heun differential equation \cite{ff3,AHEP,bb15,bb16,aa6,EVBL,EVBL2,EPJC, bb7,bb8,bb9,ff2,ff5,ff6,bb41,bb42,bb46,bb47,dd51,dd52}, where $H (\rho)$ is the Heun polynomial.
Substituting the power series solution Eq. (\ref{19}) into the Eq. (\ref{43}), we obtain the following recurrence relation for the coefficients: \begin{equation} c_{n+2}=\frac{1}{(n+2)\,(n+2+2\,\chi)}\,\left[\tilde{\eta}\,c_{n+1}-\left\{ \frac{\lambda_0}{m\,\omega\,a}-2-2\,\chi-2\,n \right\}\,c_n \right]. \label{44} \end{equation} The power series becomes a polynomial of degree $n$ by imposing the following conditions \cite{ff3,AHEP,bb15,bb16, aa6,EVBL,EVBL2,EPJC,bb7,bb8,bb9,ff2,ff5,ff6,bb41,bb42,bb46,bb47}: \begin{equation} c_{n+1}=0\quad,\quad \frac{\lambda_0}{m\,\omega\,a}-2-2\,\chi=2\,n\quad (n=1,2,\ldots). \label{45} \end{equation} By analyzing the second condition, we get the following energy eigenvalues $E_{n,l}$: \begin{eqnarray} &&E^{2}_{n,l}=k^2+q^2+m^2+2\,m\,\omega\,a\,\left(n+2+\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2\,b^2+\eta^{2}_c}\right)\nonumber\\ &&+2\,m^2\,\omega^2\,a\,b. \label{38} \end{eqnarray} Equation (\ref{38}) gives the energy eigenvalues of a generalized Klein-Gordon oscillator in the magnetic cosmic string with a Coulomb-type scalar potential in the Kaluza-Klein theory. Observe that the relativistic energy eigenvalues Eq. (\ref{38}) depend on the Aharonov-Bohm geometric quantum phase \cite{bb50}. Thus, we have that $E_{n, l} (\Phi+\Phi_0)=E_{n, l \mp \tau} (\Phi)$, where $\Phi_0=\pm\,\frac{2\,\pi}{q}\,\tau$ with $\tau=0,1,2,\ldots$. This dependence of the relativistic energy eigenvalue on the geometric quantum phase $\Phi$ gives rise to a relativistic analogue of the Aharonov-Bohm effect for bound states \cite{ff3,bb15,bb28,bb39,bb40,bb50}. In figs. 6--9 we plot the above energy eigenvalue $E_{1,1}$ against the parameters $\eta_c$, $M$, $\omega$ and $\Phi$, respectively.
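The periodicity property $E_{n, l} (\Phi+\Phi_0)=E_{n, l \mp \tau} (\Phi)$ quoted above can be checked directly from Eq. (\ref{38}); a minimal numerical sketch, with illustrative parameter values of our own choosing:

```python
import numpy as np

def E(n, l, Phi, m=1.0, omega=1.0, a=1.0, b=0.5, eta_c=0.3,
      k=0.0, q=2.0, alpha=0.8):
    # Eqs. (36) and (38); parameter values are illustrative only
    chi = np.sqrt((l - q*Phi/(2*np.pi))**2 / alpha**2
                  + m**2*omega**2*b**2 + eta_c**2)
    E2 = (k**2 + q**2 + m**2 + 2*m*omega*a*(n + 2 + chi)
          + 2*m**2*omega**2*a*b)
    return np.sqrt(E2)

tau = 3
q = 2.0
Phi0 = 2*np.pi*tau/q
# shifting the flux by Phi0 maps the level with quantum number l to l - tau
print(np.isclose(E(1, 5, 0.7 + Phi0), E(1, 5 - tau, 0.7)))  # True
```

The check works because $\Phi \rightarrow \Phi + \frac{2\pi}{q}\tau$ shifts $l - \frac{q\Phi}{2\pi}$ by exactly $-\tau$, so the spectrum only feels the flux through the effective angular momentum (\ref{34}).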
The ground state energy level and corresponding wave-function for $n=1$ are given by \begin{eqnarray} &&E^{2}_{1,l}=k^2+q^2+m^2+2\,m\,\omega_{1,l}\,a\,\left(3+\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_c} \right)\nonumber\\ &&+2\,m^2\,\omega^2_{1,l}\,a\,b\quad,\nonumber\\ &&\psi_{1,l} (\rho)=\rho^{\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_c}}\,e^{-\frac{\rho^2}{2}}\,\left(c_0+c_1\,\rho \right), \label{39} \end{eqnarray} where \begin{eqnarray} c_1&=&\frac{2\,m\,\eta_{c}}{\sqrt{m\,\omega_{1,l}\,a}\,(1+2\,\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_{c}})}\,c_0\nonumber\\ &=&\left(\frac{2}{1+2\,\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_c}}\right)^{\frac{1}{2}}\,c_0,\nonumber\\ \omega_{1,l}&=&\frac{2\,m\,\eta^2_{c}}{a\,\left (1+2\,\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2_{1,l}\,b^2+\eta^{2}_c}\right)}, \label{40} \end{eqnarray} a constraint on the physical parameter $\omega_{1,l}$. Equation (\ref{39}) gives the ground state energy eigenvalue and corresponding eigenfunction of a generalized Klein-Gordon oscillator in the presence of a Coulomb-type scalar potential in the magnetic cosmic string space-time in the Kaluza-Klein theory. For $\alpha \rightarrow 1$, the energy eigenvalue (\ref{38}) becomes \begin{eqnarray} &&E^{2}_{n,l}=k^2+m^2+q^2+2\,m\,\omega\,a\,\left(n+2+\sqrt{(l-\frac{q\,\Phi}{2\,\pi})^2+m^2\,\omega^2\,b^2+\eta^{2}_{c}}\right)\nonumber\\ &&+2\,m^2\,\omega^2\,a\,b. \label{41} \end{eqnarray} Equation (\ref{41}) is the relativistic energy eigenvalue of the generalized Klein-Gordon oscillator field with a Coulomb-type scalar potential with a magnetic flux in the Kaluza-Klein theory. Observe that the relativistic energy eigenvalue Eq. (\ref{41}) depends on the geometric quantum phase \cite{bb50}.
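Note that $\omega_{1,l}$ appears on both sides of the constraint in Eq. (\ref{40}) (inside the square root through $\chi$), so it has to be determined self-consistently. A minimal fixed-point iteration, with illustrative parameter values of our own choosing:

```python
import numpy as np

def omega_ground(l, Phi, m=1.0, a=1.0, b=0.5, eta_c=0.6, q=2.0, alpha=0.8,
                 tol=1e-12, maxit=500):
    """Self-consistent solution of the constraint (40) for omega_{1,l};
    all parameter values are illustrative sample choices."""
    w = 1.0  # arbitrary starting guess
    for _ in range(maxit):
        chi = np.sqrt((l - q*Phi/(2*np.pi))**2 / alpha**2
                      + m**2 * w**2 * b**2 + eta_c**2)   # Eq. (36)
        w_new = 2*m*eta_c**2 / (a*(1 + 2*chi))           # Eq. (40)
        if abs(w_new - w) < tol:
            return w_new
        w = w_new
    raise RuntimeError("fixed-point iteration did not converge")

w = omega_ground(1, 0.5)
print(w)
```

The iteration converges rapidly here because the right-hand side of Eq. (\ref{40}) is a slowly varying, decreasing function of $\omega$; for other parameter regimes a root-finder could be substituted.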
Thus, we have that $E_{n,l} (\Phi+\Phi_0)=E_{n,l \mp \tau} (\Phi)$, where $\Phi_0=\pm\,\frac{2\,\pi}{q}\,\tau$ with $\tau=0,1,2,\ldots$. This dependence of the relativistic energy eigenvalue on the geometric quantum phase gives rise to an effect analogous to the Aharonov-Bohm effect for bound states \cite{ff3,bb15,bb28,bb39,bb40,bb50}. \vspace{0.3cm} {\bf Case A}: \vspace{0.3cm} We discuss below a special case corresponding to $b \rightarrow 0$, $a \rightarrow 0$, that is, a scalar quantum particle in a magnetic cosmic string background subject to a Coulomb-type scalar potential in the Kaluza-Klein theory. In that case, from Eq. (\ref{35}) we obtain the following equation: \begin{equation} \psi''(r)+\frac{1}{r}\,\psi'(r)+[\tilde{\lambda}-\frac{\tilde{\chi}^{2}_0}{r^2}-\frac{2\,m\,\eta_{c}}{r}]\,\psi(r)=0, \label{cc1} \end{equation} where \begin{eqnarray} &&\tilde{\lambda}=E^2-k^2-q^2-m^2,\nonumber\\ &&\tilde{\chi}_0=\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+\eta^{2}_c}. \label{cc2} \end{eqnarray} The above Eq. (\ref{cc1}) can be written as \begin{equation} \psi''(r)+\frac{1}{r}\,\psi'(r)+\frac{1}{r^2}\,\left(-\xi_1\,r^2+\xi_2\,r-\xi_3 \right)\,\psi(r)=0, \label{cc3} \end{equation} where \begin{equation} \xi_1=-\tilde{\lambda}\quad,\quad \xi_2=-2\,m\,\eta_{c}\quad,\quad \xi_3=\tilde{\chi}^{2}_0. \label{Bakke} \end{equation} Following the same technique as before, we get the following energy eigenvalues $E_{n,l}$: \begin{equation} E_{n,l}=\pm\,m\,\sqrt{1-\frac{\eta^2_{c}}{\left (n+\sqrt{\frac{1}{\alpha^2}\,(l-\frac{q\,\Phi}{2\,\pi})^2+\eta^{2}_{c}}+\frac{1}{2}\right)^2}+\frac{k^2}{m^2}+\frac{q^2}{m^2}}, \label{cc5} \end{equation} where $n=0,1,2,\ldots$ is the quantum number associated with the radial modes, $l=0,\pm\,1,\pm\,2,\ldots$ is the quantum number associated with the angular momentum, and $k$ and $q$ are constants.
Equation (\ref{cc5}) corresponds to the relativistic energy levels of a free scalar particle subject to a Coulomb-type scalar potential in the background of the magnetic cosmic string in the Kaluza-Klein theory. The radial wave-function is given by \begin{eqnarray} \psi_{n,l} (r)&=&|N|\,r^{\frac{\tilde{\chi}_0}{2}}\,{\sf e}^{-\frac{r}{2}}\,{\sf L}^{(\tilde{\chi}_0)}_{n} (r)\nonumber\\ &=&|N|\,r^{\frac{1}{2}\,\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+\eta^{2}_c}}\,{\sf e}^{-\frac{r}{2}}\,{\sf L}^{(\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+\eta^{2}_c})}_{n} (r). \label{cc7} \end{eqnarray} Here $|N|$ is the normalization constant and ${\sf L}^{(\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+\eta^{2}_c})}_{n} (r) $ is the generalized Laguerre polynomial. For $\alpha \rightarrow 1$, the energy eigenvalue (\ref{cc5}) becomes \begin{equation} E_{n,l}=\pm\,m\,\sqrt{1-\frac{\eta^2_{c}}{\left (n+\sqrt{(l-\frac{q\,\Phi}{2\,\pi})^2+\eta^{2}_{c}}+\frac{1}{2}\right)^2}+\frac{k^2}{m^2}+\frac{q^2}{m^2}}, \label{cc6} \end{equation} which is similar to the energy eigenvalue obtained in \cite{bb16} (see Eq. (12) in \cite{bb16}). Thus we can see that the cosmic string parameter $\alpha$ modifies the relativistic energy eigenvalue (\ref{cc5}) in comparison to the results obtained in \cite{bb16}. Observe that the relativistic energy eigenvalues Eq. (\ref{cc5}) depend on the cosmic string parameter $\alpha$, the magnetic quantum flux $\Phi$, and the potential parameter $\eta_c$. We can see that $E_{n, l} (\Phi+\Phi_0)=E_{n, l \mp \tau} (\Phi)$, where $\Phi_0=\pm\,\frac{2\,\pi}{q}\,\tau$ with $\tau=0,1,\ldots$. This dependence of the relativistic energy eigenvalues on the geometric quantum phase gives rise to a relativistic analogue of the Aharonov-Bohm effect for bound states \cite{ff3,bb15,bb28,bb39,bb40,bb50}.
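The modification of the spectrum (\ref{cc5}) by the deficit parameter $\alpha$ can be made concrete numerically. In the sketch below (sample parameter values of our own choosing, not from the paper), a sharper deficit raises the energy relative to the Minkowski limit $\alpha \rightarrow 1$:

```python
import numpy as np

def E_case_a(n, l, Phi, m=1.0, eta_c=0.4, k=0.1, q=2.0, alpha=0.8):
    # Eq. (cc5), positive branch; parameter values chosen only for illustration
    leff = (l - q*Phi/(2*np.pi)) / alpha
    root = n + np.sqrt(leff**2 + eta_c**2) + 0.5
    return m*np.sqrt(1 - eta_c**2/root**2 + k**2/m**2 + q**2/m**2)

# smaller alpha enlarges the effective angular momentum, which weakens the
# Coulomb binding term and raises the level
print(E_case_a(0, 1, 0.0, alpha=0.8), E_case_a(0, 1, 0.0, alpha=1.0))
```

This reproduces numerically the statement that for $\alpha \rightarrow 1$ the eigenvalue reduces to the Minkowski result of Eq. (\ref{cc6}).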
\subsection{Persistent currents of the Relativistic System} Following \cite{NB,WCT,LD}, the expression for the total persistent current is given by \begin{equation} I=\sum_{n,l}\,I_{n,l}, \label{dd1} \end{equation} where \begin{equation} I_{n,l}=-\frac{\partial E_{n,l}}{\partial \Phi} \label{dd2} \end{equation} is called the Byers-Yang relation \cite{NB}. Therefore, the persistent current that arises in this relativistic system, using Eq. (\ref{38}), is given by \begin{eqnarray} &&I_{n,l}=-\frac{\partial E_{n,l}}{\partial \Phi}\nonumber\\ &&=\mp\frac{m\,\omega\, a\,(\frac{\partial\,\chi}{\partial\,\Phi})}{\sqrt{k^2+q^2+m^2+2m^2\omega^2 a b+2 m \omega a\left(n+2+\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\omega^2 b^2+\eta^{2}_c}\right)}},\quad\quad \label{bb55} \end{eqnarray} where \begin{eqnarray} \frac{\partial\,\chi}{\partial\,\Phi}=-\frac{q\,(l-\frac{q\,\Phi}{2\,\pi})}{2\,\alpha^2\,\pi\,\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+m^2\,\omega^2\,b^2+\eta^{2}_c}}. \label{dd4} \end{eqnarray} Similarly, for the relativistic system discussed in {\bf case A} of this section, this current, using Eq. (\ref{cc5}), is given by \begin{eqnarray} I_{n,l}&=&\pm\,\frac{m\,q\,\eta^{2}_c\,(l-\frac{q\,\Phi}{2\,\pi})}{2\,\pi\,\alpha^2\,\left(n+\frac{1}{2}+\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+\eta^2_{c}} \right)^3\,\sqrt{\frac{(l-\frac{q\,\Phi}{2\,\pi})^2}{\alpha^2}+\eta^2_{c}}}\nonumber\\ &&\times\frac{1}{\sqrt{1-\frac{\eta^2_{c}}{\left(n+\sqrt{\frac{1}{\alpha^2}\,(l-\frac{q\,\Phi}{2\,\pi})^2+\eta^{2}_c}+\frac{1}{2}\right)^2}+\frac{k^2}{m^2}+\frac{q^2}{m^2}}}. \label{dd5} \end{eqnarray} For $\alpha \rightarrow 1$, the persistent current expression given by Eq. (\ref{dd5}) reduces to the result obtained in Ref. \cite{bb16}. Thus we can see that the presence of the cosmic string parameter modifies the persistent current Eq. (\ref{dd5}) in comparison to the results in Ref. \cite{bb16}.
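The closed form (\ref{bb55}) with (\ref{dd4}) can be cross-checked against a central-difference evaluation of $-\partial E_{n,l}/\partial \Phi$ applied to Eq. (\ref{38}); a minimal sketch with illustrative parameter values of our own choosing:

```python
import numpy as np

m, omega, a, b, eta_c, k, q, alpha = 1.0, 1.0, 1.0, 0.5, 0.3, 0.0, 2.0, 0.8

def chi(l, Phi):
    return np.sqrt((l - q*Phi/(2*np.pi))**2 / alpha**2
                   + m**2*omega**2*b**2 + eta_c**2)        # Eq. (36)

def E(n, l, Phi):
    return np.sqrt(k**2 + q**2 + m**2 + 2*m*omega*a*(n + 2 + chi(l, Phi))
                   + 2*m**2*omega**2*a*b)                  # Eq. (38)

def I_closed(n, l, Phi):
    # Eqs. (bb55) and (dd4), positive-energy branch
    dchi = -q*(l - q*Phi/(2*np.pi)) / (2*np.pi*alpha**2*chi(l, Phi))
    return -m*omega*a*dchi / E(n, l, Phi)

h = 1e-6
I_num = -(E(1, 3, 0.5 + h) - E(1, 3, 0.5 - h)) / (2*h)
print(I_closed(1, 3, 0.5), I_num)
```

The two values agree to within the finite-difference error, confirming that (\ref{bb55}) is just the Byers-Yang derivative (\ref{dd2}) of the spectrum (\ref{38}).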
By introducing a magnetic flux through the line element of the cosmic string space-time in five dimensions, we see that the relativistic energy eigenvalue Eq. (\ref{38}) depends on the geometric quantum phase \cite{bb50}, which gives rise to a relativistic analogue of the Aharonov-Bohm effect for bound states \cite{ff3,bb15,bb28,bb39,bb40,bb50}. Moreover, this dependence of the relativistic energy eigenvalues on the geometric quantum phase yields persistent currents in this relativistic quantum system. \section{Conclusions} In Ref. \cite{bb16}, Aharonov-Bohm effects for bound states of a relativistic scalar particle were studied by solving the Klein-Gordon equation subject to a Coulomb-type potential in the Minkowski space-time within the Kaluza-Klein theory. The authors obtained the relativistic bound state solutions and calculated the persistent currents. In Ref. \cite{bb14}, it was shown that the cosmic string space-time and the magnetic cosmic string space-time can have analogues in five dimensions. In Ref. \cite{bb28}, the quantum mechanics of a scalar particle in the background of a chiral cosmic string was studied using the Kaluza-Klein theory. The authors showed that the wave functions, the phase shifts, and the scattering amplitudes associated with the particle depend on the global features of those space-times. This dependence represents a gravitational analogue of the well-known Aharonov-Bohm effect. In addition, they discussed the Landau levels in the presence of a cosmic string within the framework of the Kaluza-Klein theory. In Ref. \cite{aa6}, the Klein-Gordon oscillator on curved backgrounds within the Kaluza-Klein theory was studied, addressing the problem of the interaction between particles coupled harmonically with topological defects in the Kaluza-Klein theory.
The authors considered a series of topological defects, treated the Klein-Gordon oscillator coupled to these backgrounds, and obtained the energy eigenvalues and corresponding eigenfunctions in each case. They showed that the energy eigenvalues depend on the global parameters characterizing these space-times. In Ref. \cite{EVBL}, a scalar particle with position-dependent mass subject to a uniform magnetic field and a quantum magnetic flux, both coming from a background governed by the Kaluza-Klein theory, was investigated. The authors inserted a Cornell-type scalar potential into this relativistic system and determined the relativistic energy eigenvalue of the system in this background of extra dimension. They analyzed particular cases of this system, and a quantum effect was observed: the dependence of the magnetic field on the quantum numbers of the solutions. In Ref. \cite{EPJC}, the relativistic quantum dynamics of a scalar particle subject to a linear potential on curved backgrounds within the Kaluza-Klein theory was studied. There, we solved the generalized Klein-Gordon oscillator in the cosmic string and magnetic cosmic string space-times with a linear potential within the Kaluza-Klein theory. We showed that the energy eigenvalues obtained there depend on the global parameters characterizing these space-times, and the gravitational analogue of the Aharonov-Bohm effect for bound states \cite{ff3,bb15,bb28,bb39,bb40,bb50} of a scalar particle was analyzed. In this work, we have investigated the relativistic quantum dynamics of a scalar particle interacting with gravitational fields produced by topological defects via a generalized Klein-Gordon oscillator in the presence of the cosmic string and magnetic cosmic string within the Kaluza-Klein theory with scalar potentials.
We have determined the manner in which the non-trivial topology due to the topological defects and a quantum magnetic flux modifies the energy spectrum and wave-functions of a scalar particle. We have then studied the quantum dynamics of a scalar particle interacting with fields by introducing a magnetic flux through the line element of a cosmic string space-time using the five-dimensional version of General Relativity. The quantum dynamics in the usual as well as the magnetic cosmic string cases allows us to obtain energy eigenvalues and corresponding wave-functions that depend on the external parameters characterizing the background space-time, a result known as the gravitational analogue of the well-studied Aharonov-Bohm effect. In {\it section 2}, we have chosen a Cornell-type function $f(r)=a\,r+\frac{b}{r}$ and a Cornell-type potential $S(r)=\eta_{L}\,r+\frac{\eta_c}{r}$ in the relativistic system. We have solved the generalized Klein-Gordon oscillator in the cosmic string background within the Kaluza-Klein theory and obtained the energy eigenvalues Eq. (\ref{24}). We have plotted graphs of the energy eigenvalues Eq. (\ref{24}) against different parameters in figs. 1--5. By imposing the additional recurrence condition $c_{n+1}=0$ on the relativistic eigenvalue problem, for example for $n=1$, we have obtained the ground state energy level and wave-function in Eqs. (\ref{27})--(\ref{28}). We have discussed a special case corresponding to $\eta_{L} \rightarrow 0$ and obtained the relativistic energy eigenvalues Eq. (\ref{aa3}) of a generalized Klein-Gordon oscillator in the cosmic string space-time within the Kaluza-Klein theory. We have also obtained the relativistic energy eigenvalues Eq. (\ref{bb4}) of a free scalar particle by solving the Klein-Gordon equation with a Coulomb-type scalar potential in the background of the cosmic string space-time in the Kaluza-Klein theory.
In {\it section 3}, we studied the relativistic quantum dynamics of a scalar particle in the magnetic cosmic string background within the Kaluza-Klein theory with a scalar potential. Choosing the same function $f(r)=a\,r+\frac{b}{r}$ and a Coulomb-type scalar potential $S(r)=\frac{\eta_c}{r}$, we solved the radial wave equation of the considered system and obtained the bound-state energy eigenvalues, Eq. (\ref{38}), which we plotted with respect to different parameters in figs. 6--9. Subsequently, the ground state energy levels, Eq. (\ref{39}), and corresponding wave-functions, Eq. (\ref{40}), for the radial mode $n=1$ were obtained by imposing the additional condition $c_{n+1}=0$ on the eigenvalue problem. Furthermore, we discussed the special case $a\rightarrow 0$, $b\rightarrow 0$ and obtained the relativistic energy eigenvalues, Eq. (\ref{cc5}), of a scalar particle by solving the Klein-Gordon equation with a Coulomb-type scalar potential in the magnetic cosmic string space-time within the Kaluza-Klein theory. For $\alpha \rightarrow 1$, we have seen that the energy eigenvalues, Eq. (\ref{cc5}), reduce to the result obtained in Ref. \cite{bb16}. Since there is an effective angular momentum quantum number, $l \rightarrow l_{eff}=\frac{1}{\alpha}\,(l-\frac{q\,\Phi}{2\pi})$, the relativistic energy eigenvalues, Eqs. (\ref{38}) and (\ref{cc5}), depend on the geometric quantum phase \cite{bb50}. Hence, we have $E_{n, l} (\Phi+\Phi_0)=E_{n, l \mp \tau} (\Phi)$, where $\Phi_0=\pm\,\frac{2\,\pi}{q}\,\tau$ with $\tau=0,1,2,\ldots$. This dependence of the relativistic energy eigenvalues on the geometric quantum phase gives rise to a relativistic analogue of the Aharonov-Bohm effect for bound states \cite{bb15,bb39,bb40,bb50}. Finally, we have obtained the persistent currents, Eqs. 
(\ref{bb55})--(\ref{dd5}) for this relativistic quantum system, owing to the dependence of the relativistic energy eigenvalues on the geometric quantum phase. Thus, in this paper we have presented results which are in addition to those obtained in Refs. \cite{bb28,bb15,bb16,aa6,EVBL,EVBL2,EPJC} and which exhibit several interesting effects. \section*{Data Availability} No data has been used to prepare this paper. \section*{Conflict of Interest} The author declares that there is no conflict of interest regarding the publication of this paper. \section*{Acknowledgement} The author sincerely acknowledges the anonymous referee(s) for their valuable comments and suggestions and thanks the editor. \section*{Appendix A : Brief review of the Nikiforov-Uvarov (NU) method} \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} The Nikiforov-Uvarov method is helpful for finding the eigenvalues and eigenfunctions of the Schr\"{o}dinger-like equation, as well as of other second-order differential equations of physical interest. According to this method, the eigenfunctions of a second-order differential equation \cite{bb49} \begin{equation} \frac{d^2 \psi (s)}{ds^2}+\frac{(\alpha_1-\alpha_2\,s)}{s\,(1-\alpha_3\,s)}\,\frac{d \psi (s)}{ds}+\frac{(-\xi_1\,s^2+\xi_2\,s-\xi_3)}{s^2\,(1-\alpha_3\,s)^2}\,\psi (s)=0 \label{A.1} \end{equation} are given by \begin{equation} \psi (s)=s^{\alpha_{12}}\,(1-\alpha_3\,s)^{-\alpha_{12}-\frac{\alpha_{13}}{\alpha_3}}\,P^{(\alpha_{10}-1,\frac{\alpha_{11}}{\alpha_3}-\alpha_{10}-1)}_{n}\,(1-2\,\alpha_3\,s), \label{A.2} \end{equation} and the energy eigenvalue equation is \begin{eqnarray} &&\alpha_2\,n-(2\,n+1)\,\alpha_5+(2\,n+1)\,(\sqrt{\alpha_9}+\alpha_3\,\sqrt{\alpha_8})+n\,(n-1)\,\alpha_3+\alpha_7\nonumber\\ &&+2\,\alpha_3\,\alpha_8+2\,\sqrt{\alpha_8\,\alpha_9}=0. 
\label{A.3} \end{eqnarray} The parameters $\alpha_4,\ldots,\alpha_{13}$ are obtained from the six parameters $\alpha_1,\ldots,\alpha_3$ and $\xi_1,\ldots,\xi_3$ as follows: \begin{eqnarray} &&\alpha_4=\frac{1}{2}\,(1-\alpha_1)\quad,\quad \alpha_5=\frac{1}{2}\,(\alpha_2-2\,\alpha_3),\nonumber\\ &&\alpha_6=\alpha^2_{5}+\xi_1\quad,\quad \alpha_7=2\,\alpha_4\,\alpha_{5}-\xi_2,\nonumber\\ &&\alpha_8=\alpha^2_{4}+\xi_3\quad,\quad \alpha_9=\alpha_6+\alpha_3\,\alpha_7+\alpha^{2}_3\,\alpha_8,\nonumber\\ &&\alpha_{10}=\alpha_1+2\,\alpha_4+2\,\sqrt{\alpha_8}\quad,\quad \alpha_{11}=\alpha_2-2\,\alpha_5+2\,(\sqrt{\alpha_9}+\alpha_3\,\sqrt{\alpha_8}),\nonumber\\ &&\alpha_{12}=\alpha_4+\sqrt{\alpha_8}\quad,\quad \alpha_{13}=\alpha_5-(\sqrt{\alpha_9}+\alpha_3\,\sqrt{\alpha_8}). \label{A.4} \end{eqnarray} In the special case $\alpha_3=0$, as in our case, we find \begin{equation} \lim_{\alpha_3\rightarrow 0} P^{(\alpha_{10}-1,\frac{\alpha_{11}}{\alpha_3}-\alpha_{10}-1)}_{n}\,(1-2\,\alpha_3\,s)=L^{\alpha_{10}-1}_{n} (\alpha_{11}\,s), \label{A.5} \end{equation} and \begin{equation} \lim_{\alpha_3\rightarrow 0} (1-\alpha_3\,s)^{-\alpha_{12}-\frac{\alpha_{13}}{\alpha_3}}=e^{\alpha_{13}\,s}. \label{A.6} \end{equation} Therefore the wave-function (\ref{A.2}) becomes \begin{equation} \psi (s)=s^{\alpha_{12}}\,e^{\alpha_{13}\,s}\,L^{\alpha_{10}-1}_{n} (\alpha_{11}\,s), \label{A.7} \end{equation} where $L^{(\alpha)}_{n} (x)$ denotes the generalized Laguerre polynomial. The energy eigenvalue equation reduces to \begin{equation} n\,\alpha_2-(2\,n+1)\,\alpha_5+(2\,n+1)\,\sqrt{\alpha_9}+\alpha_7+2\,\sqrt{\alpha_8\,\alpha_9}=0. \label{A.8} \end{equation} Note that the simple Laguerre polynomial is the special case $\alpha=0$ of the generalized Laguerre polynomial: \begin{equation} L^{(0)}_{n} (x)=L_{n} (x). \label{A.9} \end{equation}
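As a concrete check of the recipe above, the parameter map of Eq. (\ref{A.4}) and the quantization condition Eq. (\ref{A.3}) can be implemented in a few lines. The sketch below is illustrative only: as an assumed test case it uses the radial Coulomb problem in atomic units, which is not one of the systems treated in this paper, mapped onto Eq. (\ref{A.1}) with $\alpha_1=\alpha_2=\alpha_3=0$, $\xi_1=-2E$, $\xi_2=2$ and $\xi_3=l(l+1)$.

```python
import math

def nu_alphas(a1, a2, a3, x1, x2, x3):
    """Auxiliary parameters alpha_4..alpha_9 of Eq. (A.4)."""
    a4 = 0.5 * (1 - a1)
    a5 = 0.5 * (a2 - 2 * a3)
    a6 = a5 ** 2 + x1
    a7 = 2 * a4 * a5 - x2
    a8 = a4 ** 2 + x3
    a9 = a6 + a3 * a7 + a3 ** 2 * a8
    return a4, a5, a6, a7, a8, a9

def nu_condition(n, a1, a2, a3, x1, x2, x3):
    """Left-hand side of the quantization condition Eq. (A.3);
    it vanishes at a bound-state energy."""
    a4, a5, a6, a7, a8, a9 = nu_alphas(a1, a2, a3, x1, x2, x3)
    return (a2 * n - (2 * n + 1) * a5
            + (2 * n + 1) * (math.sqrt(a9) + a3 * math.sqrt(a8))
            + n * (n - 1) * a3 + a7 + 2 * a3 * a8
            + 2 * math.sqrt(a8 * a9))

def hydrogen_energy(n, l):
    """Assumed test case: radial Coulomb problem in atomic units,
    a1 = a2 = a3 = 0, xi1 = -2E, xi2 = 2, xi3 = l(l+1).
    The root of Eq. (A.3) in E is found by bisection."""
    f = lambda E: nu_condition(n, 0, 0, 0, -2 * E, 2.0, l * (l + 1))
    lo, hi = -2.0, -1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(hydrogen_energy(0, 0))  # approx -0.5, the 1s Coulomb level
```

For this mapping the condition (\ref{A.8}) gives $\sqrt{-2E}\,(2n+2l+2)=2$, i.e. $E=-1/2(n+l+1)^2$ with $n+l+1$ playing the role of the principal quantum number; the same routine could be used with the Cornell-type mappings of sections 2 and 3 as an independent numerical check of the closed-form spectra quoted above.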
\section{Introduction} This section first provides some historical background on the use of superconducting circuitry for mechanical detection in order to trace the origins and establish the present day context of the main focus of this paper: namely, QEMS that incorporate superconducting qubits, cavities and nanomechanical devices. It then outlines the basic model for a particular type of QEMS: CPBs coupled to nanomechanical resonators. Finally, the state of the art for this system is reviewed and current experimental challenges are discussed. It should be noted that the introductory review does not attempt to do justice to the parallel (and increasingly interdependent) field of optomechanics. For more information on that field, several excellent reviews are cited below, which we recommend to interested readers. \subsection{The Origins of Quantum Electromechanical Systems} \label{sec:title} The use of superconducting systems for sensitive measurements of motion traces back at least 50 years to the origins of resonant-mass gravitational-wave (GW) antennas.\cite{weber1960detection,braginsky1985systems,blair1995high,harry2000two,marin2013gravitational} Over the decades, superconducting technology has played an integral role in that field, with massive, cryogenically-cooled superconducting bars serving as the high-Q acoustic cavities at the heart of the GW antennas, and superconducting quantum interference devices (SQUIDs)\cite{harry2000two,marin2013gravitational} or superconducting microwave resonators\cite{blair1995high} serving as ultrasensitive front-end detectors in the transducer circuitry. 
A parallel track in the history of superconducting devices and mechanical detection arose in the 1990’s, with the emergence of nanomechanics\cite{cleland1996fabrication} and the recognition that nanoelectromechanical systems (NEMS)\cite{roukes2000nanoelectromechanical,ekinci2005nanoelectromechanical} could serve as a new frontier for studying macroscopic quantum effects.\cite{cleland1999nanoscale,roukes2000nanoelectromechanical,blencowe2000quantum,roukes2001nanoelectromechanical,blencowe2000sensitivity,schwab2001quantum,milburn2001quantum,zhang2002intrinsic,armour2002mechanical,armour2002entanglement,irish2003quantum} Indeed SQUID-based detection and superconducting bias circuitry integrated with nanomechanical structures enabled the first measurements of the quantum of thermal conductance in 1999\cite{schwab2000measurement}. Moreover, researchers at the time, inspired in part by earlier developments in the GW-detection community\cite{braginsky1995quantum,caves1980measurement,bocko1996measurement}, also realized that full exploration of quantum NEMS (or more generally QEMS) would require the development of new detectors and control circuitry that could be integrated strongly with motional degrees of freedom at the nanoscale and yet simultaneously provide unprecedented resolution and minimal back-action, themselves operating in regimes governed by quantum mechanics\cite{roukes2000nanoelectromechanical,blencowe2000quantum,roukes2001nanoelectromechanical,blencowe2000sensitivity,schwab2001quantum,milburn2001quantum,zhang2002intrinsic,armour2002mechanical,armour2002entanglement,irish2003quantum,cleland2004superconducting,blencowe2004quantum,schwab2005putting}. Crucially, natural solutions to these challenges emerged from the nascent field of superconducting quantum computation. 
During the 1990’s a variety of mesoscopic superconducting devices were developed\cite{schoelkopf1998radio,nakamura1999coherent,devoret2000amplifying,Makhlin01} that became important candidates in the next decade as both detector elements and quantum bits (qubits) in scalable quantum processing architectures.\cite{Makhlin01,schoelkopf2008wiring,clarke2008superconducting,devoret2013superconducting} In these systems, at milli-Kelvin temperatures, the interplay of charging and Josephson effects\cite{fulton1989observation,van1991combined,michael2004introduction} can give rise to noise characteristics dominated by quantum transport processes\cite{clerk2002resonant,clerk2005quantum,blencowe2005dynamics,xue2009measurement,clerk2010introduction} and, in properly tuned devices, quantum coherent behavior\cite{nakamura1999coherent,Makhlin01,koch2007charge,schreier2008suppressing,schoelkopf2008wiring,clarke2008superconducting,devoret2013superconducting} analogous to that seen in atomic and spin-based systems. 
The same properties also make these devices ideally suited for sensing and controlling the quantum properties of mechanics.\cite{blencowe2000sensitivity,schwab2001quantum,milburn2001quantum,zhang2002intrinsic,armour2002mechanical,armour2002entanglement,irish2003quantum,cleland2004superconducting,blencowe2004quantum,clerk2005quantum,blencowe2005dynamics} Moreover, their size scale and material composition are commensurate with typical nano- and micromechanical systems, enabling the use of standard fabrication processes to engineer the devices \textit{on chip} with the mechanical elements, in order to achieve precisely controlled and even tunable interactions between the systems.\cite{knobel2003nanometre,lahaye2004approaching,naik2006cooling,flowers2007intrinsic,etaki2008motion,lahaye2009nanomechanical,o2010quantum,suh2010parametric,pirkkalainen2013hybrid} An early example of this synergy was seen with the single-electron transistor (SET) and its superconducting cousin the SSET.\cite{schoelkopf1998radio,devoret2000amplifying,fulton1989observation,van1991combined,michael2004introduction} By the late 1990's the SET was recognized as a potentially quantum-limited electrometer, with sufficient bandwidth, when operated in microwave circuitry (RF-SET), to perform single-shot quantum-state detection of charge-based qubits.\cite{schoelkopf1998radio,devoret2000amplifying,lehnert2003measurement} Soon thereafter it was appreciated that the unprecedented charge sensitivity ($\sim \mu \rm{e}/\sqrt{\rm{Hz}}$) and large bandwidth ($\sim100\,{\rm MHz}$) could also be utilized for performing continuous, linear displacement detection of MHz-range nanomechanical elements, with sensitivity approaching the limit allowed by the Heisenberg Uncertainty Principle.\cite{blencowe2000sensitivity,zhang2002intrinsic} This motivated several experimental efforts to integrate MHz-range NEMS with linear displacement transducers based upon SETs\cite{knobel2003nanometre} and 
RF-SSETs.\cite{lahaye2004approaching} Subsequent theoretical\cite{clerk2005quantum,blencowe2005dynamics} and experimental development\cite{naik2006cooling} of the SSET displacement detector in fact showed the coupled SSET-NEMS device to be a system with rich dynamics: it allowed for displacement detection near the uncertainty principle limit at particular SSET Cooper-pair/quasiparticle transport resonances; provided the first demonstrations of the quantum back-action of fundamental particles on the motion of a macroscopic mechanical system; and enabled detection of nanomechanical motion for the first time at low thermal occupation numbers, where observation of quantum effects in the behavior of the mechanics might reasonably be expected. In the early 2000’s it was also appreciated that coherent superconducting devices like the Cooper-pair box (CPB)\cite{nakamura1999coherent,Makhlin01} and the phase qubit\cite{martinis2002rabi} could be utilized to go beyond linear displacement detection and to enable the capability to manipulate and measure patently quantum mechanical states of nano- and micromechanical modes,\cite{armour2002mechanical,armour2002entanglement,irish2003quantum,cleland2004superconducting} in analogy to systems in cavity quantum electrodynamics (CQED) and ion-trap physics that had enabled groundbreaking research on the quantum properties of light and trapped ions.\cite{haroche2006exploring} Initial theoretical proposals put forth in the literature posited Josephson-junction-based qubits as tools for performing a variety of tasks, including the measurement and preparation of nanomechanical superposition states, number states and zero-point energy;\cite{armour2002mechanical,armour2002entanglement,irish2003quantum} as well, protocols were outlined for use of qubit-coupled nanoresonators (QCNR) as quantum memory and bus elements.\cite{cleland2004superconducting} These initial proposals and the explosion in subsequent years of new superconducting qubit 
technology, most notably circuit QED (cQED) architectures based upon superconducting transmission line resonators,\cite{koch2007charge,schreier2008suppressing,schoelkopf2008wiring,clarke2008superconducting,devoret2013superconducting,wallraff2004strong} fueled a myriad of proposals\cite{martin2004ground,rabl2004generation,wei2006probing,tian2006scheme,clerk2007using,jacobs2007continuous,utami2008entanglement,armour2008probing,jacobs2008energy,semiao2009kerr,heikkila2014enhancing} over the ensuing decade for the sake of exploring fundamental aspects of quantum mechanics such as the quantum-to-classical transition, the fundamental limits to the sensing of motion, and applications in quantum information processing and metrology. As well, it has since been appreciated that these systems offer the potential to study new regimes of the paradigmatic Jaynes-Cummings model\cite{haroche2006exploring}, going beyond the rotating-wave approximation.\cite{irish2005dynamics,zueco2009qubit} Notwithstanding the extensive theoretical effort to develop QCNRs, progress on the experimental front has been slow due to an array of challenges that will be elaborated in Section 1.3. Nonetheless, there have been several important developments in the field beginning in 2009 with the first demonstration of the interactions between a nanomechanical flexural resonator and a superconducting charge qubit.\cite{lahaye2009nanomechanical} The experiment in 2009 demonstrated that, for a CPB and nanoresonator whose energies were far out of resonance, a simple electrostatic interaction between the systems gives rise to shifts in the energy of the nanoresonator that are dependent on the qubit's state. 
Such dispersive shifts are analogous to single-atom index effects observed in some CQED systems\cite{haroche2006exploring} and in principle could be utilized for a multitude of tasks if developed further, including for generating highly-non-classical states of mechanics \cite{utami2008entanglement,armour2008probing,suh2010parametric,semiao2009kerr,jacobs2009engineering} and for generating a quantum switch to shuttle information coherently between multiple mechanical modes\cite{mariantoni2008two}. In 2010, shortly after these initial results, the first use of a superconducting qubit to manipulate and measure the quantum properties of a mechanical device was demonstrated.\cite{o2010quantum} In this work, a micromechanical piezo-disk resonator was integrated with a superconducting phase qubit; sophisticated techniques that had been developed for controlling and measuring the phase qubit were then adopted to perform quantum Rabi swapping and Ramsey interference experiments with the micromechanical mode. This experiment was a milestone not only for the field of mechanical quantum systems, but for the entire physics community, providing the first demonstration of energy quantization and quantum superposition states with a normally-classical macroscopic mechanical mode. More recently, in 2013, observations of dispersive interactions between a transmon qubit and micromechanical drumhead that were complementary to the results in 2009 were published.\cite{pirkkalainen2013hybrid} Specifically, these results showed evidence for mechanical Stark shifts in the transmon's energy spectrum; the shifts were shown to be proportional to the number of quanta in the mechanical mode and thus analogous to the traditional AC Stark shift\cite{haroche2006exploring} seen in atomic physics, CQED and cQED. 
While single-quantum shifts were not resolved in the 2013 work, the capability is within reach using current technology (as discussed in Section 2) and could enable projective measurements and even quantum non-demolition (QND) measurements of the energy of nano- and microscale resonators.\cite{irish2003quantum,clerk2007using} Such techniques would find myriad applications in areas ranging from quantum information processing to the study of quantum fluctuation theorems\cite{campisi2011colloquium,brito2014testing} and fundamental investigations of how energy is transported and dissipated in nanoscale devices. Alongside the development of QCNRs, intense effort has been directed toward an additional branch of superconducting electromechanical systems: microwave cavity mechanics\cite{regal2008measuring,hertzberg2010back,rocheleau2010preparation,massel2011microwave,teufel2011sideband,palomaki2013coherent,palomaki2013entangling,suh2014mechanically}. While a full accounting of the origin of these systems is beyond the scope of this historical introduction, it is fair to say that they catalyzed from (and amidst) a confluence of diverse research directions including prior pioneering work on dynamical back-action in the GW community\cite{blair1995high,braginsky1995quantum} and contemporaneous research in the mid-2000’s in the fields of cQED\cite{wallraff2004strong}, superconducting astrophysical detectors\cite{day2003broadband}, and optomechanics\cite{kippenberg2008cavity,aspelmeyer2012quantum,aspelmeyer2014cavity}. By and large the cavities at the heart of these systems have been high-quality superconducting circuit resonators that have been engineered to provide parametric read-out and control of flexural type nano- and micromechanical modes. 
The earliest versions of microwave cavity mechanics utilized transmission-line resonators, primarily in coplanar waveguide (CPW) geometries.\cite{regal2008measuring,hertzberg2010back,rocheleau2010preparation} However the most successful, and now most widely used, scheme involves the integration of a micromechanical membrane structure as one electrode of a parallel-plate capacitor in a lumped-element LC circuit.\cite{teufel2011sideband,palomaki2013coherent,palomaki2013entangling,suh2014mechanically} The parametric coupling that can be achieved between membrane modes and the LC circuit in this configuration is several orders of magnitude greater than what has been demonstrated using CPW geometries and has enabled the use of side-band-resolved driving\cite{kippenberg2008cavity,aspelmeyer2014cavity,poot2012mechanical} of microwave cavities for a growing list of accomplishments: cooling of a MHz-range micromechanical mode to its quantum ground state,\cite{teufel2011sideband} a feat not yet achieved using passive cryogenic refrigeration; coherently storing and retrieving quantum states of microwave fields in a mechanical mode;\cite{palomaki2013coherent} generating and characterizing entanglement between the motion of a mechanical mode and the electric field of a traveling microwave signal;\cite{palomaki2013entangling} and detecting, as well as partially evading, the quantum back-action noise of a microwave field in the measurements of mechanical motion.\cite{suh2014mechanically} The potential applications of these spectacular advances are numerous and range from the use of cavity-cooled mechanics to generate complex entangled states for teleportation and entanglement-swapping protocols, quantum squeezed states of motion for the detection of weak forces, and fundamental explorations of quantum mechanics in new limits.\cite{aspelmeyer2012quantum,poot2012mechanical,aspelmeyer2014cavity} The evolution highlighted here continues at the time of writing, with the parallel 
tracks noted above intermixing as well as incorporating new devices and materials. Hybrid quantum systems\cite{xiang2013hybrid} in a multitude of forms are being developed that either currently incorporate or ultimately will require superconducting QEMS: microwave-to-optical mechanical transducers in order to coherently link these disparate energy scales in future quantum networks;\cite{hill2012coherent,bochmann2013nanomechanical,bagci2014optical,andrews2014bidirectional} superfluid cavity mechanics, ultra-high-Q systems that incorporate superfluid acoustic modes within a 3D microwave cavity;\cite{de2014superfluid} cavity mechanics that integrate novel mechanical elements such as carbon nanotube resonators and suspended graphene sheets with superconducting cavities;\cite{singh2014optomechanical,weber2014coupling} and surface acoustic wave (SAW) circuits resonantly interfaced with transmon-type qubits,\cite{gustafsson2014propagating} to name a few. For many of these hybrid QEMS, including all of the ones mentioned above, further development of the QCNR would have direct relevance for their future applications - if for no other reason than to utilize the QCNR as a tool for generating and detecting highly-nonclassical states of the mechanical components. Thus the remainder of the paper will focus on discussing some of the challenges facing further experimental development of the QCNR, particularly the CPB-based version, and the efforts by the authors to overcome these challenges. \subsection{Canonical Model for the Cooper-Pair Box and Nanomechanical Resonator} \begin{figure} \begin{center}\includegraphics[ width=1\columnwidth,keepaspectratio]{figure1-completev2-cropped-nolayers2}\end{center} \caption{(a) SEM micrograph of the first generation of CPB-type QCNR fabricated and measured by the authors. The CPB and nanostructure are patterned out of aluminum atop a high-resistivity silicon substrate. 
Additional details of the measurement process and the device are discussed in Section 2. (b) Basic circuit schematic for the device in (a) and mode shapes $U_1(z)$ and $U_3(z)$ for the fundamental mode and third mode respectively. Note the location of the CPB is flipped from its position in (a) in order to simplify the schematic. The second mode of the nanostructure is not illustrated; due to the asymmetry of the mode with respect to the CPB electrode, its motion should couple negligibly to CPB charge. Also note that the thickness parameter $t$ is not defined in the illustration, but is simply the out-of-plane thickness (in the $y$ direction) of the structure. } \label{fig:fig1} \end{figure} Devices like the CPB-based QCNR shown in Figure 1 are typically modeled in the literature using different limits of the following Hamiltonian\cite{irish2003quantum,lahaye2009nanomechanical} \begin{equation} \hat{H}=\hat{H}_{\mathit{CPB}}+\hat{H}_{\mathit{\mathit{\mathit{NR}}}}+\hat{H}_{\mathit{INT}}, \label{hamiltonian} \end{equation} which is composed of a contribution from the CPB that is given by \begin{equation} \label{HamiltonianCPB} \hat{H}_{\mathit{CPB}}=4E_C\sum_n(n-n_\Sigma)^2\ket{n}\bra{n}-\sum_n\left[\frac{{\cal E}_J(\Phi)}{2}\ket{n}\bra{n+1}+\frac{{\cal E}_J^\ast(\Phi)}{2}\ket{n+1}\bra{n}\right], \end{equation} a component due to the nanoresonator \begin{equation} \hat{H}_{\mathit{\mathit{NR}}}=\hbar\omega_{\mathit{NR}}(\hat{a}^{\dagger}\hat{a}+\frac{1}{2}), \label{HamiltonianNR} \end{equation} and a term representing the electrostatic interaction between the systems \begin{equation} \hat{H}_{\mathit{INT}}=\hbar\lambda\sum_n(n-n_{\Sigma})\ket{n}\bra{n}(\hat{a}^{\dagger}+\hat{a}). \label{HamiltonianINT} \end{equation} In Eq.(\ref{HamiltonianCPB}), $E_C$ and $E_J(\Phi)$ are the charging and Josephson energies of the CPB respectively. 
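Equation (\ref{HamiltonianCPB}) can be diagonalized numerically in a truncated charge basis. The following sketch is illustrative rather than a model of any specific device: the parameter values are assumptions (quoted in units of $h\times$GHz), and the flux dependence of ${\cal E}_J(\Phi)$ is absorbed into a single real $E_J$.

```python
import numpy as np

def cpb_spectrum(EC, EJ, n_sigma, ncut=10):
    """Eigenvalues of the charge-basis CPB Hamiltonian, Eq. (2) in the
    text, truncated to |n>, n = -ncut..ncut (energies in h*GHz)."""
    ns = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * (ns - n_sigma) ** 2)   # charging term
    off = -0.5 * EJ * np.ones(len(ns) - 1)        # Josephson tunneling -EJ/2
    H = H + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)                  # sorted ascending

# Assumed charge-regime values, with n_sigma tuned to a half-integer
EC, EJ = 4.0, 5.0
evals = cpb_spectrum(EC, EJ, n_sigma=0.5)
print(evals[1] - evals[0])   # two-level splitting, close to EJ here
```

With these assumed values the two lowest levels are split by approximately $E_J$ when $n_\Sigma$ is tuned to a half-integer value, while the next level lies several tens of GHz higher; sweeping $n_\Sigma$, or increasing $E_J/4E_C$, shows the charge dispersion of the spectrum flattening out.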
The value of $E_C$ sets the scale for the CPB's electrostatic energy, which can be tuned by adjusting the polarization charge $n_\Sigma=C_QV_Q/2e+C_gV_g/2e+C_{\mathit{NR}}V_{\mathit{NR}}/2e$ on nearby electrodes, where the capacitances $C_Q$, $C_g$ and $C_{\mathit{NR}}$ and voltages $V_Q$, $V_g$ and $V_{\mathit{NR}}$ are defined in Fig.~\ref{fig:fig1}, and $e$ is the magnitude of the electron charge. The CPB is often in a DC-SQUID configuration (Fig.~\ref{fig:fig1}), and thus $E_J(\Phi)$ represents the effective Josephson energy of the SQUID, which can be tuned in situ by adjusting an applied magnetic flux $\Phi$. Importantly, the relative magnitude of the electrostatic and Josephson terms determines the nature of the CPB's energy eigenstates. For example, if $E_J/4E_C\ll 1$ then the energy eigenstates are essentially charge states $\ket{n}$ (i.e. eigenstates of the Cooper-pair number operator $\hat{n}$), except at charge degeneracy points, which are defined by $n_{\Sigma}\approx (2n+1)/2$. At these points adjacent charge states are mixed and the system is well characterized by the two-state charge qubit model.\cite{Makhlin01} On the other hand, if $E_J/4E_C\gtrsim1$, the energy eigenstates are no longer charge states and instead are composed of weighted superpositions of several $\ket{n}$. And in the limit that $E_J/4E_C\gg1$, the CPB is in the transmon regime.\cite{koch2007charge} It is presumed that the nanoresonator can be modeled via Eq. (\ref{HamiltonianNR}) as a quantum simple harmonic oscillator in the usual manner: $\hat{a}^{\dagger}$ and $\hat{a}$ are creation and annihilation operators for the mechanical mode, which is generally assumed - but not necessarily limited - to be one of the fundamental flexural modes of the suspended nanostructure (the fundamental in-plane mode for the device in Fig.~\ref{fig:fig1}; the fundamental out-of-plane mode for the membrane resonator in Ref. 
\citen{pirkkalainen2013hybrid}); $\omega_{\mathit{NR}}/2\pi$ is the mode's frequency; and $\hbar$ is the reduced Planck constant. Finally, Eq. (\ref{HamiltonianINT}) represents the electrostatic coupling that is established between the motion of the nanoresonator and the charge on the CPB island. Here, the scale of the coupling strength is set by the prefactor $\lambda$, which is given by\cite{lahaye2009nanomechanical} \begin{equation} \lambda=-4\frac{E_C}{\hbar}\frac{\mathrm{d}C_{\mathit{NR}}}{\mathrm{d}x}\frac{V_{\mathit{NR}}}{e}x_{zp}, \label{couplingNR} \end{equation} where $x_{zp}=\sqrt{\hbar/2m\omega_{\mathit{NR}}}$ represents the zero-point motion of the mechanical mode and $m$ is its effective mass, defined by $m=\alpha\rho wLt$, where $\alpha=\int_{-L/2}^{L/2}U(z)^2\mathrm{d}z$, $\rho$ is the mass density of the nanostructure, and the geometrical dimensions $w$, $t$, and $L$ are defined in Fig.~\ref{fig:fig1}(b). The quantity $U(z)$ is the displacement of the neutral axis\cite{cleland2002foundations} as a function of position $z$ along the beam [Fig.~\ref{fig:fig1}(b)]. It is important to note that the value of $\alpha$, and hence $m$, will depend upon the choice for normalization of $U(z)$ - e.g. whether $U(z)$ is normalized so that the displacement $x_{zp}$ represents the zero-point motion of the nanostructure's center of mass, or the average zero-point motion of the structure over the length of the CPB electrode $L_{e}$, or any other arbitrary convention. However, because $\tfrac{\mathrm{d}C_{\mathit{NR}}}{\mathrm{d}x}\propto\int_{-L_{e}/2}^{L_{e}/2}U(z)\mathrm{d}z$, $\lambda$ itself is independent of the definition of $x$, as one should expect. Experiments to date provide strong evidence that Eqs. 
(\ref{hamiltonian}) to (\ref{couplingNR}) give an accurate accounting of the dynamics of capacitively-coupled CPBs and nanomechanical resonators in a semi-classical limit where the mechanical mode is driven to a large amplitude with effective number state populations of $\sim10^3$ to $10^6$.\cite{lahaye2009nanomechanical,suh2010parametric,pirkkalainen2013hybrid} However, experiments fully in the quantum regime, where many of the proposals in the literature could be implemented, remain to be achieved. The primary roadblocks are technical in nature and derive from having to simultaneously satisfy the following conflicting demands: establishing strong coupling $\lambda$ between the qubit and the nanoresonator; maintaining long CPB coherence times, which from here on will be denoted generically by $T_{2}$ or, when appropriate, the inhomogeneously broadened coherence time $T_2^*$; and achieving low thermal occupation numbers $N_{th}$ in the mechanical mode. In the following section we discuss these interconnected criteria in greater detail. \subsection{Challenges in the Development of Coupled CPB-Nanoresonator Systems} It has been recognized for more than a decade that CPB-coupled nanoresonators can serve as testbeds for studying quantum properties of mechanical systems. However, experiments have yet to catch up with the theoretical ideas in this field. The main challenge has been engineering strong coupling between the two systems while simultaneously minimizing the interactions of the individual systems with the environment. Generally speaking, this requires establishing CPB-nanoresonator coupling strengths that exceed the decoherence rates of the nanoresonator and CPB, $\kappa$ and $\gamma$ respectively. Heuristically, what this \textit{strong coupling} requirement implies is that the two systems exchange energy or information with each other at a faster rate than with unaccounted-for degrees of freedom. 
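To make the coupling scale concrete, the magnitude of $\lambda$ in Eq. (\ref{couplingNR}) can be evaluated for an assumed beam geometry. All numbers below are illustrative assumptions chosen only to be of the order of the devices discussed in the text, not values taken from a specific reference, and only $|\lambda|$ is computed.

```python
import math

hbar = 1.054571817e-34   # J s
e = 1.602176634e-19      # C

# Assumed aluminum beam and fundamental flexural mode
rho = 2700.0                       # kg/m^3
L, w, t = 5e-6, 100e-9, 100e-9     # beam dimensions, m (assumed)
alpha = 0.4                        # assumed mode-shape integral
f_nr = 60e6                        # mechanical frequency, Hz

m = alpha * rho * w * L * t                            # effective mass
x_zp = math.sqrt(hbar / (2 * m * 2 * math.pi * f_nr))  # zero-point motion

# Assumed electrostatics: |lambda|/2pi = 4 (E_C/h) (dC/dx) (V/e) x_zp
EC_over_h = 4e9      # Hz, charge-regime charging energy (assumed)
dC_dx = 3e-11        # F/m, assumed capacitance gradient
V_nr = 5.0           # V (assumed)

lam_over_2pi = 4 * EC_over_h * dC_dx * (V_nr / e) * x_zp
print(x_zp, lam_over_2pi)   # x_zp of order tens of fm; lambda/2pi ~ 1 MHz
```

With these assumed inputs the zero-point motion comes out at a few tens of femtometers and $|\lambda|/2\pi$ of order $1\,{\rm MHz}$, consistent with the coupling strengths quoted below; the expression also makes explicit which knobs ($E_C$, $V_{\mathit{NR}}$, $\mathrm{d}C_{\mathit{NR}}/\mathrm{d}x$, $x_{zp}$) are available for increasing it.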
In the two experiments with CPB-coupled nanoresonators reported thus far, coupling strengths $\lambda/2\pi> 1\,{\rm MHz}$ were achieved,\cite{lahaye2009nanomechanical,suh2010parametric,pirkkalainen2013hybrid} which exceed some of the best reported CPB decoherence rates ($\gamma/2\pi = 0.7\,{\rm MHz}$ for a charge qubit embedded in a CPW cavity\cite{wallraff2004strong} and $\gamma/2\pi=10\,{\rm kHz}$ for a single-junction transmon in a 3D waveguide\cite{rigetti2012superconducting}). Moreover, such coupling strengths are larger than typical linewidths of flexural nanoresonators at milli-Kelvin temperatures ($ \sim 1\,{\rm kHz}$ for Ref. \citen{lahaye2009nanomechanical} and $\sim 10\,{\rm kHz}$ for Ref. \citen{pirkkalainen2013hybrid}), which should set the scale of $\kappa$ in the quantum regime. However, in both cases the mechanical resonators were greatly detuned in energy from the CPBs ($\omega_{\mathit{NR}}/2\pi = 60 - 70\,{\rm MHz}$ versus $\Delta E_{\mathit{CPB}}/h \sim 4 - 10\,{\rm GHz}$), thus precluding the study or use of coherent, resonant interactions between the systems.\footnote{It should be noted that in Ref. \citen{pirkkalainen2013hybrid} transitions between electromechanical dressed-states were observed; however, this was accomplished by driving the mechanical element into an essentially classical regime with $\geq 10^3$ quanta in the mechanical mode.} In this far-detuned (dispersive) limit, a more appropriate figure for comparison is really the dispersive coupling strength, given by\cite{irish2003quantum,lahaye2009nanomechanical} \begin{equation} \frac{\chi}{2\pi}=\frac{\hbar\lambda^2E^2_J}{\pi\Delta E_{\mathit{CPB}}(\Delta E^2_{\mathit{CPB}}-(\hbar\omega_{\mathit{NR}})^2)}, \label{dispersive} \end{equation} which would set the time scale for generating Schr\"{o}dinger cat states of the mechanics\cite{armour2008probing} and limits CPB transition linewidths for performing number state detection\cite{clerk2007using} using dispersive techniques. 
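For orientation, Eq. (\ref{dispersive}) can be evaluated with parameters of the scale quoted above. The numbers below are assumptions chosen to match those orders of magnitude, and $E_J$ is taken comparable to $\Delta E_{\mathit{CPB}}$ as a rough upper estimate.

```python
import math

hbar = 1.054571817e-34   # J s
h = 6.62607015e-34       # J s

# Assumed order-of-magnitude parameters from the discussion above
lam = 2 * math.pi * 1e6      # lambda/2pi = 1 MHz, in rad/s
f_cpb = 5e9                  # Delta E_CPB / h = 5 GHz
f_nr = 60e6                  # omega_NR / 2pi = 60 MHz
EJ = h * 5e9                 # E_J ~ Delta E_CPB (rough upper estimate)

dE = h * f_cpb
w_nr = 2 * math.pi * f_nr
chi_over_2pi = (hbar * lam ** 2 * EJ ** 2
                / (math.pi * dE * (dE ** 2 - (hbar * w_nr) ** 2)))
print(chi_over_2pi)   # Hz; a few hundred Hz for these assumed inputs
```

The result is a few hundred Hz, consistent with the $\chi/2\pi \sim 1\,{\rm kHz}$ scale cited for the experiments below, and the strong suppression relative to $\lambda$ makes evident the $\hbar\lambda^2/\Delta E_{\mathit{CPB}}$ penalty paid in the far-detuned limit.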
For both experiments to date, $\chi/2\pi \sim 1\,{\rm kHz}$, comparable to the nanoresonators' linewidths, but orders of magnitude less than the decoherence rates of the qubits used for those experiments (and at least an order of magnitude less than the best $\gamma$ demonstrated thus far in cQED), making such quantum measurement infeasible with these first devices. On the face of it, there would appear to be multiple, independent paths toward further development of CPB-coupled nanoresonators for advanced quantum measurement: improve CPB-nanoresonator coupling strengths; engineer long CPB coherence times; and increase the nanoresonator's frequency. However, these three directions are interdependent, and modifications to enhance one parameter may, in some cases, adversely impact another. For example, Eq. (\ref{couplingNR}) suggests that CPB-nanoresonator coupling can be maximized by working with as large a charging energy $E_C$ as possible. This makes sense: the larger $E_C$ is, the greater the charge dispersion (or sensitivity to changes in polarization charge $n_g$) and hence the more strongly one can couple to the motion of a nearby suspended electrode. But, unfortunately, increasing $E_C$, for the same reasons, also increases the CPB's susceptibility to local charge noise, whether it arises from trapped surface charge fluctuators, two-level systems (TLS) or non-equilibrium quasiparticle tunneling.\cite{schuster2007circuit} This yields short coherence times (typically $T_2\ll 1$ $\mu$s), as well as slow drifts and random jumps in the system's energy that make these devices notoriously difficult to work with.
For this reason, the superconducting quantum computing community has abandoned CPBs in the charge qubit regime and moved to low-$E_C$ transmons, which, as noted above, have given the longest coherence times to date for any superconducting qubit, approaching $100\,\mu{\rm s}$.\cite{rigetti2012superconducting} Thus, moving to the transmon regime would also appear to be the right direction for mechanics, as was done in Ref. \citen{pirkkalainen2013hybrid}. However, it is crucial to point out that the resulting 30-to-40-fold reduction in $E_C$ (in moving from typical charge qubit values to typical transmon values), without making any additional changes, leads to a reduction in $\chi$ by a factor of $\sim 1000$, essentially leaving the product $\chi T_2$ unchanged at best. Additional solutions are thus required to reach the strong coupling regime. Moving forward, coupling strength can still be improved by several means while working in the transmon regime:\footnote{It should also be noted that in Ref. \citen{pirkkalainen2013hybrid} the factor of 40 reduction in $E_C$, in comparison with Ref. \citen{lahaye2009nanomechanical}, was made up for by a $\sim$ 1000-fold increase in $\mathrm{d}C_{\mathit{NR}}/\mathrm{d}x$ by utilizing a plate-style geometry. Taking into account the much larger mass of the plate, this yielded a maximum coupling of $\lambda/2\pi= 4.5\,{\rm MHz}$, a factor of two larger than in Ref. \citen{lahaye2009nanomechanical}, but achieved using one-third the value of $V_{\mathit{NR}}$. Unfortunately, this was not enough to achieve strong coupling, due in part to the short coherence time of the transmon, which was observed to be $T_2^*\sim 70\,{\rm ns}$ and thought to be limited by quasiparticle poisoning.} increasing the applied voltage $V_{\mathit{NR}}$; increasing $\mathrm{d}C_{\mathit{NR}}/\mathrm{d}x$; and utilizing low-frequency, high-aspect-ratio devices (i.e. $L/w\gg1$). Increasing the voltage would appear to be a simple approach.
However, it is not yet clear whether doing so leads to a degradation of $T_2$ due to the increased charge noise that arises from the application of $V_{\mathit{NR}}$ and the large electric fields ($\sim 10^6\,{\rm V}/{\rm cm}$) between the nanoresonator electrode and nearby electrodes like the CPB island. Tailoring geometries and materials to engineer larger $\mathrm{d}C_{\mathit{NR}}/\mathrm{d}x$ would also seem to be a straightforward approach. Nonetheless, it too is not trivial. If special precautions are not taken to limit the bandwidth of the external bias circuitry, the CPB (transmon or not) will experience radiative damping with a rate given by $\Gamma =\Delta E^2_{\mathit{CPB}}C^2_{\mathit{NR}}Z_0/\hbar^2C_{\mathit{CPB}}$,\cite{houck2008controlling} where $C_{\mathit{CPB}}$ is the CPB's total effective island capacitance and $Z_0$ is the impedance of the external bias circuitry. The resulting relaxation time $T_1$ can be quite short; for example, $C_{\mathit{NR}} = 5\,{\rm fF}$, $C_{\mathit{CPB}}=50\,{\rm fF}$, $\Delta E_{\mathit{CPB}}/h=5\,{\rm GHz}$, and $Z_0=50\,\Omega$ yield $T_1=1/\Gamma = 40\,{\rm ns}$ and a maximum coherence time of $T_2=2T_1=80\,{\rm ns}$. Thus proper engineering of the bias circuitry's impedance and bandwidth is also critical for maintaining CPB coherence times. Both of these effects are currently being researched by the authors (see Section \ref{subsection:sub3}), who, in unpublished work, have observed spectroscopically that transition linewidths ($\sim1/T_2^*$) as narrow as $2\,{\rm MHz}$ persist in a voltage-biased transmon up to at least $V_{\mathit{NR}}= 8\,{\rm V}$, where superconducting band-stop filters\cite{hao2014development} are used to limit the radiative decay of the bias channel. Through simple considerations, one can show from Eq. (\ref{couplingNR}) that $\lambda$ scales as $L^{3/2}/w$, motivating the use of high-aspect-ratio nanostructures to reach the strong coupling regime.
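The radiative-damping estimate above is straightforward to reproduce numerically; the sketch below assumes a typical bias-circuit impedance of $Z_0 = 50\,\Omega$:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
h = 2 * math.pi * hbar

# Representative parameters; Z0 = 50 Ohm is an assumed standard value
# for an unfiltered bias line
dE_cpb = h * 5e9            # CPB transition energy, J
C_nr = 5e-15                # nanoresonator coupling capacitance, F
C_cpb = 50e-15              # total CPB island capacitance, F
Z0 = 50.0                   # bias-circuit impedance, Ohm

# Radiative decay rate Gamma = dE^2 C_NR^2 Z0 / (hbar^2 C_CPB), in 1/s
Gamma = dE_cpb**2 * C_nr**2 * Z0 / (hbar**2 * C_cpb)
T1 = 1 / Gamma
print(f"T1 = {T1*1e9:.0f} ns, T2 <= 2*T1 = {2*T1*1e9:.0f} ns")   # ~40 ns
```

The resulting $T_1 \approx 40\,{\rm ns}$ illustrates how severely an unfiltered bias line can limit coherence.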
Of course, because flexural mode frequencies scale as $w/L^2$,\footnote{For example, the in-plane flexural mode frequencies of a thin beam, considering pure bending, are given by $\omega_{\mathit{NR}}/2\pi=\frac{a^2_iw}{2\pi L^2}\sqrt{\frac{E}{12\rho}}$,\cite{cleland2002foundations} where $E$ is the Young's modulus of the material and $a_i = 4.73, 7.89, 10.99$ for the first, second and third modes respectively.} taking this approach would lead to greatly reduced mode frequencies. For instance, a factor of 10 increase in $\lambda$ compared with Ref. \citen{lahaye2009nanomechanical}, achieved by increasing only the length, would require extending $L$ by a factor $10^{2/3}\sim4.6$. This would result in $\omega_{\mathit{NR}}/2\pi \sim 3\,{\rm MHz}$, which would have a large thermal population even at milli-Kelvin temperatures (e.g. $N_{th} \sim 140$ at $T= 20\,{\rm mK}$). Side-band cooling techniques developed in cavity mechanics\cite{teufel2011sideband} could be utilized for ground-state cooling of a mechanical mode prior to coupling to the CPB. However, it is expected that the thermal relaxation rate of the mode would be greatly increased, proportional to $\kappa N_{th}$, which in turn would place more stringent constraints on nanoresonator Q-factors for achieving the strong coupling regime. Moreover, an additional concern for increasing the aspect-ratio is a resulting decrease in the voltage at which ``pull-in'' occurs\cite{buks2001metastability} - this is the voltage at which the nanostructure becomes unstable and snaps into the bias electrode, which usually leads to stiction between the mechanical element and the electrode. As a rule of thumb, the pull-in voltage goes as $V_{Sn}\approx\sqrt{8kd^2/27C_{\mathit{NR}}}$, where $k$ is the effective spring constant and $d$ is the zero-voltage spatial gap between the structure and the electrode.
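Two of the figures above can be checked with a few lines of Python; the spring constant and coupling capacitance used for the pull-in estimate are illustrative assumptions, not measured values:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
kB = 1.380649e-23           # Boltzmann constant, J/K

# Thermal occupation of a 3 MHz mode at 20 mK (Bose-Einstein distribution)
w_nr = 2 * math.pi * 3e6
T = 20e-3
N_th = 1 / math.expm1(hbar * w_nr / (kB * T))
print(f"N_th = {N_th:.0f}")             # ~140, as quoted above

# Pull-in voltage V_Sn = sqrt(8 k d^2 / (27 C_NR)) for assumed,
# illustrative beam parameters
k = 1.0          # effective spring constant, N/m (assumed)
d = 70e-9        # beam-electrode gap, m
C_nr = 0.5e-15   # coupling capacitance, F (assumed)
V_sn = math.sqrt(8 * k * d**2 / (27 * C_nr))
print(f"V_Sn = {V_sn:.1f} V")           # of order a volt
```

For soft, high-aspect-ratio beams the pull-in voltage drops quickly with increasing length, as discussed next.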
Because $k\propto w^2/L^3$ and $C_{\mathit{NR}} \propto L$, one sees that $V_{Sn} \propto w/L^2$, which could be on the order of volts or much less for high-aspect-ratio devices. The gain in coupling strength by increasing the device's length can thus be completely cancelled out (and actually reversed) by the reduction in pull-in voltage. Pull-in voltage could also be problematic for achieving large $\lambda$ using graphene or carbon nanotube nanoresonators, due to their greatly reduced $k$. Of course, the considerations in the previous paragraph are rather imprecise, and detailed modeling using finite element simulations and incorporating Casimir forces\cite{buks2001metastability} would be necessary to find optimal sets of parameters for maximizing $\lambda$ over different configurations. Nonetheless, the expressions for $V_{Sn}$ and $\lambda$ give rise to the following rule of thumb for the maximum coupling achievable for fundamental flexural modes: \begin{equation} \lambda_{max}\approx-\frac{8E_{c}}{\hbar}\sqrt{\beta\frac{\hbar\omega_{\mathit{NR}}C_{\mathit{NR}}}{27e^2}}, \label{couplingmax} \end{equation} which arises from substituting $V_{Sn}$ into Eq. (\ref{couplingNR}). Here $\beta$ is a constant of order unity that accounts for the deviation of $\mathrm{d}C_{\mathit{NR}}/\mathrm{d}x$ from a parallel-plate approximation. Using typical parameters for transmon-type CPBs and UHF flexural resonators, Eq. (\ref{couplingmax}) suggests that $\lambda_{max}$ could approach 100 MHz, if coupling voltages approaching $V_{\mathit{NR}} \sim 30\,{\rm V}$ can be applied. It remains to be seen whether this can be achieved with CPB-coupled nanoresonators. Finally, it should be noted that one additional possibility for increasing the dispersive coupling $\chi$ without increasing $\lambda$ is to decrease the detuning in energy between the nanoresonator and qubit.
There are clearly two ways to do this: increase nanoresonator frequencies; or decrease qubit transition energies $\Delta E_{\mathit{CPB}}$. For the former case, it should be possible to engineer flexural nanoresonators with third mode frequencies in the range of 3 GHz; transmon type qubits could then be tuned via magnetic flux close to resonance with these mechanical modes in the same way as is done in cQED\cite{wallraff2004strong} with cavity resonators. This technique is currently being investigated by the authors using a device similar to one shown in Section \ref{subsection:sub3}. For the latter case, fluxonium\cite{pop2014coherent} devices could be substituted for the transmons. Fluxonium, while relatively new in the lineage of superconducting qubits, has demonstrated long coherence times ($T_2^*= 14\,\mu{\rm s}$) at transition energies as low as $\sim 500\,{\rm MHz}$,\cite{pop2014coherent} which would be nearly resonant with the fundamental modes of properly engineered nanobeams. This approach remains the subject of future work. \section{Experimental Development of CPB-based Nanoresonator Read-Out at Syracuse} \label{sec:sections} In this section we highlight some of the experimental efforts in recent years at Syracuse to develop QEMS that integrate multiple superconducting devices and circuitry with nanomechanical systems. In particular, these new hybrid devices are composed of three main components: superconducting CPW cavities; CPB-based qubits; and suspended superconducting wires as flexural nanomechanical elements. Sample characteristics and data for three generations of devices, extending back to 2012, are discussed. It is our intent that this section not only serve as a record of what has been accomplished, but also provide greater context to the challenges discussed in the previous section and serve as a guide for those who might soon be interested in taking up the challenge.
\subsection{Generation I: CPB in the Charge Regime Coupled to a Lumped-Element LC Circuit and Flexural Nanoresonator} \begin{figure} \begin{center}\includegraphics[ width=.95\columnwidth,keepaspectratio]{Figure2-completev6}\end{center} \caption{(a) SEM micrograph displaying a bird's-eye view of the detection circuit for measuring the first generation of CPB-type QCNR. The detection circuit consists of a lumped element inductor and capacitor that are capacitively coupled to the CPB (not visible in this image, but located in the region denoted by the red, dashed rectangle). The frequency of the LC circuit serves as a proxy for the CPB state through a simple dispersive interaction, which thus enables measurements of the CPB absorption spectrum. The frequency response of the LC is probed by performing transmission measurements of the measurement feedline, which is both capacitively and inductively coupled to the LC circuit. (b) Transmission measurements of the feedline for two different values of $\Phi$ applied to the CPB.} \label{fig:fig2} \end{figure} In 2012 the LaHaye group began fabrication and measurement of its first generation of CPB-coupled nanoresonators (Fig.~\ref{fig:fig2}), with the goal of utilizing the CPB to perform dispersive, number-state read-out of UHF-range nanomechanical elements at low thermal occupation numbers. This was to be accomplished by performing measurements of the CPB's absorption spectrum to look for the mechanical Stark shifts in the CPB's transition energy that should arise as a result of the dispersive interaction with the nanoresonator.\cite{clerk2007using} In the following paragraphs, some of the key design considerations for Generation I are discussed. In an initial attempt to balance the conflicting dependence on $E_C$ of coupling strength and dephasing due to charge noise, the CPB parameters were chosen so that the device resided in between the charge qubit and transmon regimes.
Specifically, the chosen geometry yielded $E_{J0}/E_{C}\approx 6$ and $E_{C}/h \approx 1.8\,{\rm GHz}$; the value of $E_C$ is consistent - to within design tolerances - with electrostatics simulations of the geometry using ANSYS Q3D, which yield $E_C/h=2\,{\rm GHz}$. The CPB was embedded within a planar, lumped-element LC circuit [Fig.~\ref{fig:fig2}(a)], which was to serve the purpose of both filtering the CPB's electromagnetic environment and also providing a means for performing spectroscopy of the CPB to measure its absorption spectrum. The coupling between the two systems was provided by an inter-digitated capacitor $C_Q=5\,{\rm fF}$ as calculated using Q3D. In contrast to typical applications in cQED where CPW or 3D cavities are used for isolation and read-out, the effective L and C were chosen to yield a low resonance frequency $\omega_{LC}$, in the range of 1 to 2 GHz. The chosen geometry resulted in $\omega_{LC}/2\pi=1.94 -1.95\,{\rm GHz}$ [Fig.~\ref{fig:fig2}(b)], which was in good agreement with Sonnet simulations that predicted 1.93 GHz. The LC was engineered to be over-coupled to a measurement feedline in order to provide fast and efficient measurement.\cite{johansson2006fast} Fits to the feedline response\cite{megrant2012planar} [Fig.~\ref{fig:fig2}(b)] determined a loaded quality factor of $Q_L = 300 - 500$ and intrinsic quality factor $Q_i=12-15 \times 10^3$, depending on flux $\Phi$ applied to the CPB. The initial purpose of using a low-frequency, lumped-element LC resonance was to ensure that the circuit would be far-detuned in energy from the CPB and thus interacting in the weak dispersive regime, where dephasing effects and modifications of the CPB's absorption spectrum would be minimal. However, because of initial miscalculations which led to a larger than desired $C_Q$, the two systems interacted very strongly as discussed below.
As well, it was thought that introducing the DC voltage bias $V_{\mathit{NR}}$ would be technically less challenging with the lumped-element design than with a distributed resonator.\cite{chen2011introduction} \begin{figure} \begin{center}\includegraphics[ width=.95\columnwidth,keepaspectratio]{Figure3-newb-nolayers}\end{center} \caption{(a) SEM micrograph displaying the region in Fig.~\ref{fig:fig2} denoted by the red, dashed rectangle. This region includes the CPB and nanostructure from the first generation of QCNR developed and measured by the authors. The inset shows a close up of the aluminum nanostructure. The fundamental in-plane flexural mode of this structure should couple most strongly to the charge on the CPB island. From COMSOL simulations and analytical calculations, this mode should have a resonant frequency of $\omega_{\mathit{NR}}/2\pi \approx 300\,{\rm MHz}$. (b) Magnetomotive measurements of the fundamental mode response at $T=4\,{\rm K}$ are in good agreement with the expected frequency from simulations.} \label{fig:fig3} \end{figure} The nanostructure was fabricated out of aluminum using standard plasma etching. The geometric parameters of the structure, $w=200\,{\rm nm}$, $t=100\,{\rm nm}$, $L=1.8\,\mu{\rm m}$, $d=70\,{\rm nm}$ [Fig.~\ref{fig:fig3}(a)], were chosen to give a fundamental in-plane flexural resonance frequency of $\omega_{\mathit{NR}}/2\pi=300\,{\rm MHz}$ and coupling capacitance $C_{\mathit{NR}}=180\,{\rm aF}$ as calculated using finite element simulations. Measurements of the resonator's frequency at $T=4\,{\rm K}$ using magnetomotive detection\cite{cleland1996fabrication} [Fig.~\ref{fig:fig3}(b)] were in good agreement with the simulations of the mechanics. From these parameters, estimates of the maximum CPB-nanoresonator coupling using Eq. (\ref{couplingmax}) yielded $\lambda_{max}/2\pi=50-100\,{\rm MHz}$, depending on the value of $\beta$, which, from simulations, should have been on the order of 0.2 or larger. 
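Both the mode-frequency design value and the coupling estimate above can be reproduced from the quoted geometry; bulk aluminum material properties ($E \approx 70\,{\rm GPa}$, $\rho \approx 2700\,{\rm kg/m^3}$) are assumed below:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
h = 2 * math.pi * hbar
e = 1.602176634e-19         # elementary charge, C

# In-plane fundamental flexural frequency of the Generation I beam,
# assuming bulk aluminum properties (assumed: E = 70 GPa, rho = 2700 kg/m^3)
a1 = 4.73                   # first-mode eigenvalue for a doubly clamped beam
w, L = 200e-9, 1.8e-6       # beam width and length, m
E, rho = 70e9, 2700.0
f_nr = a1**2 * w / (2 * math.pi * L**2) * math.sqrt(E / (12 * rho))
print(f"f_NR = {f_nr/1e6:.0f} MHz")     # ~320 MHz, near the 300 MHz design value

# Maximum coupling estimate, Eq. (couplingmax), with Generation I parameters
E_C = h * 1.8e9             # charging energy, J
C_nr = 180e-18              # coupling capacitance, F
w_nr = 2 * math.pi * 300e6  # design mode frequency, rad/s
for beta in (0.2, 0.4, 0.8):
    lam_max = (8 * E_C / hbar) * math.sqrt(
        beta * hbar * w_nr * C_nr / (27 * e**2))
    print(f"beta = {beta}: lambda_max/2pi = "
          f"{lam_max/(2*math.pi)/1e6:.0f} MHz")   # ~46, 65, 93 MHz
```

The scan over $\beta$ reproduces the quoted $50-100\,{\rm MHz}$ range.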
For such values of coupling strength, the dispersive interaction should have reached $\chi/2\pi> 1\,{\rm MHz}$. Based upon estimates from Ref. \citen{clerk2007using}, this would have been sufficient for the number-state statistics of the nanoresonator to be resolvable, even for a thermal state of the nanoresonator at $T=30\,{\rm mK}$ ($N_{th}\approx2$), provided that the decoherence rate of the CPB satisfied $\gamma \lesssim 1\,{\rm MHz}$, which has been observed previously for CPBs in cQED architectures.\cite{wallraff2004strong} Moreover, the quality factor $Q_{\mathit{NR}} \approx 1000$ of the nanoresonator measured at $T=4\,{\rm K}$ using magnetomotive detection strongly suggested that the nanoresonator decoherence rate would satisfy $\kappa/2\pi< 1\,{\rm MHz}$ at milli-Kelvin temperatures as well. Samples were mounted in a light-tight copper box that was anchored to the mixing chamber (MC) of a dilution refrigerator and cooled down to $T\lesssim 30\,{\rm mK}$. Microwave lines for probing the transmission of the measurement feedline were filtered and isolated using standard techniques: the input feedline had $\sim$ 70 dB of attenuation inside the refrigerator, with cryogenic attenuators rigidly anchored to the 1K, still, cold-plate and MC stages; and two cryogenic isolators, nominally with 15 dB each of isolation, were located between the output of the feedline and the input of a cryogenic HEMT amplifier anchored to the 4K stage. DC lines for applying the CPB gate voltage bias $V_g$ and the nanoresonator coupling $V_{\mathit{NR}}$ were heavily filtered using lossy, stainless steel coaxial cables and homemade powder filters at multiple stages, resulting in $>100\,{\rm dB}$ of attenuation for frequencies above $1\,{\rm GHz}$. A homemade superconducting Helmholtz coil bolted to the top of the sample holder was used to provide the magnetic field to control the flux $\Phi$ applied to the CPB.
\begin{figure} \begin{center}\includegraphics[ width=.95\columnwidth,keepaspectratio]{figure-4-no-phase3}\end{center} \caption{Single-tone spectroscopy of the LC circuit and CPB in Figs. \ref{fig:fig1} to \ref{fig:fig3} performed at $T\lesssim 30\,{\rm mK}$ over one flux period, $\Delta\Phi = \Phi_0$. (a) Amplitude response of the LC reveals avoided level crossings that are indicative of the hybridization of the LC and CPB energy levels for values of $\Phi$ where $\Delta E_{\mathit{CPB}}\approx \hbar\omega_{LC}$. (b) Numerical simulations of the amplitude of the LC circuit's frequency response using linear response theory agree well with the data, reproducing the main features in the spectrum. Simulations were carried out for the following values: $E_{J0}/h=12.7\,{\rm GHz}$, $E_C/h=1.3\,{\rm GHz}$, $\lambda_{LC}/h=160\,{\rm MHz}$, and average LC photon number $\bar{N}=0.3$.} \label{fig:fig4} \end{figure} Before applying the nanoresonator coupling voltage $V_{\mathit{NR}}$, measurements were conducted to make sure that the LC circuit could be used to read out the CPB. To a good approximation, the capacitively-coupled LC circuit and CPB can be described in a manner formally analogous to the CPB-coupled nanoresonator, with dynamics captured also by Eqs. (\ref{hamiltonian}) to (\ref{couplingNR}), except with the coupling strength given by \begin{equation} \lambda_{LC}=\frac{4E_{C}C_Q}{e\hbar}\sqrt{\frac{\hbar\omega_{LC}}{2C_T}}, \label{lccoupling} \end{equation} where $C_T$ is the total capacitance of the LC circuit. Using $C_T=340\,{\rm fF}$, as calculated by Q3D, and the previously noted values of $E_C$ and $C_Q$, the coupling strength was estimated to be quite large: $\lambda_{LC}/2\pi \approx 200\,{\rm MHz}$. As a consequence of the large coupling, Jaynes-Cummings physics could readily be observed in spectroscopic measurements of the coupled LC-CPB systems (Fig.~\ref{fig:fig4}).
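As a consistency check, Eq. (\ref{lccoupling}) can be evaluated directly; the sketch below uses the fitted $E_C/h = 1.3\,{\rm GHz}$ from the caption of Fig.~\ref{fig:fig4} and yields a value of the order of the $\approx 200\,{\rm MHz}$ estimate:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
h = 2 * math.pi * hbar
e = 1.602176634e-19         # elementary charge, C

E_C = h * 1.3e9             # charging energy (fitted value), J
C_Q = 5e-15                 # CPB-LC coupling capacitance, F
C_T = 340e-15               # total LC capacitance, F
w_lc = 2 * math.pi * 1.945e9  # LC resonance frequency, rad/s

# Eq. (lccoupling): lambda_LC in rad/s
lam_lc = (4 * E_C * C_Q / (e * hbar)) * math.sqrt(hbar * w_lc / (2 * C_T))
print(f"lambda_LC/2pi = {lam_lc/(2*math.pi)/1e6:.0f} MHz")   # ~220 MHz
```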
This was particularly evident in single-tone spectroscopy measurements\cite{schuster2007circuit} where microwaves in the frequency range near $\omega_{LC}$ were applied to the system through the measurement feedline. By monitoring the amplitude [Fig.~\ref{fig:fig4}(a)] and phase (not shown) of signals transmitted through the feedline using standard heterodyne detection, and varying the flux $\Phi$ applied to the CPB, hybridization of the CPB and LC energy levels could be observed around values of $\Phi$ where $\hbar\omega_{LC}=\Delta E_{\mathit{CPB}}$. The hybridization manifested in the usual avoided level crossings that appear periodically as a function of $\Phi$ with a period of one flux quantum $\Phi_0$, as expected from the dependence of $E_J$ on $\Phi$. Numerical simulations of the transmission measurement versus $\Phi$ and LC probe frequency $\omega$ were performed using linear response theory and the analog of Eqs. (\ref{hamiltonian}) to (\ref{HamiltonianINT}) with Eq. (\ref{lccoupling}) for the CPB-coupled LC. The simulations [Fig.~\ref{fig:fig4}(b)] incorporated 50-50 averaging of $n_g=0$ and $n_g=0.5$ to account for quasiparticle poisoning,\cite{riste2013millisecond} which is believed to have been occurring on a much faster time scale than the measurement time at each value of $\Phi$. These results were seen to agree well with measurements, capturing many of the features seen in the spectroscopy. \begin{figure} \begin{center}\includegraphics[ width=.95\columnwidth,keepaspectratio]{figure5newV2}\end{center} \caption{Two-tone spectroscopy maps of the first generation of QCNR developed at Syracuse shown in Figs. \ref{fig:fig1} to \ref{fig:fig3} versus $\Phi$ at $T\lesssim 30\,{\rm mK}$. (a) and (b) Phase and amplitude of the LC circuit's response for $V_{\mathit{NR}}=0\,{\rm V}$. The inset shows a higher resolution spectroscopy scan of the top of the hyperbola to illustrate the regular spacing of the avoided level crossings.
The color scale in the inset has been reversed to enhance viewing contrast while superimposed on top of the main figures. (c) and (d) Phase and amplitude of the LC circuit's response for $V_{\mathit{NR}}=10\,{\rm V}$. The black hyperbolae in (a)-(d) indicate the lowest-order transition energy of the CPB, $\Delta E_{\mathit{CPB}}$, versus $\Phi$ and were generated from numerical calculations using Eq. (\ref{HamiltonianCPB}) and the following parameters: $E_C =1.8\,{\rm GHz}$, $E_{J0}=11.7\,{\rm GHz}$, and $n_g=0.5.$ Also plotted in (c) and (d) are values of $\Delta E_{\mathit{CPB}}$ for $n_g=0.25$ and $n_g=0.375$, which are denoted by dotted and dashed lines respectively. The dashed vertical lines indicate locations of the individual traces shown in Fig.~\ref{fig:fig6}.} \label{fig:fig5} \end{figure} Two-tone, continuous-wave spectroscopy\cite{schuster2007circuit} of the CPB and LC was performed next in order to measure the absorption spectrum of the CPB over the full range of $\Delta E_{\mathit{CPB}}$ - the CPB's lowest transition energy - as a function of $\Phi$ (Figs.~\ref{fig:fig5} and \ref{fig:fig6}). These measurements were conducted by first fixing $\Phi$, and then applying two microwave tones to the CPB-coupled nanoresonator. The first tone, $\omega$, was applied to the LC circuit and fixed at $\omega=\omega_{LC}$ to probe the LC circuit's response to changes in the CPB's state; the second, spectroscopy tone $\omega_s$, was then applied to excite Rabi oscillations in the CPB. The average amplitude and phase of the signal transmitted at $\omega$ was then recovered using heterodyne detection. Measurements were typically repeated over a large range of $\omega_s$, from 0.5 GHz to 11 GHz, and one flux period $\Phi_0$. Results from two sets of measurements are shown in Fig.~\ref{fig:fig5}.
It is clear that the envelope of the CPB's absorption spectrum is in good agreement with the predicted lowest-energy transition $\Delta E_{\mathit{CPB}}$ (solid hyperbolic line), which was calculated numerically using Eq. (\ref{HamiltonianCPB}). However, it is also clear that there are many additional features in the spectrum. In fact, higher resolution spectroscopy of the LC circuit phase [inset, Fig.~\ref{fig:fig5}(a)] reveals that the main absorption line is broken by a series of approximately regularly-spaced avoided level crossings. Curiously, in many locations in the spectroscopy map, the spacings in energy between avoided level crossings are $\sim 300$ MHz, comparable to the nanoresonator's energy. Moreover, the location and spacing of the crossings did not appear to depend on the power of the spectroscopic tone or probe tone; increasing power simply broadened the features. This suggests that these features were not related to coupling with the LC resonance or any higher-order modes in the extended LC circuit.\footnote{It should be noted that one higher-order mode of the inductor is thought to be seen in the spectrum at $\sim 9.3\,{\rm GHz}$. This would correspond to the third quarter-wave mode of the extended planar inductor; the inductor was shorted to the ground plane at one end.} Because the avoided level crossings were observed even with $V_{\mathit{NR}}=0$, it is thought unlikely that these features were due to the nanoresonator. However, it is possible that an intrinsic DC bias existed between the nanoresonator electrode and the CPB island, which was electrically isolated from all other portions of the circuit, providing the coupling. Such offsets have been reported anecdotally in the literature before,\cite{hertzberg2009back} but in this case no measurements could be performed to confirm whether an offset was present or not.
\begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[ width=.65\columnwidth,keepaspectratio]{figure6newV4} \end{tabular} \end{center} \caption{Comparison of spectroscopy data for different $V_{\mathit{NR}}$. (a) Amplitude and phase of individual spectroscopy traces from the maps in Fig.~\ref{fig:fig5} at values of flux denoted by the vertical dashed lines. As $V_{\mathit{NR}}$ is increased, the spacing between the apparent avoided level crossings does not change, but the resonances broaden and ultimately overlap at $V_{\mathit{NR}}=10\,{\rm V}$. This is readily apparent in the spectroscopy ``close-ups" shown in (b), (c), and (d) for 0 V, 3 V, and 10 V respectively. Note that the color scale is reversed in (b), (c) and (d) to enhance the contrast of the resonances.} \label{fig:fig6} \end{figure} The next step in the measurement process was to increase $V_{\mathit{NR}}$ to probe any changes in the absorption spectrum of the CPB [Figs.~\ref{fig:fig5}(c,d) and \ref{fig:fig6}] that resulted from the expected dispersive interaction with the nanoresonator. Voltages ranging from 0.5 V to 15 V were applied between the nanoresonator and CPB island using a home-made battery-powered source. A motor with a high gear-ratio was used to slowly increment the voltage to the desired value at a rate of mV/sec. This was implemented in order to make the change in charge on the CPB electrode as adiabatic as possible and to avoid stirring up excessive charge noise in the substrate or on surfaces in the vicinity of the CPB. When the desired value of $V_{\mathit{NR}}$ was achieved, the motor was powered down and disconnected from the apparatus. Complete spectroscopy data was taken only up to $V_{\mathit{NR}}=10\,{\rm V}$ (Fig.~\ref{fig:fig5}) as the device was destroyed at $V_{\mathit{NR}}=15\,{\rm V}$ when the connection supplying $V_{\mathit{NR}}$ was erroneously removed.
For measurements up to 10 V, the locations of the avoided level crossings did not appear to change location in energy nor did the spacing between the features change (Fig.~\ref{fig:fig6}). However, the features became progressively blurred out; this is clearest in Figs.~\ref{fig:fig6}(b) to \ref{fig:fig6}(d). Interestingly, the spectroscopy maps at 10 V indicated that the change in amplitude of the probe signal flipped sign when the spectroscopic tone passed through the main CPB absorption line [Figs.~\ref{fig:fig5}(d) and \ref{fig:fig6}(a)]. Because the first sample was destroyed, the origins of the additional structure in the phase and amplitude of the spectroscopy maps were not determined. One possibility was that this structure was due to coupling to an array of TLS, as has been previously reported in the literature.\cite{grabovskij2012strain} The exact spectrum of such TLS should differ from sample to sample, so measurements of a second, identically-designed sample could be used to test whether TLS were responsible for the observed splittings. Thus a second, nominally-identical device was cooled down. However, the device was defective and spectroscopic signatures of the CPB could not be observed at all. The design of the first generation device was relatively complex, with possible spurious modes and couplings between the CPB, nanoresonator, LC and feedline that could give rise to the additional structure in the absorption spectrum. Thus, after the second device failed to function, it was decided to implement a new design in which the CPB-coupled nanoresonator was embedded within the ground plane of a CPW cavity. As well, to reduce the possible influence of charge-based fluctuators and TLS at high voltages, it was decided to engineer the CPB in the transmon regime. These changes were implemented in Generation II and III as discussed in the following two sections. \subsection{Generation II: CPB Integrated with CPW Cavity and Flexural Nanoresonator} The second generation of devices (Figs.
\ref{fig:fig7} and \ref{fig:fig8}) was developed and measured in 2013. They featured one key difference from Generation I: the CPB and nanoresonator were embedded in a superconducting CPW cavity instead of a low-frequency lumped-element LC circuit. The CPW cavity was to play the same role as the LC in the first generation, providing read-out and isolation of the CPB-coupled nanoresonator. However, the CPW design had the additional benefit of a much simpler mode spectrum and reduced parasitic couplings in comparison with the large LC circuit and feedline from Generation I; there was a wealth of information in the literature on the characteristics of superconducting CPWs\cite{goppl2008coplanar} and thus the transmission properties could be readily understood and modeled both analytically and numerically. \begin{figure} \begin{center}\includegraphics[ width=.95\columnwidth,keepaspectratio]{figure7new2}\end{center} \caption{General schematic and circuit design for integrating a CPB within a superconducting CPW cavity. (a) In this schematic, the CPB-coupled nanoresonator is embedded in a pocket in the ground plane of the CPW cavity, and the pocket is located in the vicinity of a voltage anti-node of the fundamental mode. The cavity serves both for read-out and electromagnetic filtering of the CPB and nanoresonator. (b) In Generation II devices, the input and output ports, as well as all the bias lines (for $V_{\mathit{NR}}$ and $\Phi$), were fed by $50\,\Omega$ transmission lines that tapered to bond pads for wiring to external circuitry.} \label{fig:fig7} \end{figure} The CPW cavities consisted of a 50 $\Omega$ planar transmission line fabricated from sputtered niobium atop high-resistivity silicon substrates. The center trace of the transmission line was 6 $\mu\rm{m}$ wide and separated by 3 $\mu\rm{m}$ on both sides from a Nb ground plane.
The cavity was formed from two gaps in the transmission line that also served as input and output coupling capacitors $C_{C1}$ and $C_{C2}$ [Fig. 7(a)] for performing transmission measurements of the cavity's frequency response. The total length of the cavity was designed to be $\sim$11 mm [Fig.~\ref{fig:fig7}(b)], yielding fundamental mode frequencies of $\sim$ 5.4 GHz, which agreed very well with EM field-solver simulations using the commercial software Sonnet. The coupling capacitors were designed to be symmetric with values $C_{C1} = C_{C2}=2\,{\rm fF}$, which should have yielded a coupling quality factor $Q_{C} = 6.5 \times 10^4$. However, the loaded quality factor $Q_L$ of the CPW fundamental mode was found to be quite low and limited to a maximum of $4 \times 10^3$ at high cavity power. As discussed below, it was determined that $Q_L$ was limited by losses through the parasitic coupling to the NR electrode. As illustrated in Fig. \ref{fig:fig8}(a), the CPB and nanoresonator were fabricated in a pocket in the ground plane near one of the voltage anti-nodes of the fundamental resonance. The CPB island was arranged to be parallel with the center trace of the CPW and flush with the edge of the ground plane. For this generation, the CPB was designed to be closer to the charge qubit regime, with $E_{C}/h\approx3\,{\rm GHz}$ as determined with Q3D simulations and $E_J/h\approx10\,{\rm GHz}$. Just like for the case of the LC circuit, the capacitive coupling $C_Q$ between the CPB and CPW center trace yields an interaction analogous to Eq. (\ref{HamiltonianINT}) with interaction strength given by Eq. (\ref{lccoupling}). For this geometry, simulations calculated $C_Q\approx 0.8\,{\rm fF}$, which would yield a coupling strength $\lambda_{CPW}/2\pi\approx 60\,{\rm MHz}$.
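As a rough numerical cross-check (our addition, not part of the original analysis), the quoted fundamental frequency is consistent with the simple half-wave resonator relation $f_0 = c/(2\ell\sqrt{\epsilon_{\rm eff}})$, taking the common thick-substrate approximation $\epsilon_{\rm eff} \approx (1+\epsilon_r)/2$ with $\epsilon_r \approx 11.9$ assumed for silicon:

```python
# Half-wave CPW cavity estimate: f0 = c / (2 * l * sqrt(eps_eff)).
# eps_eff ~ (1 + eps_r) / 2 is a rough thick-substrate approximation;
# eps_r ~ 11.9 for high-resistivity silicon (assumed values).

c = 299_792_458.0      # speed of light (m/s)
length = 11e-3         # designed cavity length (m), ~11 mm
eps_r = 11.9           # relative permittivity of silicon
eps_eff = (1.0 + eps_r) / 2.0

f0 = c / (2.0 * length * eps_eff ** 0.5)
print(f"fundamental mode ~ {f0 / 1e9:.2f} GHz")  # ~5.4 GHz, as quoted
```

The estimate lands within a percent or so of the $\sim$5.4 GHz design value; the Sonnet simulations mentioned above account for the actual CPW geometry.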
The predicted coupling strength agreed well with single-tone spectroscopy measurements of the cavity and CPB, which displayed the usual $\Phi_0$-periodic avoided level crossings at values of $\Phi$ where the CPB and CPW were in resonance [Fig. \ref{fig:fig8}(b)]. Here the currents for tuning $\Phi$ were applied through a $50\,\Omega$ Nb trace on chip that was set back in the ground plane $\sim 25\,\mu{\rm m}$ from the CPB. \begin{figure} \begin{center}\includegraphics[ width=.95\columnwidth,keepaspectratio]{figure8new4}\end{center} \caption{Optical image and single-tone spectroscopy data for a CPB integrated within a superconducting CPW cavity. (a) Optical image of a sample from Generation II displaying the CPB, unsuspended nanoresonator electrode, flux bias line, and CPW center trace. (b) In single-tone spectroscopy measurements of the CPW as a function of $\Phi$, periodic avoided level crossings are seen at values of flux where $\Delta E_{\mathit{CPB}}\approx\hbar\omega_{cpw}$ and are indicative of the usual hybridization of the two systems' energy bands. Due to the low quality factor of the CPW cavity mode that resulted from parasitic losses through the nanoresonator electrode, the avoided level crossings were somewhat blurred. The dashed lines are plots of the lowest two transition energies of the coupled CPW-CPB system using numerical calculations and the following parameters: $E_C/h=3$ GHz, $E_{J0}/h=10$ GHz, and $g/h=50$ MHz.} \label{fig:fig8} \end{figure} The nanoresonator electrode was fabricated 80 to 100 nm from the CPB island (not shown). It was connected to a $50\,\Omega$ Nb trace that meandered through the ground plane and eventually tapered to a bond pad so that connections could be made to supply the coupling voltage $V_{\mathit{NR}}$. Two-tone spectroscopy measurements versus $V_{\mathit{NR}}$ suggested that the coupling between the CPB and full-length nanoresonator electrode was actually quite large, $C_{\mathit{NR}} \sim 1\,{\rm fF}$.
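The shape of the avoided level crossings in Fig. 8(b) can be reproduced qualitatively with a minimal two-level sketch (our simplification, not the authors' full CPB numerics): approximate the CPB splitting at the charge sweet spot by $\Delta E_{\mathit{CPB}} \approx E_{J0}|\cos(\pi\Phi/\Phi_0)|$ and diagonalize the single-excitation Jaynes-Cummings block of the coupled system.

```python
import numpy as np

# Two-level sketch of the CPW-CPB avoided crossing. The CPB splitting is
# approximated by E_J(Phi) = E_J0 * |cos(pi * Phi / Phi0)| (sweet-spot
# approximation); the single-excitation Jaynes-Cummings block
#   H = [[w_c, g], [g, w_q]]
# is diagonalized to obtain the two hybridized transition branches.
# Parameter values follow the Fig. 8(b) caption (frequencies in GHz).

EJ0 = 10.0   # maximum Josephson energy / h
wc = 5.4     # bare cavity frequency
g = 0.05     # coupling strength

def branches(phi):                    # phi in units of Phi0
    wq = EJ0 * abs(np.cos(np.pi * phi))
    H = np.array([[wc, g], [g, wq]])
    return np.linalg.eigvalsh(H)      # (lower, upper) branch

phis = np.linspace(0.0, 0.5, 1001)
lower, upper = np.array([branches(p) for p in phis]).T

# Minimum splitting between the branches: the vacuum Rabi gap ~ 2g.
gap = (upper - lower).min()
print(f"minimum splitting ~ {gap * 1e3:.0f} MHz")  # ~100 MHz = 2g
```

On resonance the eigenvalues are $\omega_c \pm g$, so the branches repel by $2g \approx 100$ MHz, matching the periodic anticrossings seen in the single-tone data.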
However, simulations were not done to determine how much of $C_{\mathit{NR}}$ was contributed from the portion of the electrode that was to be suspended to form the actual nanoresonator. In fact, the nanoresonator was never suspended for measurements with this generation. This was the case because preliminary single-tone spectroscopy measurements of the CPB and CPW with the resonator unetched showed that the CPW was heavily damped [Fig. 8(b)]. Further investigation with Q3D simulations showed that the parasitic capacitance between the trace to the nanoresonator electrode and the CPW center trace could explain the excess loading of the cavity quality factor. Moreover, simulations using Sonnet also illustrated that a significant fraction of the cavity signal was transmitted to the nanoresonator lead. As a result of this, measurements were stopped prematurely in order to redesign the samples to introduce $V_{\mathit{NR}}$ without degrading the cavity (or CPB) quality. \subsection{Generation III: CPB in the Transmon Regime Integrated with CPW Cavity, Flexural Nanoresonator, and Superconducting T-filter} \label{subsection:sub3} In 2014, to overcome the excessive cavity damping observed in Generation II devices due to parasitic coupling to the nanoresonator DC bias circuitry, the LaHaye group developed a new superconducting microwave filter that can be integrated with cQED architectures to apply DC biases without degrading CPW cavity mode quality.\cite{hao2014development} As described in Ref. \citen{hao2014development}, the filter design utilizes on-chip, planar meander inductors and inter-digitated capacitors to form a reflective \textit{t-filter} that strongly attenuates ($\sim25\,{\rm dB}$) signals in the range from $2\,{\rm GHz}$ to $10\,{\rm GHz}$. Importantly, it was shown that the filter could be integrated into a CPW cavity [Fig.
9(a)] allowing for application of DC voltages without distorting the frequency response or reducing $Q_{L}$ of the fundamental mode, even for quality factors as high as $Q_L=2\times10^5$ and voltages as large as $V_{\mathit{NR}}=20\,{\rm V}$.\cite{hao2014development} \begin{figure} \begin{center}\includegraphics[ width=.95\columnwidth,keepaspectratio]{figure9new3}\end{center} \caption{Design and SEM micrograph of a transmon integrated in a voltage-biased CPW cavity for Generation III devices. (a) To eliminate the problem of excess cavity damping due to the introduction of DC bias circuitry into the cavity, a microwave t-filter was integrated with the CPW layout. $V_{\mathit{NR}}$ could then be applied to the CPB and nanoresonator through this filter without degrading the CPW's fundamental mode frequency response or quality factor as discussed in Ref. \citen{hao2014development}. (b) In Generation III, CPB qubits in the transmon regime were embedded in the ground plane of the t-filtered CPW.} \label{fig:fig9} \end{figure} In subsequent and ongoing work at Syracuse, a transmon qubit was integrated with the new filtered CPW cavity design [Fig.~\ref{fig:fig9}(b)], and preliminary tests of the influence of the filter on the transmon's characteristics were performed. In two-tone spectroscopic measurements, the number-state statistics of the CPW cavity\cite{schuster2007circuit} were observable, with no apparent increase in transition linewidth, for linewidths as small as $2\,{\rm MHz}$ and voltages as large as $V_{\mathit{NR}}=8\,{\rm V}$ (not shown). More recently, time-domain measurements of a similar transmon have been made using dispersive read-out with the t-filtered cavity. Both Rabi oscillations and relaxation measurements were performed at $V_{\mathit{NR}}=0\,{\rm V}$, from which estimates of $T_2^*\geq 0.5\,\mu{\rm s}$ and $T_1\geq 12\,\mu{\rm s}$ were obtained. Measurements are currently underway to observe how $T_1$ and $T_2^*$ change with $V_{\mathit{NR}}$.
As well, suspended nanoresonators have now been integrated with the latest samples. \section{Conclusions} The technical difficulties brought to light in the previous sections, which relate to integrating nanomechanical elements with superconducting devices and circuitry such as the CPW and CPB, will soon be overcome, enabling a series of important experiments to probe fundamental topics such as entanglement, decoherence, and quantum measurement in new macroscopic limits. The dispersive measurement techniques that are developed will pave the way not only for generating Schr\"{o}dinger-cat states of mechanical structures but also for quantum non-demolition measurements of the energy of such structures, potentially allowing for new studies of energy transfer and dissipation at the mesoscale. These systems will also play important roles in the revolution of engineered quantum systems that is now beginning, serving as elements in quantum information and communication architectures and as components in quantum sensing technologies. To continue the development further into the future, a new set of challenges arises: integrating superconducting quantum electromechanical systems with optical technology; interfacing superconducting devices like the CPB with truly macroscopic systems (beyond the nano and micromechanical regimes); and developing these systems for an array of sensing applications. \acknowledgments The authors would like to thank B. Plourde for technical assistance and helpful conversations. The work was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Infrastructure Network, which is supported by the National Science Foundation (Grant ECCS-0335765). The authors acknowledge support for this work provided by the National Science Foundation under Grant DMR-1056423 and Grant DMR-1312421. \bibliographystyle{spiebib}
\section{Introduction}\label{S:Intro} Nowadays, economic and social development and well-being around the world are heavily influenced by financial markets. People participate in financial activities, which promote the circulation of assets and the development of the world economy, with the ultimate goal of gaining economic benefits. Under this light, the success of the participants depends largely on the quality and quantity of information that they possess, as well as their ability to interpret this information for decision-making. Because of this, computational intelligence in finance, which utilizes modern computing methodologies to analyze financial markets for decision-making, has attracted many researchers and practitioners from both academia and industry. Representative topics under this discipline include stock market forecasting \cite{tran2018temporal, zhang2019deeplob}, algorithmic trading \cite{nuti2011algorithmic, hu2015application}, risk assessment \cite{khandani2010consumer, galindo2000credit}, asset pricing \cite{cochrane1996cross, lettau2020estimating}, and portfolio allocation and optimization \cite{demiguel2009generalized, ban2018machine}. Among these objectives, a substantial amount of research effort has been dedicated to prediction and forecasting since financial decision-making, for the most part, depends on reliable projections about the future. There are two common approaches, namely fundamental analysis \cite{thomsett2006getting} and technical analysis \cite{murphy1999technical}, which are currently adopted in predicting future market behaviors. In fundamental analysis, valuation techniques take into account different economic indicators that reflect and affect the market movements to establish long-term views on the development of a financial entity. On the other hand, in technical analysis, it is generally believed that the prices themselves already encompass all factors that affect the market dynamics.
For this reason, technical analysts construct forecasting models based on series of historical transactions with the assumption that history tends to repeat itself \cite{murphy1999technical}, and that the underlying processes, which generate the observed series, can be captured by mathematical or computational models. Although financial time-series forecasting has been extensively studied over the past decades with a large body of literature dedicated to tackling specific problems, there are still many challenges in processing and analyzing data derived from financial markets, especially those coming from high-frequency intra-day activities. Over time, the development of internet technologies, database systems and electronic trading platforms has enabled us to collect a vast amount of digital footprints of the financial market. Enormous volumes of data, while ensuring statistical significance of any analysis, also create a great computational challenge when building financial prediction models. The computational aspect is especially critical for trading applications that take advantage of statistical arbitrage, which usually exists for only a very short time before market correction \cite{avellaneda2010statistical}. Another challenge posed by financial time-series comes from the fact that they are usually complex, noisy, nonlinear and nonstationary in nature, which leads to difficulties not only in modeling but also in preprocessing. Techniques for financial time-series prediction fall into two categories: traditional statistical models and machine learning models. In the stochastic model based approach, a linear relationship is often assumed among the variables. Representative tools in this category include autoregressive integrated moving average (ARIMA) and its variants or generalized autoregressive conditional heteroskedasticity (GARCH) \cite{engle1982autoregressive}, to name a few.
While stochastic models often possess nice theoretical properties, their underlying assumptions are often too strong, leading to poor generalization performance on real-world data. On the other hand, machine learning models, which make no prior statistical or structural assumption, are often capable of modeling complex nonlinear relationships among the independent factors and the prediction targets. For this reason, machine learning models often generalize better than stochastic models in many forecasting scenarios \cite{kane2014comparison, qian2017financial}. Among different types of machine learning models, neural networks are the leading solutions for many financial forecasting problems nowadays \cite{korczak2017deep, tran2018temporal, zhang2019deeplob, tsantekidis2017forecasting, dingli2017financial}. The majority of these solutions were adopted from computer vision (CV) and natural language processing (NLP) applications, where neural networks have demonstrated unprecedented successes in the last decade. Despite the fact that future market prediction based on historical time-series can be cast as a pattern recognition problem similar to those encountered in CV and NLP, and thus can be treated with some degree of success using tools from CV and NLP, the unique characteristics of financial data make the market prediction tasks fundamentally different and require special treatments. The majority of problems targeted in CV and NLP concern solving cognitive tasks in which the data is intuitive and well-understood by ordinary human beings, such as recognition of objects or understanding natural languages. On the other hand, historical financial phenomena are difficult even for human experts to recognize or interpret, not to mention speculating about the future.
In addition, images, videos or speech signals, for example, are well-behaved in the sense that the value range and variances are known and can be easily processed without losing the essential information within them, while financial time-series are highly volatile and often exhibit concept drift phenomena \cite{clements2004forecasting, hatemi2008tests}, i.e., dynamic changes in the relationship between independent and target variables over time. Because of this, data preprocessing is an important procedure when working with financial time-series. Among many preprocessing steps, data normalization, which is one of the most essential steps before building a machine learning model, aims at transforming input variables into a common range to avoid the potential bias induced by large numbers. For deep neural networks, improperly normalized data can easily lead to numerical issues with the gradient updates. In the literature, there are many normalization methods such as z-score normalization, min-max normalization, Pareto scaling, power transformation, to name a few \cite{singh2020investigating}. These normalization methods utilize global data statistics, such as the mean, standard deviation or maximum value, to transform the data. For financial time-series, especially those covering long periods, replacing global statistics with local statistics computed over the recent history is a common practice to avoid the problem of potential regime shifts in which recent observations have a significantly different value range than past observations. To deal with this phenomenon, several sophisticated methods have been proposed, for example \cite{shao2015self, nayak2014impact}. While many static normalization schemes have been developed as described above, we are only aware of one prior work \cite{passalis2019deep} that proposed an adaptive method for input time-series.
Different from static approaches, an adaptive data-driven method transforms raw input data using statistics that are identified and learned via optimization. That is, the step is implemented as the first layer in a computation graph, with all parameters jointly estimated using stochastic gradient descent. In fact, one of the reasons that neural networks work so well is that they are estimated in an end-to-end manner, being able to learn data-dependent transformations. Thus, we argue that the normalization step for input time-series should also be learned in the same end-to-end manner when employing neural networks in financial forecasting. In this paper, we propose Bilinear Input Normalization (BiN), a neural network layer that takes into account the bimodal nature of multivariate time-series and performs input data transformation using parameters that are jointly estimated with the other parameters in the network. The preliminary results of this work were presented in \cite{tran2020data}, which includes limited analysis and empirical evaluation of BiN for Temporal Attention Augmented Bilinear Layer (TABL) networks. In this paper, we provide a more detailed, in-depth presentation and discussion of the proposed method, as well as extensive experiments with another state-of-the-art (SoTA) architecture in financial forecasting using stock market data from two different markets (US and Nordic). The remainder of the paper is organized as follows. In Section \ref{related-works}, we review related works in data normalization methods, with a focus on normalization schemes for neural networks. Section \ref{method} describes in detail the motivation and operations of the Bilinear Input Normalization layer.
In Section 4, we provide basic information regarding limit order books and describe the problem of predicting stock mid-price dynamics using limit order book data, which is followed by the experimental setup, dataset description, the results and our analysis. Section \ref{conclusions} concludes our work. \section{Related Work}\label{related-works} Normalization is a scaling or transformation operation, usually of a linear nature, used to ensure a uniform value range between different data dimensions, reducing the effects of dominant values and outliers \cite{garcia2015data}. Perhaps the most common normalization method is z-score normalization, which centers the data around the origin with unit standard deviation. There are also works that only center the data, without the scaling step as in z-score normalization. The steps in Pareto scaling \cite{noda2008scaling} are similar to z-score normalization, except that the data is divided by the square root of the standard deviation instead of the standard deviation itself. A generalization of z-score normalization is the variance stability scaling method \cite{van2006centering}, which multiplies the z-score standardized data by the ratio between the mean and standard deviation of the data. Power transformation is another normalization method employing the mean statistic to reduce the effects of heteroscedasticity \cite{kvalheim1994preprocessing}. Besides the data's mean and variance, minimum, maximum and median values are also utilized in normalization, for example in min-max normalization and median and median absolute deviation normalization. For interested readers, we refer to the analysis of different static data normalization techniques in machine learning models in \cite{singh2020investigating}. The term data normalization is often understood as the operation that preprocesses raw data, i.e., input data. However, in neural networks, normalization is also popular in hidden layers.
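As a small illustration (our own sketch, not taken from the cited works), the static schemes reviewed above can be written in a few lines of NumPy, applied column-wise to a data matrix whose rows are samples:

```python
import numpy as np

# Static normalization schemes applied column-wise to a data matrix X
# (rows = samples, columns = variables). Synthetic data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(loc=100.0, scale=5.0, size=(200, 3))

mu = X.mean(axis=0)
sd = X.std(axis=0)

z_score = (X - mu) / sd                  # zero mean, unit std per column
pareto = (X - mu) / np.sqrt(sd)          # divide by sqrt of the std
min_max = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # [0, 1]

print(z_score.mean(axis=0))              # each column mean ~ 0
```

For long financial series, the same formulas are typically applied with the statistics computed over a recent rolling window instead of the whole history, as discussed above.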
Normalization is useful in hidden layers because different layers in a deep network can encounter significant input distribution shifts during stochastic gradient updates; a normalization operation can help stabilize and improve the training process. Batch Normalization (BN) was proposed for convolutional neural networks for such a purpose \cite{ioffe2015batch}. Since stochastic gradient descent only operates in a mini-batch manner, the mini-batch mean and variance are accumulated in a moving average style to estimate the global mean and variance in BN. After subtracting the mean and dividing by the standard deviation, BN also learns to scale and shift the hidden representations. Instead of the mini-batch statistics, Instance Normalization (IN) \cite{ulyanov2016instance} uses sample-level statistics, and learns how to normalize each image so that its contrast matches that of a predefined style image in visual style transfer problems. Both BN and IN were originally proposed for visual data, although BN has also been widely used in NLP. Both BN and IN are adaptive data-driven normalization schemes. However, they were proposed to normalize the hidden representations, and they are not commonly used for input normalization. Regarding adaptive input normalization methods for time-series, we are only aware of the work in \cite{passalis2019deep}, which formulated a 3-stage normalization procedure called Deep Adaptive Input Normalization (DAIN). Since DAIN is directly related to our proposed method, we describe DAIN in more detail here. In this paper, let us denote the collection of $N$ multivariate series as $\{\mathbf{X}^{(n)} \in \mathbb{R}^{D \times H}\;|n=1, \dots, N\}$, where $D$ denotes the number of univariate series and $H$ denotes the temporal length of each series. Here $D$ and $H$ are also referred to as the feature and temporal dimensions, respectively.
In addition, we denote the $h$-th column of $\mathbf{X}^{(n)}$ as $\mathbf{c}_h^{(n)} \in \mathbb{R}^{D}$, which is the representation of the series at the time index $h$. We also refer to $\mathbf{c}_h^{(n)}$ as the $h$-th temporal slice. The first step of DAIN is to shift every temporal slice in $\mathbf{X}^{(n)}$ as follows: \begin{equation}\label{eq1} \begin{aligned} &\bar{\mathbf{c}}^{(n)} = \frac{1}{H} \sum_{h=1}^{H} \mathbf{c}_h^{(n)} \\ & \mathbf{y}_h^{(n)} = \mathbf{c}_h^{(n)} - \mathbf{W}_{a} \bar{\mathbf{c}}^{(n)}, \quad \forall h=1, \dots, H \end{aligned} \end{equation} where $\mathbf{W}_{a} \in \mathbb{R}^{D \times D}$ is a learnable weight matrix that estimates the amount of shifting from the mean temporal slice ($\bar{\mathbf{c}}^{(n)}$) calculated from each series. \begin{figure*}[t] \centering \includegraphics[width=1.0\textwidth]{normalize_example.eps}% \caption{Illustration of the effect of normalization along temporal mode. Here we consider two samples $\mathbf{X}^{(n_1)}$ and $\mathbf{X}^{(n_2)}$ on the left and right sides, respectively, each of which contains the opening prices of two stocks for 10 consecutive days, thus the multivariate series has dimensions $2\times 10$. The continuous line represents the function governing the relationship between two stocks and the scatter plots represent the prices that we observe (our samples). We can see that compared to prices at $\mathbf{X}^{(n_1)}$, the price range at the time of $\mathbf{X}^{(n_2)}$ has shifted for both stocks but their relationship is similar (the relative arrangement of points in $2$-dimensional space is similar, but with different amounts of spread). 
After the normalization step (here we simply demonstrate with a scaling factor of one and no shifting), the arrangements of normalized points are positioned at the same place in this $2$-dimensional space, with similar spreads.} \label{f1} \end{figure} After shifting, the intermediate representation $\mathbf{y}_h^{(n)}$ is then scaled as follows: \begin{equation}\label{eq2} \begin{aligned} & \bm{\sigma}^{(n)} = \sqrt{\frac{1}{H} \sum_{h=1}^{H} \big(\mathbf{y}_h^{(n)} \odot \mathbf{y}_h^{(n)}\big)}\\ & \mathbf{z}_h^{(n)} = \mathbf{y}_h^{(n)} \varoslash \big(\mathbf{W}_b \bm{\sigma}^{(n)}\big), \quad \forall h=1, \dots, H \end{aligned} \end{equation} where $\mathbf{W}_{b} \in \mathbb{R}^{D \times D}$ is another weight matrix that estimates the amount of scaling from the standard deviation ($\bm{\sigma}^{(n)}$), which is computed from $H$ temporal slices. In Eq. (\ref{eq2}), the square-root operator is applied element-wise; $\odot$ and $\varoslash$ denote the element-wise multiplication and division, respectively. The final step in DAIN is gating, which is used as a type of attention mechanism to suppress irrelevant features: \begin{equation}\label{eq3} \begin{aligned} \bar{\mathbf{z}}^{(n)} &= \frac{1}{H} \sum_{h=1}^{H} \mathbf{z}_h^{(n)}\\ \bm{\gamma}^{(n)} & = \mathrm{sigmoid}\big(\mathbf{W}_{c} \bar{\mathbf{z}}^{(n)} + \mathbf{W}_d \big) \\ \mathbf{t}_h^{(n)} &= \mathbf{z}_h^{(n)} \odot \bm{\gamma}^{(n)} , \quad \forall h=1, \dots, H \end{aligned} \end{equation} where $\mathbf{W}_c \in \mathbb{R}^{D\times D}$ is a weight matrix and $\mathbf{W}_d \in \mathbb{R}^{D}$ is a bias vector, both learned for the gating function. The output of DAIN is, thus, $\mathbf{T}^{(n)} = [\mathbf{t}_1^{(n)}, \dots, \mathbf{t}_H^{(n)}] \in \mathbb{R}^{D \times H}$, which is the normalized series having the same size as the input series $\mathbf{X}^{(n)}$.
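The three DAIN stages in Eqs. (\ref{eq1})-(\ref{eq3}) can be sketched as a single-sample forward pass in NumPy (our own illustration; in DAIN the weights $\mathbf{W}_a$, $\mathbf{W}_b$, $\mathbf{W}_c$, $\mathbf{W}_d$ are learned, whereas here they are fixed at identity/zero initializations, for which the first two stages reduce to plain z-score normalization):

```python
import numpy as np

# Forward pass of DAIN for one sample X in R^{D x H}.
D, H = 4, 10
rng = np.random.default_rng(0)
X = rng.normal(size=(D, H))

W_a = np.eye(D)          # shifting weights
W_b = np.eye(D)          # scaling weights
W_c = np.zeros((D, D))   # gating weights
W_d = np.zeros(D)        # gating bias

# Stage 1: adaptive shifting, Eq. (1)
c_bar = X.mean(axis=1)                   # mean temporal slice
Y = X - (W_a @ c_bar)[:, None]

# Stage 2: adaptive scaling, Eq. (2)
sigma = np.sqrt((Y * Y).mean(axis=1))
Z = Y / (W_b @ sigma)[:, None]

# Stage 3: adaptive gating, Eq. (3)
z_bar = Z.mean(axis=1)
gamma = 1.0 / (1.0 + np.exp(-(W_c @ z_bar + W_d)))   # sigmoid
T = Z * gamma[:, None]

print(T.shape)  # (4, 10): same size as the input series
```

With these initializations the gate outputs $0.5$ for every feature; during training the learned weights would tilt the shifting, scaling and gating away from plain standardization.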
Since the normalization scheme of DAIN contains several processing steps with nonlinear operations, stochastic updates in DAIN are sensitive to the learning rate. For this reason, the authors in \cite{passalis2019deep} used three different learning rates for the parameters associated with the three computational steps in DAIN. As we will see in the next section, our normalization scheme is more intuitive for time-series while requiring fewer computations and parameters. In addition, since our normalization scheme only relies on linear operations, it is robust with respect to the learning rates that are normally adopted to train the network under consideration. \section{Adaptive Input Normalization with Bilinear Normalization Layer}\label{method} The proposed BiN layer formulation shares some similarities with DAIN and IN in the sense that we also take advantage of sample-level statistics when learning to transform the input series. More specifically, the basic statistics, which are used to normalize each input sample, are calculated independently for each sample. There are also global parameters that are shared between samples in BiN. In this way, our formulation (as well as DAIN and IN) is different from BN, which utilizes global statistics estimated from the whole dataset to normalize every sample. Neither BN nor IN was proposed to work as an input normalization scheme for time-series; both were designed to work with higher-order tensors in hidden layers of convolutional neural networks, which have a different semantic structure than multivariate time-series. We are also not aware of any work that utilizes BN or IN for input data normalization, especially for time-series.
The main difference between the proposed method and DAIN is that BiN is formulated to jointly learn to transform the input samples along both the temporal and feature dimensions, taking into account the bimodal nature of multivariate time-series, while DAIN only works along the temporal dimension. In order to better understand our motivation in taking into consideration the bimodal nature of multivariate time-series, let us take an example in predicting the opening value of the NASDAQ-100 index on a given day based on the historical opening prices of its 100 constituent companies in the last 10 days. In this case, each input sample $\mathbf{X}^{(n)}$ has dimensions of $100\times 10$. On one hand, we can consider that each $\mathbf{X}^{(n)}$ is represented by a set of $10$ features ($10$ columns of $\mathbf{X}^{(n)}$), each of which has $100$ dimensions, representing the snapshot of the opening prices of $100$ constituent companies in NASDAQ-100. Thus, the mean value and variance of this set (and thus of $\mathbf{X}^{(n)}$) would represent the average opening prices and the volatility of the $100$ companies over the last $10$ days. On the other hand, we can also consider that each $\mathbf{X}^{(n)}$ is represented by a set of $100$ univariate series, each of which contains the opening prices of a company over $10$ consecutive days. Therefore, the mean value and variance of this set (and thus of $\mathbf{X}^{(n)}$) would represent the mean and variance of the NASDAQ-100 equal weighted index\footnote{This means that each constituent company contributes 1\%, without taking into account market capitalization. For example, QQQE is an ETF that tracks NASDAQ-100 with equal weights.} during the last $10$ days. In our example, both ways of considering $\mathbf{X}^{(n)}$ and the corresponding statistics are valid and meaningful.
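The two views in the example above can be made concrete with a toy NumPy sketch (our own illustration with synthetic prices):

```python
import numpy as np

# X in R^{100 x 10}: 100 stocks (rows), 10 days (columns).
rng = np.random.default_rng(0)
X = 100.0 + rng.normal(size=(100, 10)).cumsum(axis=1)  # toy price paths

# View 1: X as H = 10 temporal slices in R^100; averaging the slices
# gives each company's mean opening price over the 10 days.
mean_per_stock = X.mean(axis=1)   # shape (100,)

# View 2: X as D = 100 univariate series in R^10; averaging the series
# gives the equal-weighted index value on each of the 10 days.
index_per_day = X.mean(axis=0)    # shape (10,)

print(mean_per_stock.shape, index_per_day.shape)
```

Both sets of statistics summarize the same matrix, one along each mode.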
Each gives a different interpretation of the data contained in $\mathbf{X}^{(n)}$, as well as the underlying assumption about elements being normally distributed in the set representing $\mathbf{X}^{(n)}$. Because of this, the proposed normalization layer utilizes and combines statistics from both views in order to transform the multivariate series. The proposed layer normalizes along the temporal dimension as follows: \begin{subequations} \label{eq5} \begin{align} \bar{\mathbf{c}}^{(n)} &= \frac{1}{H} \sum_{h=1}^{H} \mathbf{c}_h^{(n)}\label{eq5.1}\\ \bm{\sigma}_2^{(n)} &= \sqrt{\frac{1}{H} \sum_{h=1}^{H}\big(\mathbf{c}_h^{(n)} - \bar{\mathbf{c}}^{(n)}\big) \odot \big(\mathbf{c}_h^{(n)} - \bar{\mathbf{c}}^{(n)}\big)} \label{eq5.2}\\ \mathbf{a}_h^{(n)} &= \bm{\gamma}_2 \odot \big((\mathbf{c}_h^{(n)} - \bar{\mathbf{c}}^{(n)}) \varoslash \bm{\sigma}_2^{(n)}\big) + \bm{\beta}_2, \quad \forall h=1, \dots, H \label{eq5.3}\\ \mathbf{A}^{(n)} &= [\mathbf{a}_1^{(n)}, \dots, \mathbf{a}_h^{(n)}, \dots, \mathbf{a}_H^{(n)}] \in \mathbb{R}^{D\times H} \end{align} \end{subequations} where $\bm{\gamma}_2 \in \mathbb{R}^{D}$ and $\bm{\beta}_2 \in \mathbb{R}^D$ are two parameters of BiN that are optimized during stochastic gradient descent. After the computation steps in Eq. (\ref{eq5}), we obtain an intermediate series $\mathbf{A}^{(n)}$ that has been normalized in the temporal dimension. Basically, in Eq. (\ref{eq5}), given an input series $\mathbf{X}^{(n)}$, BiN first computes the mean temporal slice (column) $\bar{\mathbf{c}}^{(n)} \in \mathbb{R}^{D}$ and its standard deviation $\bm{\sigma}_2^{(n)} \in \mathbb{R}^{D}$ as in Eq. (\ref{eq5.1}, \ref{eq5.2}), which are then used to standardize each temporal slice of the input before applying element-wise scaling (using $\bm{\gamma}_2$) and shifting (using $\bm{\beta}_2$) as in Eq. (\ref{eq5.3}). 
While the standardizing step is independent for each sample in the training set, the final shifting and scaling parameters are shared across all samples. Here we use the subscript ($2$) in $\bm{\sigma}_2^{(n)}$, $\bm{\gamma}_2$ and $\bm{\beta}_2$ to indicate that they are associated with the second dimension, i.e., the temporal dimension, of the multivariate series. In order to interpret the effects of Eq. (\ref{eq5.1}), (\ref{eq5.2}), and (\ref{eq5.3}), we can take the same approach as the example given for NASDAQ-100 previously. That is, the input series $\mathbf{X}^{(n)}$ can be viewed as the set $\mathcal{T}^{(n)}$ consisting of $H$ temporal slices, i.e., a set consisting of $H$ points in a $D$-dimensional space. The first part in Eq. (\ref{eq5.3}), i.e. $(\mathbf{c}_h^{(n)} - \bar{\mathbf{c}}^{(n)}) \varoslash \bm{\sigma}_2^{(n)}$, centers this set of points at the origin and controls their spread while keeping their arrangement pattern similar. If we have two input series $\mathbf{X}^{(n_1)}$ and $\mathbf{X}^{(n_2)}$ with the corresponding sets $\mathcal{T}^{(n_1)}$ and $\mathcal{T}^{(n_2)}$ spreading and lying in two completely different areas of this $D$-dimensional space but with the same arrangement pattern, then without the alignment performed by the first part of Eq. (\ref{eq5.3}), we cannot effectively capture the linear or nonlinear\footnote{Nonlinear patterns can be estimated by several piece-wise linear patterns (using more than one linear projection, such as more than one convolution filter)} arrangement patterns that are similar between the two series when using, for example, a 1D convolution filter that strides along the temporal dimension, as often encountered in CNN architectures for time-series. We illustrate our example in Figure \ref{f1}. Here we should note that although BiN applies additional scaling and shifting in Eq.
(\ref{eq5.3}) after the alignment, the values of $\bm{\gamma}_2$ and $\bm{\beta}_2$ are the same for every input series, thus the sets $\mathcal{T}^{(n_1)}$ and $\mathcal{T}^{(n_2)}$ are still centered at the same point and have approximately similar spreads. Since $\bm{\gamma}_2$ and $\bm{\beta}_2$ are optimized together with the other network parameters, they enable BiN to manipulate the aligned distributions of $\mathcal{T}^{(n)}$ to match the statistics of other layers. While the effects of non-stationarity in the temporal mode are often visible and have been heavily studied, its effects when considered from the feature-dimension perspective are less obvious. To see this, let us now view the series $\mathbf{X}^{(n)}$ as the set $\mathcal{F}^{(n)}$ of $D$ points (its $D$ rows) in an $H$-dimensional space. Let us also take the previous scenario where two series, $\mathbf{X}^{(n_1)}$ and $\mathbf{X}^{(n_2)}$, have $\mathcal{T}^{(n_1)}$ and $\mathcal{T}^{(n_2)}$ scattered in different regions of a $D$-dimensional coordinate system (viewed under the temporal perspective) before the normalization step in Eq. (\ref{eq5}). When $\mathcal{T}^{(n_1)}$ and $\mathcal{T}^{(n_2)}$ are very far apart, these two series are also likely to possess $\mathcal{F}^{(n_1)}$ and $\mathcal{F}^{(n_2)}$ which are distributed in two different regions of an $H$-dimensional space, despite having very similar arrangements. This scenario also prevents a convolution filter that strides along the feature dimension from effectively capturing the prominent linear/nonlinear patterns existing in the feature dimension of all input series.
For this reason, our proposed normalization scheme also normalizes the input series along the feature dimension as follows: \begin{subequations} \label{eq6} \begin{align} \bar{\mathbf{r}}^{(n)} &= \frac{1}{D} \sum_{d=1}^{D} \mathbf{r}_d^{(n)}\label{eq6.1}\\ \bm{\sigma}_1^{(n)} &= \sqrt{\frac{1}{D} \sum_{d=1}^{D}\big(\mathbf{r}_d^{(n)} - \bar{\mathbf{r}}^{(n)}\big) \odot \big(\mathbf{r}_d^{(n)} - \bar{\mathbf{r}}^{(n)}\big)} \label{eq6.2}\\ \mathbf{b}_d^{(n)} &= \bm{\gamma}_1 \odot \big((\mathbf{r}_d^{(n)} - \bar{\mathbf{r}}^{(n)}) \varoslash \bm{\sigma}_1^{(n)}\big) + \bm{\beta}_1, \quad \forall d=1, \dots, D \label{eq6.3}\\ \mathbf{B}^{(n)} &= \begin{bmatrix} \mathbf{b}_1^{(n)} \\ \vdots \\ \mathbf{b}_d^{(n)} \\ \vdots \\ \mathbf{b}_D^{(n)}\end{bmatrix} \in \mathbb{R}^{D\times H} \label{eq6.4} \end{align} \end{subequations} where $\mathbf{r}_d^{(n)} \in \mathbb{R}^{H}$ denotes the $d$-th row of $\mathbf{X}^{(n)}$. In addition, $\bm{\gamma}_1 \in \mathbb{R}^{H}$ and $\bm{\beta}_1 \in \mathbb{R}^{H}$ are two learnable weights. After computing the steps in Eq. (\ref{eq6}), we obtain another intermediate series $\mathbf{B}^{(n)}$ that has been normalized in the feature dimension. Finally, BiN linearly combines the intermediate normalized series obtained from Eq. (\ref{eq5}) and (\ref{eq6}) to generate the output $\mathbf{T}^{(n)} \in \mathbb{R}^{D\times H}$: \begin{equation}\label{eq7} \mathbf{T}^{(n)} = \lambda_a \mathbf{A}^{(n)} + \lambda_b \mathbf{B}^{(n)} \end{equation} where $\lambda_a \in \mathbb{R}$ and $\lambda_b \in \mathbb{R}$ are two learnable scalars, which enable BiN to weigh the importance of temporal and feature normalization. Here we should note that $\lambda_a$ and $\lambda_b$ are constrained to be non-negative. This constraint is achieved during stochastic optimization by setting the value (of $\lambda_a$ or $\lambda_b$) to $0$ whenever the updated value is negative. 
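Putting Eq. (\ref{eq5})--(\ref{eq7}) together, a forward pass of BiN can be sketched as follows. This is a NumPy sketch with illustrative names; the epsilon guard and the use of `max(., 0)` to mimic the non-negativity constraint on $\lambda_a$ and $\lambda_b$ are our assumptions.

```python
import numpy as np

def bin_layer(X, gamma2, beta2, gamma1, beta1, lam_a, lam_b, eps=1e-8):
    """X: (D, H) series. Returns T = lam_a * A + lam_b * B as in Eq. (7)."""
    # temporal normalization, Eq. (5): statistics over the H columns
    A = gamma2[:, None] * (X - X.mean(axis=1, keepdims=True)) \
        / (X.std(axis=1, keepdims=True) + eps) + beta2[:, None]
    # feature normalization, Eq. (6): statistics over the D rows
    B = gamma1[None, :] * (X - X.mean(axis=0, keepdims=True)) \
        / (X.std(axis=0, keepdims=True) + eps) + beta1[None, :]
    # non-negative combination weights, Eq. (7)
    return max(lam_a, 0.0) * A + max(lam_b, 0.0) * B

D, H = 40, 10
X = np.random.default_rng(1).normal(size=(D, H))
T = bin_layer(X, np.ones(D), np.zeros(D), np.ones(H), np.zeros(H), 0.5, 0.5)
```

Setting $\lambda_b = 0$ recovers a purely temporal normalization, and vice versa, which is how the layer weighs the two views against each other during training.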
\section{Experiments}\label{experiments} \subsection{Limit Order Book} In finance, a limit order is a type of trade order to buy or sell a fixed number of shares at a specified price. In a buy (bid) limit order, the trader specifies the number of shares and the maximum price per share that he or she is willing to pay. On the contrary, in a sell (ask) limit order, the trader specifies the number of shares and the minimum share price at which he or she is willing to sell. The two types of limit order form the two sides of the limit order book (LOB): the bid and the ask sides. The limit orders are sorted such that the ones with the highest bid price are on top of the bid side and the ones with the lowest ask price are on top of the ask side. Whenever the best ask price is equal to or lower than the best bid price, those orders are executed and removed from the LOB. Since the LOB contains all the transactions related to a stock, it reflects the current supply and demand of the stock at different price levels. In the literature, numerous studies take advantage of LOB data to address different research questions, such as order flow distribution, price jumps, the random walk nature of prices, and stochastic models of limit orders, to name a few \cite{siikanen2017limit, siikanen2017drives, bouchaud2004fluctuations, cont2013price, makinen2019forecasting}. One of the problems related to the LOB that is heavily studied using machine learning methods is forecasting future mid-price movements. The mid-price, at any point in time, is the average of the best bid and best ask prices. This quantity is a virtual price since no trade can happen at the current mid-price. Since the movements of the mid-price reflect changes in market dynamics, they are considered important events to forecast.
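The mid-price and its movement label can be illustrated with a toy snippet; the price values and the zero threshold below are made up for the example.

```python
def mid_price(best_bid, best_ask):
    # the mid-price is the average of the best bid and best ask prices
    return 0.5 * (best_bid + best_ask)

def movement(prev_mid, next_mid, threshold=0.0):
    # up / down / stationary label, analogous to the three classes used in FI-2010
    change = (next_mid - prev_mid) / prev_mid
    if change > threshold:
        return "up"
    if change < -threshold:
        return "down"
    return "stationary"

m0 = mid_price(100.0, 100.2)   # 100.1
m1 = mid_price(100.1, 100.3)   # 100.2
label = movement(m0, m1)       # "up"
```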
In order to benchmark the performance of BiN, we conducted experiments using two different LOB datasets coming from two different markets: the Nordic and US markets. \subsection{Experiments using Nordic data} \subsubsection{Dataset and Experimental Setup} FI-2010 \cite{ntakaris2018benchmark} is a large-scale, publicly available Limit Order Book (LOB) dataset, which contains buy and sell limit order information (prices and volumes) over $10$ business days from $5$ Finnish stocks traded in the Helsinki Stock Exchange (operated by NASDAQ Nordic). At each order event (a point in time), the dataset contains the prices and volumes of the top $10$ best-bid and best-ask orders on both sides, leading to a $40$-dimensional vector representation. The authors of this dataset provided the labels (up, down, stationary) for the mid-price movements in the next $\{10, 20, 30, 50, 100\}$ order events. Since the majority of existing research results were reported for prediction horizons in the set $H = \{10, 20, 50\}$, we also conducted experiments with these values. Interested readers can find more details about the FI-2010 dataset in \cite{ntakaris2018benchmark}. For the FI-2010 dataset, we followed the same experimental setup proposed in \cite{tran2018temporal}, which is widely used to benchmark the performances of deep neural networks in this task. Under this setting, the data of the first $7$ days was used to train the models, and the last $3$ days were used for evaluation purposes. In this first set of experiments, we evaluated BiN in combination with the Temporal Attention augmented Bilinear Layer (TABL) network, which is one of the SoTA neural networks on the FI-2010 dataset \cite{tran2018temporal}. Since TABL architectures also take advantage of the bimodal nature of the time-series, BiN is expected to ideally complement TABL networks. To enable comparisons with prior works, the best performing architecture C(TABL) reported in \cite{tran2018temporal} was adopted in our experiments.
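To make the $40$-dimensional representation concrete, a snapshot of the $10$ best levels on each side can be flattened as follows. The interleaving order of prices and volumes is our assumption; only the dimensionality matters for the discussion.

```python
def lob_vector(asks, bids):
    """asks, bids: lists of 10 (price, volume) pairs, best level first."""
    vec = []
    for (ap, av), (bp, bv) in zip(asks, bids):
        # one level contributes ask price, ask volume, bid price, bid volume
        vec.extend([ap, av, bp, bv])
    return vec

# toy snapshot: prices step away from the touch, volumes grow with depth
asks = [(100.0 + 0.1 * i, 50 + i) for i in range(10)]
bids = [(99.9 - 0.1 * i, 40 + i) for i in range(10)]
x = lob_vector(asks, bids)   # one 40-dimensional vector per order event
```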
For this architecture, the input time-series were constructed from the $10$ most recent order events. As mentioned above, since at each order event the LOB is represented by a $40$-dimensional vector, each input series fed to C(TABL) has dimensions of $40\times 10$. All C(TABL) networks were trained with the ADAM optimizer for $80$ epochs, with an initial learning rate of $0.001$, which was reduced by a factor of $10$ at epochs $11$ and $71$. Weight decay ($0.0001$) and a max-norm constraint ($10.0$) were used for regularization. Accuracy, average Precision, Recall and F1 are reported as the performance metrics. Since FI-2010 is an imbalanced dataset, the average F1 measure is considered the main performance metric for FI-2010, following prior conventions \cite{tran2018temporal}. Here we should note that we used no validation set for FI-2010, and simply used the F1 score measured on the training set for validation purposes. Each experiment was run $5$ times and the median value measured on the test set is reported. \subsubsection{Experiment Results} \begin{table}[t!] \begin{center} \caption{Experiment Results. Methods without any indication of the normalization method use z-score normalization. Bold-face numbers denote the best F1 measure between the same model using different normalization methods.
}\label{t1} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \multicolumn{5}{c}{} \\ \hline \textbf{Models} & \textbf{Accuracy \%} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{F1 \%} \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=10$}} \\ \hline CNN\cite{tsantekidis2017forecasting} & - & $50.98$ &$65.54$ & $55.21$ \\ \hline LSTM\cite{tsantekidis2017using} & - & $60.77$ &$75.92$ & $66.33$ \\ \hline \hline C(BL) \cite{tran2018temporal} & $82.52$ & $73.89$ &$76.22$ & $75.01$ \\ \hline DeepLOB \cite{zhang2019deeplob} & $84.47$ & $84.00$ &$84.47$ & $83.40$ \\ \hline \hline % DAIN-MLP \cite{passalis2019deep} & - & $65.67$ &$71.58$ & $68.26$ \\ \hline DAIN-RNN \cite{passalis2019deep} & - & $61.80$ &$70.92$ & $65.13$ \\ \hline \hline C(TABL) \cite{tran2018temporal} & $84.70$ & $76.95$ &$78.44$ & $77.63$ \\ \hline BN-C(TABL) & $79.20$ & $68.48$ &$72.36$ & $66.87$ \\ \hline BiN-C(TABL) & $86.87$ & $80.29$ &$81.84$ & $\mathbf{81.04}$ \\ \hline \hline % \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=20$}} \\ \hline CNN\cite{tsantekidis2017forecasting} & - & $54.79$ &$67.38$ & $59.17$ \\ \hline LSTM\cite{tsantekidis2017using} & - & $59.60$ &$70.52$ & $62.37$ \\ \hline C(BL) \cite{tran2018temporal} & $72.05$ & $65.04$ &$65.23$ & $64.89$ \\ \hline DeepLOB \cite{zhang2019deeplob} & $74.85$ & $74.06$ &$74.85$ & $72.82$ \\ \hline \hline % DAIN-MLP \cite{passalis2019deep} & - & $62.10$ &$70.48$ & $65.31$ \\ \hline DAIN-RNN \cite{passalis2019deep} & - & $59.16$ &$68.51$ & $62.03$ \\ \hline \hline C(TABL) \cite{tran2018temporal} & $73.74$ & $67.18$ &$66.94$ & $66.93$ \\ \hline BN-C(TABL) & $70.70$ & $63.10$ &$63.78$ & $63.43$ \\ \hline BiN-C(TABL) & $77.28$ & $72.12$ &$70.44$ & $\mathbf{71.22}$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=50$}} \\ \hline CNN\cite{tsantekidis2017forecasting} & - & $55.58$ &$67.12$ & $59.44$ \\ \hline LSTM\cite{tsantekidis2017using} & - & $60.03$ &$68.58$ & $61.43$ \\ \hline C(BL) 
\cite{tran2018temporal} & $78.96$ & $77.85$ &$77.04$ & $77.40$ \\ \hline DeepLOB \cite{zhang2019deeplob} & $80.51$ & $80.38$ &$80.51$ & $80.35$ \\ \hline \hline % C(TABL) \cite{tran2018temporal} & $79.87$ & $79.05$ &$77.04$ & $78.44$ \\ \hline BN-C(TABL) & $77.16$ & $75.70$ &$75.04$ & $75.34$ \\ \hline BiN-C(TABL) & $88.54$ & $89.50$ &$86.99$ & $\mathbf{88.06}$ \\ \hline % \end{tabular} } \end{center} \end{table} Table \ref{t1} shows the experiment results for the three prediction horizons $H=\{10, 20, 50\}$ of C(TABL) networks using Batch Normalization and BiN, in comparison with existing results. Here we should note that the data provided in FI-2010 has been anonymized, i.e., the prices and volumes of orders were normalized. For the results reported in Table \ref{t1} without any indication of the normalization method, z-score normalization was applied. In addition, we attempted to evaluate DAIN using the C(TABL) architecture on the FI-2010 dataset; however, we could not achieve reasonable performances since this normalization strategy requires extensive tuning of three different learning rates for different computation steps. Besides, in the original paper \cite{passalis2019deep}, DAIN was only applied to MLP and RNN networks. For this reason, we report the original results of DAIN using MLP and RNN in Table \ref{t1}. In the experiments using US data, we did obtain reasonable results with DAIN, and comparisons with DAIN are made in Section \ref{us-experiments}. \begin{table}[t!]
\begin{center} \caption{Improvement comparisons between BiN-C(TABL) versus BiN-B(TABL)}\label{t3} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \multicolumn{5}{c}{} \\ \hline \textbf{Models} & \textbf{Accuracy \%} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{F1 \%} \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=10$}} \\ \hline B(TABL) \cite{tran2018temporal} & $78.91$ & $68.04$ &$71.21$ & $69.20$ \\ \hline C(TABL) \cite{tran2018temporal} & $84.70$ & $76.95$ &$78.44$ & $77.63$ \\ \hline \hline BiN-B(TABL) & $86.92$ & $80.43$ &$81.82$ & $\mathbf{81.10}$ \\ \hline BiN-C(TABL) & $86.87$ & $80.29$ &$81.84$ & $81.04$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=20$}} \\ \hline B(TABL) \cite{tran2018temporal} & $70.80$ & $63.14$ &$62.25$ & $62.22$ \\ \hline \hline C(TABL) \cite{tran2018temporal} & $73.74$ & $67.18$ &$66.94$ & $66.93$ \\ \hline \hline BiN-B(TABL) & $77.54$ & $72.56$ &$70.22$ & $\mathbf{71.29}$ \\ \hline BiN-C(TABL) & $77.28$ & $72.12$ &$70.44$ & $71.22$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=50$}} \\ \hline B(TABL) \cite{tran2018temporal} & $75.58$ & $74.58$ &$73.09$ & $73.64$ \\ \hline C(TABL) \cite{tran2018temporal} & $79.87$ & $79.05$ &$77.04$ & $78.44$ \\ \hline \hline BiN-B(TABL) & $88.44$ & $89.36$ &$86.92$ & $87.96$ \\ \hline BiN-C(TABL) & $88.54$ & $89.50$ &$86.99$ & $\mathbf{88.06}$ \\ \hline \end{tabular} } \end{center} \end{table} It is clear that our proposed BiN layer (BiN-C(TABL)) when used to normalize the input data yielded significant improvements over BN and z-score normalization when applied to the same network. The improvements are obvious for all prediction horizons. Especially, for the longest horizon $H=50$, BiN enhanced the C(TABL) network with up to $10\%$ improvement (from $78.44\%$ to $88.06\%$) in average F1 measure. 
Compared to DAIN, the performances achieved by our normalization strategy coupled with the C(TABL) or DeepLOB networks are superior to those of DAIN coupled with MLP or RNN. Regarding BN when used as an input normalization scheme, it is obvious that BN deteriorated the performance of C(TABL) networks. For example, in the case of $H=10$, adding BN to the C(TABL) network led to more than a $10\%$ drop in averaged F1. This phenomenon is expected since BN was originally designed to reduce covariate shift between hidden layers of convolutional neural networks, rather than as a mechanism to normalize input time-series. Comparing BiN-C(TABL) with DeepLOB \cite{zhang2019deeplob}, a SoTA CNN-LSTM architecture having 11 hidden layers, it is clear that our proposed normalization layer helped a TABL network having only 2 hidden layers to significantly close the gaps when $H=10$ and $H=20$ ($81.04\%$ versus $83.40\%$ for $H=10$, and $71.22\%$ versus $72.82\%$ for $H=20$), while outperforming DeepLOB by a large margin when $H=50$ ($88.06\%$ versus $80.35\%$). In order to investigate how much improvement BiN can contribute to neural networks of different complexities, we evaluated BiN with a smaller TABL architecture, namely B(TABL) as proposed in \cite{tran2018temporal}. B(TABL) has only one hidden layer with a total of $5843$ parameters, compared to C(TABL), which has two hidden layers with a total of $11343$ parameters. The results are shown in Table \ref{t3}. It is clear that BiN significantly boosted both B(TABL) and C(TABL) architectures in different prediction horizons, with BiN-B(TABL) networks performing as well as BiN-C(TABL) networks in all prediction horizons, making the additional hidden layer in BiN-C(TABL) redundant. Here we should note that adding our proposed normalization layer to B(TABL) networks only leads to a mere increase of $102$ parameters while achieving the same performances as BiN-C(TABL) networks, which have approximately twice the number of parameters.
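The $102$-parameter overhead can be verified directly from BiN's definition: $\bm{\gamma}_2, \bm{\beta}_2 \in \mathbb{R}^{D}$, $\bm{\gamma}_1, \bm{\beta}_1 \in \mathbb{R}^{H}$, plus the two scalars $\lambda_a$ and $\lambda_b$, which for the $40\times 10$ input gives $2\cdot 40 + 2\cdot 10 + 2 = 102$:

```python
def bin_param_count(D, H):
    # gamma2, beta2 in R^D; gamma1, beta1 in R^H; two scalars lambda_a, lambda_b
    return 2 * D + 2 * H + 2

print(bin_param_count(40, 10))  # → 102
```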
Since BN was proposed to normalize hidden representations, we also experimented using BiN to normalize hidden representations in TABL networks. The results are shown in Table \ref{t2}, where BiN-C(TABL) and BN-C(TABL) denote the results when BiN and BN were only applied to the input, while BiN-C(TABL)-BiN and BN-C(TABL)-BN denote the results when BiN and BN were applied to both the input and hidden representations. As we can see from Table \ref{t2}, there are very small differences between the two arrangements, except a noticeable improvement for BN when the prediction horizon is $H=10$. For BiN, this result implies that adding normalization to the hidden layers brings no additional benefit for C(TABL) networks when the input data has been properly normalized. \begin{table}[t!] \begin{center} \caption{Comparisons between Bilinear Normalization and Batch Normalization when applied to only the input layer (BiN-C(TABL) and BN-C(TABL)) or all layers (BiN-C(TABL)-BiN and BN-C(TABL)-BN)}\label{t2} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|} \multicolumn{5}{c}{} \\ \hline \textbf{Models} & \textbf{Accuracy \%} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{F1 \%} \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=10$}} \\ \hline BN-C(TABL) & $79.20$ & $68.48$ &$72.36$ & $66.87$ \\ \hline BiN-C(TABL) & $86.87$ & $80.29$ &$81.84$ & $\mathbf{81.04}$ \\ \hline \hline BN-C(TABL)-BN & $78.72$ & $68.02$ &$72.58$ & $69.98$ \\ \hline BiN-C(TABL)-BiN & $86.84$ & $80.25$ &$81.85$ & $81.03$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=20$}} \\ \hline BN-C(TABL) & $70.70$ & $63.10$ &$63.78$ & $63.43$ \\ \hline BiN-C(TABL) & $77.28$ & $72.12$ &$70.44$ & $\mathbf{71.22}$ \\ \hline \hline BN-C(TABL)-BN & $71.28$ & $63.77$ &$63.65$ & $63.75$ \\ \hline BiN-C(TABL)-BiN & $76.68$ & $71.15$ &$70.48$ & $70.80$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=50$}} \\ \hline BN-C(TABL) & $77.16$ & $75.70$ &$75.04$ & $75.34$
\\ \hline BiN-C(TABL) & $88.54$ & $89.50$ &$86.99$ & $\mathbf{88.06}$ \\ \hline \hline BN-C(TABL)-BN & $76.74$ & $75.34$ &$74.66$ & $74.97$ \\ \hline BiN-C(TABL)-BiN & $88.44$ & $89.36$ &$86.92$ & $87.96$ \\ \hline \end{tabular} } \end{center} \end{table} \subsection{Experiments using US data}\label{us-experiments} \subsubsection{Dataset and Experiment Setup} While the Nordic dataset provides a reasonable testbed for our evaluation purposes, the Nordic market is less liquid than the US market, which is the biggest stock market worldwide. The number of intra-day orders in large-cap US stocks is significantly higher than that of Nordic stocks, making it harder to predict future market conditions. For the US market, we procured orders from the TotalView-ITCH feed and obtained the LOB data of Amazon and Google from the 22nd of September 2015 to the 5th of October 2015. The trading hours in NASDAQ US span from 09:30 to 16:00 (EST), and only orders submitted during this period were considered in our analysis. After the filtering process, we obtained approximately 13 million order events over $10$ working days. Similar to the Nordic data, we used the first $7$ days for training the prediction models and the last $3$ days for testing purposes. In addition to forecasting the types of mid-price dynamics (up, down, stationary) at a fixed future horizon (Setting 1), we also evaluated the models in a more active setting (Setting 2), in which models were trained to predict the next movement (up or down) of the mid-price and when it occurs. That is, we have both classification (movement type) and regression (horizon value) objectives in Setting 2, with a loss function consisting of the cross-entropy and the mean squared error. The movement labels were derived following the same procedure used in \cite{ntakaris2018benchmark}, which includes price smoothing and movement classification based on a threshold of $0.00001$.
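A minimal sketch of the Setting-2 objective described above. Equally weighting the two terms is our assumption, since the text only states that the loss consists of a cross-entropy and a mean squared error component.

```python
import math

def softmax(z):
    # numerically stable softmax over a list of logits
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def setting2_loss(class_logits, true_class, pred_horizon, true_horizon):
    p = softmax(class_logits)
    ce = -math.log(p[true_class])              # classification term (up / down)
    mse = (pred_horizon - true_horizon) ** 2   # regression term (when it occurs)
    return ce + mse                            # equal weighting assumed here

loss = setting2_loss([2.0, 0.5], 0, 12.0, 10.0)
```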
For the experiments with US data, in addition to C(TABL) architecture, we also evaluated with the DeepLOB architecture \cite{zhang2019deeplob} as the predictors. Different from the Nordic dataset which was pre-normalized, the US data contains raw values for the prices and volumes. For this reason, we experimented with two static normalization methods, namely z-score normalization and min-max normalization with the results denoted as z-C(TABL) and mm-C(TABL) for C(TABL) networks, and z-DeepLOB and mm-DeepLOB for DeepLOB networks. \begin{table}[] \centering \caption{Results for C(TABL) architecture in experiment Setting 1 of US data} \label{t4} \resizebox{\linewidth}{!}{% \begin{tabular}{|l|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Models}} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} & \textbf{Recall (\%)} & \textbf{F1 (\%)} \\ \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=10$}} \\ \hline \hline C(TABL) & $50.38$ & $41.46$ & $33.74$ & $23.62$ \\ \hline z-C(TABL) & $54.47$ & $50.05$ & $43.38$ & $42.50$ \\ \hline mm-C(TABL) & $53.13$ & $48.23$ & $40.90$ & $38.70$ \\ \hline BN-C(TABL) & $54.77$ & $50.20$ & $42.94$ & $41.64$ \\ \hline DAIN-C(TABL) & $62.35$ & $60.26$ & $61.64$ & $60.62$ \\ \hline BiN-C(TABL) & $68.31$ & $67.03$ & $62.97$ & $\mathbf{64.31}$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=20$}} \\ \hline \hline C(TABL) & $34.20$ & $37.17$ & $33.37$ & $17.74$ \\ \hline z-C(TABL) & $47.88$ & $47.44$ & $47.20$ & $46.45$ \\ \hline mm-C(TABL) & $47.37$ & $46.94$ & $46.75$ & $45.99$ \\ \hline BN-C(TABL) & $49.50$ & $49.29$ & $48.65$ & $47.81$ \\ \hline DAIN-C(TABL) & $64.46$ & $64.42$ & $64.41$ & $64.40$ \\ \hline BiN-C(TABL) & $65.52$ & $66.15$ & $65.15$ & $\mathbf{65.26}$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=50$}} \\ \hline \hline C(TABL) & $37.30$ & $36.08$ & $33.63$ & $25.83$ \\ \hline z-C(TABL) & $51.41$ & $50.78$ & $50.15$ & $50.23$ \\ \hline mm-C(TABL) & $51.71$ & $51.21$ & $49.93$ 
& $50.21$ \\ \hline BN-C(TABL) & $51.78$ & $51.37$ & $50.46$ & $50.72$ \\ \hline DAIN-C(TABL) & $65.85$ & $63.98$ & $64.73$ & $64.25$ \\ \hline BiN-C(TABL) & $67.51$ & $65.98$ & $64.99$ & $\mathbf{65.38}$ \\ \hline \end{tabular}% } \end{table} \subsubsection{Experiment Results} Table \ref{t4} shows the experiment results in Setting 1 of the US data for the C(TABL) architecture. First of all, it is clear that we obtained the worst performance when using raw data to train the predictors (the results associated with C(TABL)). Between the two static normalization methods, z-score normalization exhibited a better ability to preprocess the data than min-max normalization. Both static normalization methods significantly improved the quality of the training data. Among the adaptive normalization methods, the performances obtained with BN are inferior to those of DAIN and BiN. Overall, the proposed normalization layer, when combined with the C(TABL) architecture, yielded the best performances in all prediction horizons. Table \ref{t5} shows the experiment results in Setting 1 of the US data for DeepLOB networks. Similar to the results obtained for C(TABL) networks, we also obtained the worst performance when using raw data to train the DeepLOB architecture. Between z-score normalization and min-max normalization, using the former led to slightly better results than the latter. While BN showed no superiority over z-score normalization, both DAIN and BiN outperformed the static normalization methods. Among all normalization methods, BiN was the most suitable normalization technique to combine with the DeepLOB architecture.
\begin{table}[] \centering \caption{Results for DeepLOB network architecture in experiment Setting 1 of US data} \label{t5} \resizebox{\linewidth}{!}{% \begin{tabular}{|l|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Models}} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} & \textbf{Recall (\%)} & \textbf{F1 (\%)} \\ \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=10$}} \\ \hline \hline DeepLOB & $50.19$ & $31.52$ & $33.51$ & $23.28$ \\ \hline z-DeepLOB & $53.19$ & $44.98$ & $43.26$ & $42.21$ \\ \hline mm-DeepLOB & $51.83$ & $42.84$ & $39.99$ & $36.96$ \\ \hline BN-DeepLOB & $53.85$ & $45.78$ & $43.35$ & $42.24$ \\ \hline DAIN-DeepLOB & $66.80$ & $64.26$ & $64.94$ & $64.54$ \\ \hline BiN-DeepLOB & $69.79$ & $69.82$ & $63.21$ & $\mathbf{65.05}$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=20$}} \\ \hline \hline DeepLOB & $35.66$ & $23.44$ & $33.29$ & $18.47$ \\ \hline z-DeepLOB & $48.47$ & $47.59$ & $47.93$ & $47.36$ \\ \hline mm-DeepLOB & $48.46$ & $47.80$ & $47.97$ & $47.67$ \\ \hline BN-DeepLOB & $49.24$ & $48.14$ & $48.44$ & $47.81$ \\ \hline DAIN-DeepLOB & $67.35$ & $67.39$ & $67.14$ & $\mathbf{67.19}$ \\ \hline BiN-DeepLOB & $67.50$ & $68.65$ & $66.97$ & $67.07$ \\ \hline \hline \multicolumn{5}{|c|}{\textit{Prediction Horizon $H=50$}} \\ \hline \hline DeepLOB & $38.62$ & $33.32$ & $33.32$ & $20.84$ \\ \hline z-DeepLOB & $49.85$ & $49.97$ & $49.12$ & $49.36$ \\ \hline mm-DeepLOB & $50.11$ & $51.57$ & $48.49$ & $49.29$ \\ \hline BN-DeepLOB & $50.27$ & $50.17$ & $49.73$ & $49.66$ \\ \hline DAIN-DeepLOB & $66.86$ & $65.67$ & $65.19$ & $65.10$ \\ \hline BiN-DeepLOB & $67.86$ & $66.11$ & $65.56$ & $\mathbf{65.73}$ \\ \hline \end{tabular}% } \end{table} \begin{table}[] \centering \caption{Results for C(TABL) and DeepLOB architectures in experiment Setting 2 of US data} \label{t6} \resizebox{0.65\linewidth}{!}{% \begin{tabular}{l|c|c|} \cline{2-3} & \textbf{F1 (\%)} & \textbf{RMSE} \\ \hline \multicolumn{1}{|l|}{C(TABL)} & $33.68$ & 
$79994377.4940$ \\ \hline \multicolumn{1}{|l|}{z-C(TABL)} & $53.27$ & $4118.9763$ \\ \hline \multicolumn{1}{|l|}{mm-C(TABL)} & $51.97$ & $110628.9429$ \\ \hline \multicolumn{1}{|l|}{BN-C(TABL)} & $53.57$ & $331.2658$ \\ \hline \multicolumn{1}{|l|}{DAIN-C(TABL)} & $51.42$ & $731.5555$ \\ \hline \multicolumn{1}{|l|}{BiN-C(TABL)} & $\mathbf{54.79}$ & $\mathbf{231.4644}$ \\ \hline \hline \multicolumn{1}{|l|}{DeepLOB} & $41.91$ & $250.7388$ \\ \hline \multicolumn{1}{|l|}{z-DeepLOB} & $54.21$ & $250.7388$ \\ \hline \multicolumn{1}{|l|}{mm-DeepLOB} & $45.20$ & $250.7388$ \\ \hline \multicolumn{1}{|l|}{BN-DeepLOB} & $54.95$ & $250.7388$ \\ \hline \multicolumn{1}{|l|}{DAIN-DeepLOB} & $32.16$ & $\mathbf{246.2643}$ \\ \hline \multicolumn{1}{|l|}{BiN-DeepLOB} & $\mathbf{59.88}$ & $250.7388$ \\ \hline \end{tabular}% } \end{table} In experiment Setting 2, the models were trained to predict the type of the next movement of mid-price, which is measured by F1 score, as well as the horizon when it happens, which is measured by Root Mean Squared Error (RMSE). The performances of C(TABL) and DeepLOB networks using different input normalization methods are shown in Table \ref{t6}. For both network architectures, the best F1 scores were obtained using the proposed normalization method. Z-score standardization and BN performed similarly, being the second best in terms of F1 score. Min-max normalization, again, showed inferior performances compared to z-score normalization. Surprisingly, DAIN performed poorly in terms of F1 score when compared to z-score normalization in this experiment setting. Regarding the prediction of the horizon value, BiN achieved the best RMSE among all normalization methods used for the C(TABL) architecture. For the DeepLOB architecture, a peculiar phenomenon can be observed: for all normalization methods, we obtained the same RMSE, even between different runs, with DAIN as the only exception. 
For these models, the gradient updates toward the end of the training process seemed to only affect the classification objective and not the regression one. Even though DAIN achieved the best RMSE among all methods when applied to the DeepLOB architecture, the combination of DAIN and DeepLOB performed poorly in terms of F1 score. From the results obtained for both Setting 1 and Setting 2, we can see that the proposed normalization method performs consistently, being the best normalization method for SoTA neural networks in most cases. \section{Conclusions}\label{conclusions} In this paper, we proposed the Bilinear Input Normalization (BiN) layer, a completely data-driven time-series normalization strategy, which is designed to take into consideration the bimodal nature of financial time-series and aligns the multivariate time-series in both the feature and temporal dimensions. The parameters of the proposed normalization method are optimized in an end-to-end manner together with the other parameters of a neural network. Using large-scale limit order books coming from the Nordic and US markets, we evaluated the performance of BiN in comparison with other normalization techniques on different forecasting problems related to future mid-price dynamics. The experimental results showed that BiN performed consistently when combined with different state-of-the-art neural networks, being the most suitable normalization method in the majority of scenarios. \section{Acknowledgement} The authors wish to acknowledge CSC – IT Center for Science, Finland, for computational resources.
\section{Introduction} Quantum channels, which describe transformations between input and output states, are present in every quantum information and communication processing task. However, physical channels are inherently noisy, which limits their possible applications. Therefore, extensive research on optimal methods of information transmission is essential for quantum communication. One way to enhance the amount of reliably transmitted information is to reduce the effects of noise. Among the methods specially tailored for this task are error correction, error mitigation, and error suppression techniques~\cite{Review,QEC}. Another way to approach the problem of detrimental noise is to instead use the noise as a resource \cite{Verstraete,zanardi17,Engineering_capacity,fidelity}. This way, we can enhance the quantities that measure channel transmission properties, like fidelity, purity, or capacity. A full characterization of quantum channel properties is in general a very challenging undertaking. To make the problem more tractable, one can introduce additional symmetries, like the covariance property of channels. By definition, a quantum channel $\Lambda$ is covariant with respect to unitary representations $U$, $V$ of a finite (or compact) group $G$ if \begin{equation} \label{eq:covgen} \Lambda\left[U(g)\rho U^{\dagger}(g)\right]=V(g)\Lambda[\rho]V^{\dagger}(g)\qquad \forall \ g\in G, \end{equation} for any valid density operator $\rho$. Such maps are also known as {\it $G$-covariant}. The seminal results come from Scutaru~\cite{Scutaru}, who proved a Stinespring-type theorem in the $C^{\ast}$-algebraic framework, providing a basis for more applied research. In particular, SU(2)-covariant channels were used to describe entanglement in spin systems~\cite{Schliemann} and dimerization of quantum spin chains~\cite{Nachtergaele2017}.
In quantum information, covariant channels help to analyze the additivity property of the Holevo capacity~\cite{651037} and the minimal output entropy~\cite{irr-cov1,irr-cov2,Fan1,irr-cov3,7790801}. The covariance property also allows one to prove strong converse properties for the classical capacity~\cite{PhysRevLett.103.070504} and the entanglement-assisted classical capacity~\cite{Datta2016}. There are also known methods for constructing positive covariant maps~\cite{Kopszak,Studz}. A special class of $U(1)$-covariant maps consists of phase-covariant qubit maps, for which $U(\phi)=V(\phi)=\exp(-i\sigma_3\phi)$, where $\sigma_3=\rm{diag}(1,-1)$, $\phi\in\mathbb{R}$. Phase-covariant channels provide evolutions that combine pure dephasing with energy absorption and emission~\cite{phase-cov-PRL,phase-cov}. At first, they were introduced phenomenologically in the description of thermalization and dephasing processes that go beyond the Markovian approximation \cite{PC1}. The associated dynamical equations were later derived microscopically for a weakly-coupled spin-boson model under the secular approximation \cite{PC3}. Phase-covariant maps were applied in the contexts of the quantum speed of evolution~\cite{QSTCov}, non-Markovianity of quantum evolution~\cite{e23030331}, quantum optics~\cite{Marvian_2013,RevModPhys.79.555}, and quantum metrology~\cite{PhysRevA.94.042101}. They play a substantial role in the description of phase-covariant devices~\cite{Buscemi:07} and quantum cloning machines~\cite{PhysRevA.62.012302}. The main goal of our paper is to prove that transmission performance can be improved by allowing for non-unitality of quantum channels. This is shown on the example of fidelity and purity measures, which quantify the distortion between input and output states. We start with analytical derivations of formulas for the minimal and maximal channel fidelity on pure states, as well as maximal output purities in terms of Schatten $p$-norms.
The pure states that correspond to the respective extremal values are also provided. Next, we ask about the evolution of quantum entanglement under the assumption that one half of a maximally entangled state is sent through the phase-covariant channel. As an entanglement measure, we choose concurrence, which is also related to the entanglement of formation. In the main part, we provide important applications of our results. We consider families of quantum channels that differ only by the degree of non-unitality. By comparing the fidelity and purity measures, we show that unital maps always display the worst performance for every analyzed measure except the minimal channel fidelity. Moreover, this gain in channel performance grows monotonically with the degree of non-unitality -- that is, the closer the channel is to being unital, the smaller the increase of the corresponding measure. Similar behavior is observed for concurrence and entanglement of formation, which measure entanglement between two qubits. In the presented examples, we observe not only how to prolong entanglement but also how to speed up its rebirth after sudden death. We also show how to engineer the desired degree of non-unitality with a classical mixture of the unital and maximally non-unital phase-covariant quantum maps. In this case, the probability distribution can be treated as noise that is beneficial for the properties of quantum evolution. Finally, it is important to note that the enhanced performance of non-unital channels is observed at any moment in time. This is a novelty compared to previous works on noise suppression by counteracting its effects with another form of noise~\cite{Klesse,fidelity,Engineering_capacity}, where the positive effects were only temporary. 
\section{Phase-covariant channels} \label{sec:PhaseCovChann} Consider a class of qubit maps covariant with respect to phase rotations on the Bloch sphere, which are represented by a unitary transformation \begin{equation} \label{eq:conds} U(\phi)=\exp(-i\sigma_3\phi),\qquad\phi\in\mathbb{R},\qquad\sigma_3=\begin{pmatrix} 1 & 0\\0 & -1\end{pmatrix}. \end{equation} Such maps are called {\it phase-covariant} and satisfy the covariance condition \begin{equation} \label{eq:covohase} \Lambda\left[U(\phi)\rho U^{\dagger}(\phi)\right] =U(\phi)\Lambda[\rho]U^{\dagger}(\phi)\qquad \forall \ \phi\in \mathbb{R} \end{equation} for any input density operator $\rho$. Note that $U(\phi)$ defines a continuous group parameterized by the angle $\phi$. Up to the unitary transformation $\rho\mapsto\exp(-i\sigma_3\theta)\rho\exp(i\sigma_3\theta)$, $\theta\in\mathbb{R}$, the most general form of $\Lambda$ reads~\cite{phase-cov,phase-cov-PRL} \begin{equation} \label{eq:actionofLambda} \Lambda[\rho]=\frac 12 \left[(\mathbb{I}+\lambda_{\ast}\sigma_3)\mathrm{Tr}\rho +\lambda_1\sigma_1\mathrm{Tr}(\rho\sigma_1)+\lambda_1\sigma_2\mathrm{Tr}(\rho\sigma_2) +\lambda_3\sigma_3\mathrm{Tr}(\rho\sigma_3)\right], \end{equation} where $\sigma_1$, $\sigma_2$, $\sigma_3$ denote the Pauli matrices. The real numbers $\lambda_1$ and $\lambda_3$ are the eigenvalues of $\Lambda$ corresponding to the eigenvectors \begin{equation} \Lambda[\sigma_1]=\lambda_1\sigma_1,\qquad \Lambda[\sigma_2]=\lambda_1\sigma_2,\qquad \Lambda[\sigma_3]=\lambda_3\sigma_3. \end{equation} The last parameter, $\lambda_{\ast}$, is responsible for non-unitality -- that is, for failing to preserve the identity operator $\mathbb{I}$ ($\Lambda[\mathbb{I}]\neq \mathbb{I}$). It also determines the map's invariant state ($\Lambda[\rho_{\ast}]=\rho_{\ast}$), which reads \begin{equation}\label{rhoast} \rho_{\ast}=\frac{1}{2}\left[\mathbb{I}+\frac{\lambda_{\ast}}{1-\lambda_3}\sigma_3\right]. 
\end{equation} For $\lambda_\ast=0$, one recovers a symmetric subclass of Pauli channels, which are unital qubit maps. In this case, the invariant state $\rho_\ast=\mathbb{I}/2$ is maximally mixed. Therefore, one can say that the non-unitality property of phase-covariant channels is controlled by $\lambda_{\ast}$. Finally, to ensure that $\Lambda$ is a quantum channel (completely positive, trace-preserving map), its parameters have to satisfy the conditions \cite{phase-cov} \begin{equation} |\lambda_\ast|+|\lambda_3|\leq 1,\qquad 4\lambda_1^2+\lambda_\ast^2\leq (1+\lambda_3)^2. \end{equation} \section{Performance measures of quantum channels} \label{sec:PerfMeas} \subsection{Channel fidelity} In quantum information theory, the fidelity measures the distance that separates two quantum states \cite{Nielsen,Zyczkowski}. Therefore, it can be used to determine their distinguishability. According to Uhlmann's definition \cite{Uhlmann}, the fidelity between states represented by the density operators $\rho$ and $\sigma$ is given by \begin{equation}\label{statesfidelity} F(\rho,\sigma):=\left(\mathrm{Tr}\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^2. \end{equation} Observe that $0\leq F(\rho,\sigma)\leq 1$, and the equality $F(\rho,\sigma)=1$ holds if and only if $\rho=\sigma$. This formula served as a starting point to introduce the channel fidelity $F(\rho,\Lambda[\rho])$; that is, the fidelity between its input $\rho$ and output $\Lambda[\rho]$ states \cite{Raginsky}. It measures the distortion of an initial state under the use of a channel $\Lambda$. For pure inputs represented by rank-1 projectors $P$, the channel fidelity is bounded by the minimal and maximal fidelity on pure input states \cite{Sommers3}, \begin{equation}\label{channelfidelity} \begin{split} f_{\min}(\Lambda)&=\min_PF(P,\Lambda[P])=\min_P \mathrm{Tr}(P\Lambda[P]),\\ f_{\max}(\Lambda)&=\max_PF(P,\Lambda[P])=\max_P \mathrm{Tr}(P\Lambda[P]). 
\end{split} \end{equation} Due to its concavity property, the minimal fidelity for mixed inputs is reached on a pure state. Hence, $f_{\min}(\Lambda)$ is also the minimal channel fidelity on mixed states. However, $f_{\max}(\Lambda)$ is not the maximal fidelity in general, as the maximal value $\max_\rho F(\rho,\Lambda[\rho])=F(\rho_\ast,\Lambda[\rho_\ast])=1$ is reached on the invariant state $\rho_\ast$ of $\Lambda$. \begin{Theorem}\label{Th1} The minimal and maximal channel fidelities on pure input states under the action of a phase-covariant channel are given by the following formulas, \begin{equation}\label{fidmin} f_{\min}(\Lambda)=\left\{ \begin{aligned} &\frac 12 \left(1+\lambda_1-\frac{\lambda_{\ast}^2}{4(\lambda_3-\lambda_1)}\right)\qquad{\rm for}\qquad \lambda_3>\lambda_1,\, |\lambda_\ast|\leq 2(\lambda_3-\lambda_1),\\ &\frac 12 (1+\lambda_3-|\lambda_{\ast}|)\qquad{\rm otherwise}, \end{aligned}\right. \end{equation} \begin{equation}\label{fidmax} f_{\max}(\Lambda)=\left\{ \begin{aligned} &\frac 12 \left(1+\lambda_1+\frac{\lambda_{\ast}^2}{4(\lambda_1-\lambda_3)}\right)\qquad{\rm for}\qquad \lambda_3<\lambda_1,\, |\lambda_\ast|\leq 2(\lambda_1-\lambda_3),\\ &\frac 12 (1+\lambda_3+|\lambda_{\ast}|)\qquad{\rm otherwise}. \end{aligned} \right. \end{equation} \end{Theorem} \begin{proof} Take a pure state represented by a rank-1 projector \begin{equation}\label{proj} P=\frac 12 \left(\mathbb{I}+\sum_{k=1}^3x_k\sigma_k\right), \end{equation} where $x_k$ are real numbers such that $\sum_{k=1}^3x_k^2=1$. The action of a phase-covariant channel $\Lambda$ onto $P$ produces \begin{equation}\label{akcja} \Lambda[P]=\frac 12 \left(\mathbb{I}+\lambda_{\ast}\sigma_3 +\lambda_1x_1\sigma_1+\lambda_1x_2\sigma_2+\lambda_3x_3\sigma_3\right), \end{equation} and hence the corresponding channel fidelity is, by definition, \begin{equation} F(P,\Lambda[P])=\mathrm{Tr}(P\Lambda[P])=\frac 12 [1+\lambda_1(x_1^2+x_2^2)+\lambda_3x_3^2+\lambda_{\ast}x_3]. 
\end{equation} The next step is to find the $x_3$ that minimizes or maximizes $F$. For simplicity, we introduce the function \begin{equation} G(P,\Lambda[P])=2F(P,\Lambda[P])-1=\lambda_1(1-x_3^2)+\lambda_3x_3^2+\lambda_{\ast}x_3 =(\lambda_3-\lambda_1)x_3^2+\lambda_{\ast}x_3+\lambda_1, \end{equation} which reaches its extremal values for the same parameters $x_3$ as the function $F(P,\Lambda[P])$. From now on, we consider $G(P,\Lambda[P])$ as a function of $x_3$ with fixed channel parameters and denote it by $G(x_3)$. Recall that the extremal points of a function are found by equating its first derivative to zero and checking the sign of the second derivative at those points. In our case, setting the first derivative to zero yields a single critical point, \begin{equation}\label{x3} G^\prime(x_3)=2(\lambda_3-\lambda_1)x_3+\lambda_{\ast}=0\qquad\implies\qquad x_3=-\frac{\lambda_{\ast}}{2(\lambda_3-\lambda_1)}. \end{equation} Note that $|x_3|\leq 1$, which, together with eq. (\ref{x3}), gives us an additional constraint on the channel parameters, \begin{equation} |\lambda_\ast|\leq 2|\lambda_3-\lambda_1|. \end{equation} Calculating the second derivative yields \begin{equation} G^{\prime\prime}\left(x_3=-\frac{\lambda_{\ast}}{2(\lambda_3-\lambda_1)}\right) =2(\lambda_3-\lambda_1). \end{equation} This way, we find a local minimum for $\lambda_3>\lambda_1$ and a local maximum for $\lambda_3<\lambda_1$. For $\lambda_3=\lambda_1$, $G(x_3)$ is a linear function, so there are no local extrema. Since the domain of $x_3$ is closed, the global extrema of $G$ are reached either at the local extremal points or at the endpoints $x_3=\pm 1$, where $G(x_3=\pm 1)=\lambda_3\pm\lambda_{\ast}$. 
Our results can be summarized as follows, \begin{equation} f_{\min}(\Lambda)=\left\{ \begin{aligned} &\frac 12 \min\left\{1+\lambda_1-\frac{\lambda_{\ast}^2}{4(\lambda_3-\lambda_1)}, 1+\lambda_3-|\lambda_{\ast}|\right\}\qquad{\rm for}\qquad \lambda_3>\lambda_1,\, |\lambda_\ast|\leq 2(\lambda_3-\lambda_1),\\ &\frac 12 (1+\lambda_3-|\lambda_{\ast}|)\qquad{\rm otherwise}, \end{aligned}\right. \end{equation} \begin{equation} f_{\max}(\Lambda)=\left\{ \begin{aligned} &\frac 12 \max\left\{1+\lambda_1+\frac{\lambda_{\ast}^2}{4(\lambda_1-\lambda_3)}, 1+\lambda_3+|\lambda_{\ast}|\right\}\qquad{\rm for}\qquad \lambda_3<\lambda_1,\, |\lambda_\ast|\leq 2(\lambda_1-\lambda_3),\\ &\frac 12 (1+\lambda_3+|\lambda_{\ast}|)\qquad{\rm otherwise}. \end{aligned} \right. \end{equation} Finally, observing that the first term in the curly brackets is always minimal for $f_{\min}(\Lambda)$ and maximal for $f_{\max}(\Lambda)$, we recover the formulas from eqs. (\ref{fidmin}--\ref{fidmax}). \end{proof} \begin{Remark} The minimal and maximal channel fidelities on pure inputs are reached on a one-parameter family of rank-1 projectors. Every projector $P$ is characterized by three parameters $x_k$, $k=1,2,3$, of which only $x_3$ is fixed in the process of finding the extremal points of $F(P,\Lambda[P])$. Due to the constraint $\sum_{k=1}^3x_k^2=1$ from eq. (\ref{proj}), $x_2$ depends on the choice of $x_1$. Hence, we are left with a free parameter $x_1$ that varies between $\pm\sqrt{1-x_3^2}$. \end{Remark} For $\lambda_\ast=0$, one recovers the formulas for the Pauli channels \cite{norms,fidelity}, \begin{equation} f_{\min}(\Lambda)=\frac{1+\lambda_{\min}}{2},\qquad f_{\max}(\Lambda)=\frac{1+\lambda_{\max}}{2}, \end{equation} where $\lambda_{\max}=\max_{k=1,3}\lambda_k$ and $\lambda_{\min}=\min_{k=1,3}\lambda_k$. Note that, contrary to the case with $\lambda_\ast\neq 0$, $f_{\min}(\Lambda)$ and $f_{\max}(\Lambda)$ depend only on a single eigenvalue. 
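The closed forms of Theorem \ref{Th1} can be cross-checked numerically by scanning the pure states, using the fact that $F(P,\Lambda[P])$ depends on $P$ only through $x_3$, so one may set $x_2=0$ without loss of generality. A minimal Python sketch (our own illustration; the function names are ours, not part of the derivation):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def fidelity_scan(l1, l3, ls, n=2001):
    """Brute-force min/max of F(P, Lambda[P]) over pure states.

    We set x2 = 0 and x1 = sqrt(1 - x3**2), which is no restriction
    since F depends on the pure state only through x3.
    """
    vals = []
    for x3 in np.linspace(-1.0, 1.0, n):
        x1 = np.sqrt(1.0 - x3**2)
        P = 0.5 * (I2 + x1 * sx + x3 * sz)
        # Lambda[P] written out from the channel action, eq. (akcja)
        LP = 0.5 * (I2 + ls * sz + l1 * x1 * sx + l3 * x3 * sz)
        vals.append(np.trace(P @ LP).real)
    return min(vals), max(vals)

def fidelity_closed(l1, l3, ls):
    """Closed forms of Theorem 1."""
    if l3 > l1 and abs(ls) <= 2 * (l3 - l1):
        fmin = 0.5 * (1 + l1 - ls**2 / (4 * (l3 - l1)))
    else:
        fmin = 0.5 * (1 + l3 - abs(ls))
    if l3 < l1 and abs(ls) <= 2 * (l1 - l3):
        fmax = 0.5 * (1 + l1 + ls**2 / (4 * (l1 - l3)))
    else:
        fmax = 0.5 * (1 + l3 + abs(ls))
    return fmin, fmax

# Sample parameters satisfying the complete positivity conditions
l1, l3, ls = 0.5, 0.3, 0.2
print(fidelity_scan(l1, l3, ls))
print(fidelity_closed(l1, l3, ls))
```

For these sample parameters the brute-force scan and the closed forms agree up to the grid resolution.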
\subsection{Maximal output purity} The purity measures how close a given state is to a pure state. This notion can also be applied to quantum channels $\Lambda$, where one checks the purity of the output $\Lambda[P]$ for pure inputs $P$. The higher the purity of the output, the less distorted the input is. However, one is usually interested in the best-case scenario, which corresponds to the maximal output purity. This property is measured by the maximal output $p$-norm defined by \begin{equation} \nu_p(\Lambda):=\max_P\|\Lambda[P]\|_p, \end{equation} where the Schatten $p$-norm reads \cite{TQI,Bhatia} \begin{align} &\|\Lambda[P]\|_p:=(\mathrm{Tr}\Lambda[P]^p)^{1/p},\qquad 1\leq p<\infty,\\ &\|\Lambda[P]\|_\infty:=\max_Q\mathrm{Tr}(Q\Lambda[P]), \end{align} and $P$ and $Q$ are rank-1 projectors. Here, let us consider two of the most popular choices: $p=2$ and $p=\infty$. \begin{Theorem}\label{Th2} The maximal output $2$-norm of phase-covariant channels satisfies \begin{equation}\label{nu2} \nu_2^2(\Lambda)=\left\{ \begin{aligned} &\frac 12 \left(1+\lambda_1^2 +\frac{\lambda_1^2\lambda_\ast^2}{\lambda_1^2-\lambda_3^2}\right),\qquad |\lambda_1|>|\lambda_3|,\,|\lambda_3\lambda_\ast|\leq \lambda_1^2-\lambda_3^2,\\ &\frac 12 (1+\lambda_3^2+\lambda_\ast^2+2|\lambda_3\lambda_\ast|)\qquad{\rm otherwise}. \end{aligned} \right. \end{equation} \end{Theorem} \begin{proof} Using eq. (\ref{akcja}), we find \begin{equation} \mathrm{Tr}(\Lambda[P]^2)=\frac 12 \left[1+(\lambda_3^2-\lambda_1^2)x_3^2+2\lambda_3\lambda_\ast x_3+\lambda_1^2+\lambda_\ast^2\right]. \end{equation} Define an auxiliary function $K(\Lambda):=2\mathrm{Tr}(\Lambda[P]^2)-1$ whose extremal points coincide with those of $\nu_2^2(\Lambda)$. 
To find the extrema of $K(\Lambda)\equiv K(x_3)$, we calculate the first and second derivatives with respect to $x_3$: \begin{align} K^\prime(x_3)&=2(\lambda_3^2-\lambda_1^2)x_3+2\lambda_\ast \lambda_3=0\qquad\implies\qquad x_3=-\frac{\lambda_3\lambda_\ast}{\lambda_3^2-\lambda_1^2},\label{eq:funK1}\\ K^{\prime\prime}(x_3)&=2(\lambda_3^2-\lambda_1^2). \end{align} Since $|x_3|\leq 1$, the above formula for $x_3$ provides an additional constraint on the channel parameters, \begin{equation} |\lambda_3\lambda_\ast|\leq |\lambda_3^2-\lambda_1^2|. \end{equation} Now, if $\lambda_3^2>\lambda_1^2$, then we obtain a local minimum, whereas $\lambda_3^2<\lambda_1^2$ gives rise to a local maximum. However, if $\lambda_3^2=\lambda_1^2$, then there are no local extrema. In this case, the global extremal points are reached on the endpoints of the domain $x_3=\pm 1$, where the function $K$ takes the values \begin{equation} K(x_3=\pm 1)=\lambda_\ast^2+\lambda_3^2\pm 2\lambda_3\lambda_\ast. \end{equation} Therefore, the formula for the maximal output 2-norm reads \begin{equation} \nu_2^2(\Lambda)=\left\{ \begin{aligned} &\frac 12 \max\left\{1+K\left(x_3=-\frac{\lambda_3\lambda_\ast}{\lambda_3^2-\lambda_1^2}\right), 1+\lambda_\ast^2+\lambda_3^2+2|\lambda_3\lambda_\ast|\right\},\qquad |\lambda_1|>|\lambda_3|,\,|\lambda_3\lambda_\ast|\leq |\lambda_1^2-\lambda_3^2|,\\ &\frac 12 (1+\lambda_\ast^2+\lambda_3^2+2|\lambda_3\lambda_\ast|)\qquad{\rm otherwise}. \end{aligned} \right. \end{equation} Observing that, in the range provided by the first line of this equation, \begin{equation} K\left(x_3=-\frac{\lambda_3\lambda_\ast}{\lambda_3^2-\lambda_1^2}\right)=\lambda_1^2 \left(1+\frac{\lambda_\ast^2}{\lambda_1^2-\lambda_3^2}\right)\geq \lambda_\ast^2+\lambda_3^2+2|\lambda_3\lambda_\ast|, \end{equation} we finally arrive at eq. (\ref{nu2}). 
\end{proof} After putting $\lambda_\ast=0$, one recovers the squared maximal output $2$-norm for the Pauli channels \cite{Ruskai}, \begin{equation} \nu_2^2(\Lambda)=\frac 12 \left[1+\max_\alpha\lambda_\alpha^2\right]. \end{equation} Unlike in the formula for $\lambda_\ast\neq 0$, here $\nu_2^2(\Lambda)$ depends only on the squared channel parameters. \begin{Theorem}\label{Th3} The maximal output $\infty$-norm of phase-covariant channels is equal to \begin{equation}\label{nuinf} \nu_{\infty}(\Lambda)=\frac 12 \left[1+\max\{|\lambda_1|,|\lambda_3\pm\lambda_\ast|\}\right]. \end{equation} \end{Theorem} \begin{proof} Let us take two rank-1 projectors, \begin{equation} P=\frac 12 \left(\mathbb{I}+\sum_{k=1}^3x_k\sigma_k\right),\qquad Q=\frac 12 \left(\mathbb{I}+\sum_{k=1}^3y_k\sigma_k\right),\qquad \sum_{k=1}^3x_k^2=\sum_{k=1}^3y_k^2=1. \end{equation} In what follows, we make use of the trace condition $0\leq\mathrm{Tr}(PQ)\leq 1$, which is equivalent to \begin{equation} -1\leq\sum_{k=1}^3x_ky_k\leq 1. \end{equation} On the other hand, we find \begin{equation}\label{trqp} \mathrm{Tr}(Q\Lambda[P])=\frac 12 \left[1+\lambda_\ast y_3+\lambda_1x_1y_1 +\lambda_1x_2y_2+\lambda_3x_3y_3\right]. \end{equation} From the form of $\mathrm{Tr}(Q\Lambda[P])$, it is easy to see that it has no local extrema in the projectors' parameters (due to being a linear function in all $x_k$, $y_k$). Hence, the global maximum is reached on one of the edges: $x_k=\pm 1$, $y_k=\pm 1$. After making this substitution in eq. (\ref{trqp}), one arrives at eq. (\ref{nuinf}). \end{proof} The formula for the maximal output $\infty$-norm \begin{equation} \nu_{\infty}(\Lambda)=\frac 12 \left[1+\max_{\alpha=1,3}|\lambda_\alpha|\right] \end{equation} for $\lambda_\ast=0$ was derived in \cite{norms}. There, it was also observed that for the Pauli channels one has $\nu_\infty=f_{\max}$ if $\max\lambda_\alpha=\max|\lambda_\alpha|$. Interestingly, an analogous relation holds for phase-covariant channels. 
Namely, $\nu_\infty=f_{\max}$ if either $\lambda_\ast=0$ or $\lambda_3\geq|\lambda_1|+|\lambda_\ast|$. \subsection{Concurrence} Assume that we extend our qubit system by composing it with another qubit system. The second subsystem evolves according to a phase-covariant channel while the first subsystem remains unchanged. If initially the qubit pair is maximally entangled, then the total state changes according to $\rho_W\mapsto\rho_W^\prime=(\mathrm{id}\otimes\Lambda)[\rho_W]$, where $\rho_W=(1/4)\sum_{i,j=0}^1 |ii\rangle\langle jj|$. The entanglement between two qubit systems can be measured using Wootters' concurrence \cite{Wooters1,Wooters2} \begin{equation} c(\rho)=\max\{0,\sqrt{r_1}-\sqrt{r_2}-\sqrt{r_3}-\sqrt{r_4}\}, \end{equation} where $r_1\geq r_2\geq r_3\geq r_4$ are the eigenvalues of $X(\rho):=\rho(\sigma_2\otimes\sigma_2)\overline{\rho}(\sigma_2\otimes\sigma_2)$. Observe that, in terms of the Pauli matrices, the state $\rho_W$ has the form \begin{equation} \rho_W=\frac{1}{4}\left(\mathbb{I}\otimes\mathbb{I}+\sigma_1\otimes \sigma_1-\sigma_2\otimes \sigma_2+\sigma_3\otimes \sigma_3\right), \end{equation} and therefore it is straightforward to show that its evolution is given by \begin{equation} \rho_W^\prime=\frac{1}{4}\left(\mathbb{I}\otimes\mathbb{I} +\lambda_{\ast}\mathbb{I}\otimes\sigma_3 +\lambda_1\sigma_1\otimes\sigma_1-\lambda_1\sigma_2\otimes\sigma_2 +\lambda_3\sigma_3\otimes\sigma_3\right). 
\end{equation} In the computational basis, $X[\rho_W^\prime]$ is represented by the matrix \begin{equation} X[\rho_W^\prime]= \frac{1}{16}\begin{pmatrix} 4\lambda_1^2+(1+\lambda_3)^2-\lambda_\ast^2 & 0 & 0 & 4\lambda_1(1+\lambda_3+\lambda_\ast) \\ 0 & (1-\lambda_3)^2-\lambda_\ast^2 & 0 & 0 \\ 0 & 0 & (1-\lambda_3)^2-\lambda_\ast^2 & 0 \\ 4\lambda_1(1+\lambda_3-\lambda_\ast) & 0 & 0 & 4\lambda_1^2+(1+\lambda_3)^2-\lambda_\ast^2 \end{pmatrix}, \end{equation} whose eigenvalues read \begin{equation} \begin{split} R_1=R_2=\frac{1}{16}\Big[(1-\lambda_3)^2-\lambda_\ast^2\Big],\qquad R_\pm=\frac{1}{16}\Big[2\lambda_1\pm \sqrt{(1+\lambda_3)^2-\lambda_\ast^2}\Big]^2. \end{split} \end{equation} Due to $R_+\geq R_1=R_2\geq R_-$, the corresponding formula for concurrence reduces to \begin{equation}\label{crhoW} c[\rho_W^\prime]=\frac{1}{2}\max\left\{0,2|\lambda_1|-\sqrt{(\lambda_3-1)^2-\lambda_\ast^2}\right\}. \end{equation} If one takes $\lambda_\ast=0$, the above equation reproduces the concurrence \begin{equation} c[\rho_W^\prime]=\frac{1}{2}\max\{0,2|\lambda_1|+\lambda_3-1\} \end{equation} of the Pauli channels satisfying $\Lambda[\sigma_2]=\lambda_1\sigma_2$. \section{Applications: Using non-unitality to improve channel performance} \label{sec:Applications} The measures of fidelity, purity, and entanglement derived in the previous section depend on the channel eigenvalues $\lambda_1$, $\lambda_3$, as well as the parameter $\lambda_\ast$ that vanishes for unital channels. Therefore, a question arises: given two quantum maps, one unital and one non-unital, can we determine which one performs better in quantum communication tasks according to those measures? To answer this, consider two phase-covariant qubit channels: a unital (Pauli) channel $\Lambda_{\rm U}$ and a non-unital channel $\Lambda_{\rm NU}$. 
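As a side check, the closed-form concurrence (\ref{crhoW}) can be verified numerically against Wootters' definition. A minimal Python sketch (our own illustration; the function names are ours):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_out(l1, l3, ls):
    # (id ⊗ Λ)[ρ_W] written in its Pauli expansion
    return 0.25 * (np.kron(I2, I2) + ls * np.kron(I2, sz)
                   + l1 * np.kron(sx, sx) - l1 * np.kron(sy, sy)
                   + l3 * np.kron(sz, sz))

def wootters_concurrence(rho):
    # c = max(0, sqrt(r1) - sqrt(r2) - sqrt(r3) - sqrt(r4))
    YY = np.kron(sy, sy)
    X = rho @ YY @ rho.conj() @ YY
    r = np.sort(np.linalg.eigvals(X).real)[::-1]
    s = np.sqrt(np.clip(r, 0.0, None))   # guard tiny negative round-off
    return max(0.0, s[0] - s[1] - s[2] - s[3])

def concurrence_closed(l1, l3, ls):
    # eq. (crhoW)
    return 0.5 * max(0.0, 2 * abs(l1) - np.sqrt((l3 - 1)**2 - ls**2))

l1, l3, ls = 0.6, 0.4, 0.3   # satisfies the complete positivity conditions
print(wootters_concurrence(rho_out(l1, l3, ls)))
print(concurrence_closed(l1, l3, ls))
```

Both values agree to numerical precision; for the identity channel ($\lambda_1=\lambda_3=1$, $\lambda_\ast=0$) the formula returns $c=1$, as it should.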
Assume that these channels have common eigenvalues and share three eigenvectors, so that \begin{equation} \Lambda_{\rm U}[\sigma_k]=\lambda_k\sigma_k,\qquad \Lambda_{\rm NU}[\sigma_k]=\lambda_k\sigma_k,\qquad k=1,2,3\qquad (\lambda_2\equiv\lambda_1). \end{equation} The final eigenvector (corresponding to the eigenvalue $\lambda_0=1$) is associated with the invariant state of the channel and depends on the value of $\lambda_\ast$. For $\Lambda_{\rm U}$, the invariant state is the maximally mixed state $\rho_0=\mathbb{I}/2$. However, for $\Lambda_{\rm NU}$, the invariant state is instead given by $\rho_\ast$ in eq. (\ref{rhoast}). Since $\Lambda_{\rm U}$ and $\Lambda_{\rm NU}$ differ only in this one parameter, we can directly compare the results of Section \ref{sec:PerfMeas} for the corresponding measures. \begin{Remark} Non-unital phase-covariant channels exhibit better performance than their unital counterparts when the maximal fidelity, maximal output purity, and concurrence are measured. The opposite behavior is observed for the minimal fidelity, which decreases for non-zero $\lambda_\ast$. \end{Remark} From now on, assume that $\Lambda_{\rm NU}$ is {\it maximally non-unital}; that is, its parameter $\lambda_\ast$ takes the highest absolute value allowed, \begin{equation} |\lambda_\ast|=1-|\lambda_3|, \end{equation} which follows from the complete positivity conditions for the phase-covariant channels. Now, to construct a channel $\Lambda$ with intermediate values of $\lambda_\ast$, we take convex combinations of $\Lambda_{\rm NU}$ and $\Lambda_{\rm U}$. The resulting channel \begin{equation}\label{mix} \Lambda=(1-p)\Lambda_{\rm U}+p\Lambda_{\rm NU},\qquad 0\leq p\leq 1, \end{equation} shares its eigenvalues with both $\Lambda_{\rm U}$ and $\Lambda_{\rm NU}$. Moreover, the parameter that characterizes its non-unitality satisfies the formula \begin{equation} \lambda_\ast^{\pm}=\pm p(1-|\lambda_3|). 
\end{equation} Hence, $\lambda_\ast^{\pm}$ can be treated as a measure of $\Lambda$'s non-unitality. The greater the $p$, the more non-unital the mixture, with $p=0$ and $p=1$ corresponding to the unital and maximally non-unital maps, respectively. This notion can be generalized to all non-unital phase-covariant maps. \begin{Remark} For phase-covariant qubit maps, we introduce the measure of non-unitality \begin{equation} \mathrm{NU}(\Lambda)=\frac{|\lambda_\ast|}{1-|\lambda_3|}, \end{equation} which determines their degree of non-unitality. In particular, if $\mathrm{NU}(\Lambda)=1$, then $\Lambda$ is maximally non-unital. On the other hand, $\mathrm{NU}(\Lambda)=0$ corresponds to unital maps. \end{Remark} In general, quantum channels are used to describe the dynamics of open quantum systems; that is, systems that interact with an external environment. Continuous time-evolution is provided by dynamical maps, which are time-parameterized quantum channels $\Lambda(t)$ with the initial condition $\Lambda(0)=\mathrm{id}$. Such maps are often solutions of dynamical equations called {\it master equations}. Quantum systems with memoryless (Markovian) evolution satisfy the semigroup master equation $\dot{\Lambda}(t)=\mathcal{L}\Lambda(t)$ with a constant generator $\mathcal{L}$. The presence of strong system-environment interactions makes it necessary to consider more complicated equations, e.g. with time-dependent generators or memory kernels. For our considerations, however, the explicit form of the master equation does not matter. In what follows, we consider examples of phase-covariant dynamical maps given by eq. (\ref{mix}) and analyze the evolution of their purity and fidelity measures, as well as the concurrence of the evolved maximally entangled state $\rho_W$. 
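In the Pauli transfer-matrix picture, where a channel acts as a $4\times 4$ matrix on the vector $(1,x_1,x_2,x_3)^T$, the mixing construction (\ref{mix}) and the measure from Remark 3 take a particularly simple form. A minimal Python sketch (our own illustration; function names are ours):

```python
import numpy as np

def transfer_matrix(l1, l3, ls):
    # Pauli transfer matrix acting on the Bloch vector (1, x1, x2, x3)
    T = np.diag([1.0, l1, l1, l3])
    T[3, 0] = ls   # the non-unitality parameter sits in the (3, 0) entry
    return T

l1, l3, p = 0.5, 0.4, 0.7
T_U  = transfer_matrix(l1, l3, 0.0)             # unital
T_NU = transfer_matrix(l1, l3, 1.0 - abs(l3))   # maximally non-unital
T_mix = (1 - p) * T_U + p * T_NU                # eq. (mix)

lam_star = T_mix[3, 0]                  # equals p * (1 - |l3|)
nu = abs(lam_star) / (1.0 - abs(l3))    # NU(Lambda) from Remark 3, equals p
print(lam_star, nu)
```

The mixture inherits the eigenvalues $1,\lambda_1,\lambda_1,\lambda_3$ of both ingredients, and its non-unitality measure reproduces the mixing probability $p$.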
\subsection{Example 1 -- Exponential decay} \FloatBarrier Let us consider a maximally non-unital dynamical map $\Lambda_{\rm NU}(t)$ characterized by \begin{equation}\label{eigexp} \lambda_1(t)=e^{-t},\qquad \lambda_3(t)=e^{-2t},\qquad \lambda_\ast(t)=1-e^{-2t}, \end{equation} and a unital map $\Lambda_{\rm U}(t)$ that shares eigenvalues with $\Lambda_{\rm NU}(t)$. Observe that both channels are Markovian semigroups, where $\Lambda_{\rm NU}(t)$ corresponds to amplitude damping and $\Lambda_{\rm U}(t)$ to anisotropic dephasing. For any $0<p<1$, the mixture $\Lambda(t)$ is not a semigroup itself \cite{CC_GAD}. It is straightforward to derive the formulas for \begin{itemize} \item the minimal and maximal fidelities: \begin{equation} f_{\min}[\Lambda(t)]=\frac 12 [1-p+(1+p)e^{-2t}],\qquad f_{\max}[\Lambda(t)]=\left\{ \begin{aligned} &\frac{1-e^{-2t}}{4(1-e^{-t})}[2+p^2\sinh t] \qquad{\rm for}\qquad p\leq\frac{1-e^{-t}}{\sinh t},\\ &\frac 12 [1+p+(1-p)e^{-2t}]\qquad{\rm for}\qquad p>\frac{1-e^{-t}}{\sinh t}; \end{aligned} \right. \end{equation} \item the maximal output purities: \begin{equation} \nu_2^2[\Lambda(t)]=\frac{1+p^2}{2}+\frac{1-p^2}{2}e^{-2t},\qquad \nu_\infty[\Lambda(t)]=\left\{ \begin{aligned} &\frac 12 (1+e^{-t}) \qquad{\rm for}\qquad p\leq\frac{1-e^{-t}}{2\sinh t},\\ &\frac 12 [1+p+(1-p)e^{-2t}]\qquad{\rm for}\qquad p>\frac{1-e^{-t}}{2\sinh t}. \end{aligned} \right. \end{equation} \end{itemize} Note that $f_{\min}$ and $\nu_2^2$ are given by simple expressions with exponential decay. In contrast, the formulas for $f_{\max}$ and $\nu_\infty$ are much more involved, having two potential outcomes depending on the values of $t$ and $p$. 
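The closed forms quoted above can be checked against the general expressions of Theorem \ref{Th1} evaluated at $\lambda_1=e^{-t}$, $\lambda_3=e^{-2t}$, $\lambda_\ast=p(1-e^{-2t})$. A short numerical sketch in Python (our own illustration):

```python
import numpy as np

def thm1(l1, l3, ls):
    # General minimal/maximal channel fidelities of Theorem 1
    if l3 > l1 and abs(ls) <= 2 * (l3 - l1):
        fmin = 0.5 * (1 + l1 - ls**2 / (4 * (l3 - l1)))
    else:
        fmin = 0.5 * (1 + l3 - abs(ls))
    if l3 < l1 and abs(ls) <= 2 * (l1 - l3):
        fmax = 0.5 * (1 + l1 + ls**2 / (4 * (l1 - l3)))
    else:
        fmax = 0.5 * (1 + l3 + abs(ls))
    return fmin, fmax

def example1(t, p):
    # Closed forms quoted in Example 1
    fmin = 0.5 * (1 - p + (1 + p) * np.exp(-2 * t))
    if p <= (1 - np.exp(-t)) / np.sinh(t):
        fmax = (1 - np.exp(-2 * t)) / (4 * (1 - np.exp(-t))) \
               * (2 + p**2 * np.sinh(t))
    else:
        fmax = 0.5 * (1 + p + (1 - p) * np.exp(-2 * t))
    return fmin, fmax

for t in [0.3, 1.0, 2.5]:
    for p in [0.2, 0.6, 1.0]:
        l1, l3 = np.exp(-t), np.exp(-2 * t)
        ls = p * (1 - l3)
        assert np.allclose(thm1(l1, l3, ls), example1(t, p))
print("Example 1 formulas match Theorem 1")
```

Note that the branch condition $p\leq(1-e^{-t})/\sinh t$ is algebraically equivalent to $|\lambda_\ast|\leq 2(\lambda_1-\lambda_3)$ for these parameters.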
Despite the range conditions being implicit functions of time, both $f_{\max}$ and $\nu_\infty$ are continuous: at their respective matching points $t_\ast$, defined by $p=(1-e^{-t_\ast})/\sinh t_\ast$ for $f_{\max}$ and $p=(1-e^{-t_\ast})/(2\sinh t_\ast)$ for $\nu_\infty$, one has \begin{equation} \lim_{t\to t_\ast^{\pm}}f_{\max}[\Lambda(t)]=e^{-t_\ast}(1+\sinh t_\ast),\qquad \lim_{t\to t_\ast^{\pm}}\nu_\infty[\Lambda(t)]=\frac 12 (1+e^{-t_\ast}). \end{equation} Our results are plotted in Fig.\ref{exp}. It is clear that all the measures asymptotically decay in time and the curves corresponding to distinct values of $p$ cross only at $t=0$. Moreover, the minimal fidelity monotonically decreases with the increase of $p$, whereas all the other measures monotonically increase with $p$. In addition, $f_{\max}$ and $\nu_\infty$ have the same asymptotic values. For $p=1$, $f_{\max}$, $\nu_2$, and $\nu_\infty$ reach their maximal value of 1. The only function that ever drops to zero is $f_{\min}$ for the maximally non-unital map $\Lambda_{\rm NU}$, in the limit $t\to\infty$. \begin{figure}[htb!] \includegraphics[width=0.8\textwidth]{fig1.pdf} \caption{Plots for exponentially decaying channel eigenvalues representing time-evolution of the minimal fidelity ($a$), maximal fidelity ($b$), maximal output 2-norm ($c$), and maximal output $\infty$-norm ($d$). The color curves correspond to the channel mixtures with $p=0$ (red), $p=0.5$ (purple), $p=0.7$ (blue), and $p=1$ (yellow).} \label{exp} \end{figure} \FloatBarrier \subsection{Example 2 -- Oscillations} \FloatBarrier This time, we take the dynamical map $\Lambda_{\rm NU}(t)$ whose parameters oscillate according to \begin{equation}\label{eigosc} \lambda_1(t)=\cos t,\qquad \lambda_3(t)=\cos^2t,\qquad \lambda_\ast(t)=\sin^2t, \end{equation} together with the unital map $\Lambda_{\rm U}(t)$ with the exact same eigenvalues. Note that both channels are non-invertible due to $\lambda_k(t)$ vanishing for finite times. 
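One can quickly confirm that the mixtures built on these oscillating parameters remain completely positive at all times by checking the conditions from Section \ref{sec:PhaseCovChann} along the trajectory. A minimal Python sketch (our own illustration):

```python
import numpy as np

def is_cp(l1, l3, ls, tol=1e-12):
    # Complete positivity conditions for phase-covariant maps (Section 2):
    # |l*| + |l3| <= 1  and  4 l1^2 + l*^2 <= (1 + l3)^2
    return (abs(ls) + abs(l3) <= 1 + tol
            and 4 * l1**2 + ls**2 <= (1 + l3)**2 + tol)

ts = np.linspace(0.0, 2 * np.pi, 1001)
for p in [0.0, 0.5, 1.0]:
    # mixture parameters: l1 = cos t, l3 = cos^2 t, l* = p sin^2 t
    ok = all(is_cp(np.cos(t), np.cos(t)**2, p * np.sin(t)**2) for t in ts)
    print(p, ok)
```

Both conditions hold identically here, since $p\sin^2t+\cos^2t\leq 1$ and $(1+\cos^2t)^2-4\cos^2t=\sin^4t\geq p^2\sin^4t$ for $0\leq p\leq 1$.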
The oscillatory behaviour manifests itself also in the associated measures, as one finds \begin{itemize} \item the minimal fidelity: \begin{equation} f_{\min}[\Lambda(t)]=\left\{ \begin{aligned} &\frac{\sin^2t(4\cos t+p^2\sin^2t)}{8\cos t(1-\cos t)} \qquad{\rm for}\qquad -1<\cos t<0,\,p\leq\frac{2|\cos t|}{\sin^2t}(1-\cos t),\\ &\frac 12 [1+\cos^2t-p\sin^2t]\qquad{\rm otherwise}; \end{aligned} \right.\\ \end{equation} \item the maximal fidelity: \begin{equation} f_{\max}[\Lambda(t)]=\left\{ \begin{aligned} &\frac{\sin^2t(4\cos t+p^2\sin^2t)}{8\cos t(1-\cos t)} \qquad{\rm for}\qquad 0<\cos t<1,\,p\leq\frac{2\cos t}{\sin^2t}(1-\cos t),\\ &\frac 12 [1+\cos^2t+p\sin^2t]\qquad{\rm otherwise}; \end{aligned} \right.\\ \end{equation} \item the maximal output purities: \begin{equation} \nu_2^2[\Lambda(t)]=\frac 12 (1+\cos^2t+p^2\sin^2t),\qquad \nu_\infty[\Lambda(t)]=\frac 12 (1+\max\{|\cos t|,|\cos^2t+p\sin^2t|\}). \end{equation} \end{itemize} This time, all the measures except for $\nu_2^2$ are given by relatively complicated formulas, even though the parameters of the dynamical map $\Lambda(t)$ are given by simple oscillations. Unlike in the previous example, the functions describing $f_{\min}$, $f_{\max}$, and $\nu_\infty$ are no longer smooth but piecewise analytic. Additionally, the conditions in the expressions for extremal fidelities depend not only on $p$ but also on the sign of the cosine function. We plot our results in Fig.\ref{osc}. As expected, the channel measures demonstrate a similar oscillatory behaviour to that of $\lambda_1(t)$, $\lambda_3(t)$, and $\lambda_\ast(t)$. However, whereas these parameters and the extremal fidelities are $2\pi$-periodic, the maximal output norms are $\pi$-periodic instead. All the plotted curves cross at $t=k\pi$, $k\in\mathbb{N}$, with some being collinear for wider ranges of time. The points of non-smoothness for the extremal fidelities and $\nu_\infty$ at $p=0$ correspond to $\pi/2+k\pi$, $k\in\mathbb{N}$. 
Again, the higher the value of $p$, the smaller $f_{\min}$ and the greater the functions $f_{\max}$, $\nu_2$, $\nu_\infty$ at any fixed time. Just like for the exponentially decaying $\lambda_k(t)$, $f_{\max}$, $\nu_2$, and $\nu_\infty$ reach their maximal value of 1 for $p=1$. The only function that ever reaches zero is $f_{\min}[\Lambda(t)]$: at $t=\pi+2k\pi$ regardless of the choice of $p$, and additionally at $t=\pi/2+2k\pi$ and $t=3\pi/2+2k\pi$ for the maximally non-unital mixture ($p=1$). In \cite{Anindita}, it was shown that non-monotonicity of the Gaussian channel fidelity implies non-Markovianity of the evolution. We obtain a similar correspondence for extremal channel fidelities, as Example 1 features a Markovian evolution (monotonic functions) and Example 2 deals with a non-Markovian evolution (non-monotonic functions) \cite{CC_GAD}. \begin{figure}[htb!] \includegraphics[width=0.8\textwidth]{fig2.pdf} \caption{Plots for oscillating channel eigenvalues representing time-evolution of the minimal fidelity ($a$), maximal fidelity ($b$), maximal output 2-norm ($c$), and maximal output $\infty$-norm ($d$). The curves correspond to the channel mixtures with $p=0$ (red), $p=0.3$ (black), $p=0.5$ (purple), $p=0.7$ (blue), and $p=1$ (yellow).} \label{osc} \end{figure} \FloatBarrier \subsection{Example 3 -- Entanglement death and rebirth} \FloatBarrier Finally, we take a closer look at what happens after sending one half of a maximally entangled qubit pair through the mixtures analyzed in the earlier examples. From eq. (\ref{crhoW}), we find that the concurrence is given by \begin{equation}\label{c1} c[\Lambda(t)[\rho_W]]=\max\{0,e^{-t}(1-\sqrt{1-p^2}\sinh t)\} \end{equation} for $\Lambda(t)$ defined via exponential functions in eq. (\ref{eigexp}) and by \begin{equation}\label{c2} c[\Lambda(t)[\rho_W]]=\frac 12 \max\{0,2|\cos t|-\sqrt{1-p^2}\sin^2 t\} \end{equation} if one instead takes the oscillating functions from eq. (\ref{eigosc}). 
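For the exponential case (\ref{c1}), the moment of sudden death can be read off directly: the concurrence vanishes once $\sinh t=1/\sqrt{1-p^2}$, which happens at a finite time only for $p<1$. A short Python sketch (our own illustration; function names are ours):

```python
import numpy as np

def concurrence_exp(t, p):
    # eq. (c1): mixture built on exponentially decaying eigenvalues
    return max(0.0, np.exp(-t) * (1 - np.sqrt(1 - p**2) * np.sinh(t)))

def death_time(p):
    # c1 first vanishes when sinh(t) = 1/sqrt(1 - p^2); finite only for p < 1
    return np.arcsinh(1 / np.sqrt(1 - p**2))

for p in [0.0, 0.5, 0.7]:
    td = death_time(p)
    # the concurrence is positive before td and zero from td onwards
    print(p, td, concurrence_exp(td, p))
```

The death time grows with $p$, in line with the observation that non-unitality postpones entanglement sudden death.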
In Fig.\ref{conc}.($a$)--($b$), we plot both functions for different values of $p$. Observe that in Fig.\ref{conc}.($a$), corresponding to exponentially decaying eigenvalues, the concurrence monotonically decays. With the increase of $p$, the entanglement of $\Lambda(t)[\rho_W]$ gets prolonged and its sudden death is postponed -- up until $p=1$, for which it persists indefinitely. In Fig.\ref{conc}.($b$), on the other hand, we see that the state experiences periodic entanglement death and rebirth. The less non-unital the mixture is, the longer the gap between entanglement death and rebirth. Moreover, the maximally entangled state is recovered at $t=k\pi$, $k\in\mathbb{N}$, when $c[\Lambda(t)[\rho_W]]=1$. \begin{figure}[htb!] \includegraphics[width=0.8\textwidth]{fig3.pdf} \caption{Graphical representation of the evolution of the concurrence $c[\Lambda(t)[\rho_W]]$ ($a$--$b$) and the corresponding entanglement of formation $E_f(c)$ ($c$--$d$) for exponentially decaying ($a$, $c$) and oscillating ($b$, $d$) channel eigenvalues. The color curves correspond to the channel mixtures with $p=0$ (red), $p=0.5$ (purple), $p=0.7$ (blue), and $p=1$ (yellow).} \label{conc} \end{figure} The concurrence is directly related to another entanglement measure: the entanglement of formation \cite{Wooters1,Audenaert} \begin{equation} E_f(\rho)=h\left[\frac{1+\sqrt{1-c^2(\rho)}}{2}\right], \end{equation} where \begin{equation} h(x)=-x\log_2x-(1-x)\log_2(1-x). \end{equation} The main difference between them is that only the entanglement of formation is a resource-based, information-theoretic measure \cite{Wootters4}. In Fig.\ref{conc}.($c$)--($d$), we plot $E_f[\Lambda(t)[\rho_W]]$ based on the concurrence from eqs. (\ref{c1}) and (\ref{c2}), respectively. Observe that both measures reach their minimal and maximal values at the same points in time, even though their in-between values differ. 
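The map $c\mapsto E_f$ is monotone on $[0,1]$, which explains why both measures peak and vanish at the same instants. A minimal Python sketch of the conversion (our own illustration):

```python
import numpy as np

def binary_entropy(x):
    # h(x) = -x log2(x) - (1 - x) log2(1 - x), with h(0) = h(1) = 0
    if x == 0.0 or x == 1.0:
        return 0.0
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def entanglement_of_formation(c):
    # E_f as a monotone function of the concurrence c in [0, 1]
    return binary_entropy((1 + np.sqrt(1 - c**2)) / 2)

print(entanglement_of_formation(0.0))  # separable state
print(entanglement_of_formation(1.0))  # maximally entangled state
```

The endpoints map to $E_f(0)=0$ and $E_f(1)=1$, while intermediate concurrences give strictly intermediate $E_f$.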
Therefore, $E_f=0$ and $E_f=1$ again correspond to separable and maximally entangled states, respectively. \FloatBarrier \section{Conclusions} We analyzed properties of phase-covariant channels with varying degrees of non-unitality. By fixing the channel eigenvalues and changing only its invariant state, we showed how to engineer the channel extremal fidelities, maximal output purity, and concurrence when the channel acts on one half of a maximally entangled state. We presented examples of mixing two semigroups and two non-invertible dynamical maps. Our results confirmed that, among the measures we considered, only the minimal channel fidelity cannot be improved by introducing more non-unitality to the quantum channel. In other words, the more non-unital the channels we took, the more pure and less distorted were the output states. This held true at every point in time, so the increase in channel performance was not merely temporary, as was the case when engineering fidelity and classical capacity for unital maps \cite{Marshall,fidelity,Engineering_capacity}. Similarly, non-unital channels were better suited for prolonging quantum entanglement, even leading to its repeated rebirth. We claim that it is possible to formulate a resource theory for the non-unitality of quantum channels, similarly to non-invertibility in ref. \cite{invertibility_measure}. Recall that quantum resource theories are used to quantify desirable quantum effects, like quantum entanglement or non-Markovianity. We consider non-unitality as a dynamical quantum resource \cite{QRT} for phase-covariant dynamical maps. Unital maps can be identified with free operations, which are resource non-increasing. Our non-unitality measure from Remark 3 is a good candidate for a resource quantifier, as it is a continuous function on quantum maps that measures resourcefulness.
In further studies, it would be interesting to analyze other quantities that characterize channel performance, like the von Neumann entropy or the channel capacity. One could also check whether it is possible to engineer temporarily increasing fidelity and purity via master equations with memory kernels, as for the Pauli channels in refs. \cite{Marshall,fidelity}. Another open question concerns possible generalizations to qudit systems. \section{Acknowledgements} This research was funded in whole or in part by the National Science Centre, Poland, Grant numbers 2021/43/D/ST2/00102 (KS) and 2020/39/D/ST2/01234 (MS). For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
\section{Introduction} Ultra-relativistic collisions, so-called ``Little Bangs'', of almost fully ionized Au atoms are observed at the four experiments (BRAHMS, PHENIX, PHOBOS and STAR) of the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, New York. The aim of these experiments is to create new forms of matter that existed in Nature a few microseconds after the Big Bang, the creation of our Universe. A consistent picture emerged after the first three years of running the RHIC experiments: quarks indeed become deconfined, but they also behave collectively, hence this hot matter acts like a liquid~\cite{Adcox:2004mh}, not like the ideal gas theorists had anticipated when defining the term QGP. The situation is as if prisoners (quarks and gluons confined in hadrons) had broken out of their cells at nearly the same time, only to find themselves on the crowded jail-yard, coupled to all the other escapees. This strong coupling is exactly what happens in a liquid~\cite{Riordan:2006df}. \section{High $p_t$ suppression} High transverse momentum particles resulting from hard scatterings between incident partons have become one of the most effective tools for probing the properties of the medium created in ultra-relativistic heavy ion collisions at RHIC. The nuclear modification factor, defined as \begin{equation} R_{\rm AA}(p_t) \equiv {\textnormal{Yield\ \ in\ \ Au+Au\ \ events} \over \textnormal{Scaled Yield\ \ in\ \ p+p\ \ events}}, \end{equation} was measured in central and peripheral Au+Au collisions at the four RHIC experiments~\cite{Adcox:2001jp,Adcox:2002pe,Adams:2003kv,Adler:2003qi,Adler:2003au,Arsene:2003yk,Back:2003qr,Adler:2006bw}.
The measurements show a high transverse momentum hadron suppression in central Au+Au collisions compared to (appropriately scaled) p+p collisions, while there is no such suppression in peripheral Au+Au or d+Au collisions \cite{Adler:2003ii,Adams:2003im,Back:2003ns}, as shown in the upper plots of Fig.~\ref{f:raa}. This shows that the suppression is not due to a modification of the parton distributions in the colliding nuclei. The nuclear modification factor has been measured for several hadron species at the highest $p_t$: for $\pi^0$ and, most recently, $\eta$ mesons~\cite{Adler:2006hu}, as shown in the lower plots of Fig.~\ref{f:raa}. This confirms the above evidence for a dense and strongly interacting matter. On the other hand, direct photon measurements, which require tight control of experimental systematics over several orders of magnitude, show that the high $p_t$ photons in Au+Au collisions are not suppressed~\cite{Adler:2005ig} and, thus, provide final confirmation that hard scattering processes occur at rates expected from point-like processes. This observation makes definitive the conclusion that the suppression of high-$p_t$ hadron production in Au+Au collisions is a final-state effect. \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{raa_pi0_0005.png} \includegraphics[width=0.49\linewidth]{raa_pi0_8092.png}\\ \includegraphics[width=0.49\linewidth]{photon1.png} \includegraphics[width=0.49\linewidth]{photon2.png} \end{center} \caption{ Nuclear modification factor $R_{\rm AA}$ for $\pi^0$, $\eta$ and photon yields in Au+Au collisions as a function of $p_t$ for different centralities (different numbers of participants). The shaded error band around unity indicates systematic errors.} \label{f:raa} \end{figure} \section{The perfect fluid of quarks} One of the most important results of RHIC is the relatively strong second harmonic moment of the transverse momentum distribution, referred to as the elliptic flow.
The elliptic flow is an experimentally measurable observable and is defined as the azimuthal anisotropy or second Fourier coefficient of the single-particle momentum distribution $N_1(p)$. The $n^{\rm th}$ Fourier coefficient is defined as \begin{equation} v_n = \frac{\int_0^{2 \pi} N_1(p) \cos(n\varphi) d\varphi} {\int_0^{2 \pi} N_1(p) d\varphi}, \end{equation} $\varphi$ being the azimuthal angle of the momentum $p$ (perpendicular to the beam) with respect to the reaction plane. This formula returns the elliptic flow $v_2$ for $n=2$. Measurements of the elliptic flow by the PHENIX, PHOBOS and STAR collaborations (see refs.~\cite{Back:2004zg,Back:2004mh,Adler:2003kt,Adams:2004bi,Adler:2001nb,Sorensen:2003wi}) reveal rich details in terms of its dependence on particle type, transverse ($p_t$) and longitudinal momentum ($\eta$) variables, and on the centrality and the bombarding energy of the collision. In the soft transverse momentum region ($p_t \lesssim 2$~GeV/c), measurements at mid-rapidity are found to be well described by hydrodynamical models~\cite{Adcox:2004mh,Adams:2005dq,Csanad:2003qa,Hama:2005dz,Broniowski:2002wp}. Importantly, in contrast to the uniform distribution of particles expected in a gas-like system, this liquid behavior means that the interaction in the medium of these copiously produced particles is rather strong, as one expects in a fluid. Detailed investigation of these phenomena suggests that this liquid flows with almost no viscosity~\cite{Adare:2006ti}. Measurements of the elliptic flow of pions, kaons, protons, $\phi$ mesons and deuterons in Au+Au collisions at $\sqrt{s_{NN}}~=~200$ GeV, when plotted against the scaling variable $KE_T$ (transverse kinetic energy), confirm the prediction of perfect fluid hydrodynamics that the relatively ``complicated'' dependence of the azimuthal anisotropy on transverse momentum and particle type can be scaled to a single function~\cite{Adare:2006ti,Csanad:2005gv,Csanad:2006sp,Borghini:2005kd,Bhalerao:2005mm}.
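To make the Fourier definition above concrete, the following toy calculation (an illustrative sketch, not RHIC data; the distribution and the value $v_2=0.08$ are assumed) recovers $v_2$ from an azimuthal distribution of the form $N_1\propto 1+2v_2\cos 2\varphi$ by direct numerical integration:

```python
import math

def fourier_vn(N, n, steps=200000):
    """n-th Fourier coefficient v_n of an azimuthal distribution N(phi),
    computed with the midpoint rule on [0, 2*pi)."""
    num = den = 0.0
    for i in range(steps):
        phi = (i + 0.5) * 2.0 * math.pi / steps
        num += N(phi) * math.cos(n * phi)
        den += N(phi)
    return num / den

v2_in = 0.08  # assumed input anisotropy, typical order of magnitude only
N = lambda phi: 1.0 + 2.0 * v2_in * math.cos(2.0 * phi)
# fourier_vn(N, 2) recovers v2_in; fourier_vn(N, 4) vanishes for this
# purely elliptic toy distribution.
```

Since $\int_0^{2\pi}\cos^2(2\varphi)\,d\varphi=\pi$ while $\int_0^{2\pi}N_1\,d\varphi=2\pi$, the ratio indeed returns $v_2$ exactly for this form.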
On the left plot of Fig.~\ref{f:v2sc} we show this scaling. Mesons and baryons gather into two different groups here. If one scales both axes of these plots by the number of constituent quarks of the measured hadrons (as shown on the right plot of Fig.~\ref{f:v2sc}), the two curves collapse to one~\cite{Afanasiev:2007tv}. Thus it appears that quark collectivity dominates the expansion dynamics of these collisions. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{v2_KET.png} \end{center} \caption{(color online)(a) $v_2$ vs $KE_T$ for several identified particle species obtained in mid-central (20-60\%) Au+Au collisions. (b) $v_2/n_q$ vs $KE_T/n_q$ for the same particle species shown in panel (a). The shaded bands indicate systematic error estimates for $(\overline{d})d$ and $\phi$ mesons (see text).}\label{f:v2sc} \end{figure} \section{Heavy flavour} We have also measured electrons from heavy flavor (charm and bottom) decays in Au+Au collisions at $\sqrt{s_{\rm NN}}$ = 200 GeV. The nuclear modification factor $R_{\rm AA}$ relative to p+p collisions shows a strong suppression in central Au+Au collisions, indicating substantial energy loss of heavy quarks in the medium produced at RHIC energies. A large elliptic flow $v_2$ is also observed, indicating substantial heavy flavor collectivity. Both $R_{\rm AA}$ and $v_2$ show a $p_t$ dependence different from those of neutral pions. A comparison to transport models which simultaneously describe $R_{\rm AA}(p_t)$ and $v_2(p_t)$ suggests that the viscosity to entropy density ratio is close to the conjectured quantum lower bound, {\it i.e.} near a perfect fluid~\cite{Armesto:2005mz,vanHees:2005wb,Moore:2004tg}, as shown in Fig.~\ref{f:heavyfl}. We see that even heavy flavour is suppressed beyond extrapolations from cold nuclear matter effects, and even heavy flavour flows similarly to hadrons made of light quarks.
This suggests strong coupling of charm and bottom to the medium~\cite{Adler:2003ii,Adare:2006ns}. \begin{figure} \begin{center} \includegraphics[width=0.8\linewidth]{heavyfl.png} \end{center} \caption{ (a) $R_{\rm AA}$ of heavy-flavor electrons in 0-10\% central collisions compared with $\pi^0$ data~\cite{Adler:2003qi} and model calculations (curves I~\cite{Armesto:2005mz}, II~\cite{vanHees:2005wb}, and III~\cite{Moore:2004tg}). (b) $v_2^{\rm HF}$ of heavy-flavor electrons in minimum bias collisions compared with $\pi^0$ data~\cite{Adler:2005rg} and the same models. Boxes show systematic uncertainty in both plots.}\label{f:heavyfl} \end{figure} \section{Chiral dynamics} Correlation functions are important for seeing the collective properties of particles and the space-time structure of the emitting source; e.g.\ the observed size of a system can be measured by two-particle Bose-Einstein correlations~\cite{HanburyBrown:1956pf}. The $m_t$ dependence of the strength $\lambda$ of the two-pion Bose-Einstein correlation function can be used to extract information on the mass reduction of the $\eta'$ meson (the ninth, would-be Goldstone boson), a signal of the U$_{\rm A}(1)$ symmetry restoration in hot and dense matter: it is known that if the chiral U$_{\rm A}$(1) symmetry is restored, the mass of the $\eta'$ boson decreases dramatically and its production cross section increases accordingly. Thus $\eta'$ bosons are copiously produced and decay, through $\eta$ bosons (with a very long lifetime), into low momentum pions. Hence the strength of the two-particle correlation functions at low relative momenta might change significantly~\cite{Vance:1998wd,Kapusta:1995ww,Huang:1995fc,Hatsuda:1994pi}.
PHENIX analyzed~\cite{Csanad:2005nr} two-pion Bose-Einstein correlations with fits to two-pion correlation functions using three different shapes, Gauss, Levy and Edgeworth, and determined $\lambda(m_t)$ from them, as described in refs.~\cite{Csorgo:1999sj,Csanad:2005nr,Csorgo:2003uv}. We renormalized the $\lambda(m_t)$ curves by their maximal value on the investigated $m_t$ interval. This way they all show the same shape, as shown in Fig.~\ref{f:ua1}. This confirms the existence and characteristics of the hole in the $\lambda(m_t)$ distribution. We conclude that at present, the results depend critically on our understanding of statistical and systematic errors, and additional analysis is required to make a definitive statement. \begin{figure} \begin{center} \includegraphics[width=0.6\linewidth]{lambda_shape.png} \end{center} \caption{Measured $\lambda(m_t)$ from different methods}\label{f:ua1} \end{figure} The PHENIX experiment has also measured the dielectron continuum in $\sqrt{s_{NN}}$=200 GeV Au+Au collisions~\cite{Afanasiev:2007xw,Toia:2006zh}. The data below 150 MeV/c$^2$ are well described by the cocktail of hadronic sources. The vector mesons $\omega$, $\phi$ and $J/\psi$ are reproduced within the uncertainties. However, in minimum bias collisions, the yield is substantially enhanced above the expected yield in the continuum region from 150 to 750 MeV/c$^2$. The enhancement in this mass range is a factor of 3.4 $\pm$ 0.2(stat.) $\pm$ 1.3(syst.) $\pm$ 0.7(model), where the first error is the statistical error, the second is the systematic uncertainty of the data, and the last is an estimate of the uncertainty of the expected yield.
Above the $\phi$ meson mass, the data seem to be well described by the continuum calculation based on PYTHIA, as shown in Fig.~\ref{f:diel}. \begin{figure} \begin{center} \includegraphics[width=0.49\linewidth]{chdyn1.png} \includegraphics[width=0.49\linewidth]{chdyn2.png} \end{center} \caption{ Invariant e$^+$e$^-$--pair yield of refs.~\cite{Afanasiev:2007xw,Toia:2006zh} compared to the yield from the model of hadron decays. The charmed meson decay contribution based on PYTHIA is included in the sum of sources (solid black line). The charm contribution expected if the dynamic correlation of $c$ and $\bar{c}$ is removed is shown separately. Statistical (bars) and systematic (boxes) uncertainties are shown separately; the mass range covered by each data point is given by horizontal bars. The systematic uncertainty on the cocktail is not shown.}\label{f:diel} \end{figure} \section{Summary and conclusions} Based on the measurements of the suppression of high transverse momentum hadrons and of their elliptic flow, we can make the definitive statement that in the relativistic Au+Au collisions observed at RHIC we see a strongly interacting matter that has the characteristics of a perfect fluid. We also see signals of chiral dynamics in the enhancement of the dielectron continuum above the expected yield from hadron production and in the possible mass modification of the $\eta'$ meson. Future plans are to explore all properties of the quark matter by analyzing more data and using higher luminosity. \bibliographystyle{prlsty}
\section{Introduction} A textbook way to generate a random integer on $\{1, \dots, m\}$ is to start with $X \sim U[0,1)$ and define $Y \equiv 1 + \lfloor mX \rfloor$. If $X$ is truly uniform on $[0,1)$, $Y$ is then uniform on $\{1, \dots, m\}$. But if $X$ has a discrete distribution derived by scaling a pseudorandom $w$-bit integer (typically $w=32$) or floating-point number, the resulting distribution is, in general, not uniformly distributed on $\{1, \ldots, m \}$ even if the underlying pseudorandom number generator (PRNG) is perfect. Theorem~\ref{thm:theorem_1} illustrates the problem. \begin{theorem}[\citet{knuth_art_1997}] \label{thm:theorem_1} Suppose $X$ is uniformly distributed on $w$-bit binary fractions, and let $Y_m \equiv 1 + \lfloor mX \rfloor$. Let $p_+(m) = \max_{1 \le k \le m} \Pr\{Y_m = k\}$ and $p_-(m) = \min_{1 \le k \le m} \Pr\{Y_m = k\}$. There exists $m < 2^w$ such that, to first order, $p_+(m)/p_-(m) = 1 + m2^{-w+1}$. \end{theorem} A better way to generate random elements of $\{1, \dots, m\}$ is to use pseudorandom bits directly, avoiding floating-point representation, multiplication, and the floor operator. Integers between $0$ and $m-1$ can be represented with $\mu(m) \equiv \lceil \log_2 m \rceil$ bits. To generate a pseudorandom integer between $1$ and $m$, first generate $\mu(m)$ pseudorandom bits (for instance, by taking the most significant $\mu(m)$ bits from the PRNG output, if $w \ge \mu(m)$, or by concatenating successive outputs of the PRNG and taking the first $\mu(m)$ bits of the result, if $w < \mu(m)$). Cast the result as a binary integer $M$. If $M > m-1$, discard it and draw another $\mu(m)$ bits; otherwise, return $M+1$.\footnote{% See \citet[p.114]{knuth_art_1997}. This is also the approach recommended by the authors of the Mersenne Twister. See \url{http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/efaq.html}, last accessed 18~September 2018.
} Unless $m = 2^{\mu(m)}$, this procedure is expected to discard some random draws---up to almost half the draws if $m = 2^p+1$ for some integer $p$. But if the input bits are IID Bernoulli(1/2), the output will be uniformly distributed on $\{1, \ldots, m\}$. This is how the Python function \texttt{numpy.random.randint()} (Version 1.14) generates pseudorandom integers.\footnote{% However, Python's built-in \texttt{random.choice()} (Versions 2.7 through 3.6) does something else biased: it finds the closest integer to $mX$, where $X$ is a binary fraction between 0 and 1. } The algorithm that R (Version 3.5.1 patched) \citep{R_2018} uses to generate random integers in \texttt{R\_unif\_index()} (in \texttt{RNG.c}) has the issue pointed out in Theorem~\ref{thm:theorem_1} in a more complicated form, because R uses a pseudorandom float at an intermediate step, rather than multiplying a binary fraction by $m$. The way the float is constructed depends on $m$. Because \texttt{sample} relies on random integers, it inherits the problem. When $m$ is small, R uses \texttt{unif\_rand} to generate pseudorandom floating-point numbers $X$ on $[0, 1)$ starting from a $32$-bit random integer generated from the Mersenne Twister algorithm \citep{mt1998}.\footnote{ % Luke Tierney pointed out that the seeding algorithm used in R is neither the one originally proposed by \citet{mt1998}, which is known to have issues, nor their updated 2002 version that fixes these issues. Instead, R uses its own initialization method invented by Brian Ripley.} % The range of \texttt{unif\_rand} contains (at most) $2^{32}$ values, which are approximately equi-spaced (but for the vagaries of converting a binary number into a floating-point number~\citep{goldberg91}, which R does using floating-point multiplication by 2.3283064365386963e-10). 
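Both points are easy to reproduce in a few lines of Python (an illustrative sketch: the small word size $w=8$ is chosen only to make the bias of Theorem~\ref{thm:theorem_1} visible exactly, and \texttt{randint\_unbiased} is our own name for the bit-rejection procedure, not an API of R or numpy; \texttt{getrandbits} stands in for an ideal source of IID bits):

```python
import random

# --- Bias of the multiply-and-floor method, computed exactly for w = 8 ---
w, m = 8, 192                      # m close to 2^w makes the bias large
counts = [0] * m
for j in range(2**w):              # X = j / 2^w, uniform on w-bit fractions
    counts[(m * j) // 2**w] += 1   # Y - 1 = floor(m * X), exact integer math
ratio = max(counts) / min(counts)  # some values are twice as likely as others

# --- Unbiased alternative: mu(m) pseudorandom bits plus rejection ---
def randint_unbiased(m, rng=random.SystemRandom()):
    """Uniform draw from {1, ..., m} using ceil(log2 m) bits and rejection."""
    mu = max(1, (m - 1).bit_length())
    while True:
        r = rng.getrandbits(mu)    # mu IID pseudorandom bits as an integer
        if r <= m - 1:             # discard out-of-range draws
            return r + 1
```

For $w=8$ and $m=192$ every value of $Y$ receives either one or two of the 256 equally likely inputs, so the exact probability ratio is 2; the rejection method discards out-of-range draws but assigns every retained value exactly the same probability.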
When $m > 2^{31}$, \texttt{R\_unif\_index()} calls \texttt{ru} instead of \texttt{unif\_rand}.\footnote{ A different function, \texttt{sample2}, is called when $m > 10^7$ and $k < m/2$. \texttt{sample2} uses the same method to generate pseudorandom integers. } \texttt{ru} combines two floating-point numbers, $R_1$ and $R_2$, each generated from a 32-bit integer, to produce the floating-point number $X$, as follows: the first float is multiplied by $U = 2^{25}$, added to the second float, and the result is divided by $U$: $$ X = \frac{\lfloor U R_1 \rfloor + R_2}{U}.$$ The relevant code is in \texttt{RNG.c}. The cardinality of the range of \texttt{ru} is certainly not larger than $2^{64}$. The range of \texttt{ru} is unevenly spaced on $[0, 1)$ because of how floating-point representation works. The inhomogeneity can make the probability that $X \in [x, x+\delta) \subset [0, 1)$ vary widely with $x$. For the way \texttt{R\_unif\_index()} generates random integers, the non-uniformity of the probabilities of $\{1, \ldots, m\}$ is largest when $m$ is just below $2^{31}$. The upper bound on the ratio of selection probabilities approaches $2$ as $m$ approaches $2^{31}$, about 2~billion. For $m$ close to 1~million, the upper bound is about $1.0004$. We recommend that the R developers replace the algorithm in \texttt{R\_unif\_index()} with the algorithm based on generating a random bit string large enough to represent $m$ and discarding integers that are larger than $m$. The resulting code would be simpler and more accurate. Other routines that generate random integers using the multiply-and-floor method \texttt{(int) unif\_rand() * n}, for instance, \texttt{walker\_ProbSampleReplace()} in \texttt{random.c}, should also be updated to use an unbiased integer generator (e.g., to call the new version of \texttt{R\_unif\_index()}). \bibliographystyle{plainnat}
\section{Introduction}\label{sec_intro} The classification of insulating states of matter has been refined in terms of protecting symmetries through the discovery of topological insulators \cite{KaneMele05a,KaneMele05b,Bernevig06,König07}. For example, as long as time-reversal symmetry is not broken, topological insulators cannot be adiabatically connected to nontopological band insulators without closing the charge gap \cite{Schnyder08}, and the helical edge states are protected against perturbations \cite{KaneMele05a,KaneMele05b,Wu06,Xu06}. Recently, a further refinement was achieved by the theoretical prediction \cite{Fu11, Hsieh12,Slager13} and experimental realization \cite{Tanaka12,Dziawa12,Xu12} of topological crystalline insulators (TCIs). In this case, in addition to time-reversal symmetry, the two-dimensional surface has crystal symmetries which protect the topological state against perturbations. Because crystal (point group) symmetries are not defined in one dimension, this definition of TCIs requires a three-dimensional bulk and a two-dimensional surface. Here, we introduce a two-dimensional counterpart to the TCI. In addition to time-reversal symmetry, the model we consider preserves translation symmetry at the one-dimensional edge. This leads to protection at the single-particle level despite a trivial bulk $Z_2$ invariant. Our model is based on the Kane-Mele (KM) model \cite{KaneMele05a} on the honeycomb lattice, which has a quantum spin Hall ground state at half filling. By threading each honeycomb plaquette with a magnetic flux of size $\pm\pi$, we obtain the {\it $\pi$ Kane-Mele ($\pi$KM) model}. The idea of inserting $\pi$ fluxes has previously been considered for the case of an intensive number of fluxes \cite{Ran08,Qi08,Juricic12,Assaad13}, and a superlattice of well separated fluxes \cite{Wu13}. Isolated magnetic $\pi$ fluxes locally bind zero-energy modes and lead to spin-charge separation in topological insulators \cite{Ran08,Qi08}. 
This property can also be exploited to identify correlated topological insulators \cite{Ran08,Juricic12,Assaad13}. Dirac fermions on the $\pi$ flux square lattice have been studied in \cite{Weeks10,Jia13}. Furthermore, twisted graphene multilayers have been identified as an instance of a two-dimensional TCI \cite{Kindermann13}. The physics of the $\pi$KM model is surprisingly rich. In the noninteracting case, and for each spin projection, it has Chern insulator \cite{Haldane88} ground states characterized by Chern numbers $C=\pm 2$, separated by a topological phase transition. The band structure resembles that of the nucleated topological phase in the Kitaev honeycomb lattice model \cite{Kitaev06,Lahtinen11,Lahtinen12}, which corresponds to the vortex sector of the Kitaev model characterized by a $\pi$ flux vortex at each plaquette. The spinful $\pi$KM model is found to have a trivial $Z_{2}$ invariant. However, there exist two pairs of helical edge states crossing at distinct points in the projected Brillouin zone, which are robust with respect to single-particle scattering processes as long as translation symmetry is preserved. An intriguing question, which we address in this manuscript using bosonization and quantum Monte Carlo methods, is whether the edge states are robust to correlation effects. At half filling, we find that umklapp scattering processes between the two pairs of edge states localize the edge modes in the corresponding low-energy model, leading to a gap in the edge states without breaking translation symmetry. This prediction is consistent with quantum Monte Carlo results for the correlated edge states. Away from half filling, umklapp scattering is not relevant, and the edge states remain stable provided that translation symmetry is not broken by disorder. Finally, we investigate the bulk phase diagram of the $\pi$KM model with an additional Hubbard interaction.
Our mean-field and quantum Monte Carlo results suggest the existence of a magnetic phase transition that extends to weak coupling at the quadratic band crossing point. The paper is organized as follows. In Sec.~\ref{sec_model}, we introduce the $\pi$KM model. Section~\ref{sec_qmc_method} provides a brief discussion of the quantum Monte Carlo methods. The bulk properties are discussed in Sec.~\ref{sec_nonint_bulk} (noninteracting case) and Sec.~\ref{sec_qmc_bulk} (interacting case). Sec.~\ref{sec_nonint_edge} contains a discussion of the noninteracting edge states. The bosonization analysis of the edge states is presented in Sec.~\ref{sec_boson}, followed by the quantum Monte Carlo results for correlation effects on the edge states in Sec.~\ref{sec_qmc_ribbon}. Finally, we conclude in Sec.~\ref{sec_sum}. \section{$\pi$ Kane-Mele-Hubbard model}\label{sec_model} The KM model describes electrons on the honeycomb lattice with nearest-neighbor hopping and spin-orbit coupling \cite{KaneMele05a}. Given the $U(1)$ spin symmetry which conserves the ${z}$ component of spin, the KM Hamiltonian reduces to two copies of the Haldane model \cite{Haldane88,Wright13}, one for each spin sector. The latter has an integer quantum Hall ground state or, in other words, it is a Chern insulator. The quantum spin Hall insulator results when the two Haldane models are combined in a way that restores time-reversal symmetry. Here, we construct a new model (referred to as the $\pi$KM model) by taking the KM model and inserting a magnetic flux $\pm\pi$ into each hexagon of the underlying honeycomb lattice. Each flux can be thought of as originating from a time-reversal symmetry preserving magnetic field of the form \begin{equation} \bm{B}_{\pm}(\bm{r})= \pi \delta( \bm{r}-\bm{r_{i}} ) (\pm)\bm{e}_{z}\;, \end{equation} and is given by \begin{equation} \phi_{\pm} = \frac{h c}{e}\int_{\hexagon}\bm{B}_{\pm}(\bm{r})d\bm{S}=\pm\pi \frac{h c}{e}. 
\end{equation} As illustrated in Fig.~\ref{fig_band}(a), such an arrangement of fluxes of size $\pm\pi$ (in units of $hc/e$) leads to a model with a unit cell consisting of two hexagons. For each spin projection $\sigma$, the Hamiltonian takes the form of a modified Haldane model \cite{Haldane88}, \begin{eqnarray} \label{eqn_CI} \mathcal{H}^{\sigma}&=& -\sum_{\langle\bm{i},\bm{j}\rangle} \left[t(\bm{i},\bm{j})-\mu \delta_{\bm{ij}}\right] \hat{c}_{\bm{i},\sigma}^{\dagger}\hat{c}_{\bm{j},\sigma}^{\phantom\dagger}\\ &&\quad+{i\sigma \sum_{\langle\langle\bm{i},\bm{j}\rangle\rangle}} \lambda(\bm{i},\bm{j}) \nu_{\bm{i},\bm{j}}\hat{c}_{\bm{i},\sigma}^{\dagger}\hat{c}_{\bm{j},\sigma}^{\phantom\dagger}\;.\nonumber \end{eqnarray} Here, $t(\bm{i},\bm{j})=t\tau_{\bm{i},\bm{j}}$ and $\lambda(\bm{i},\bm{j})=\lambda\tau_{\bm{i},\bm{j}}$ are the nearest-neighbor and next-nearest-neighbor hopping parameters, respectively; $\bm{i},\bm{j}$ index both lattice and orbital sites and $\mu$ is the chemical potential. The factor $\nu_{\bm{i},\bm{j}}$ is $-1$ ($+1$) for $\bm{i},\bm{j}$ indexing the orbitals $1$ or $3$ ($2$ or $4$). The additional, nonuniform hopping phase factors $\tau_{\bm{i},\bm{j}}=\pm1$ account for the presence of the $\pi$ fluxes. A $\pi$ flux is inserted in a honeycomb plaquette by choosing the phase factors $\tau_{\bm{i},\bm{j}}$ in such a way that their product along a closed contour around the plaquette is \begin{equation} \label{eqn_tau} \tau_{\bm{i},\bm{j}}\tau_{\bm{j},\bm{k}}\cdots\tau_{\bm{l},\bm{i}} = -1\,. \end{equation} In a periodic system, $\pi$ fluxes can only be inserted in pairs. Each hopping process from $\bm{i}$ to $\bm{j}$ that crosses the connecting line of a flux pair acquires a phase $\tau_{\bm{i},\bm{j}}=-1$, which fixes the position of both fluxes according to Eq.~(\ref{eqn_tau}). In general, there is no one-to-one correspondence between the flux positions and the set of $\tau_{\bm{i},\bm{j}}$, \ie, one eventually has to make a gauge choice. 
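The pairwise nature of flux insertion can be illustrated with a toy check (a sketch with a hand-picked bond labeling, not the actual lattice data structures of the model): flipping the sign of the single bond shared by two hexagons changes the loop product of eq. (\ref{eqn_tau}) in both adjacent plaquettes at once.

```python
# Bonds of two hexagons sharing one edge; integers label the bonds and the
# shared bond is labeled 0. All tau start at +1, i.e. zero flux everywhere.
hex_left = [0, 1, 2, 3, 4, 5]
hex_right = [0, 6, 7, 8, 9, 10]
tau = {b: +1 for b in range(11)}

def plaquette_flux(plaquette, tau):
    """Product of tau around a plaquette: -1 signals a threaded pi flux."""
    prod = 1
    for b in plaquette:
        prod *= tau[b]
    return prod

tau[0] = -1  # flip the shared bond: this inserts a pi flux in BOTH hexagons
```

After the flip, both plaquette products equal $-1$, in line with the statement that in a periodic system $\pi$ fluxes can only be inserted in pairs; moving the flipped bond along a string of bonds moves the flux pair apart.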
Due to the geometry of the four-orbital unit cell, two gauges exist [see Fig.~\ref{fig_band}(a)] which have unitarily equivalent Hamiltonians. On a torus geometry, Hamiltonian~(\ref{eqn_CI}) becomes \begin{equation} \mathcal{H}^{\sigma}= \sum_{\bm{k}} c^{\dagger}_{\bm{k},\sigma} H^\sigma(\bm{k}) c^{\phantom\dagger}_{\bm{k},\sigma}\,, \label{eqn_CI2} \end{equation} where $c^{\phantom\dagger}_{\bm{k},\sigma} =(\hat{c}_{1,\bm{k},\sigma}^{\phantom\dagger},\hat{c}_{3,\bm{k},\sigma}^{\phantom\dagger},\hat{c}_{2,\bm{k},\sigma}^{\phantom\dagger},\hat{c}_{4,\bm{k},\sigma}^{\phantom\dagger})^{T}$ is the basis in which the nearest-neighbor term is block off-diagonal. The Hamilton matrix $H^\sigma(\bm{k})$ can be expressed in terms of Dirac $\Gamma$ matrices \cite{KaneMele05a}, $\Gamma^{(1,2,3,4,5)}=(\sigma_{x}\otimes \openone,\sigma_{z}\otimes \openone, \sigma_{y}\otimes\sigma_{x}, \sigma_{y}\otimes\sigma_{y}, \sigma_{y}\otimes\sigma_{z})$ and their commutators $\Gamma^{ab}=[\Gamma^{a},\Gamma^{b}]/(2i)$: \begin{equation}\label{eqn_CI3} H^\sigma (\bm{k})= \mu\,\openone+ \sum\limits_{a=1}^{5} d_{a}(\bm{k}) \Gamma^{a} + \sum\limits_{a<b=1}^{5} d^\sigma_{ab}(\bm{k}) \Gamma^{ab}\,. \end{equation} The nonvanishing coefficients $d_{a}(\bm{k})$ and $d^\sigma_{ab}(\bm{k})$ are given in Table~\ref{tab_gamma}. 
\begin{table*} \begin{ruledtabular} \begin{tabular}{l l l} $d_{1}(\bm{k})=-t\cos(\bm{k}\bm{a}_{2})$ & $d^\sigma_{12}(\bm{k})=t\sin(\bm{k}\bm{a}_{2})$ & $d^\sigma_{23}(\bm{k})=t \cos(\bm{k}\bm{a}_{1}/2)\cos(\bm{k}(\bm{a}_{1}/2-\bm{a}_{2}))$ \\ $d_{3}(\bm{k})=-\frac{t}{2}\big[\sin(\bm{k}\bm{a}_{2}) -\sin(\bm{k}(\bm{a}_{1}-\bm{a}_{2}))\big]$ & $d^\sigma_{13}(\bm{k})=-2\sigma \lambda\sin(\bm{k}\bm{a}_{1}/2)\cos(\bm{k}\bm{a}_{1}/2)$ & $d^\sigma_{24}(\bm{k})=\frac{t}{2}\big[\sin(\bm{k}(\bm{a}_{1}-\bm{a}_{2}))+\sin(\bm{k}\bm{a}_{2})\big]$ \\ $d_{4}(\bm{k})=-\frac{t}{2}\big[\cos(\bm{k}(\bm{a}_{1}-\bm{a}_{2}))-\cos(\bm{k}\bm{a}_{2})\big]$ & $d^\sigma_{14}(\bm{k})=-2\sigma \lambda \sin^{2}(\bm{k}\bm{a}_{1}/2)$ & $d^\sigma_{35}(\bm{k})=2\sigma \lambda\cos(\bm{k}\bm{a}_{1}/2)\cos(\bm{k}(\bm{a}_{1}/2-\bm{a}_{2}))$ \\ $d_{25}(\bm{k})=t$ & $d^\sigma_{15}(\bm{k})=2\sigma \lambda \sin(\bm{k}\bm{a}_{2})$ & $d^\sigma_{45}(\bm{k})=2\sigma \lambda \cos(\bm{k}(\bm{a}_{1}/2-\bm{a}_{2}))\sin(\bm{k}\bm{a}_{1}/2)$ \end{tabular} \caption{Nonzero coefficients $d_{a}(\bm{k})$ and $d^\sigma_{ab}(\bm{k})$ of Eq.~(\ref{eqn_CI3}).\label{tab_gamma}} \end{ruledtabular} \end{table*} As for the KM model, a spinful and time-reversal invariant Hamiltonian results by combining $\mathcal{H}^{\uparrow}$ and $\mathcal{H}^{\downarrow}$; $\lambda$ then plays the role of an intrinsic spin-orbit coupling. 
Including a Rashba spin-orbit interaction which breaks the $U(1)$ spin symmetry, we have \begin{eqnarray} \label{eqn_KM} \mathcal{H}_{0}&=&-\sum_{\langle\bm{i},\bm{j}\rangle,\sigma} \left[t(\bm{i},\bm{j})-\mu\delta_{\bm{i},\bm{j}}\right] \hat{c}_{\bm{i},\sigma}^{\dagger}\hat{c}_{\bm{j},\sigma}^{\phantom\dagger}\\ &&+{i\!\!\sum_{\langle\langle\bm{i},\bm{j}\rangle\rangle,\sigma}}\sigma\lambda(\bm{i},\bm{j}) \nu_{\bm{i},\bm{j}}\hat{c}_{\bm{i},\sigma}^{\dagger}\hat{c}_{\bm{j},\sigma}^{\phantom\dagger}\nonumber\\ &&+{i\sum_{\langle \bm{i},\bm{j}\rangle}} \left(\hat{c}^{\dagger}_{\bm{i},\uparrow}, \hat{c}^{\dagger}_{\bm{i},\downarrow}\right) \lambda_{\text{R}}(\bm{i},\bm{j}) \bm{e}_{z} (\bm\sigma\times \bm{d}_{\bm{i},\bm{j}}) \left( \begin{array}{c} \hat{c}^{\phantom\dagger}_{\bm{j},\uparrow} \\ \hat{c}^{\phantom\dagger}_{\bm{j},\downarrow} \end{array} \right)\nonumber\;. \end{eqnarray} In the Rashba term, $\lambda_{\text{R}}(\bm{i},\bm{j})=\lambda_{\text{R}}\tau_{\bm{i}\bm{j}}$, $\bm{d}_{\bm{i},\bm{j}}$ is a vector pointing to one of the three nearest-neighbor sites, and $\bm\sigma=(\sigma^x,\sigma^y,\sigma^z)$ is the vector of Pauli matrices. Taking into account a Hubbard term to model electron-electron interactions, we finally arrive at the Hamiltonian of the $\pi$ Kane-Mele-Hubbard ($\pi$KMH) model, \begin{equation} \label{eqn_KMH} \mathcal{H}=\mathcal{H}_{0}+U\sum_{\bm{i}}\hat{n}_{\bm{i},\uparrow} \hat{n}_{\bm{i},\downarrow}\,. \end{equation} \section{Quantum Monte Carlo methods}\label{sec_qmc_method} The $\pi$KMH lattice model can be studied using the auxiliary-field determinant quantum Monte Carlo method. Simulations are free of a sign problem given particle-hole, time-reversal and $U(1)$ spin symmetry \cite{Hohenadler11,Zheng11, Hohenadler12}. This requirement excludes the $U(1)$ spin symmetry breaking Rashba term. The algorithm has been discussed in detail previously \cite{Hohenadler12,AssaadBook08}. 
To study the magnetic phase diagram of the $\pi$KMH model, we apply a finite-temperature implementation \cite{AssaadBook08}. The Trotter discretization was chosen as $\Delta\tau t=0.1$. An inverse temperature $\beta t=40$ was sufficient to obtain converged results. Interaction effects on the helical edge states can be studied numerically by taking advantage of the exponential localization of the edge states and of the insulating nature of the bulk, which has no low-energy excitations. Accordingly, the low-energy physics is captured by considering the Hubbard term only for the sites at one edge of a (zigzag) ribbon. The bulk is therefore treated as noninteracting; it establishes the topological band structure and plays the role of a fermionic bath. The resulting model is simulated without further approximations using the continuous-time quantum Monte Carlo algorithm based on a series expansion in the interaction $U$ (CT-INT) \cite{Rubtsov05}. A similar approach has previously been used to study edge correlation effects in the KMH model \cite{Hohenadler11,Ho.As.11}. Compared to the KMH model, the Rashba term leads to a moderate sign problem. \section{Bulk properties of the $\pi$KM model}\label{sec_nonint_bulk} In this section, we discuss the band structure and the topological phases of the noninteracting model~(\ref{eqn_CI2}), corresponding to one spin sector of the $\pi$KM model. Subsequently, we show that the spinful $\pi$KM model~(\ref{eqn_KM}) is $Z_{2}$ trivial at half filling.
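The Trotter discretization enters the finite-temperature algorithm only through a systematic error that vanishes quadratically with the step $\Delta\tau$. This scaling can be checked with a minimal, self-contained sketch using a generic pair of noncommuting Hermitian matrices (hypothetical numbers, not the actual $\pi$KMH Hamiltonian):

```python
import numpy as np

def expmh(M, s=1.0):
    """exp(s*M) for a Hermitian matrix M via its spectral decomposition."""
    E, V = np.linalg.eigh(M)
    return (V * np.exp(s * E)) @ V.conj().T

# Two noncommuting Hermitian pieces of a toy "Hamiltonian" (hypothetical numbers):
A = np.array([[0.0, 1.0], [1.0, 0.5]], dtype=complex)
B = np.array([[1.0, 0.3j], [-0.3j, -1.0]])

def trotter_error(dtau):
    """Single-step error of the splitting exp(-dtau(A+B)) ~ exp(-dtau A) exp(-dtau B)."""
    return np.linalg.norm(expmh(A + B, -dtau) - expmh(A, -dtau) @ expmh(B, -dtau))

# Halving dtau reduces the single-step splitting error by roughly a factor of 4:
ratio = trotter_error(0.1) / trotter_error(0.05)
```

The factor of about four between the two step sizes reflects the $\mathcal{O}(\Delta\tau^{2})$ local error of the splitting, which is why a fixed step such as $\Delta\tau t=0.1$ is combined with convergence checks in practice.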
\subsection{Band structure} The band structure is established by the eigenvalues of Eq.~(\ref{eqn_CI2}) which are, for $\mu=0$, given by \begin{eqnarray} E_{m}(\bm{k})&=& \pm \Big\{ 3 t^{2} + 6 \lambda^{2} - 2 \lambda^{2} f(\bm{k})\\ &&\pm t \sqrt{ 2\big[ 3 \big( t^{2} + 8 \lambda^{2}\big) + \big( t^{2} -16 \lambda^{2} \big) f(\bm{k}) \big] }\;\Big\}^{1/2}\;,\nonumber \end{eqnarray} where $f(\bm{k})=\cos(\bm{k}\bm{a}_{1}) + \cos(2\bm{k}\bm{a}_{ 2}) - \cos[\bm{k}(2\bm{a}_{2}-\bm{a}_{1})]$. At $\lambda=0$, $\mathcal{H}^{\sigma}$ has four distinct Dirac points $\bm{K}_{i}$ with linear dispersion at zero energy, \begin{equation} E(\bm{K}_{i}+\bm{k})=\sqrt{\frac{3}{2}}t \left(k_{x}+k_{y}\right) +\mathcal{O}(k^{2})\;, \end{equation} where $\bm{K}_{1,2}=(\pi/3)(1,\pm 2/\sqrt{3})$, $\bm{K}_{3}=(\pi/3)(2,5/\sqrt{3})$ and $\bm{K}_{4}=(\pi/3)(2,1/\sqrt{3})$. At $\lambda/t=1/2$, the spectral gap closes quadratically at two points $\bm{\Gamma}_{i}$, \begin{equation} E(\bm{\Gamma}_{i}+\bm{k})=\frac{3\sqrt{3}}{4} t \left(k_{x}^{2}+k_{y}^{2}\right) +\mathcal{O}(k^{4})\;, \end{equation} where $\bm{\Gamma}_{1}=(\pi/3)(1,0)$ and $\bm{\Gamma}_{2}=(\pi/3)(2,\sqrt{3})$ (Fig.~\ref{fig_band}). For the spinful model (\ref{eqn_KM}) with nonzero Rashba coupling, the point of quadratic band crossing is replaced by a finite region with zero band gap. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{spectrum.pdf} \caption{ (Color online) (a) The unit cell of the $\pi$ flux honeycomb lattice has four orbitals and is defined by the lattice vectors $\bm{a}_{1}=\big(3,-\sqrt{3}\big)$ and $\bm{a}_{2}=\frac{1}{2}\big(3,\sqrt{3}\big)$. Each honeycomb plaquette carries a magnetic flux $\pm\pi$. The flux positions, defined by Eq.~(\ref{eqn_tau}), are fixed by requiring that hopping terms crossing the dashed blue line (which is a gauge choice) acquire a phase of $-1$. 
The eigenvalue spectrum $E_{m}(\bm{k})$ of $\mathcal{H}^{\sigma}$ [Eq.~(\ref{eqn_CI})] has (b) four Dirac cones at $\lambda=0$, and (c) two points of quadratic band crossing at $\lambda/t=0.5$.\label{fig_band}} \end{figure} \subsection{Quantized Hall conductivity} We first consider the Chern insulator defined by $H(\bm{k})$ in Eq.~(\ref{eqn_CI2}). In this case, the electromagnetic response reveals the topological properties of the band structure. In linear response to an external vector potential, the optical conductivity tensor of an $n$-band noninteracting system described by a Hamilton matrix $H(\bm{k})$ is given by \begin{equation} \sigma_{\alpha,\beta}(\omega) = \frac{1}{N}\frac{(e/\hbar)^{2}}{i(\omega +i0^{+})} \left[ \langle K_{\alpha}\rangle \delta_{\alpha,\beta} - \Lambda_{\alpha,\beta}(\omega) \right]\;, \end{equation} where \begin{eqnarray} \langle K_{\alpha}\rangle &=& \sum\limits_{\bm{k},n} f[E_{n}(\bm{k})]\text{Tr}[K_{\alpha}(\bm{k})P_{n}(\bm{k})]\;,\nonumber\\ \Lambda_{\alpha,\beta}(\omega) &=& \sum\limits_{\bm{k},m,n} \lambda_{mn}(\bm{k},\omega)\text{Tr}[J_{\alpha}(\bm{k})P_{n}(\bm{k})J_{\beta}(\bm{k})P_{m}(\bm{k})]\;,\nonumber\\ \lambda_{mn}(\bm{k},\omega) &=& \frac{f[E_{m}(\bm{k})]-f[E_{n}(\bm{k})]}{\omega + i0^{+} + E_{m}(\bm{k})-E_{n}(\bm{k}) }\;, \end{eqnarray} using the matrices $J_{\alpha}(\bm{k})=\partial H(\bm{k})/\partial k_{\alpha}$, $K_{\alpha}(\bm{k})=-\partial^{2}H(\bm{k})/\partial k_{\alpha}^{2}$, the projector on the $n$-th band $P_{n}(\bm{k})$, and the Fermi function $f[E_{n}(\bm{k})]$. The Hall conductivity is then computed by taking the zero-frequency limit of the optical conductivity, \begin{equation} \underset{\omega\rightarrow 0}{\text{lim}} \operatorname{Re}\big[ \sigma_{xy}(\omega)\big] =\sigma_{xy} =\left[\sum\limits_{n=1}^{N_{\text{occ}}}C_{n}\right]\frac{e^{2}}{h}\,. 
\end{equation} It directly measures the (first) Chern number $C$ of the gap, which is the sum of the Chern numbers $C_{n}$ of the $N_{\text{occ}}$ occupied bands. Figure~\ref{fig_map_sigma_xy_dos} shows the Chern number as a function of the chemical potential $\mu$ and the ratio $\lambda/t$. Transitions between different Chern insulators are topological phase transitions and necessarily involve an intermediate metallic state where the Chern number can in principle take any value. Of particular interest for the understanding of correlation-induced instabilities is the transition at $\mu = 0$ as a function of $\lambda/t$ between the states with $C= \pm 2 $. At $\lambda/t = 1/2$, we find a quadratic band crossing point with a nonzero density of states. For the spinful model~(\ref{eqn_KM}) with $U=0$ and a $U(1)$ spin symmetry ($\lambda_\text{R}=0$), one can define a quantized spin Hall conductivity $\sigma^{s}_{xy}$ in terms of the Hall conductivity $\sigma^{\sigma}_{xy}$ of $\mathcal{H}^{\sigma}$ (\ref{eqn_CI2}). At $\mu=0$, $\sigma^{\sigma}_{xy}$ and $\sigma^{s}_{xy}$ take the values \begin{equation} \label{eqn_sigmaxy} \sigma^{\sigma}_{xy}=\mp\sigma 2 \frac{e^{2}}{h}\;,\;\;\sigma^{s}_{xy}=\frac{\hbar}{2e}\left(\sigma^{\uparrow}_{xy}-\sigma^{\downarrow}_{xy}\right) =\mp 2 \frac{e}{2\pi}\;. \end{equation} The sign change occurs at the quadratic band crossing point at $\lambda/t=\lambda_{0}=1/2$. \begin{figure} \includegraphics[width=0.5\textwidth]{map_sigma_xy_dos.pdf} \caption{(Color online) (a) Total Chern number $C=\sum_{n}C_{n}$ of the occupied bands of $\mathcal{H}^{\downarrow}$ [Eq.~(\ref{eqn_CI})], as obtained from the Hall conductivity $\sigma_{xy}$ in the insulating phases which are separated by metallic regions (white). (b) Density of states $\rho(\omega)=(1/4N)\sum_{\bm{k},n}\delta(\omega-E_{n}(\bm{k}))$ and Chern numbers $C_{n}$ of the individual bands.
\label{fig_map_sigma_xy_dos}} \end{figure} \subsection{$Z_{2}$ invariant} In the general case where the $U(1)$ spin symmetry is broken, for example by the presence of a Rashba term, the topological properties of a system with time-reversal symmetry are determined by the $Z_2$ topological invariant \cite{KaneMele05b}. Recently, it was shown that the $Z_2$ index can be calculated with a manifestly gauge-independent method that only relies on time-reversal symmetry \cite{Prodan11,Leung12}. The idea is to consider the adiabatic change of one component of the reciprocal lattice vector, say $k_{y}$, along high-symmetry paths $k_{y}\in(k,k^{\prime})$ in a rectangular Brillouin zone, while keeping the other component ($k_{x}$) fixed. This process is determined by the unitary evolution operator $U_{k,k^{\prime}}$ and its differential equation \begin{equation} \label{eqn_diffeq} i\frac{\text{d}}{\text{d}k}U_{k,k^{\prime}} = i\left[P_{k},\partial_{k} P_{k} \right]U_{k,k^{\prime}}\;. \end{equation} The initial condition is $U_{k^{\prime},k^{\prime}} =P_{k^{\prime}}$ and $P_{k}=\sum_{i} |u_{i}(k)\rangle\langle u_{i}(k)|$ is the projector on the occupied eigenstates of the $\pi$KM Hamiltonian. Equation~(\ref{eqn_diffeq}) is integrated by evenly discretizing the path $(k,k^{\prime})$, \begin{equation} U_{k,k^{\prime}}=\underset{N\rightarrow\infty}{\mathrm{lim}}\prod\limits_{n=1}^{N}P_{k^{\prime}+(k-k^{\prime})\frac{n-1}{N-1}}\,.
\end{equation} The topological invariant is then given as the product of two pseudo-invariants \begin{eqnarray} \Xi_{\text{2D}}=\pm 1&=& \prod\limits_{k_{x}=0,\pi}\frac{\text{Pf}\left[ \langle u_{i}(0)| \theta |u_{j}(0)\rangle \right]}{\text{Pf}\left[ \langle u_{i}(\pi)| \theta |u_{j}(\pi)\rangle \right]}\\\nonumber &&\quad\times\frac{ \text{det}\left[ \langle u_{i}(\pi)| U_{(\pi,0)} |u_{j}(0)\rangle \right ]} {\sqrt{\text{det}\left[ \langle u_{i}(\pi)| U_{(\pi,-\pi)} |u_{j}(\pi)\rangle \right ]}}\;, \end{eqnarray} where the dependence on $k_{x}$ is implicit and the invariant is computed numerically \cite{Wimmer12}. In the actual implementation, one has to make sure to use the same branch for the square root at $k_{x}=0$ and at $k_{x}=\pi$. For the $\pi$KM model~(\ref{eqn_KM}) at half filling ($\mu=0$) we obtain, as expected \cite{Wu06}, a trivial insulator ($\Xi_{\text{2D}}=+1$). In contrast, if the chemical potential lies in the lower (upper) band gap, {\ie}, at quarter (three-quarter) filling, we obtain a quantum spin Hall insulator ($\Xi_{\text{2D}}=-1$). It is interesting to consider how other bulk probes for the $Z_2$ index lead to the conclusion of a trivial insulating state at half filling. For example, the $Z_{2}$ index can be probed by looking at the response to a magnetic $\pi$ flux \cite{Qi08,Ran08,Assaad13}. In the quantum spin Hall state, threading a $\delta$-function $\pi$ flux through the lattice amounts to generating a Kramers pair of states located at the middle of the gap. Provided that the particle number is kept constant during the adiabatic pumping of the $\pi$ flux, these mid-gap states give rise to a Curie law in the uniform spin susceptibility. This signature of the quantum spin Hall state has been detected in Ref.~\onlinecite{Assaad13} in the presence of correlations. 
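Because each projector $P_{k}$ is invariant under the phase choice of the occupied eigenvectors, the discretized projector product is manifestly gauge independent. A minimal sketch with a hypothetical gapped two-band model (not the $\pi$KM Hamiltonian) illustrates how the product converges to a unitary map between the occupied subspaces:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projector(k):
    """Projector P_k onto the lower band of a gapped two-band model (gap ~ 2.1)."""
    H = np.sin(k) * sx + 0.3 * sy + np.cos(k) * sz
    _, V = np.linalg.eigh(H)
    u = V[:, 0]                 # any phase choice works: P_k is gauge independent
    return np.outer(u, u.conj())

def evolution(k_from, k_to, N):
    """Discretized evolution U_{k,k'} as an ordered product of N projectors."""
    U = np.eye(2, dtype=complex)
    for k in np.linspace(k_from, k_to, N):
        U = projector(k) @ U
    return U

U_fine = evolution(0.0, np.pi, 4000)
U_coarse = evolution(0.0, np.pi, 100)
# Tr(U U^dag) -> 1: the restriction to the occupied subspace becomes unitary.
amp2 = float(np.trace(U_fine @ U_fine.conj().T).real)
amp2_coarse = float(np.trace(U_coarse @ U_coarse.conj().T).real)
```

For a single occupied band, $\mathrm{Tr}\,UU^{\dagger}=|w|^{2}$, where $w$ is the product of overlaps of neighboring eigenvectors along the path; $|w|\rightarrow 1$ as $N\rightarrow\infty$, independently of the eigenvector gauge at each $k$.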
For the half-filled $\pi$KM model, the insertion of a $\pi$ flux leads to a pair of Kramers degenerate states which form bonding and antibonding combinations and thereby cut off the Curie law at energy scales below the bonding-antibonding gap. \section{Bulk correlation effects}\label{sec_qmc_bulk} We begin our analysis of the effect of electron-electron interactions by considering the $\pi$KMH model~(\ref{eqn_KMH}) on a torus geometry. In order to compare our mean-field predictions to quantum Monte Carlo results, we set the Rashba spin-orbit coupling and the chemical potential to zero. \begin{figure} \includegraphics[width=0.5\textwidth]{meanfield_ftqmc.pdf} \caption{(Color online) (a) Phase diagram of the mean-field Hamiltonian~(\ref{eqn_Hmf}), showing the existence of a magnetically ordered phase with $xy$ magnetic order above a critical value $U_\text{c}$ that depends on the spin-orbit coupling $\lambda$. For $\lambda/t=0.5$, where the model has a quadratic band crossing point, magnetic order exists for any nonzero value of $U$. (b) Transverse magnetic structure factor $S^{xy}_{\text{AFM}}$ of the model~(\ref{eqn_KMH}) for different values of $U/t$, as obtained from quantum Monte Carlo simulations of the $\pi$KMH model on a $6 \times 6$ lattice with periodic boundary conditions and at inverse temperature $\beta t=40$. } \label{fig_ftqmc} \end{figure} The KMH model without additional $\pi$ fluxes is known to exhibit long-range, transverse antiferromagnetic order at large values of $U/t$ \cite{Rachel10, Hohenadler11,Zheng11,Laubach13}. We therefore decouple the Hubbard term in Eq.~(\ref{eqn_KMH}) in the spin sector, allowing for an explicit breaking of time-reversal symmetry.
The mean-field Hamiltonian reads \begin{equation}\label{eqn_Hmf} \mathcal{H}_{\text{mf}}=\mathcal{H}_{0} -\frac{2U}{3} \sum_{\bm{i}}\big(2\hat{S}_{\bm{i}}\langle\hat{S}_{\bm{i}}\rangle -\langle\hat{S}_{\bm{i}}\rangle^{2}\big) +\frac{UN}{2}\;, \end{equation} where $\mathcal{H}_{0}$ is given by Eq.~(\ref{eqn_KM}) with $\lambda_{\text{R}}=0$, and $\hat{S}_{\bm{i}}=(\hat{S}_{\bm{i}}^{x},\hat{S}_{\bm{i}}^{y},\hat{S}_{\bm{i}}^{z})$. Assuming antiferromagnetic order, we make the ansatz $\langle\hat{S}_{\bm{i}}\rangle=S_{\text{mf},\bm{i}}$ and \begin{eqnarray}\label{eqn_mfparam} S_{\text{mf},\bm{i}}^{x}&=&\nu_{\bm{i}}\,m\,,\;\; S_{\text{mf},\bm{i}}^{y,z}=0\,,\nonumber\\ S_{\text{mf},\bm{i}}^{x}&=&\frac{1}{Z} \frac{1}{2}\sum\limits_{s,s^{\prime}} \text{Tr}\left[ e^{-\beta \mathcal{H}_{\text{mf}}\{S_{\text{mf},\bm{i}}^{x} \}} \hat{c}_{\bm{i},s}^{\dagger}\sigma_{x}\hat{c}_{\bm{i},s^{\prime}}^{\phantom\dagger} \right], \end{eqnarray} where $\nu_{\bm{i}}=+1$ ($\nu_{\bm{i}}=-1$) if $\bm{i}$ indexes the orbitals $1,3$ ($2,4$). Equation~(\ref{eqn_mfparam}) is solved self-consistently, resulting in the phase diagram shown in Fig.~\ref{fig_ftqmc}(a). We find a magnetic phase with transverse antiferromagnetic order above a critical value of $U/t$ which depends on $\lambda/t$. In particular, at the quadratic band crossing point ($\lambda_{0}=0.5$), the magnetic transition occurs at infinitesimal values of $U/t$ as a result of the Stoner instability associated with the nonvanishing density of states at the Fermi level. Tuning the system away from the quadratic band crossing point, the critical interaction increases. To go beyond the mean-field approximation, we apply the auxiliary-field quantum Monte Carlo method discussed in Sec.~\ref{sec_qmc_method} to the $\pi$KMH model. 
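The self-consistency condition~(\ref{eqn_mfparam}) is solved by iterating to a fixed point. The structure of such an iteration can be seen in the simplest caricature, a single Weiss mean field obeying $m=\tanh(\beta J m)$ (a hypothetical stand-in for the full lattice problem, not the calculation performed here):

```python
import numpy as np

def solve_weiss(beta_J, m0=0.5, tol=1e-12, max_iter=100000):
    """Fixed-point iteration for the Weiss mean-field equation m = tanh(beta_J * m)."""
    m = m0
    for _ in range(max_iter):
        m_new = float(np.tanh(beta_J * m))
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

m_ordered = solve_weiss(2.0)   # beta_J > 1: the iteration flows to m != 0
m_trivial = solve_weiss(0.5)   # beta_J < 1: only the trivial solution m = 0
```

Above the mean-field transition ($\beta J>1$) the iteration converges to a nonzero magnetization, below it only the trivial solution survives; the lattice iteration for Eq.~(\ref{eqn_mfparam}) behaves analogously, with $U$ and $\lambda$ controlling where the ordered solution appears.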
We calculate the transverse antiferromagnetic structure factor \begin{equation} S^{xy}_{\text{AFM}} = \frac{1}{L^{2}} \sum_{\bm{i},\bm{j}} \nu_{\bm{i}}\,\nu_{\bm{j}} \langle \hat{S}^{+}_{\bm{i}}\hat{S}^{-}_{\bm{j}} + \hat{S}^{-}_{\bm{i}}\hat{S}^{+}_{\bm{j}}\rangle \end{equation} as a function of the interaction $U$ and the spin-orbit coupling $\lambda$. Simulations were done on a $6\times6$ $\pi$-flux honeycomb lattice (equivalent to $72$ honeycomb plaquettes). As shown in Fig.~\ref{fig_ftqmc}(b), for small $U/t$, the structure factor has a clear maximum close to $\lambda_{0}$, where the weak-coupling magnetic instability is observed in mean-field theory. At larger values of $U/t$, the maximum becomes less pronounced, and the enhancement of $S^{xy}_{\text{AFM}}$ for all values of $\lambda/t$ is compatible with the existence of a magnetic phase for all $\lambda/t$ at large $U/t$. These numerical results are consistent with the overall features of the mean-field phase diagram. The numerical determination of the exact phase boundaries from a systematic finite-size scaling is left for future work. \section{Edge states of the $\pi$KM model}\label{sec_nonint_edge} We now consider the edge states of the noninteracting $\pi$KM model~(\ref{eqn_KM}) on a zigzag ribbon with open (periodic) boundary conditions in the $\bm{a}_{1}$ ($\bm{a}_{2}$) direction [Fig.~\ref{fig_band_ribbon2}(a)], and with momentum $k=\bm{k}\cdot\bm{a}_{2}$ along the edge. Since the model is $Z_{2}$ trivial, we expect an even number of edge modes to traverse the bulk gap \cite{Wu06}. Furthermore, given the spin Chern number $\sigma^{s}_{xy}/(e/2\pi)=\pm 2$ [see Eq.~(\ref{eqn_sigmaxy})], we expect two helical edge modes at half filling. Figure~\ref{fig_band_ribbon2}(b) shows the eigenvalue spectrum with degenerate Kramers doublets at the time-reversal invariant momenta $k=0$ and $k=\pi$.
For $\lambda_{0}<\lambda/t<\lambda_{\pi}$, where $\lambda_{\pi}=\sqrt{3}/2$, the eigenvalue spectrum of Eq.~(\ref{eqn_CI}) has two additional cones at $k=\pi\pm\delta$. They are unstable in the sense that their existence relies on the $U(1)$ spin symmetry. \begin{figure} \includegraphics[width=0.5\textwidth]{band_ribbon.pdf} \caption{(Color online) (a) Ribbon geometry of the $\pi$ flux honeycomb lattice. In the spinful case, the edge states consist of two Kramers doublets with Fermi velocities $v_{0}$ and $v_{\pi}$. (b) Eigenvalue spectrum $E_{m}(k)$ of Eq.~(\ref{eqn_KM}) for $\lambda/t=0.3$ and $\lambda_{\text{R}}/t=0.1$ on a zigzag ribbon.\label{fig_band_ribbon2}} \end{figure} The edge modes at $k=0$ ($k=\pi$) and $\sigma=\uparrow,\downarrow$ can be further characterized by their Fermi velocity $v_{0}$ ($v_{\pi}$) and---in the case of a $U(1)$ spin symmetry---by their chirality (the sign of the velocity). The chirality changes at $\lambda_{0}$ and $\lambda_{\pi}$. For $\lambda/t<\lambda_{0}$, the edge modes have the same chirality, so that the ($0,\sigma$) modes propagate in the same direction as the ($\pi,\sigma$) modes. In contrast, for $\lambda_{0}<\lambda/t<\lambda_{\pi}$, they have opposite chirality since the direction of propagation of the ($0,\sigma$) modes is reversed after going through the point of quadratic band crossing. At $\lambda/t=\lambda_{\pi}$, the additional cones at $k=\pi\pm\delta$ merge with the ($\pi,\sigma$) modes. Consequently, the direction of propagation of the ($\pi,\sigma$) modes is reversed and for $\lambda/t>\lambda_{\pi}$ both edge modes have the same chirality again. In the limit $\lambda/t\rightarrow\infty$, $v_{0}$ and $v_{\pi}$ become equal. Furthermore, the velocities have equal magnitude but opposite sign at $\lambda/t=\lambda_{s}\approx 0.665$. 
To study the edge states, we consider the local single-particle spectral function \begin{equation} \label{eqn_spectral} A^{\sigma}_{i}(k,\omega)= -\frac{1}{\pi}\mathrm{Im}\;G_{ii}^{\sigma}(k,\omega+i0^{+})\;, \end{equation} where the local noninteracting Green function is \begin{equation} G_{ii}^{\sigma}(k,\omega+i0^{+})=\left[\omega+i0^{+}-H(k) \right]_{i\sigma,i\sigma}^{-1}\;. \end{equation} The edge corresponds to the orbital index $i=2$ [Fig.~\ref{fig_band}(a)] and for brevity we will omit the index $i$ in the following. The Fermi velocities $v_{0}$ and $v_{\pi}$ and the local spectral function are shown in Fig.~\ref{fig_aom_edge3} \footnote{The color schemes are based on gnuplot-colorbrewer; 10.5281/zenodo.10282.}. Similar phases, characterized by a trivial $Z_{2}$ index and two helical edge modes at $k=0,\pi$, have been found in the KM model with additional third-neighbor hopping terms \cite{Hung14}, and in the anisotropic Bernevig-Hughes-Zhang model \cite{Bernevig06,Jiang14}. \begin{figure} \includegraphics[width=0.5\textwidth]{aom_edge.pdf} \caption{(Color online) (a) The Fermi velocity $v_{0}$ ($v_{\pi}$) changes sign at $\lambda_{0}$ ($\lambda_{\pi}$) so that for $\lambda_{0}<\lambda<\lambda_{\pi}$, the ($0,\sigma$) and ($\pi,\sigma$) edge modes have opposite chirality. $\lambda_{s}$ defines a symmetric point where $v_{0}=-v_{\pi}$ holds. (b)--(d) Single-particle spectral function $A^{\uparrow}(k,\omega)$ along the edge. (e),(f) Spin-averaged single-particle spectral function $A(k,\omega)=\sum_{\sigma}A^{\sigma}(k,\omega)/2$ along the edge. Here, $\lambda_\text{R}=0$ in (a)--(d), and $\lambda_\text{R}/t=0.3$ in (e),(f). \label{fig_aom_edge3}} \end{figure} In the remainder of this section, we concentrate on the low-energy properties of the $\pi$KM model~(\ref{eqn_KM}). 
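Numerically, the infinitesimal $0^{+}$ in Eq.~(\ref{eqn_spectral}) becomes a finite broadening $\eta$, so that each pole of the Green function turns into a Lorentzian of unit weight. A short sketch with a small hypothetical Hamiltonian matrix (not the ribbon $H(k)$ of the text) verifies the sum rule $\int\mathrm{d}\omega\, A_{i}(\omega)=1$:

```python
import numpy as np

def spectral_function(H, omegas, i, eta=0.05):
    """A_i(w) = -(1/pi) Im [ (w + i*eta) 1 - H ]^{-1}_{ii} with finite broadening eta."""
    dim = H.shape[0]
    A = np.empty(len(omegas))
    for n, w in enumerate(omegas):
        G = np.linalg.inv((w + 1j * eta) * np.eye(dim) - H)
        A[n] = -G[i, i].imag / np.pi
    return A

# A small hypothetical Hamiltonian matrix (illustrative numbers only):
H = np.array([[0.0, 0.4, 0.0],
              [0.4, -0.8, 0.2],
              [0.0, 0.2, 1.1]])
omegas = np.linspace(-40.0, 40.0, 16001)
A = spectral_function(H, omegas, i=0)

# Sum rule: each orbital-resolved spectral function integrates to one
# (up to the Lorentzian tails cut off by the finite frequency window).
norm = float(np.sum(0.5 * (A[1:] + A[:-1]) * np.diff(omegas)))
```

The broadening $\eta$ must be resolved by the frequency grid; the small deficit in the sum rule, of order $2\eta/(\pi W)$ for a window $[-W,W]$, vanishes as the window grows.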
Furthermore, we focus on the edge modes at the time-reversal invariant momenta $k=0,\pi$, and neglect the two additional, unstable modes at $k=\pi\pm\delta$ occurring for $\lambda_{0}<\lambda/t<\lambda_{\pi}$ which are gapped out by any finite Rashba coupling. Then, the effective Hamiltonian can be written in terms of right (left) moving fields $R_{1}(x)$ [$L_{1}(x)$] at the Fermi wave vector $k_{\text{F}}^{(1)}=0$ and right (left) moving fields $R_{2}(x)$ [$L_{2}(x)$] at $k_{\text{F}}^{(2)}=\pi$: \begin{equation} \label{eqn_effham_x} \mathcal{H}=\int \mathrm{d}x\bm{\Psi}^{\dagger}(x) H_{\text{edge}}(-i\partial_{x}) \bm{\Psi}(x)\,, \end{equation} where $ \bm{\Psi}^{\dagger} (x) = ( R_{1}^{\dagger} (x) , L_{1}^{\dagger} (x) , R_{2}^{\dagger} (x) , L_{2}^{\dagger} (x) )$. The chiral fields have the anticommutation relations \begin{eqnarray} \label{eqn_anticom} \{R_{i}(x),R_{j}^{\dagger}(x^{\prime})\}&=& \{L_{i}(x),L_{j}^{\dagger}(x^{\prime})\} = \delta_{ij}\delta(x-x^{\prime})\,,\nonumber\\ \{R_{i}(x),L_{j}^{\dagger}(x^{\prime})\} &=& \{L_{i}(x),R_{j}^{\dagger}(x^{\prime})\}=0\;. \end{eqnarray} In the $U(1)$ spin symmetric case, we have \begin{equation} H_{\text{edge}}(-i\partial_{x}) = -i \partial_{x}\;\text{diag}(v_{1},v_{2})\otimes \sigma_{z}\;. \end{equation} Hamiltonian~(\ref{eqn_effham_x}) will be the starting point for the bosonization analysis in Sec.~\ref{sec_boson}. \subsection{Effective low-energy model}\label{subsec_effmodel} The edge of a two-dimensional bulk has two time-reversal invariant momenta, $k=0$ and $k=\pi$, and therefore several possibilities exist to have two pairs of helical edge states: (i) both Kramers doublets cross at $k=0$ (or $k=\pi$), (ii) one Kramers doublet crosses at $k=0$ while the other crosses at $k=\pi$, and (iii) each Kramers doublet has one branch at $-k$ (or $\pi-k$) and its time-reversed branch at $+k$ (or $\pi+k$). 
In cases (i) and (iii), degenerate states which are not Kramers partners exist at the same momentum and can be mixed by single-particle backscattering. The edge states (i) and (iii) are therefore unstable at the single-particle level. In contrast, the edge states (ii) are stable at the single-particle level if translation symmetry is preserved at the edge, thereby forbidding scattering between states at $k=0$ and $k=\pi$. The metallic edge modes of Eq.~(\ref{eqn_KM}) are an instance of case (ii). Given time-reversal symmetry and no interactions, the edge states remain gapless even in the generic case without $U(1)$ spin symmetry as long as translation symmetry and hence the momentum $k$ along the edge is preserved. On the other hand, the states acquire a gap when time-reversal symmetry is broken. This is the case in the presence of, for example, a Zeeman term that also breaks the $U(1)$ spin symmetry. To illustrate this point, we consider the most general time-reversal symmetric formulation of the model (\ref{eqn_effham_x}) in momentum space. Let $R_{i}^{\dagger}(p)$ [$L_{i}^{\dagger}(p)$] create an electron with velocity $v_{i}$ [$-v_{i}$] (where $v_{1}\equiv v_{0}$ and $v_{2}\equiv v_{\pi}$) and momentum $k = p + (i-1) \pi$. Then, Eq.~(\ref{eqn_effham_x}) reads \begin{equation} \label{eqn_effham_p} \mathcal{H}=\sum\limits_{p} \bm{\Psi}^{\dagger}(p) H_{\text{edge}}(p) \bm{\Psi}(p)\;, \end{equation} where $ \bm{\Psi}^{\dagger} (p) = ( R_{1}^{\dagger} (p) , L_{1}^{\dagger} (p) , R_{2}^{\dagger} (p) , L_{2}^{\dagger} (p) )$ and \begin{equation} \label{eqn_effham} H_{\text{edge}}(p) = H_{\text{SO}}(p) + H_{\text{S}}\;, \end{equation} where $H_{\text{SO}}(p)$ is a general spin-orbit term and $H_{\text{S}}$ a single-particle scattering term. Time-reversal symmetry is preserved when $\Theta H_{\text{edge}}(p)\Theta^{-1}=H_{\text{edge}}(-p)$, where $\Theta = \Gamma^{3}\Gamma^{5} K$. 
Here, $K$ denotes complex conjugation and the $\Gamma$ matrices were defined in Sec.~\ref{sec_model}. The spin-orbit coupling \begin{equation} H_{\text{SO}}= p\left( \begin{array}{cc} v_{1} \bm{\sigma}\cdot\bm{e}_1 & 0\\ 0 & v_{2} \bm{\sigma}\cdot\bm{e}_2 \end{array} \right)=H_{U(1)}(p)+H_{\text{R}}(p) \end{equation} can be split into a $U(1)$ spin-symmetric term, $H_{U(1)}(p)$, and a Rashba term, $H_{\text{R}}(p)$. The (not necessarily equal) spin quantization axes are labeled by real unit vectors $\bm{e}_{i}$. Choosing $\bm{e}_{i}$ to point along the $z$-axis one may write the $U(1)$ spin symmetric part as \begin{equation} H_{U(1)}(p) = p \left( \begin{array}{cc} v_{1}^{\phantom{x}} \sigma_{z} e_{1}^{z} & 0 \\ 0 & v_{2}^{\phantom{x}} \sigma_{z} e_{2}^{z} \end{array} \right) = p\left( v_{+}\Gamma^{15} + v_{-} \Gamma^{34}\right)\,, \end{equation} where $ v_{\pm} = ( v_{1}^{\phantom{x}} e_{1}^{z} \pm v_{2}^{\phantom{x}} e_{2}^{z})/2 $. Note that the generator of the $U(1)$ spin symmetry is $\Gamma^{34}=\openone\otimes\sigma_{z}$. One way to break the $U(1)$ spin symmetry is to include the Rashba term $H_{\text{R}}(p)$ by setting $\bm{e}_{1}\neq \bm{e}_{2}$. This can be accomplished by choosing, for example, $\bm{e}_{1}=(0,0,e_{1}^{z})^{T}$ and $\bm{e}_{2}=(e_{2}^{x},e_{2}^{y},e_{2}^{z})^{T}$, leading to \begin{eqnarray} \label{eqn_rashba} H_\text{R}(p) &=& p v_{2} \left( \begin{array}{cc} 0 & 0 \\ 0 & \sigma_{x} e_{2}^{x} + \sigma_{y} e_{2}^{y} \end{array} \right) \\\nonumber &=& \frac{p v_{2}}{2}\left[(\Gamma^{45}-\Gamma^{13}) e_{2}^{x} - (\Gamma^{35}+\Gamma^{14})e_{2}^{y}\right]\,. \end{eqnarray} $H_{\text{S}}$ breaks the translation symmetry of the bulk model in the sense that it allows single-particle scattering between the $i = 1$ and $i=2$ branches of the low-energy model. 
Its general, time-reversal symmetric form is \begin{eqnarray} H_{\text{S}} &=&\left( \begin{array}{cc} 0 & h_{\text{S}}\\ h_{\text{S}}^{\star} & 0 \end{array} \right) = \alpha_1 \Gamma^1 + \alpha_3 \Gamma^3 + \alpha_4 \Gamma^4 + \alpha_5\Gamma^5\nonumber\\ &=& H_{\text{S},U(1)} + H_{\text{S}^{\prime}}\;, \end{eqnarray} where $h_{\text{S}}$ denotes the corresponding complex $2\times 2$ matrix and $\alpha_i\in \mathbb{R}$. Note that $H_{\text{S}}$ generally breaks the $U(1)$ spin symmetry since $[H_{\text{S}},\Gamma^{34}]=2i(\alpha_{4}\Gamma^{3}-\alpha_{3}\Gamma^{4})$. Therefore, we write it as the sum of a symmetry-preserving term, $H_{\text{S},U(1)}=\alpha_{1}\Gamma^{1} + \alpha_{5}\Gamma^{5}$, and a symmetry-breaking term, $H_{\text{S}^{\prime}}=\alpha_{3}\Gamma^{3} + \alpha_{4}\Gamma^{4}$. We consider the following three cases: (a) unbroken translation symmetry and unbroken $U(1)$ spin symmetry, (b) broken translation symmetry but unbroken spin symmetry, and (c) broken translation symmetry and broken spin symmetry. In case (a), we have $H_{\text{S}}=0$, and $U(1)$ spin symmetry amounts to $\bm{e}_{1} = \bm{e}_{2}$. This implies $H_{\text{R}}(p)=0$, so that \begin{equation} \label{eqn_effham_a} H_{\text{edge}}^{(a)}(p)=H_{U(1)}(p)\;. \end{equation} The spectrum of $H_{\text{edge}}^{(a)}(p)$ is gapless, as shown in Fig.~\ref{fig_eff_edge}(a). \begin{figure} \includegraphics[width=0.5\textwidth]{eff_edge.pdf} \caption{Spectrum $E_{\pm}(p)$ of the effective model (\ref{eqn_effham}), with $v_{1} =1$, $v_{2} = 0.5$, and $\bm{e}_1 =\bm{e}_2 = \bm{e}_z$. (a) Both translation symmetry and $U(1)$ spin symmetry are preserved ($\alpha_{i}=0$). (b) Translation symmetry is broken, but $U(1)$ spin symmetry is preserved ($\alpha_1 = 0.2$, $\alpha_5 = 0.1$, $\alpha_3 =\alpha_4 =0 $). (c) Both translation symmetry and $U(1)$ spin symmetry are broken ($\alpha_1 = 0.2$, $\alpha_5 = 0.1$, $\alpha_3=0.1$, $\alpha_4 =0.05$). 
\label{fig_eff_edge}} \end{figure} In case (b), we have \begin{equation} \label{eqn_effham_b} H_{\text{edge}}^{(b)}(p)=H_{U(1)}(p) + H_{\text{S},U(1)}\,, \end{equation} and the spectrum, shown in Fig.~\ref{fig_eff_edge}(b), has two cones centered at $p_{0}=\pm\sqrt{(\alpha_1^2 + \alpha_5^2)/(v_+^2 - v_-^2)}$, with the linearized dispersion \begin{equation} E_{\pm}(p)=\pm \frac{v_+^2 - v_-^2}{v_+} (p \pm p_0) + \mathcal{O}(p^{2})\;. \end{equation} This illustrates that, as long as spin is conserved, the breaking of translation symmetry does not gap out the edge states. Finally, case (c) can be realized by adding the Rashba term (\ref{eqn_rashba}) to Eq.~(\ref{eqn_effham_b}) or, alternatively, by considering \begin{equation} \label{eqn_effham_c} H_{\text{edge}}^{(c)}(p)=H_{U(1)}(p)+H_{\text{S}}\;, \end{equation} where $\alpha_{i}\neq 0$. The resulting spectrum is gapped, see Fig.~\ref{fig_eff_edge}(c). Returning to the original $\pi$KM model~(\ref{eqn_KM}), we expect the combination of disorder (which breaks translation symmetry) and Rashba spin-orbit coupling to open a gap in the edge states. We have measured the spin polarization carried by the helical edge modes as a function of disorder strength and using twisted boundary conditions \cite{Sheng06}. Although the pair of Kramers doublets is in general not protected from localization by disorder, the spin polarization takes on finite values up to sizable disorder strengths. We attribute this finding to strong finite-size effects. The question of edge state destruction by disorder deserves further investigation. 
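The distinction between cases (b) and (c) can be checked numerically. The sketch below uses an illustrative parameterization in the helical basis $(R_{1}\!\uparrow, L_{1}\!\downarrow, R_{2}\!\uparrow, L_{2}\!\downarrow)$, with a single spin-conserving amplitude $\alpha$ (standing in for $\alpha_{1},\alpha_{5}$) and a representative time-reversal-allowed spin-mixing amplitude $\gamma$ (standing in for $\alpha_{3},\alpha_{4}$); it is not the exact $\Gamma$-matrix parameterization of Eq.~(\ref{eqn_effham}):

```python
import numpy as np

def h_edge(p, v1, v2, alpha, gamma):
    """Illustrative 4x4 edge Hamiltonian in the helical basis (R1, L1, R2, L2).

    alpha: spin-conserving scattering between the i=1 and i=2 branches,
    gamma: a representative time-reversal-allowed spin-mixing amplitude."""
    return np.array([[ v1 * p,     0.0,   alpha,   gamma],
                     [    0.0, -v1 * p, -gamma,   alpha],
                     [  alpha,  -gamma,  v2 * p,     0.0],
                     [  gamma,   alpha,     0.0, -v2 * p]])

v1, v2 = 1.0, 0.5
ps = np.linspace(-1.0, 1.0, 20001)

def min_abs_energy(alpha, gamma):
    """Smallest |E| over the momentum grid: ~0 signals a gapless spectrum."""
    return min(np.abs(np.linalg.eigvalsh(h_edge(p, v1, v2, alpha, gamma))).min()
               for p in ps)

gap_b = min_abs_energy(0.2, 0.0)   # case (b): only spin-conserving scattering
gap_c = min_abs_energy(0.2, 0.1)   # case (c): spin symmetry broken as well

# Time reversal, U_T K with U_T = 1 (x) (-i sigma_y), maps H(p) to H(-p):
E = np.array([[0.0, -1.0], [1.0, 0.0]])
UT = np.kron(np.eye(2), E)
assert np.allclose(UT @ h_edge(0.37, v1, v2, 0.2, 0.1).conj() @ UT.T,
                   h_edge(-0.37, v1, v2, 0.2, 0.1))
```

With $\gamma=0$, the lowest $|E|$ vanishes at $p_{0}=\alpha/\sqrt{v_{1}v_{2}}$, consistent with $p_{0}=\sqrt{(\alpha_{1}^{2}+\alpha_{5}^{2})/(v_{+}^{2}-v_{-}^{2})}$ since $v_{+}^{2}-v_{-}^{2}=v_{1}v_{2}$ for $\bm{e}_{1}=\bm{e}_{2}=\bm{e}_{z}$. Any nonzero $\gamma$ removes all zero-energy solutions, since $\det H_{\text{edge}}(p)=(v_{1}v_{2}p^{2}-\alpha^{2}+\gamma^{2})^{2}+4\alpha^{2}\gamma^{2}>0$ in this parameterization.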
\subsection{Low-energy spin symmetries at $\lambda/t=\lambda_{s}$ and for $\lambda/t\rightarrow \infty$}\label{subsec_symmetry} \begin{figure} \subfigure[]{ \includegraphics[width=0.225\textwidth]{edge_modes_oppos_v.pdf} \label{fig_oppos_v} } \subfigure[]{ \includegraphics[width=0.225\textwidth]{edge_modes_equal_v.pdf} \label{fig_equal_v} } \caption{The ($0,\sigma$) and ($\pi,\sigma$) edge modes at (a) $\lambda/t=\lambda_{s}$ where $v_{0,\sigma}=-v_{\pi,\sigma}$, (b) $\lambda/t\rightarrow\infty$ where $v_{0,\sigma}=v_{\pi,\sigma}$. \label{fig_edge_vel}} \end{figure} In the following, we focus on two values of the intrinsic spin-orbit coupling, $\lambda/t=\lambda_{s}$ and $\lambda/t\rightarrow\infty$, where the velocities of the ($0,\sigma$) and the ($\pi,\sigma$) modes obey $v_{0,\sigma}=-v_{\pi,\sigma}$ and $v_{0,\sigma}=v_{\pi,\sigma}$, respectively (see Fig.~\ref{fig_edge_vel}). The corresponding low-energy Hamiltonians are \begin{equation} \label{eqn_effham_s} H_{\text{edge}}^{s}(-i\partial_{x})=-i\partial_{x} v \left( \begin{array}{cc} \sigma_{z} & 0 \\ 0 & - \sigma_{z} \end{array} \right) =-i\partial_{x} v \Gamma^{15}\;, \end{equation} where $\bm{\Psi}^{\dagger}_{s} (x) = ( R_{1}^{\dagger} (x) , L_{1}^{\dagger} (x) , L_{2}^{\dagger} (x) , R_{2}^{\dagger} (x) )$, and \begin{equation} \label{eqn_effham_infty} H_{\text{edge}}^{\infty}(-i\partial_{x})=-i\partial_{x} v \left( \begin{array}{cc} \sigma_{z} & 0 \\ 0 & \sigma_{z} \end{array} \right) =-i\partial_{x} v \Gamma^{34}\;, \end{equation} where $\bm{\Psi}^{\dagger}_{\infty} (x) = ( R_{1}^{\dagger} (x) , L_{1}^{\dagger} (x) , R_{2}^{\dagger} (x) , L_{2}^{\dagger} (x) )$. While the $SU(2)$ spin symmetry is obviously broken, we show in the following that a chiral $SU(2)$ symmetry exists for $\lambda/t=\lambda_{s}$. 
The electron annihilation operator $\hat{c}_{\sigma}(x)$ can be written in terms of the fields $R_{i}(x)$ and $L_{i}(x)$ \cite{Senechal99}, \begin{eqnarray} \label{eqn_cop} \hat{c}_{\uparrow}(x) &=& \left[ R_{1}(x) e^{-i k_{\text{F}}^{(1)} x} + Y_{2}(x) e^{-i k_{\text{F}}^{(2)} x}\right]/\sqrt{2}\;,\nonumber\\ \hat{c}_{\downarrow}(x) &=& \left[ L_{1}(x) e^{-i k_{\text{F}}^{(1)} x} + \bar{Y}_{2}(x) e^{-i k_{\text{F}}^{(2)} x}\right]/\sqrt{2}\;, \end{eqnarray} where $k_{\text{F}}^{(1)}=0$, $k_{\text{F}}^{(2)}=\pi$. For $\lambda/t=\lambda_{s}$, the $i=1$ and $i=2$ modes have opposite helicity, so $Y_{2}(x)=L_{2}(x)$ and $\bar{Y}_{2}(x)=R_{2}(x)$. For $\lambda/t\rightarrow\infty$, we have $Y_{2}(x)=R_{2}(x)$ and $\bar{Y}_{2}(x)=L_{2}(x)$. The fermionic anticommutation relations follow from Eq.~(\ref{eqn_anticom}). The spin operators can be expressed for both cases as \begin{eqnarray} \label{eqn_spinop} \hat{S}^{a}(x) &=& \frac{1}{2}\sum\limits_{\sigma,\sigma^{\prime}} \hat{c}_{\sigma}^{\dagger}(x)\sigma^{a}_{\sigma,\sigma^{\prime}} \hat{c}_{\sigma^{\prime}}^{\phantom\dagger}(x)\nonumber\\ &=& \frac{1}{4}\sum\limits_{\sigma,\sigma^{\prime}}\Psi^{\dagger}_{\sigma}(x) s^{a}_{\sigma,\sigma^{\prime}} \Psi^{\phantom\dagger}_{\sigma^{\prime}}(x)\;, \end{eqnarray} with the constraint of single occupancy, $\hat{c}^{\dagger}_{\uparrow}(x)\hat{c}^{\phantom\dagger}_{\uparrow}(x) +\hat{c}^{\dagger}_{\downarrow}(x)\hat{c}^{\phantom\dagger}_{\downarrow}(x)=1$. The matrices $s^{a}$ are given by \begin{eqnarray} \label{eqn_spinmat} s^{x}&=& \openone \otimes \sigma_{x} + \left(\sigma_{x}\otimes\sigma_{x}\right) e^{i\pi x} = \Gamma^{45}-\Gamma^{23}e^{i\pi x}\nonumber\,,\\ s^{y}&=& \openone \otimes \sigma_{y} + \left(\sigma_{x}\otimes\sigma_{y}\right) e^{i\pi x} = -\Gamma^{35}-\Gamma^{24}e^{i\pi x}\nonumber\,,\\ s^{z}&=& \openone \otimes \sigma_{z} + \left(\sigma_{x}\otimes\sigma_{z}\right) e^{i\pi x} = \Gamma^{34}-\Gamma^{25}e^{i\pi x}\;. 
\end{eqnarray} They have the commutation relation $[s^{a}/4,s^{b}/4] = i \epsilon^{abc} (s^{c}/4)$. Apart from the spin operators, Eq.~(\ref{eqn_spinop}), there are three additional operators which have the commutation relations of the $su(2)$ Lie algebra. These operators are represented by the matrices \begin{equation} \label{eqn_chiral} \Sigma_{x}\equiv \Gamma^{23}\;,\;\;\Sigma_{y}\equiv \Gamma^{24}\;,\;\;\Sigma_{z}\equiv \Gamma^{34}\;, \end{equation} which appear in Eq.~(\ref{eqn_spinmat}) and satisfy $[\Sigma_{a}/2,\Sigma_{b}/2] = i \epsilon^{abc} (\Sigma_{c}/2)$. They are related to the additional chiral degree of freedom which is introduced by the edge mode `orbitals' taking the values $i=1,2$. For $\lambda/t=\lambda_{s}$, all three generators $\Sigma_{a}$ are symmetries of the low-energy Hamiltonian (\ref{eqn_effham_s}), \ie, $[H_{\text{edge}}^{s},\Sigma_{a}]=0$, whereas for $\lambda/t\rightarrow\infty$, this is only true for $\Sigma_{z}$. Therefore, and apart from the spin symmetry, a chiral $SU(2)$ symmetry is present for $\lambda/t=\lambda_{s}$ which turns into a chiral $U(1)$ symmetry for $\lambda/t\rightarrow\infty$. We define a rotation by $\pi/2$, described by \begin{eqnarray} \label{eqn_rot} U_{a}=\text{exp}\left[-i(\pi/4)\Sigma_{a}\right]=(\openone -i\Sigma_{a})/\sqrt{2}\;. \end{eqnarray} Then, $U_{a}^{\dagger}\hat{S}^{b}(x)U_{a}=M_{ab}$ is the rotation by $\pi/2$ of the spin component $\hat{S}^{b}(x)$ around the $\bm{e}_{a}$ axis, where \begin{equation} \label{eqn_rot2} M=\left( \begin{array}{c c c} \hat{S}^{x}(x) & e^{i\pi x}\hat{S}^{z}(x) & -e^{i\pi x}\hat{S}^{y}(x) \\ -e^{i\pi x}\hat{S}^{z}(x) & \hat{S}^{y}(x) & e^{i\pi x}\hat{S}^{x}(x) \\ -\hat{S}^{y}(x) & \hat{S}^{x}(x) & \hat{S}^{z}(x) \end{array} \right)\;. 
\end{equation} In particular, we obtain the relations \begin{eqnarray} \label{eqn_symrel} U_{x}^{\dagger}\hat{S}^{z}(x)U_{x} &=& -e^{i\pi x} \hat{S}^{y}(x)\;,\nonumber\\ U_{y}^{\dagger}\hat{S}^{z}(x)U_{y} &=& e^{i\pi x} \hat{S}^{x}(x)\;,\nonumber\\ U_{z}^{\dagger}\hat{S}^{y}(x)U_{z} &=& \hat{S}^{x}(x)\;. \end{eqnarray} We now consider the static spin structure factor \begin{equation} \label{eqn_spinspin} S^{a}(q)=\frac{1}{\sqrt{N}}\sum\limits_{x} e^{-iqx}\langle \hat{S}^{a}(x) \hat{S}^{a}(0)\rangle\;, \end{equation} where the expectation value is taken with respect to the respective effective Hamiltonian, Eq.~(\ref{eqn_effham_s}) or (\ref{eqn_effham_infty}). Using the symmetry relations~(\ref{eqn_symrel}) we get \begin{eqnarray} \label{eqn_symrel2} S^{z}(q) &=& S^{x}(q+\pi)\quad \text{for}\,\lambda/t=\lambda_{s}\;,\nonumber\\ S^{x}(q) &=& S^{y}(q) \quad \hspace*{1.9em} \text{for}\,\lambda/t=\lambda_{s}\,\text{and}\,\lambda/t\rightarrow\infty \;. \end{eqnarray} Equation~(\ref{eqn_symrel2}) relates the longitudinal and transverse components of the spin-spin correlation functions. In Sec.~\ref{sec_qmc_ribbon}, we numerically show that this low-energy symmetry is preserved in the presence of interactions. It is therefore an emergent symmetry of the interacting $\pi$KMH model~(\ref{eqn_KMH}). However, because the chiral generators [Eq.~(\ref{eqn_chiral})] do not commute with the Rashba term [\eg, Eq.~(\ref{eqn_rashba})], this symmetry relies on the $U(1)$ spin symmetry. \section{Bosonization for the edge states}\label{sec_boson} At low energies, the edge states of the $\pi$KMH model~(\ref{eqn_KMH}) can be described in terms of a two-component \cite{Tanaka09,Orignac11, Tada12, Chung14} Tomonaga-Luttinger liquid \cite{Delft98,Senechal99}. The Tomonaga-Luttinger liquid is the stable low-energy fixed point of gapless interacting systems in one dimension \cite{Haldane81}.
We consider the free Hamiltonian with two left and two right movers, forward scattering within the $i=1$ and $i=2$ branches (intra-forward scattering of strength $g_{f}^{(i)}$), and between the branches (inter-forward scattering of strength $g_{f}^{\prime}$). We focus on the case of two pairs of edge modes crossing at $k=0$ and $k=\pi$, respectively, since only those are protected by time-reversal symmetry. In the following, we show that at half filling umklapp scattering between the edge modes is a relevant perturbation in the sense of the renormalization group (RG). It can drive the model away from the Luttinger liquid fixed point and open gaps in the low-energy spectrum. We consider the following kinetic and interaction terms, \begin{eqnarray} \label{eqn_ham_int1} \mathcal{H}&=& \sum\limits_{i=1}^{2}\Big[ v_{i}\int \mathrm{d}x\left(L_{i}^{\dagger} (i\partial_{x}) L_{i}^{\phantom\dagger} + R_{i}^{\dagger} (-i\partial_{x}) R_{i}^{\phantom\dagger}\right)\nonumber\\ &&\quad+ g_{f}^{(i)}\int \mathrm{d}x\,\rho_{i}^2\Big] + g_{f}^{\prime}\int \mathrm{d}x\,\rho_{1}\rho_{2}\;, \end{eqnarray} where $L_{i}$ ($R_{i}$) are the left (right) moving fields, and $\rho_{i}=R_{i}^{\dagger}R_{i}+L_{i}^{\dagger}L_{i}$ is the electronic density. To bosonize the above Hamiltonian~(\ref{eqn_ham_int1}), we introduce the bosonic fields $\phi_{i}(x)$, with $\partial_{x}\phi_{i}=\pi\rho_{i}$, and $\Pi_{i}=R_{i}^{\dagger}R_{i}-L_{i}^{\dagger}L_{i}$, where $\left[\phi_{i}(x),\Pi_{i^{\prime}}(x^{\prime})\right]=i\delta_{i,i^{\prime}}\delta(x-x^{\prime})$.
We then have \begin{eqnarray} \label{eqn_boson_1} \mathcal{H}&=& \frac{1}{2\pi}\int \mathrm{d}x \sum\limits_{i=1}^{2} \left[ v_{i}\left(\pi\Pi_{i}\right)^2 + v_{i} K_{i}^{-2}\left(\partial_{x}\phi_{i}\right)^{2}\right]\nonumber\\ &\quad&+ \frac{g_{f}^{\prime}}{\pi^{2}}\int \mathrm{d}x\,\partial_{x}\phi_{1}\partial_{x}\phi_{2}\nonumber\\ &=& \frac{1}{2\pi}\int \text{d}x \left[ \pi^{2} \Pi^{T}M\Pi + \left(\partial_{x}\phi\right)^{T} N \partial_{x}\phi \right] \;, \end{eqnarray} where $K_{i}=(1+2g_{f}^{(i)}/\pi v_{i})^{-1/2}$ is a dimensionless parameter. In the last line, we defined $\Pi=(\Pi_{1},\Pi_{2})^{T}$, $\phi=(\phi_{1},\phi_{2})^{T}$, and \begin{equation} M= \left( \begin{array}{cc} v_{1} & 0 \\ 0 & v_{2} \end{array} \right)\,,\, N=\frac{1}{\pi} \left( \begin{array}{cc} \pi v_{1}+2g_{f}^{(1)} & g_{f}^{\prime} \\ g_{f}^{\prime} & \pi v_{2}+2g_{f}^{(2)} \end{array} \right)\,, \end{equation} using the notation of Orignac \textit{et~al.}~\cite{Orignac10,Orignac11}. The off-diagonal elements in $M$ are zero, since there is no single-particle scattering from the $i=1$ to the $i=2$ cone. 
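The decoupling carried out in the next step can be verified numerically. The sketch below (parameter values are illustrative, not taken from the paper) diagonalizes $M^{1/2}NM^{1/2}$, checks the closed-form eigenvalues $\Delta_{ii}$ quoted further down, and confirms that the transformation matrices $P=M^{-1/2}S\Delta^{1/4}$ and $Q=M^{1/2}S\Delta^{-1/4}$ introduced below both diagonalize the Hamiltonian and preserve the canonical commutators ($P^{T}Q=\openone$).

```python
import numpy as np

# Illustrative weak couplings and velocities (not from the paper)
v1, v2 = 1.0, 0.8
g1, g2, gp = 0.10, 0.15, 0.05   # g_f^(1), g_f^(2), g_f'

M = np.diag([v1, v2])
N = (1.0 / np.pi) * np.array([[np.pi * v1 + 2 * g1, gp],
                              [gp, np.pi * v2 + 2 * g2]])

Msq = np.diag([np.sqrt(v1), np.sqrt(v2)])
A = Msq @ N @ Msq                 # symmetric matrix M^{1/2} N M^{1/2}
evals, S = np.linalg.eigh(A)      # Delta = S^T A S, S orthogonal

# Closed-form eigenvalues Delta_ii (the +/- expression given below)
avg = (v1 * N[0, 0] + v2 * N[1, 1]) / 2
rad = np.sqrt(((v1 * N[0, 0] - v2 * N[1, 1]) / 2) ** 2 + v1 * v2 * N[0, 1] ** 2)
assert np.allclose(sorted(evals), sorted([avg - rad, avg + rad]))

P = np.linalg.inv(Msq) @ S @ np.diag(evals ** 0.25)
Q = Msq @ S @ np.diag(evals ** -0.25)

# Canonical commutators preserved: P^T Q = 1
assert np.allclose(P.T @ Q, np.eye(2))
# Decoupled form: Q^T N Q = P^T M P = Delta^{1/2}
assert np.allclose(Q.T @ N @ Q, np.diag(np.sqrt(evals)))
assert np.allclose(P.T @ M @ P, np.diag(np.sqrt(evals)))
print("decoupling and commutator preservation verified")
```

Since $S$ is orthogonal, $P^{T}Q=\Delta^{1/4}S^{T}S\Delta^{-1/4}=\openone$, which is exactly the condition checked above.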
Hamiltonian (\ref{eqn_boson_1}) is decoupled by rescaling the fields: \begin{eqnarray} \label{eqn_boson_2} \mathcal{H}&=& \frac{1}{2\pi}\int \text{d}x \left[ \pi^{2} \Pi^{\prime T}\Pi^{\prime} + \left(\partial_{x}\phi^{\prime}\right)^{T} M^{1/2} N M^{1/2} \partial_{x}\phi^{\prime} \right]\nonumber\\ &=& \frac{1}{2\pi}\int \text{d}x \left[ \pi^{2} \Pi^{\prime\prime T}\Pi^{\prime\prime} + \left(\partial_{x}\phi^{\prime\prime}\right)^{T} \Delta \partial_{x}\phi^{\prime\prime} \right]\nonumber\\ &=& \frac{1}{2\pi}\int \text{d}x \sum\limits_{i=1}^{2}\Delta_{ii}^{1/2}\left[ \pi^{2} \widetilde{\Pi}_{i}^{2} + \left(\partial_{x}\widetilde{\phi}_{i}\right)^{2} \right]\,, \end{eqnarray} where $\Pi^{\prime}=M^{1/2}\Pi$, $\phi^{\prime}=M^{-1/2}\phi$, $\Pi^{\prime\prime}=S^{-1}\Pi^{\prime}$, $\phi^{\prime\prime}=S^{-1}\phi^{\prime}$, $\widetilde{\Pi}=\Delta^{-1/4}\Pi^{\prime\prime}$, and $\widetilde{\phi}=\Delta^{1/4}\phi^{\prime\prime}$. $\Delta$ is a diagonal matrix and $S$ a rotation, defined via $\Delta=S^{-1}M^{1/2} N M^{1/2}S$. Therefore, the linear transformation to the new bosonic fields $\widetilde\Pi$ and $\widetilde\phi$ is $\Pi=M^{-1/2} S \Delta^{1/4}\widetilde{\Pi}\equiv P\widetilde{\Pi}$ and $\phi=M^{1/2} S \Delta^{-1/4}\widetilde{\phi}\equiv Q\widetilde{\phi}$. The canonical commutation relations are preserved, since \begin{eqnarray} \left[\widetilde{\phi}_{i}(x),\widetilde{\Pi}_{i^{\prime}}(x^{\prime})\right] &=& \sum\limits_{k,k^{\prime}} Q^{-1}_{i,k} \left(P^{-1}\right)_{k^{\prime},i^{\prime}}^{T} \left[\phi_{k}(x),\Pi_{k^{\prime}}(x^{\prime})\right]\nonumber\\ &=& i\delta_{i,i^{\prime}}\delta(x-x^{\prime})\;.
\end{eqnarray} We have \begin{equation} Q= \left( \begin{array}{cc} S_{11} v_{1}^{1/2}\Delta_{11}^{-1/4} & S_{12} v_{1}^{1/2} \Delta_{22}^{-1/4} \\ S_{21} v_{2}^{1/2}\Delta_{11}^{-1/4} & S_{22} v_{2}^{1/2} \Delta_{22}^{-1/4} \end{array} \right)\;, \end{equation} \begin{eqnarray} \Delta_{ii}&=&\frac{v_{1}N_{11} + v_{2}N_{22}}{2}\nonumber\\ \quad&&\pm\left[\left(\frac{v_{1}N_{11} - v_{2}N_{22}}{2}\right)^{2} + v_{1}v_{2} N_{12}^{2}\right]^{1/2}\,, \end{eqnarray} and, for $g_{f}^{\prime}\neq 0$, \begin{equation} S= \left( \begin{array}{cc} \frac{\text{sgn}(g_{f}^{\prime})}{\sqrt{1+s_{1}}} & \frac{\text{sgn}(g_{f}^{\prime})}{\sqrt{1+s_{2}}} \\ \frac{\text{sgn}\left(\Delta_{11}-v_{1}N_{11}\right)}{\sqrt{1+s_{1}^{-1}}} & \frac{\text{sgn}\left(\Delta_{22}-v_{1}N_{11}\right)}{\sqrt{1+s_{2}^{-1}}} \end{array} \right)\;, \end{equation} where $s_{i}=(\Delta_{ii}-N_{11}v_{1})^{2}/v_{1}v_{2}N_{12}^{2}$. For $g_{f}^{\prime}=0$, $S=\openone$. \begin{figure} \subfigure[\;$g_{u}^{(1)}$ and $g_{u}^{(2)}$]{ \includegraphics[width=0.225\textwidth]{scattering_sketch_gu.pdf} \label{fig_gu} } \subfigure[\;$g_{u,1}^{\prime}$]{ \includegraphics[width=0.225\textwidth]{scattering_sketch_gup1.pdf} \label{fig_gup1} } \subfigure[\;$g_{u,2}^{\prime}$]{ \includegraphics[width=0.225\textwidth]{scattering_sketch_gup2.pdf} \label{fig_gup2} } \caption{The edge modes cross at $k=0$ and $k=\pi$ with in general nonequivalent Fermi velocities $v_{1}$ and $v_{2}$. We consider the intra-umklapp scattering process (a), and the inter-umklapp scattering processes (b) and (c). \label{fig_scattering}} \end{figure} We consider the following interactions as perturbations to Eq.~(\ref{eqn_boson_2}): intra-umklapp scattering of strength $g_{u}^{(i)}$ [Fig.~\ref{fig_gu}], inter-umklapp scattering of strength $g_{u,1}^{\prime}$ [Fig.~\ref{fig_gup1}], and inter-umklapp scattering of strength $g_{u,2}^{\prime}$ [Fig.~\ref{fig_gup2}].
These processes are described by \begin{eqnarray} \mathcal{H}^{\prime} &=& \sum\limits_{i=1}^{2}g_{u}^{(i)}\int\mathrm{d}x\; L_{i}^{\dagger}(x) L_{i}^{\dagger}(x+a) R_{i}^{\phantom\dagger}(x) R_{i}^{\phantom\dagger}(x+a)\nonumber\\ &\quad&\quad\times e^{i 4 k_{\text{F}}^{(i)}x}\nonumber\\ &\quad&+ g_{u,1}^{\prime}\int\mathrm{d}x\; L_{1}^{\dagger}(x) L_{2}^{\dagger}(x) R_{1}^{\phantom\dagger}(x) R_{2}^{\phantom\dagger}(x) \;e^{i 2 (k_{\text{F}}^{(1)} +k_{\text{F}}^{(2)}) x}\nonumber\\ &\quad&+ g_{u,2}^{\prime}\int\mathrm{d}x\; L_{1}^{\dagger}(x) R_{2}^{\dagger}(x) L_{2}^{\phantom\dagger}(x) R_{1}^{\phantom\dagger}(x) \;e^{i 2 (k_{\text{F}}^{(1)} - k_{\text{F}}^{(2)}) x}\nonumber\\ &\quad&+ \mathrm{H.c.} \label{eqn_interact1} \end{eqnarray} The fermionic operators are $R_{i}=\text{exp}(-i\phi_{R,i})/\sqrt{2\pi}$ and $L_{i}=\text{exp}(i\phi_{L,i})/\sqrt{2\pi}$, omitting the Klein factors, and we have $\phi_{i}=(\phi_{R,i}+\phi_{L,i})/2$. We take $4k_{\text{F}}^{(i)}x=2 (k_{\text{F}}^{(1)} +k_{\text{F}}^{(2)})x=2 (k_{\text{F}}^{(1)}- k_{\text{F}}^{(2)})x=2\pi n$, corresponding to half-filled bands. Then, \begin{eqnarray} \mathcal{H}^{\prime} &=& \sum\limits_{i=1}^{2}\frac{g_{u}^{(i)}}{2\pi^{2}}\int\mathrm{d}x\; \text{cos}\left(4\phi_{i}\right) + \frac{g_{u,1}^{\prime}}{2\pi^{2}}\int\mathrm{d}x\; \text{cos}\left[2\left(\phi_{1}+\phi_{2}\right)\right] \nonumber\\ &\quad&+ \frac{g_{u,2}^{\prime}}{2\pi^{2}}\int\mathrm{d}x\;\text{cos}\left[2\left(\phi_{1}-\phi_{2}\right)\right]\;. 
\label{eqn_interact2} \end{eqnarray} We now consider $\mathcal{H}+\mathcal{H}^{\prime}$ and obtain the scaling dimensions $\Delta_{u}^{(i)}$, $\Delta_{u1}^{\prime}$, and $\Delta_{u2}^{\prime}$, of the vertex operators $\text{exp}(i4\phi_{i})$ and $\text{exp}[i2(\phi_{1}\pm\phi_{2})]$ in the above scattering processes \cite{Delft98}: \begin{eqnarray} \Delta_{u}^{(1)}&=&4\left( Q_{11}^{2}+Q_{12}^{2}\right),\nonumber\\ \Delta_{u}^{(2)}&=&4\left( Q_{21}^{2}+Q_{22}^{2}\right),\nonumber\\ \Delta_{u1}^{\prime}&=&\left( Q_{11}+Q_{21}\right)^{2} + \left( Q_{12}+Q_{22}\right)^{2},\nonumber\\ \Delta_{u2}^{\prime}&=&\left( Q_{11}-Q_{21}\right)^{2} + \left( Q_{12}-Q_{22}\right)^{2}\;. \end{eqnarray} The scaling dimension $\Delta$ determines whether the respective scattering process in $\mathcal{H}^{\prime}$ [Eq.~(\ref{eqn_interact2})] is a relevant ($\Delta<2$) or irrelevant ($\Delta>2$) perturbation to the free bosonic Hamiltonian $\mathcal{H}$ [Eq.~(\ref{eqn_boson_2})]. For $g_{f}^{\prime}=0$, we have two separate Dirac cones, with $\Delta_{u}^{(i)} = 4 v_{i}\Delta_{ii}^{-1/2}=4 K_{i}$ [see Eq.~(\ref{eqn_boson_1})]. Therefore, intra-umklapp scattering ($g_{u}^{(i)}$) becomes relevant when $K_{i}<1/2$, reproducing the result for a one-component helical liquid \cite{Wu06,Xu06}. In the case of weak coupling ($g_{f}^{(1,2)}\ll 1$ and $g_{f}^{\prime}\ll 1$), we come to the following conclusions: (i) Intra-umklapp scattering is RG-irrelevant, with $\Delta_{u}^{(1,2)}>2$. This is similar to the case of the one-component helical liquid \cite{Wu06,Xu06}. (ii) Inter-umklapp scattering $g_{u,1}^{\prime}$ is RG-relevant, with $\Delta_{u1}^{\prime}<2$. (iii) The relevance of the inter-umklapp scattering $g_{u,2}^{\prime}$ is determined by the phase diagram shown in Fig.~\ref{fig_scalingdim}.
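These weak-coupling statements can be checked directly from the definition of $Q$. The sketch below (an editorial numerical check with illustrative coupling values, not from the paper) computes the four scaling dimensions and verifies that for small repulsive couplings the intra-umklapp dimensions exceed 2 while $\Delta_{u1}^{\prime}$ drops below 2; it also recovers $\Delta_{u}^{(i)}=4K_{i}$ in the decoupled limit $g_{f}^{\prime}=0$.

```python
import numpy as np

def scaling_dims(v1, v2, g1, g2, gp):
    """Scaling dimensions of the umklapp vertex operators, built from Q."""
    N = (1.0 / np.pi) * np.array([[np.pi * v1 + 2 * g1, gp],
                                  [gp, np.pi * v2 + 2 * g2]])
    Msq = np.diag([np.sqrt(v1), np.sqrt(v2)])
    evals, S = np.linalg.eigh(Msq @ N @ Msq)
    Q = Msq @ S @ np.diag(evals ** -0.25)
    du1 = 4 * (Q[0, 0] ** 2 + Q[0, 1] ** 2)                     # intra, i = 1
    du2 = 4 * (Q[1, 0] ** 2 + Q[1, 1] ** 2)                     # intra, i = 2
    dp1 = (Q[0, 0] + Q[1, 0]) ** 2 + (Q[0, 1] + Q[1, 1]) ** 2   # inter, g'_{u,1}
    dp2 = (Q[0, 0] - Q[1, 0]) ** 2 + (Q[0, 1] - Q[1, 1]) ** 2   # inter, g'_{u,2}
    return du1, du2, dp1, dp2

# Decoupled limit g_f' = 0: intra dimension reduces to 4 K_i
K = (1 + 2 * 0.1 / np.pi) ** -0.5
du1, du2, dp1, dp2 = scaling_dims(1.0, 1.0, 0.1, 0.2, 0.0)
assert np.isclose(du1, 4 * K)

# Weak repulsive couplings: intra-umklapp irrelevant, g'_{u,1} relevant
du1, du2, dp1, dp2 = scaling_dims(1.0, 0.8, 0.10, 0.15, 0.05)
assert du1 > 2 and du2 > 2   # (i)  irrelevant
assert dp1 < 2               # (ii) relevant
print("weak-coupling relevance of the umklapp terms verified")
```

The squared combinations of $Q$ entries are insensitive to the sign and ordering ambiguities of the numerical eigenvectors, so the check is robust.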
\begin{figure} \includegraphics[width=0.5\textwidth]{scaling_inter_umklapp2.pdf} \caption{Phase diagram of the inter-umklapp process $g_{u,2}^{\prime}$ in the ($g_{f}^{\prime}$,$g_{f}$) plane for (a) equivalent and (b) nonequivalent velocities of the edge modes. The scattering process is relevant (irrelevant) in the region where $\Delta_{u2}^{\prime}<2$ ($\Delta_{u2}^{\prime}>2$). \label{fig_scalingdim}} \end{figure} If the $U(1)$ spin symmetry is preserved, only one of the two inter-umklapp scattering processes $g_{u,1}^{\prime}$ or $g_{u,2}^{\prime}$ is allowed by symmetry, depending on the chirality of the ($0,\sigma$) and ($\pi,\sigma$) modes which is determined by the intrinsic spin-orbit coupling $\lambda$. As shown in Fig.~\ref{fig_aom_edge3}(a), for $\lambda/t<\lambda_{0}$ and $\lambda/t>\lambda_{\pi}$, both edge modes have the same chirality so that inter-umklapp scattering corresponds to the $g_{u,2}^{\prime}$ term. In contrast, for $\lambda_{0}<\lambda/t<\lambda_{\pi}$, the edge modes have opposite chirality and inter-umklapp scattering is given by the $g_{u,1}^{\prime}$ term. The above distinction no longer holds when the $U(1)$ spin symmetry is broken. In this case, $g_{u,1}^{\prime}$ is always RG-relevant, whereas the relevance of $g_{u,2}^{\prime}$ depends on the forward scattering strengths $g_{f}$ and $g_{f}^{\prime}$ and on the edge velocities, see Fig.~\ref{fig_scalingdim}. For $\lambda/t=\lambda_{\text{s}}$ ($\lambda/t\rightarrow\infty$), our low-energy theory is similar to the fusion of two anti-parallel (parallel) helical edge modes \cite{Tanaka09}, see also Fig.~\ref{fig_edge_vel}. However, in the latter setup, the spatial overlap of the two edge wave functions can be neglected, whereas it is included in the interaction term of Eq.~(\ref{eqn_ham_int1}).
\section{Quantum Monte Carlo results for edge correlation effects}\label{sec_qmc_ribbon} Correlation effects on the edge states of the $\pi$KMH model can be studied numerically using the approach discussed in Sec.~\ref{sec_qmc_method}. Considering a zigzag ribbon, we take into account a Hubbard interaction only at one edge, and simulate the resulting model exactly using the CT-INT quantum Monte Carlo method. We focus on two values of the spin-orbit coupling $\lambda/t$ and set the Rashba coupling to $\lambda_{\text{R}}/t=0.3$. For $\lambda/t=0.35$, the edge modes at $k=0$ and $k=\pi$ have different velocities ($v_{0}<v_{\pi}$), whereas at $\lambda/t=0.65$, we have $v_{0}\approx v_{\pi}$. As in the KMH model \cite{Ho.As.11}, we observe that the velocities of the edge states remain almost unchanged with respect to the noninteracting case. We carried out simulations for a zigzag ribbon of dimensions $L_{1}=25$ (open boundary condition) and $L_{2}=16$ (periodic boundary condition), see also Fig.~\ref{fig_band_ribbon2}(a). For $\lambda_{\text{R}}=0$, $\mu=0$ corresponds to half filling. Although the band filling in general changes as a function of $\lambda_{\text{R}}$ (the Rashba term breaks the particle-hole symmetry), the Kramers degenerate edge states at $k=0,\pi$ are pinned to $\omega=\mu$. The choice $\mu=0$ then again corresponds to half-filled Dirac cones, and allows for umklapp scattering processes. The inverse temperature was set to $\beta t=60$. \begin{figure}[ht] \includegraphics[width=0.4\textwidth]{edge_spectra_qmc.pdf} \caption{ \label{fig_ctqmc5} (Color online) Spin-averaged single-particle spectral function $A(k,\omega)$ [Eq.~(\ref{eqn_akwave})] from CT-INT simulations. (a) Weak coupling $U/t=2$, (b),(c) strong coupling $U/t=5$. 
Here, $\lambda_{\text{R}}/t=0.3$.} \end{figure} \subsection{Single-particle spectral function} Using CT-INT in combination with the stochastic maximum entropy method \cite{Beach04}, we calculate the spin-averaged spectral function at the edge, \begin{eqnarray}\label{eqn_akwave} A(k,\omega) &=& \frac{1}{2}\sum_{\sigma} A^{\sigma}(k,\omega)\,, \\\nonumber A^{\sigma}(k,\omega)&=&-\frac{1}{\pi}\mathrm{Im}\;G^{\sigma}(k,\omega)\;, \end{eqnarray} where $G^{\sigma}(k,\omega)$ is the interacting single-particle Green function, and $k$ is the momentum along the edge. As shown in Fig.~\ref{fig_ctqmc5}(a), for $U/t=2$, the numerical results suggest the existence of gapless edge states. In contrast, for a stronger interaction $U/t=5$, a gap is clearly visible both at $k=0$ and $k=\pi$. While the bosonization analysis in Sec.~\ref{sec_boson} predicts a gap as a result of relevant umklapp scattering for any $U>0$, the size of the gap depends exponentially on $U/t$. The apparent absence of a gap in Fig.~\ref{fig_ctqmc5}(a) can therefore be attributed to the small system size used ($L_2=16$). Figure~\ref{fig_ctqmc5}(c) shows the spectral function~(\ref{eqn_akwave}) for $\lambda/t=0.35$, where $v_{0} < v_{\pi}$. Compared to the case of $\lambda/t=0.65$ [Fig.~\ref{fig_ctqmc5}(b)] where $v_{0}\approx v_{\pi}$, the gap in the edge states is much smaller. We expect this dependence on the Fermi velocities to also emerge from the bosonization in the form of a velocity-dependent prefactor that determines the energy scale of the gap \cite{GiamarchiBook}. \subsection{Charge and spin structure factors} We consider the charge structure factor \begin{equation}\label{eq:nq} N(q)=\frac{1}{\sqrt{N}}\sum\limits_{x} e^{-iqx}\left[ \langle \hat{n}(x) \hat{n}(0)\rangle -\langle \hat{n}(x)\rangle \langle\hat{n}(0)\rangle\right] \;, \end{equation} where $x$ is the position along the edge. 
Figure~\ref{fig_ctqmc7}(b) shows results for different values of $U/t$, $\lambda/t=0.65$, and $\lambda_{\text{R}}/t=0.3$. For a weak interaction, $U/t=1$, $N(q)$ exhibits cusps at $q=0$ and $q=\pi$ that indicate a power-law decay of the real-space charge correlations. Upon increasing $U/t$, the cusps become less pronounced, which suggests a suppression of charge correlations by the interaction. This is in accordance with the existence of a gap in the single-particle spectral function [Fig.~\ref{fig_ctqmc5}(b)]. A suppression of charge correlations is also observed for $\lambda/t=0.35$, see Fig.~\ref{fig_ctqmc7}(a). \begin{figure} \includegraphics[width=0.5\textwidth]{chargecorrel.pdf} \caption{(Color online) Charge structure factor $N(q)$ [Eq.~(\ref{eq:nq})] from CT-INT simulations for (a) $\lambda/t=0.35$ and (b) $\lambda/t=0.65$. Here, $\lambda_{\text{R}}/t=0.3$. \label{fig_ctqmc7} } \end{figure} The spin structure factors ($a=x,z$) \begin{equation}\label{eq:sq} S^{a}(q)=\frac{1}{\sqrt{N}}\sum\limits_{x} e^{-iqx}\langle \hat{S}^{a}(x) \hat{S}^{a}(0)\rangle \end{equation} are shown in Fig.~\ref{fig_ctqmc6}. For $\lambda/t=0.65$ and $U/t=2$, $S^{x}(q)$ has cusps at $q=0$ and $q=\pi$ [Fig.~\ref{fig_ctqmc6}(c)], and varies almost linearly in between. With increasing $U/t$ [$U/t=5$ in Fig.~\ref{fig_ctqmc6}(d)], correlations with $q=0$ become much stronger. Whereas $q=0$ spin correlations dominate the $x$ component of spin, the structure factor $S^z(q)$ in Fig.~\ref{fig_ctqmc6}(d) indicates equally strong correlations with $q=\pi$ for the $z$ component. The resulting spin order resembles that of a canted antiferromagnet. Qualitatively similar results, although with a less pronounced increase of spin correlations between $U/t=2$ and $U/t=5$, are also observed for $\lambda/t=0.35$, as shown in Figs.~\ref{fig_ctqmc6}(a),(b).
\begin{figure} \includegraphics[width=0.5\textwidth]{spincorrel.pdf} \caption{(Color online) Spin structure factors $S^{x}(q)$ and $S^{z}(q)$ [Eq.~(\ref{eq:sq})] from CT-INT simulations for $\lambda/t=0.35$ [(a),(b)] and $\lambda/t=0.65$ [(c),(d)]. Here, $\lambda_\text{R}/t=0.3$. \label{fig_ctqmc6}} \end{figure} Despite a small but nonzero Rashba coupling, the results in Figs.~\ref{fig_ctqmc6}(c) and (d) reveal the symmetry relation $S^{z}(q)=S^{x}(q+\pi)$ which is rooted in the chiral $SU(2)$ symmetry of the corresponding low-energy Hamiltonian (see Sec.~\ref{subsec_symmetry}). Our quantum Monte Carlo results show that this symmetry survives even in the presence of strong correlations. The results in Fig.~\ref{fig_ctqmc6} are almost identical to the case with $\lambda_\text{R}=0$ (not shown), suggesting that the Rashba term breaks the chiral symmetry only weakly. On the other hand, the symmetry is clearly absent for $\lambda/t=0.35$ [Figs.~\ref{fig_ctqmc6}(a),(b)]. \subsection{Effective spin model for $\lambda/t=\lambda_{s}$} For strong interactions $U/t$, there exist no low-energy charge fluctuations at the edge, allowing for a description in terms of a spin model. We consider the case of (nearly) equal velocities, $\lambda/t=0.65$, and make an ansatz in the form of a Heisenberg model with nearest-neighbor interactions, \begin{eqnarray} \label{eqn_xxz} \mathcal{H}_{\text{spin}}&=& \sum\limits_{i} \left(J_{x}S_{i}^{x}S_{i+1}^{x}+J_{y} S_{i}^{y}S_{i+1}^{y} + J_{z}S_{i}^{z}S_{i+1}^{z}\right)\nonumber\\ &=& J\sum\limits_{i} \left(S_{i}^{x}S_{i+1}^{x}+S_{i}^{y}S_{i+1}^{y}-S_{i}^{z}S_{i+1}^{z}\right)\;. \end{eqnarray} In the second line, the coupling constants $J_{a}$ have been fixed by imposing the invariance under the rotations given in Eq.~(\ref{eqn_rot}), $[\mathcal{H}_{\text{spin}},U_{a}]=0$, and using the relations $U_{a}^{\dagger}\hat{S}^{b}(x)U_{a}=M_{ab}$ [cf. Eq.~(\ref{eqn_rot2})].
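The point singled out by this constraint can be checked by exact diagonalization on a small chain (an illustrative sketch; chain length and coupling are chosen for convenience): a staggered rotation by $\pi$ about the $z$ axis on every second site, which flips $S^{x,y}\rightarrow -S^{x,y}$ there, maps the spin Hamiltonian with $(J_{x},J_{y},J_{z})=(J,J,-J)$ onto the isotropic ferromagnetic Heisenberg chain $-J\sum_{i}\bm{S}_{i}\cdot\bm{S}_{i+1}$.

```python
import numpy as np
from functools import reduce

L, J = 4, 1.0   # illustrative: 4-site periodic chain

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def site_op(op, j):
    """Embed a single-site spin operator at site j of the L-site chain."""
    mats = [I2] * L
    mats[j] = op
    return reduce(np.kron, mats)

def bond_sum(signs):
    """sum_i (c_x S^x S^x + c_y S^y S^y + c_z S^z S^z) with periodic boundaries."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for j in range(L):
        k = (j + 1) % L
        for c, op in zip(signs, (sx, sy, sz)):
            H += c * site_op(op, j) @ site_op(op, k)
    return H

H_xxz = J * bond_sum((1, 1, -1))   # the constrained model (J, J, -J)
H_fm = -J * bond_sum((1, 1, 1))    # isotropic ferromagnet

# Staggered pi rotation about z: S^{x,y} -> -S^{x,y} on odd sites
U = reduce(np.kron, [np.diag([1.0, -1.0]) if j % 2 else I2 for j in range(L)])
assert np.allclose(U @ H_xxz @ U.conj().T, H_fm)

# Spectra coincide; the ground state energy is that of the polarized state, -J L/4
E = np.linalg.eigvalsh(H_xxz)
assert np.isclose(E[0], -J * L / 4)
print("(J, J, -J) point is unitarily equivalent to the isotropic ferromagnet")
```

This unitary equivalence is what identifies the constrained point with the ferromagnetic isotropic point discussed next.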
Hamiltonian~(\ref{eqn_xxz}) corresponds to the XXZ Heisenberg model, tuned to the ferromagnetic isotropic point that separates the Ising phase from the Luttinger liquid phase via a first order transition. In both cases, one expects strong spin correlations, as observed in Fig.~\ref{fig_ctqmc6}(d) \cite{Luther75}. \section{Conclusions} \label{sec_sum} In this paper, we introduced the $\pi$KM model, corresponding to the Kane-Mele model on a honeycomb lattice with a magnetic flux of $\pm\pi$ through each hexagon. The flux insertion doubles the size of the unit cell, and leads to a four-band model for each spin sector. For one spin direction, the band structure has four Dirac points which acquire a gap for nonzero spin-orbit coupling $\lambda$. At half filling, the spinless model has a Chern insulating ground state with Chern number 2 or $-2$, depending on the spin-orbit coupling. The transition between these states occurs via a phase transition at $\lambda/t=1/2$, and the band structure features a quadratic crossing at the critical point. The spinful $\pi$KM model is trivial in the $Z_2$ classification, with an even number of Kramers doublets. If translation symmetry at the edge is unbroken, the helical edge states are stable at the single-particle level even in the presence of a Rashba coupling that breaks the $U(1)$ spin symmetry. The $U(1)$ spin symmetric low-energy model of the edge states has a chiral symmetry when the edge state velocities have equal magnitude and either the same or opposite sign. This chiral symmetry is shown to survive even in the presence of interactions. Regarding the effect of electronic correlations in the bulk, the combination of mean-field calculations and quantum Monte Carlo simulations suggest the existence of a quantum phase transition to a state with long-range, antiferromagnetic order, similar to the Kane-Mele-Hubbard model. The critical value of the interaction depends on the spin-orbit coupling. 
At $\lambda/t=1/2$, where the quadratic band crossing occurs, a weak-coupling Stoner instability exists. We studied the correlation effects on the edge states in the paramagnetic bulk phase. At half filling, the bosonization analysis predicts the opening of a gap in the edge states as a result of umklapp scattering for any nonzero interaction. For strong coupling, we were able to confirm this prediction using quantum Monte Carlo simulations. Umklapp processes are only effective at commensurate filling and therefore can be eliminated by doping away from half filling. In this case, we expect the interacting model to have stable edge modes, provided translation symmetry is not broken. At large $U/t$, the emergent chiral symmetry can be used to derive an effective spin model of the XXZ Heisenberg type. Our model may be regarded as a two-dimensional counterpart of TCIs. Whereas the gapless edge states of the latter are protected by crystal symmetries of the two-dimensional surface, the edge states in the $\pi$KM model are protected (at the single-particle level, or away from half filling) by translation symmetry. TCIs have an even number of surface Dirac cones which are related by a crystal symmetry. The cones can be displaced in momentum space without breaking time-reversal symmetry by applying inhomogeneous strain \cite{Tang14}. This is in contrast to topological insulators with an odd number of Dirac points where at least one Kramers doublet is pinned at a time-reversal invariant momentum. In TCIs, umklapp scattering processes can be avoided either by doping away from half filling or by moving the Dirac points. In our model, the edge modes have in general unequal velocities and cannot be mapped onto each other by symmetry. The Dirac points are pinned at the time-reversal invariant momenta, and subject to umklapp scattering at half filling. 
Finally, the $\pi$KM model may be experimentally realized in ultracold atomic gases by using optical flux lattices to create periodic magnetic flux densities \cite{Goldman10,Cooper11,Aidelsburger11,Baur14,Celi14}. \begin{acknowledgments} We thank F.~Crepin and B.~Trauzettel for helpful discussions. We acknowledge computing time granted by the J\"ulich Supercomputing Centre (JUROPA), and the Leibniz Supercomputing Centre (SuperMUC). This work was supported by the DFG grants Nos. AS120/10-1 and Ho 4489/2-1 (FOR1807). \end{acknowledgments} \bibliographystyle{apsrev4-1}
\section{Introduction} One of the important mathematical problems in quantum information theory is the characterization of separable states. In the case of pure separable states, much progress has been made. For instance, if one considers a quantum system of $p$ particles with state space $H=\otimes _{j=1}^{p}\mathbb{C}^{n_{j}}$, then the pure states are rays in $H$. Mathematically, this is the complex projective space $\mathbb{CP}\left( N-1\right) $, which is a real manifold of dimension $2N-2$, where $N=n_{1}\cdots n_{p}$. The separable pure states are product pure states and so correspond to a submanifold isomorphic to the Cartesian product, $\mathbb{CP}\left( n_{1}-1\right) \times \cdots \times \mathbb{CP}\left( n_{p}-1\right) $, which has real dimension $\sum_{j=1}^{p}\left( 2n_{j}-2\right) $. Thus the set of separable pure states is a measure 0, closed, non-dense subset of the set of pure states. In particular if one randomly picks a pure state in $H$, the probability it is entangled (i.e. not separable) is one. Moreover, every entangled state has an open set of entangled states around it. The situation for separable mixed states is quite different. To see why, first recall that mixed states are described in terms of density matrices. These are $N\times N$, complex, positive semi-definite, Hermitian matrices with trace equal to $1$. If $N=n_{1}\cdots n_{p}$, then the separable density matrices are those which are convex combinations of product matrices, where by product matrix we mean one of the form $A=A_{1}\otimes \cdots \otimes A_{p}$. Unlike the pure state case, the set of separable density matrices, $\Sigma \left( n_{1},\ldots ,n_{p}\right) $, is not of measure 0 in the set of all density matrices, $\mathcal{DM}\left( N\right) $ - it is not negligible. In fact the vector space of $N\times N$ Hermitian matrices has bases which consist solely of product density matrices.
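The existence of such bases is easy to verify numerically. The sketch below (a standard construction, added for illustration; it is not taken from this paper) builds, for two qubits, the 16 tensor products of the pure-state projectors onto $|0\rangle$, $|1\rangle$, $(|0\rangle+|1\rangle)/\sqrt{2}$ and $(|0\rangle+i|1\rangle)/\sqrt{2}$, and checks that they are linearly independent, hence a basis of the 16-dimensional real vector space of $4\times 4$ Hermitian matrices.

```python
import itertools
import numpy as np

# Four pure-state projectors per qubit; each set spans the 2x2 Hermitian matrices
kets = [np.array(v, dtype=complex) / np.linalg.norm(v)
        for v in ([1, 0], [0, 1], [1, 1], [1, 1j])]
projs = [np.outer(k, k.conj()) for k in kets]

# All 16 product density matrices for two qubits
products = [np.kron(a, b) for a, b in itertools.product(projs, projs)]

# Represent each Hermitian matrix as a real vector and check linear independence
vecs = np.array([np.concatenate([p.real.ravel(), p.imag.ravel()])
                 for p in products])
assert np.linalg.matrix_rank(vecs) == 16
print("16 product density matrices form a basis of the 4x4 Hermitian matrices")
```

Since each single-qubit set of four projectors yields $\openone$, $\sigma_{z}$, $\sigma_{x}$ and $\sigma_{y}$ by real linear combinations, the tensor products of the two sets span the full Hermitian space, which is what the rank check confirms.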
This means $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ contains an open subset of $\mathcal{DM}\left( N\right) $, since the convex hull of a vector space basis contains a set which is open in the hyperplane that contains the basis elements. In the case of $\Sigma \left( n_{1},\ldots ,n_{p}\right) $, that hyperplane is the set of matrices with trace equal to $1$. Thus $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ is a compact, convex subset of $\mathcal{DM}\left( N\right) $, which is the closure of its non-empty interior. The interior, moreover, contains an element which is in some sense the center of $\mathcal{DM}\left( N\right) $, the totally mixed state \cite{horodecki,braunstein,vidal}. One might think that $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ would thus be easy to characterize. After all, such is the case for common convex, compact sets with non-empty interiors such as balls and polytopes. But $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ is not simple at all. For instance, unlike balls and polytopes, there is no easy way to determine the minimum number of product states needed in convex combination to construct a given separable mixed state. If $A\in \Sigma \left( n_{1},\ldots ,n_{p}\right) $, we say its optimal ensemble length is the minimum number of product states needed in convex combination to construct $A$. When we require all the product states to be pure, we call the minimum number needed the optimal pure state ensemble length. This latter quantity was studied for two particle systems with $H=\mathbb{C}^{n}\otimes \mathbb{C}^{n}$ by Uhlmann \cite{uhlmann} and by DiVincenzo, Terhal and Thapliyal \cite{divencenzo} among others. Uhlmann showed the optimal pure state ensemble length is at least equal to the rank of the density matrix and no greater than its square. DiVincenzo, Terhal and Thapliyal took up the question of whether one actually needed more than the rank.
This is an important question, for the spectral theorem assures that every density matrix can be expressed as a convex combination of pure states, the number equalling the rank of the matrix. They found examples of states with optimal pure state ensemble length greater than their rank. We shall see for systems with three or more particles, and for systems of two particles other than possibly those modelled on $\mathbb{C}^{2}\otimes \mathbb{C}^{n}$ or $\mathbb{C}^{3}\otimes \mathbb{C}^{3}$, that almost every separable state has an optimal pure state ensemble length greater than its rank. In this paper we examine the size of the set of all separable mixed states which have optimal ensemble length of $k$ or fewer and the set of all which have optimal pure state ensemble length of $k$ or fewer. The first set will be denoted by $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ and the second by $\Sigma _{pure}^{k}\left( n_{1},\ldots ,n_{p}\right) $. We completely determine the $k$ for which $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has measure 0 in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ in both the bipartite case and the case in which one of the particles has substantially more quantum numbers than all the rest - for instance a molecule and photons. This result is the content of theorem 1. In theorem 2 (respectively theorem 3) a lower bound on $k$ for which $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ (respectively $\Sigma _{pure}^{k}\left( n_{1},\ldots ,n_{p}\right) $) has measure 0 in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ is given. Moreover, in theorem 2 an upper bound on the smallest $k$ for which $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has positive measure and contains an open subset is also given. In order to put the main theorems in context, I should mention that a classical theorem of Caratheodory assures one never needs more than $N^{2}$ pure product states to construct a separable state.
Thus $\Sigma \left( n_{1},\ldots ,n_{p}\right) =\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) =\Sigma _{pure}^{k}\left( n_{1},\ldots ,n_{p}\right) $ for $k=N^{2}$. However, it is not the case that one always needs this many. For instance Sanpera, Tarrach and Vidal \cite{sanpera} have shown that in the 2-qubit case one needs no more than four pure product states. Our main results are the following: \begin{theorem} \label{theorem 1} Let $N=n_{1}\cdots n_{p}$ with $n_{1}\leq n_{2}\leq \cdots \leq n_{p}$ and $n_{1}\cdots n_{p-1}\leq n_{p}$. Then $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has the following properties: a) It is a connected, compact subset of $\Sigma \left( n_{1},\ldots ,n_{p}\right) $. In particular if it is not all of $\Sigma \left( n_{1},\ldots ,n_{p}\right) $, then it is not dense and its complement in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ is an open subset. b) If $k<n_{1}^{2}\cdots n_{p-1}^{2}$, then $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has measure 0 in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $. c) If $k\geq n_{1}^{2}\cdots n_{p-1}^{2}$, then $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has positive measure in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ and in fact contains an open subset. \end{theorem} \begin{theorem} \label{theorem 2} The set $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has the following properties: a) It is a connected, compact subset of $\Sigma \left( n_{1},\ldots ,n_{p}\right) $. In particular if it is not all of $\Sigma \left( n_{1},\ldots ,n_{p}\right) $, then it is not dense and its complement in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ is an open subset. b) If $k<\left( n_{1}^{2}\cdots n_{p}^{2}\right) /(1-p+\sum_{j=1}^{p}n_{j}^{2})$, then $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has measure 0 in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $.
c) If $n_{1}\leq n_{2}\leq \cdots \leq n_{p}$ and $k\geq n_{1}^{2}\cdots n_{p-1}^{2}$, then $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ has positive measure in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ and in fact contains an open subset. \end{theorem} \begin{theorem} \label{theorem 3} The set $\Sigma _{pure}^{k}\left( n_{1},\ldots ,n_{p}\right) $ has the following properties: a) It is a connected, compact subset of $\Sigma \left( n_{1},\ldots ,n_{p}\right) $. In particular if it is not all of $\Sigma \left( n_{1},\ldots ,n_{p}\right) $, then it is not dense and its complement in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ is an open subset. b) If $k<\left( n_{1}^{2}\cdots n_{p}^{2}\right) /\left( 1-2p+\sum_{j=1}^{p}2n_{j}\right) $, then $\Sigma _{pure}^{k}\left( n_{1},\ldots ,n_{p}\right) $ has measure $0$ in $\Sigma \left( n_{1},\ldots ,n_{p}\right) $. \end{theorem} The proofs of these theorems will be presented in the next section. First though let us look at a few consequences. In the bipartite case considered by Uhlmann and by DiVincenzo \textit{et al.}, $H=\mathbb{C}^{n}\otimes \mathbb{C}^{n}$. By theorem 1 there is an open set of separable mixed states with optimal ensemble length of $n^{2}$ or fewer. Note $n^{2}=R_{\max }$, the maximal rank of density matrices in this case. By theorem 3, however, the set of separable mixed states with optimal pure state ensemble length equal to $n^{3}/\left( 4-3/n\right) $ or fewer is of measure 0. Thus one must almost always use more than $\left( R_{\max }\right) ^{3/2}/4$ pure product states to construct a mixed separable state in the bipartite case. This is quite a disparity. However, it is not indicative of all situations. For instance, consider a system consisting of $p$ qubits. Caratheodory's theorem assures that every separable state can be decomposed into a convex combination of $2^{2p}$ pure product states or fewer.
From our theorems 2 and 3, it is seen that for large values of $p$ one must almost always use close to that number, whether one uses pure product states or general ones. In particular, $\Sigma ^{k}\left( 2,\ldots ,2\right) $ has measure 0 for $k<2^{2p}/(1+3p)$ and $\Sigma _{pure}^{k}\left( 2,\ldots ,2\right) $ has measure 0 for $k<2^{2p}/(1+2p)$. In terms of the maximal rank, these inequalities are $k<R_{\max }^{2}/(1+3\log _{2}(R_{\max }))$ and $k<R_{\max }^{2}/(1+2\log _{2}(R_{\max }))$. On the other hand, theorem 2 implies $\Sigma ^{k}\left( 2,\ldots ,2\right) $ has positive measure and a non-empty interior if $k\geq 2^{2p-2}$. This is not sharp. For instance, when $p=3$ one gets an open set with $k=13$. Turning to the general multiparticle system, we note that the maximum rank of a density matrix on $H=\mathbb{C}^{n_{1}}\otimes \cdots \otimes \mathbb{C}^{n_{p}}$ is $n_{1}\cdots n_{p}$. When this is less than $n_{1}^{2}\cdots n_{p}^{2}/\left( 1-2p+\sum_{j=1}^{p}2n_{j}\right) $, we can conclude from theorem 3 that the optimal pure state ensemble length of a separable state is almost always greater than the rank of the state. In particular, one must almost always use entangled pure states in the spectral (i.e. eigenvalue) decomposition of separable states. This occurs for all systems with three or more particles and for systems with two particles except possibly those with $H=\mathbb{C}^{2}\otimes \mathbb{C}^{n}$ or $H=\mathbb{C}^{3}\otimes \mathbb{C}^{3}$. That there are exceptions was shown by Sanpera \textit{et al.} in \cite{sanpera}. As mentioned before, in that paper they showed every separable state on $\mathbb{C}^{2}\otimes \mathbb{C}^{2}$ can be written as the convex combination of four or fewer pure product states. It would be interesting to see if the other $\mathbb{C}^{2}\otimes \mathbb{C}^{n}$ and $\mathbb{C}^{3}\otimes \mathbb{C}^{3}$ cases are also exceptions. Before turning to the proofs, two things need to be mentioned about measurability.
First of all in this paper, ``almost always'' is used in the strict mathematical sense of meaning ``except on a set of measure 0''. Secondly, there is a controversy over the proper measure to use for the set of density matrices. That does not apply to the results presented here since any two measures which are absolutely continuous with respect to each other have the same sets of measure 0. Since the hyperplane of Hermitian matrices with trace equal to 1 is a real $N^{2}-1$ dimensional vector space and $\Sigma \left( n_{1},\ldots ,n_{p}\right) $ is a compact, convex subset of it with non-empty interior, we shall use $N^{2}-1$ dimensional Lebesgue measure for both. \section{Proofs} Suppose $M$ and $N$ are two finite dimensional $C^{\infty }$ manifolds and $f$ is a $C^{\infty }$ function from $M$ to $N$. A point $m\in M$ is a critical point for $f$ if $df_{m}:TM_{m}\rightarrow TN_{f\left( m\right) }$ is not onto. In words: $m$ is a critical point for $f$ if the differential of $f$ at $m$, which is a linear transformation from the tangent space of $M$ at $m$, $TM_{m}$, to the tangent space of $N$ at $f\left( m\right) $, $TN_{f\left( m\right) }$, is not onto. A point $n\in N$ is a critical value for $f$ if it is the image of a critical point. A classical theorem in differential topology due to Sard \cite{sard} states that the set of critical values in $N$ is of measure $0$. This will be the key to our proofs. We shall apply it, along with the rank theorem \cite{Narasimhan}, to the length $k$ mixing function which we shall define shortly. For $w$ an integer, let $Herm\left( w\right) $ denote the set of $w\times w$ complex Hermitian matrices. $Herm\left( w\right) $ is a real vector space of dimension $w^{2}$. The subset of positive semi-definite matrices in $Herm\left( w\right) $ forms a closed, convex cone with non-empty interior.
For $r$ a real number take $\tau _{r}\left( w\right) $ to be the subset of $Herm\left( w\right) $ consisting of those matrices with trace equal to $r$. Each $\tau _{r}\left( w\right) $ is a $w^{2}-1$ dimensional hyperplane in $Herm\left( w\right) $. They are all parallel to $\tau _{0}\left( w\right) $, which is a vector space. The intersection of $\tau _{1}\left( w\right) $ with the cone of positive semi-definite matrices in $Herm\left( w\right) $ is the set of density matrices, $\mathcal{DM}\left( w\right) $. As mentioned before, it is a compact, convex set with non-empty interior in $\tau _{1}\left( w\right) $. Also, note that the tangent space of $\tau _{1}\left( w\right) $ at $\mathbf{Q}$ is $\tau _{0}\left( w\right) $, since the hyperplanes are parallel. Let $N=n_{1}\cdots n_{p}$. The length $k$ mixing function \begin{equation*} \mu _{k}:\mathbb{R}^{k-1}\times \left( \tau _{1}\left( n_{1}\right) \times \cdots \times \tau _{1}\left( n_{p}\right) \right) ^{k}\rightarrow \tau _{1}\left( N\right) \end{equation*} is defined for $\mathbf{Q}=(\lambda _{1},\ldots ,\lambda _{k-1},A_{11},\ldots ,A_{1p},\ldots ,A_{k1},\ldots ,A_{kp})$ by \begin{equation*} \mu _{k}\left( \mathbf{Q}\right) =\sum_{j=1}^{k-1}\lambda _{j}A_{j1}\otimes \cdots \otimes A_{jp}+\left( 1-\sum_{j=1}^{k-1}\lambda _{j}\right) A_{k1}\otimes \cdots \otimes A_{kp}. \end{equation*} When $\mu _{k}$ is restricted to $\Lambda _{k}\times \left( \mathcal{DM}\left( n_{1}\right) \times \cdots \times \mathcal{DM}\left( n_{p}\right) \right) ^{k}$, where $\Lambda _{k}=\left\{ (\lambda _{1},\ldots ,\lambda _{k-1}):\lambda _{j}\geq 0\text{ and }\sum_{j=1}^{k-1}\lambda _{j}\leq 1\right\} $, it yields elements in $\mathcal{DM}(N)$. Moreover, it does so by forming convex combinations of product states. Since $\mu _{k}$ is an algebraic function, it is infinitely differentiable and so the criteria for Sard's theorem are satisfied.
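As a side illustration (ours, not part of the original argument), the length $k$ mixing function is straightforward to realize numerically; assuming NumPy, a short sketch that checks its output is a valid density matrix reads:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(n):
    # A random element of DM(n): positive semi-definite with unit trace.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def mixing(lambdas, factors):
    # mu_k: convex combination of k product states A_j1 (x) ... (x) A_jp.
    total = 0
    for lam, parts in zip(lambdas, factors):
        prod = parts[0]
        for p in parts[1:]:
            prod = np.kron(prod, p)
        total = total + lam * prod
    return total

k, dims = 3, (2, 3)
lam = rng.dirichlet(np.ones(k))           # a point of Lambda_k (weights sum to 1)
states = [tuple(random_density_matrix(n) for n in dims) for _ in range(k)]
rho = mixing(lam, states)                 # an element of Sigma^3(2, 3)
```

Since each product factor has unit trace and the weights are convex, $\rho$ has trace 1 and is positive semi-definite, i.e., it lies in $\mathcal{DM}\left( 6\right) $.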
The differential of $\mu _{k}$ at the point $\mathbf{Q}$ applied to the tangent vector $\mathbf{V}=(r_{1},\ldots ,r_{k-1},H_{11},\ldots ,H_{kp})$ is given by: \begin{eqnarray} d\mu _{k}\left( \mathbf{Q}\right) \mathbf{V} &=& \label{1} \\ &&\sum_{j=1}^{k-1}\lambda _{j}\left[ \begin{array}{c} H_{j1}\otimes A_{j2}\otimes \cdots \otimes A_{jp}+A_{j1}\otimes H_{j2}\otimes A_{j3}\otimes \cdots \otimes A_{jp} \\ +\cdots +A_{j1}\otimes \cdots \otimes A_{jp-1}\otimes H_{jp} \end{array} \right] \notag \\ &&+\left( 1-\sum_{j=1}^{k-1}\lambda _{j}\right) \left[ \begin{array}{c} H_{k1}\otimes A_{k2}\otimes \cdots \otimes A_{kp}+\cdots \\ +A_{k1}\otimes \cdots \otimes A_{kp-1}\otimes H_{kp} \end{array} \right] \notag \\ &&+\sum_{j=1}^{k-1}r_{j}A_{j1}\otimes \cdots \otimes A_{jp}-\left( \sum_{j=1}^{k-1}r_{j}\right) A_{k1}\otimes \cdots \otimes A_{kp} \notag \end{eqnarray} We need to determine when $d\mu _{k}$ is never onto. To this end observe that $\tau _{0}\left( N\right) $, the tangent space at each point of $\tau _{1}\left( N\right) $, equals \begin{eqnarray} &&\tau _{0}\left( n_{1}\right) \otimes Herm(n_{2})\otimes \cdots \otimes Herm(n_{p})+ \label{2} \\ &&Herm\left( n_{1}\right) \otimes \tau _{0}\left( n_{2}\right) \otimes \cdots \otimes Herm\left( n_{p}\right) + \notag \\ &&\cdots +Herm\left( n_{1}\right) \otimes \cdots \otimes Herm\left( n_{p-1}\right) \otimes \tau _{0}\left( n_{p}\right) . \notag \end{eqnarray} (Note, this is a sum, not a direct sum. There is a great deal of overlap in the terms. In particular, do not add dimensions.) Let us first prove part b of theorem 1 for the bipartite case. Thus $N=n_{1}n_{2}$, $n_{1}\leq n_{2}$, and $k<n_{1}^{2}$. We need to show $d\mu _{k}\left( \mathbf{Q}\right) $ is not onto for any $\mathbf{Q}=\left( \lambda _{1},\ldots ,\lambda _{k-1},A_{11},A_{12},\ldots ,A_{k1},A_{k2}\right) $.
To begin, notice that $k<n_{1}^{2}\leq n_{2}^{2}$ means that neither $\left\{ A_{j1}\right\} $ spans $Herm(n_{1})$ nor $\left\{ A_{j2}\right\} $ spans $Herm(n_{2})$. Hence if either the projections of the $A_{j1}$ onto $\tau _{0}\left( n_{1}\right) $ do not span $\tau _{0}\left( n_{1}\right) $ or the projections of the $A_{j2}$ onto $\tau _{0}\left( n_{2}\right) $ do not span $\tau _{0}\left( n_{2}\right) $, then $d\mu _{k}\left( \mathbf{Q}\right) $ cannot be onto. Indeed, without loss of generality suppose the projections of the $A_{j2}$ onto $\tau _{0}\left( n_{2}\right) $ do not span. Then there is a $C\in \tau _{0}\left( n_{2}\right) $ which is orthogonal to the span of those projections. Since $\left\{ A_{j1}\right\} $ does not span $Herm\left( n_{1}\right) $ there is a $B\in Herm\left( n_{1}\right) $ which is orthogonal to the span of $\left\{ A_{j1}\right\} $. The product $B\otimes C$ is then both in $\tau _{0}\left( n_{1}n_{2}\right) $ and orthogonal to every term in equation (\ref{1}) and so $d\mu _{k}\left( \mathbf{Q}\right) $ is not onto. Since $\dim \tau _{0}\left( n_{2}\right) =n_{2}^{2}-1$, the situation just considered occurs if any of the following hold: $k<n_{1}^{2}-1$, $n_{1}<n_{2}$, any of the $\lambda _{j}$ are 0, or the $\lambda _{j}$ add to 1. Therefore, to finish this part of the proof let us assume $n_{1}=n_{2}=n$, $k=n^{2}-1$, none of the $\lambda _{j}$ are 0 and the $\lambda _{j}$ do not add to 1. Suppose $A_{j1}=E_{j}+\frac{1}{n}I$ and $A_{j2}=F_{j}+\frac{1}{n}I$ where $\left\{ E_{j}\right\} $ and $\left\{ F_{j}\right\} $ are bases for $\tau _{0}\left( n\right) $.
In order to establish that $d\mu _{k}\left( \mathbf{Q}\right) $ is not onto, we only need to show that the image under $d\mu _{k}\left( \mathbf{Q}\right) $ of a basis of $\mathbb{R}^{k-1}\times \left( \tau _{0}\left( n\right) \times \tau _{0}\left( n\right) \right) ^{k}$ does not span $\tau _{0}\left( n^{2}\right) $. The elements of $\mathbb{R}^{k-1}\times \left( \tau _{0}\left( n\right) \times \tau _{0}\left( n\right) \right) ^{k}$ are of the form $\mathbf{V}=\left( r_{1},\ldots ,r_{k-1},H_{11},H_{12},\ldots ,H_{k1},H_{k2}\right) $. By successively picking one $r_{j}$ to be 1 and all the other entries in $\mathbf{V}$ to be 0 and then picking all $r_{j}$ to be 0 and successively picking $H_{ji}$ to be one of the $E_{s}$ or $F_{t}$ depending upon whether $i=1$ or $2$, we obtain a basis for $\mathbb{R}^{k-1}\times \left( \tau _{0}\left( n\right) \times \tau _{0}\left( n\right) \right) ^{k}$. Applying $d\mu _{k}\left( \mathbf{Q}\right) $ to this basis, we obtain the set \begin{equation} \left\{ \begin{array}{c} E_{s}\otimes F_{t}+E_{s}\otimes \frac{1}{n}I,E_{s}\otimes F_{t}+\frac{1}{n}I\otimes F_{t}, \\ E_{s}\otimes F_{t}+\frac{1}{n}I\otimes F_{t}+E_{s}\otimes \frac{1}{n}I-E_{k}\otimes F_{k}-\frac{1}{n}I\otimes F_{k}-E_{k}\otimes \frac{1}{n}I \end{array} \right\} \label{3} \end{equation} where $s$ and $t$ range independently from 1 to $n^{2}-1$.
Subtracting the first group of these elements from the second and third groups and adding the first group with $s=t=k$ to the third, we get the set \begin{equation} \left\{ E_{s}\otimes F_{t}+E_{s}\otimes \frac{1}{n}I,\frac{1}{n}I\otimes F_{t}-E_{s}\otimes \frac{1}{n}I,\frac{1}{n}I\otimes F_{t}-\frac{1}{n}I\otimes F_{k}\right\} \label{4} \end{equation} Subtracting the last group from the second and adding the result to the first group, we obtain \begin{equation} \left\{ E_{s}\otimes F_{t}+\frac{1}{n}I\otimes F_{k},\frac{1}{n}I\otimes F_{k}-E_{s}\otimes \frac{1}{n}I,\frac{1}{n}I\otimes F_{t}-\frac{1}{n}I\otimes F_{k}\right\} \label{5} \end{equation} Since $\dim \tau _{0}\left( n\right) =n^{2}-1$, there are $\left( n^{2}-1\right) \left( n^{2}-1\right) $ elements in the first group of this last set. There are $n^{2}-1$ elements in the second group and there are $n^{2}-2$ in the third group. Thus all told there are $n^{4}-2$ elements in the set. But $\dim \tau _{0}\left( n^{2}\right) =n^{4}-1$ and so the set cannot span $\tau _{0}\left( n^{2}\right) $, which means $d\mu _{k}\left( \mathbf{Q}\right) $ is never onto if $k<n_{1}^{2}$. Hence if $k<n_{1}^{2}$, then every point in $\mathbb{R}^{k-1}\times \left( \tau _{1}\left( n_{1}\right) \times \tau _{1}\left( n_{2}\right) \right) ^{k}$ is a critical point for $\mu _{k}$. It follows from Sard's theorem that the image of $\mu _{k}$ is of measure 0 in $\tau _{1}\left( N\right) $. The bipartite case of part b of theorem 1 is then a result of the facts that $\Sigma ^{k}\left( n_{1},n_{2}\right) $ is in the image of $\mu _{k}$ and any measure $0$ subset of $\tau _{1}\left( N\right) $ has measure 0 in $\Sigma \left( n_{1},n_{2}\right) $ too. To finish the proof of part b of theorem 1, we only need to note that $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) \subset \Sigma ^{k}\left( \prod_{j=1}^{p-1}n_{j},n_{p}\right) $ and use what we have just proved for the bipartite case. Let us now prove part c of theorems 1 and 2.
We shall use the rank theorem \cite{Narasimhan} which states that if $d\mu _{k}$ is onto at a point $\mathbf{Q}=\left( \lambda _{1},\ldots ,\lambda _{k-1},A_{11},\ldots ,A_{kp}\right) $, then $\mu _{k}$ maps some open ball centered at $\mathbf{Q}$ onto an open set containing $\mu _{k}\left( \mathbf{Q}\right) $. Thus we need to find a $\mathbf{Q}$ in the interior of $\Lambda _{k}\times \left( \mathcal{DM}\left( n_{1}\right) \times \cdots \times \mathcal{DM}\left( n_{p}\right) \right) ^{k}$ at which $d\mu _{k}\left( \mathbf{Q}\right) $ is onto. We know there are bases of $Herm(n_{i})$ which consist of elements in the interior of $\mathcal{DM}\left( n_{i}\right) $. We also know $\frac{1}{n_{p}}I$ is in the interior of $\mathcal{DM}\left( n_{p}\right) $. Therefore, since $k\geq n_{1}^{2}\cdots n_{p-1}^{2}=\dim Herm\left( n_{1}\cdots n_{p-1}\right) $, we can pick the $A_{ji}$ for $j=1,\ldots ,k$, $i=1,\ldots ,p-1$ to be in the interior of $\mathcal{DM}\left( n_{i}\right) $ and such that $\left\{ A_{j1}\otimes \cdots \otimes A_{jp-1}\right\} $ spans $Herm\left( n_{1}\cdots n_{p-1}\right) $. Choosing them so and also choosing all $\lambda _{j}=\frac{1}{k}$ and all $A_{jp}=\frac{1}{n_{p}}I$, we obtain a $\mathbf{Q}$ which satisfies our needs. To see this, note an element of $\mathbb{R}^{k-1}\times \left( \tau _{0}\left( n_{1}\right) \times \cdots \times \tau _{0}\left( n_{p}\right) \right) ^{k}$ is of the form $\mathbf{V}=(r_{1},\ldots ,r_{k-1},H_{11},\ldots ,H_{1p},\ldots ,H_{k1},\ldots ,H_{kp})$. Let $\Gamma _{i}$ be the set of all $\mathbf{V}$ for which the only non-zero component is an $H_{ji}$.
Since $\left\{ A_{j1}\otimes \cdots \otimes A_{jp-1}\right\} $ spans $Herm\left( n_{1}\cdots n_{p-1}\right) $ and $A_{jp}=\frac{1}{n_{p}}I$, we have that $d\mu _{k}\left( \mathbf{Q}\right) $ maps $\Gamma _{i}$ onto $\tau _{0}\left( n_{1}\cdots n_{p-1}\right) \otimes \frac{1}{n_{p}}I$ for $i<p$ and $\Gamma _{p}$ onto $Herm\left( n_{1}\cdots n_{p-1}\right) \otimes \tau _{0}\left( n_{p}\right) $. These two sets span $\tau _{0}\left( N\right) $ and so part c of theorems 1 and 2 is proved. To finish the proofs of these two theorems we first note that if $k$ satisfies the condition in part b of theorem 2, then the dimension of the domain of $\mu _{k}$ is less than the dimension of its target. In such a case it is impossible for $d\mu _{k}\left( \mathbf{Q}\right) $ to ever be onto because the dimension of its domain is too small. And so the result follows again from Sard's theorem. As for part a of theorems 1 and 2, it is a consequence of the fact $\Sigma ^{k}\left( n_{1},\ldots ,n_{p}\right) $ is the image of the connected, compact set $\Lambda _{k}\times \left( \mathcal{DM}\left( n_{1}\right) \times \cdots \times \mathcal{DM}\left( n_{p}\right) \right) ^{k}$ under the continuous map $\mu _{k}$. Finally, as for theorem 3, let us recall that the set of pure states in $\mathcal{DM}\left( q\right) $ is isomorphic to the complex projective space $\mathbb{CP}\left( q-1\right) $, which has real dimension $2q-2$. Hence we need to consider the composition of the embedding \begin{equation*} \iota :\mathbb{R}^{k-1}\times \left( \mathbb{CP}\left( n_{1}-1\right) \times \cdots \times \mathbb{CP}\left( n_{p}-1\right) \right) ^{k}\rightarrow \mathbb{R}^{k-1}\times \left( \tau _{1}\left( n_{1}\right) \times \cdots \times \tau _{1}\left( n_{p}\right) \right) ^{k} \end{equation*} with $\mu _{k}$.
Part a of theorem 3 is a result of the fact $\Sigma _{pure}^{k}\left( n_{1},\ldots ,n_{p}\right) $ is the image of the connected, compact set $\Lambda _{k}\times \left( \mathbb{CP}\left( n_{1}-1\right) \times \cdots \times \mathbb{CP}\left( n_{p}-1\right) \right) ^{k}$ under the continuous map $\mu _{k}\circ \iota $. As for part b, it is a simple consequence of Sard's theorem and the observation that $\mathbb{R}^{k-1}\times \left( \mathbb{CP}\left( n_{1}-1\right) \times \cdots \times \mathbb{CP}\left( n_{p}-1\right) \right) ^{k}$ has dimension $k\left( 1-2p+\sum_{j=1}^{p}2n_{j}\right) -1$ while $\tau _{1}\left( N\right) $ has dimension $n_{1}^{2}\cdots n_{p}^{2}-1$. Since $d\left( \mu _{k}\circ \iota \right) $ is a linear transformation at each point, it cannot be onto if the dimension of its domain is strictly less than the dimension of its target, which is the case here if $k<\left( n_{1}^{2}\cdots n_{p}^{2}\right) /\left( 1-2p+\sum_{j=1}^{p}2n_{j}\right) $. \begin{acknowledgement} Research for this paper was partially funded by the Naval Research Laboratory in Washington, D.C. \end{acknowledgement}
\section{Introduction} The synthesis of unknown heavy nuclei has been in the spotlight over the last decades with the development of new facilities for rare isotope accelerators~\cite{Moller16,GS16,GGW16}. In particular, the structure of neutron-rich heavy nuclei is expected to shed light on our understanding of nuclear structure in isospin asymmetric nuclear matter and it will give insight into the structure of neutron stars and the process of nuclear synthesis during the evolution of stars~\cite{GLM07}. Therefore, it can be a test ground for various issues of nuclear physics such as the nuclear density functional, strong nuclear interactions, various decay processes, the rp-process, etc., which makes it one of the most exciting topics in low energy nuclear physics~\cite{OST17}. The formation of such heavy nuclei is identified through their decay processes such as the $\alpha$ decay, $\beta$ decay, and spontaneous fission~\cite{VS66b}. The competition between these decay processes is reflected in branching ratios, and, in fact, the heavy nuclei with the atomic number $Z>105$ were found to rarely survive for more than a few minutes~\cite{AWWK12, WAWK12}. The study of the nuclear $\alpha$ decay process has a very long history, as it is one of the major decay processes of nuclei~\cite{Mang64,VS66b}. In particular, the formation of a new heavy nuclide would be mostly identified through its $\alpha$ decay chains~\cite{Oganessian07,MMKH12,KYDA14}. Modern approaches to the theoretical understanding of the nuclear $\alpha$ decay are based on effective nuclear interactions such as the square well potential model~\cite{BMP91,BMP92}, the $\cosh$ potential model~\cite{BMP92b}, the unified fission model~\cite{DZGWP10}, the double-folding model~\cite{RSB05,SCB07,RSB08}, and so on. The most important factor in the $\alpha$ decay process of heavy nuclei is the accurate information on the $Q$ value for the decay process, which reflects the structure of heavy nuclei through binding energy.
The importance of the $Q$ value in the $\alpha$ decay lifetime can easily be found in the Geiger-Nuttall law~\cite{GN11} and its improved version of Viola and Seaborg~\cite{VS66b}.\footnote{For example, in the case of the alpha decay of $\nuclide[212]{Po} \to \nuclide[208]{Pb} + \alpha$, a difference of 0.1 MeV in the $Q$ value of the reaction, where $Q_{\rm expt.} \approx 8.95$~MeV, results in about a factor of 1.7 difference in the calculated half-life of \nuclide[212]{Po}.} The next most sensitive factor in the determination of the $\alpha$ decay width is the nucleon distribution inside the daughter nucleus, which determines the $\alpha$ potential. Since the $\alpha$ decay is basically a quantum tunneling effect, the exact positions of the classical turning points and the profile of the barrier, i.e., its height and width, are essential parts for the estimation of the $\alpha$ decay lifetime. Therefore, the information on the nuclear potential felt by the $\alpha$ cluster inside the parent nucleus is important to estimate the $\alpha$ decay width. Furthermore, the Coulomb potential is responsible for the repulsive potential barrier together with the angular momentum barrier, so the potential shape due to the proton distribution in the daughter nucleus has a nontrivial role in the $\alpha$ decay process. The purpose of the present work is to go beyond a simple model approach for the $\alpha$ potential by developing a more realistic $\alpha$ potential based on nucleon density profiles for estimating $\alpha$ decay half-lives. In the present work, we calculate the $\alpha$ decay half-lives of heavy nuclei within the Wentzel-Kramers-Brillouin (WKB) approximation by calculating the nuclear potential felt by the $\alpha$ cluster using phenomenological nuclear force models.
The nuclear potential form for the $\alpha$ cluster is obtained from the Skyrme-type interaction as prescribed in Ref.~\cite{SLHO15}, which requires the proton and neutron distribution functions as inputs. We then use the Skyrme SLy4 model~\cite{CBHMS98} and the Gogny D1S model~\cite{BGG91} as non-relativistic models and the relativistic mean-field DD-ME2 model~\cite{LNVR05} as well. For the $Q$ values of the $\alpha$ decay processes, we use the experimental data whenever available, and, if not, we make use of the liquid droplet model (LDM) elucidated in Ref.~\cite{SPLE04}. This paper is organized as follows. In Sec.~\ref{sec:qval}, we review the LDM to calculate the binding energy to be used when the experimental $Q$ value is not known. The Coulomb diffusion and exchange terms are included as well as the pairing and shell corrections, which gives a better fit to existing data. In the shell corrections, we use the last magic number as a free variable to minimize the root-mean-square deviation of the total binding energy. We also check the $Q$ values using the phenomenological formula as a function of the isospin asymmetry $I$, with $I=(N-Z)/A$, as in Ref.~\cite{DZGWP10}. Section~\ref{sec:models} briefly explains the nuclear models used to find the density profiles of nucleons inside nuclei, and we construct the effective nuclear potential for the $\alpha$ cluster. The parameters of the effective potential for each model of the nucleon density distribution are determined. Our results are presented in Sec.~\ref{sec:results} and compared to experimental data. The predictions on the unobserved $\alpha$ decays of heavy nuclei are given as well. We summarize and conclude in Sec.~\ref{sec:conclusion}. \section{\boldmath Nuclear models} \label{sec:qval} The $Q$ value plays an important role in determining the lifetime of the $\alpha$ decay as it determines the assaulting frequency of the $\alpha$ particle for a given potential well. It also sets the penetration width for quantum tunneling.
In the estimation of $\alpha$ decay lifetimes, we use the empirical $Q$ values, if available. However, for unobserved decay processes, we have to resort to model predictions on the binding energy. In this Section, we review the LDM that will be used in the present work. \subsection{Liquid Droplet Model} To estimate the unknown binding energies of heavy nuclei, we make use of the LDM with some modifications as prescribed in Refs.~\cite{MS69,SPLE04}. In general, heavy nuclei are neutron-rich and a neutron skin is likely to exist on the surface. For example, the neutron skin thickness of \nuclide[208]{Pb} was investigated with the electric dipole response~\cite{TPVF11}, the parity radius experiment (PREX)~\cite{PREX12, HAJR12}, and, more recently, through coherent $\pi^0$ photoproduction~\cite{CB14}. All numerical calculations using Skyrme-Hartree-Fock, Gogny, and relativistic mean field models show the outer layer of neutrons in the neutron-rich heavy nuclei. Thus, it is natural to include the neutron skin effects in the LDM. The binding energy in the LDM for a nucleus of ($Z, A$) is given as~\cite{SPLE04} \begin{eqnarray}\label{eq:ldm} E & = & f_{B}^{} \left( A - N_{s} \right) + 4\pi R^{2} \sigma(\mu_{n}) + \mu_{n}N_{s} + E_{\text{Coul}} \nonumber \\ && \mbox{} + E_{\text{pair}} + E_{\text{shell}}, \end{eqnarray} where $f_B^{}$ is the binding energy per baryon of infinite nuclear matter, $N_s$ is the number of neutrons in the neutron skin on the surface, $R$ is the radius of the nucleus, and $\sigma (\mu_n)$ is the surface tension as a function of the neutron chemical potential $\mu_n$. $E_\text{Coul}$ is the Coulomb energy, $E_\text{pair}$ is the pairing energy, and $E_\text{shell}$ includes the shell corrections.
In this model, $f_B^{}$ is a phenomenological energy function, which reads \begin{equation} f_B^{} = - B + S_v(1-2x)^2 + \frac{K}{18}(1-u)^2, \end{equation} where $B$ is the binding energy per nucleon, $S_v$ is the nuclear symmetry energy, and $K$ is the nuclear incompressibility of symmetric nuclear matter at nuclear saturation density $\rho_0^{}$. Here, $x$ and $u$ are defined as \begin{equation} x = \frac{Z}{A-N_s} , \quad u =\frac{\rho}{\rho_0^{}}. \end{equation} The surface tension is a function of $x$, and we find that the simple expansion of $\sigma(x) = \sigma_0^{} - \sigma_\delta^{} (1-2x)^2$ is not a good approximation for highly neutron-rich nuclei. Therefore, we use the form suggested in Refs.~\cite{RPL83, Lim12}, which reads \begin{equation} \sigma(x) = \sigma_0^{} \frac{2\cdot 2^\alpha + q}{x^{-\alpha} + q + (1-x)^{-\alpha}}\,. \end{equation} The parameters $\sigma_0^{}$, $\alpha$, and $q$ will be determined later. The Coulomb energy contribution to the total mass is obtained from the classical Coulomb interaction, the Coulomb diffusion term, and the exchange term. It is then written as \begin{equation} E_\text{Coul} = \frac{3Z^2e^2}{5R} -\frac{\pi^2 Z^2 e^2 d^2}{2R^3} - \frac{3Z^{4/3}e^2}{4R} \left(\frac{3}{2\pi}\right)^{2/3}\,, \end{equation} where $d$ ($= 0.55~\text{fm})$ is the surface diffuseness parameter~\cite{SPLE04} and $R$ is the average radius of the nucleus. The general expression for the pairing energy in the LDM reads \begin{equation} E_\text{pair} = (-1)^N \frac{\Delta_N}{\sqrt{A}} + (-1)^Z\frac{\Delta_P}{\sqrt{A}}\,, \end{equation} where the pairing energies for protons and neutrons are treated separately, since in neutron-rich nuclei the single-particle energy of the last-filled neutron would be higher than that of the last-filled proton.
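For orientation, the three Coulomb terms are easy to evaluate numerically. The following sketch is ours, not from the text; it assumes the radius parametrization $R = r_0 A^{1/3}$ with $r_0 = 1.2$~fm and the standard value $e^2 = 1.44$~MeV\,fm:

```python
import math

E2 = 1.44        # e^2 in MeV fm (assumed standard Coulomb coupling)
D_DIFF = 0.55    # surface diffuseness parameter in fm (from the text)

def coulomb_energy(Z, A, r0=1.2):
    """Direct, diffusion, and exchange Coulomb terms of the LDM, in MeV.

    The sharp-sphere radius R = r0 * A**(1/3) is an assumption of this
    sketch, not the averaged radius used in the paper.
    """
    R = r0 * A ** (1.0 / 3.0)
    direct = 3.0 * Z ** 2 * E2 / (5.0 * R)
    diffusion = math.pi ** 2 * Z ** 2 * E2 * D_DIFF ** 2 / (2.0 * R ** 3)
    exchange = (3.0 * Z ** (4.0 / 3.0) * E2 / (4.0 * R)
                * (3.0 / (2.0 * math.pi)) ** (2.0 / 3.0))
    return direct - diffusion - exchange

e_pb = coulomb_energy(82, 208)   # roughly 740 MeV; the direct term dominates
```

The diffusion and exchange terms reduce the classical result by only a few percent for a heavy nucleus, but that correction is tens of MeV and therefore matters for $Q$ values.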
For the shell contribution to the total binding energy, we follow the prescription of Duflo and Zuker~\cite{DZ94, DV09}, which writes the shell correction as \begin{equation} E_\text{shell} = a_1^{} S_2 + a_2^{} (S_2)^2 + a_3^{} S_3 + a_{np}^{} S_{np}\,, \end{equation} where $a_1^{}$, $a_2^{}$, $a_3^{}$, and $a_{np}^{}$ are parameters to be determined, and \begin{equation} \begin{aligned} S_{2} & = \frac{n_v \bar{n}_v}{D_n} + \frac{p_v \bar{p}_v}{D_p} \,, \\ S_{3} & = \frac{n_v \bar{n}_v (n_v - \bar{n}_v)}{D_n} + \frac{p_v \bar{p}_v (p_v - \bar{p}_v)}{D_p} \,, \\ S_{np} & = \frac{n_v \bar{n}_v p_v \bar{p}_v}{D_nD_p} \,. \end{aligned} \end{equation} Here, $n_v$ and $p_v$ are the valence numbers of neutrons and protons, respectively, i.e., the minimal distances of the neutron and proton numbers from the magic numbers, 2, 8, 20, 28, 50, 82, 126, and 184 (or 168). For example, for \nuclide[56]{Fe}, we obtain $n_v = \abs{30 - 28} = 2$ and $p_v = \abs{26 - 28} = 2$. $D_n$ ($D_p$) is the degeneracy number, i.e., the interval between the two magic numbers adjacent to the neutron (proton) number. For instance, in the case of \nuclide[56]{Fe}, the nearest two magic numbers for $N=30$ are $28$ and $50$, which then leads to $D_{N=30} = 50 - 28 = 22$. Finally, $\bar{n}_v$ and $\bar{p}_v$ are the complementary valence numbers for neutrons and protons, respectively, and their explicit forms are \begin{equation} \bar{n}_v \equiv D_n - n_v, \quad \bar{p}_v \equiv D_p - p_v. \end{equation} Again, for \nuclide[56]{Fe}, we have $\bar{n}_v(30) = 22 - 2 = 20$.
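The valence-number bookkeeping above can be made concrete with a short sketch (ours; the helper names are not from the text), which reproduces the \nuclide[56]{Fe} numbers quoted in the worked example:

```python
import bisect

MAGIC = [2, 8, 20, 28, 50, 82, 126, 184]   # taking 184 as the 8th magic number

def valence(n):
    """Return (n_v, D, nbar_v) for a non-magic nucleon number n."""
    i = bisect.bisect_left(MAGIC, n)       # index of the magic number above n
    lo, hi = MAGIC[i - 1], MAGIC[i]        # adjacent magic numbers
    n_v = min(n - lo, hi - n)              # minimal distance to a magic number
    return n_v, hi - lo, (hi - lo) - n_v

def shell_terms(Z, N):
    # The Duflo-Zuker combinations S2, S3, Snp entering E_shell.
    n_v, Dn, nbar = valence(N)
    p_v, Dp, pbar = valence(Z)
    S2 = n_v * nbar / Dn + p_v * pbar / Dp
    S3 = n_v * nbar * (n_v - nbar) / Dn + p_v * pbar * (p_v - pbar) / Dp
    Snp = n_v * nbar * p_v * pbar / (Dn * Dp)
    return S2, S3, Snp

# 56Fe: Z = 26, N = 30, as in the worked example of the text.
assert valence(30) == (2, 22, 20)   # n_v = |30-28|, D_n = 50-28, nbar_v = 22-2
assert valence(26) == (2, 8, 6)
```

These terms, multiplied by the fitted coefficients $a_1^{}$, $a_2^{}$, $a_3^{}$, and $a_{np}^{}$, give the shell correction $E_\text{shell}$.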
\begin{table}[t] \begin{tabular}{cccl} \hline \hline & ~~Case I~~ & ~~Case II~~ & ~Unit \\ \hline $B$ & 16.125 & 16.370 & ~MeV \\ $\rho_0^{}$ & 0.155 & 0.155 & ~fm$^{-3}$ \\ $\sigma_0^{}$ & 1.256 & 1.300 & ~MeV\,fm$^{-2}$ \\ $\alpha$ & 4.0 & 3.7 & \\ $q$ & 60.00 & 25.48 & \\ $S_v$ & 31.818 & 32.471 & ~MeV\\ $K$ & 250.00 & 226.389 & ~MeV\\ $\Delta_n$ & 5.458 & 6.232 & ~MeV\\ $\Delta_p$ & 5.807 & 11.760 & ~MeV\\ $a_1^{}$ & 1.265 & $-0.143$ & ~MeV\\ $a_2^{}$ & $-8.601\times 10^{-3}$ & $9.307\times 10^{-3}$ & ~MeV\\ $a_3^{}$ & $-4.007\times 10^{-3}$ & $2.216\times 10^{-3}$ & ~MeV\\ $a_{np}^{}$ & $-9.663\times 10^{-2}$ & $-4.231\times 10^{-2}$ & ~MeV\\ $M(8)$ & 184 & 168 & \\ RMSD & 1.144 & 0.218 & ~MeV\\ \hline \end{tabular} \caption{The parameters of the LDM. The values of case I are obtained by a least-$\chi^2$ fit to the observed binding energies of 2336 nuclei. The parameters of case II are found by fitting to the experimental $Q$ values for the nuclei with $Z \ge 100$, where we have a total of 100 data points. $M(8)$ is the 8th magic number in each case. RMSD in the last row denotes the root-mean-square deviation. The RMSD in case I is for binding energies, whereas that in case II is for $Q$ values. } \label{table1} \end{table} In the present work, we will work with two parameter sets as given in Table~\ref{table1}. The parameters of case I are obtained by fitting to the experimentally known binding energies of 2336 nuclei. Therefore, this corresponds to a global fitting. On the other hand, since we are considering $\alpha$ decays of neutron-rich heavy nuclei, it may be useful to focus on heavy nuclei for that purpose. Thus the second parameter set is found by using the measured $Q$ values of heavy nuclei with $Z \ge 100$. We use 100 data points for finding the parameter set of case II. Note that $M(8)$ in Table~\ref{table1} is the 8th magic number in the LDM parameterization with each parameter set.
Once the masses of nuclei are evaluated by Eq.~\eqref{eq:ldm}, we can calculate the $Q$ value for $\alpha$ decay through~\cite{MRDT06} \begin{eqnarray}\label{eq:qval} Q &=& \Delta M(Z,A) - \Delta M(Z-2, A-4) - \Delta M_{\alpha} \nonumber \\ && \mbox{} + 10^{-6}\, k\left[ Z^\beta - (Z-2)^\beta \right] , \end{eqnarray} where $\Delta M_\alpha = 2.4249$~MeV. The values of $k$ and $\beta$ are ($k=8.7$~MeV, $\beta = 2.517$) for nuclei with $Z \ge 60$, and ($k=13.6$~MeV, $\beta = 2.408$) for nuclei with $Z < 60$. \subsection{\boldmath Local formula for $Q_\alpha$} Considering heavy nuclei with $Z \ge 90$ and $N \ge 140$, Dong et al.~\cite{DR08,DZGWP10} developed a local mass formula for nuclei with large $N$ and $Z$ values. A Taylor expansion then leads to an expression for the local $Q$ value, including shell effects, as \begin{eqnarray}\label{eq:qform} Q & = & a \frac{Z}{A^{4/3}} \left(3A-Z \right) + b\left( \frac{N-Z}{A} \right)^2 \nonumber \\ && \mbox{} + c \left[ \frac{\abs{N-152}}{N} - \frac{\abs{N-154}}{N-2}\right] \nonumber \\ && \mbox{} + d \left[ \frac{\abs{Z-110}}{Z} - \frac{\abs{Z-112}}{Z-2}\right] + e , \end{eqnarray} where $a$, $b$, $c$, $d$, and $e$ are parameters to be fitted. Note that pairing effects are neglected since the semi-classical pairing formula gives almost the same contribution to the total binding energies of the parent and daughter nuclei and thus does not change the $Q$ value. Since our goal is to compute the half-lives of superheavy nuclei through $\alpha$ decay processes, we obtain the parameters in Eq.~\eqref{eq:qform} by fitting to the measured $Q$ values of nuclei with $Z \ge 100$. The resulting parameters are shown in Table~\ref{tab2}. \begin{table}[t] \begin{tabular}{cccccc} \hline\hline $a$ & $b$ & $c$ & $d$ & $e$ & RMSD \\ \hline $0.90753$ & $-97.84028$ & $16.15924$ & $-18.95722$ & $-26.16600$ & $0.255$ \\ \hline \end{tabular} \caption{The best-fit parameters of Eq.~\eqref{eq:qform}. 
All parameters have a unit of MeV.} \label{tab2} \end{table} Figure~\ref{fig:qval} shows the $Q$ values obtained from the LDM with Eq.~\eqref{eq:qval} and those from the local formula of Eq.~\eqref{eq:qform}. It is found that case II and the local formula reproduce the measured $Q$ values more reliably than case I. \begin{figure*}[t] \includegraphics[scale=0.65]{fig1.pdf} \caption{$Q$ values for $\alpha$ decays of nuclei between $Z = 106$ and $Z = 118$. The numerical values can be found in Table~\ref{tb:heavy}. A small horizontal offset is used for better visibility for a given value of $Z$.} \label{fig:qval} \end{figure*} \section{\boldmath Potential for the $\alpha$ cluster}\label{sec:models} In the $\alpha$ cluster model, the nuclear $\alpha$ decay is described as a quantum tunneling effect. Once the energy of the reaction, i.e., the $Q$ value, is determined, the next step is to find the potential for the $\alpha$ cluster inside the parent nucleus. In this section, we discuss how we use phenomenological models to construct the potential for the $\alpha$ cluster. \subsection{\boldmath Potential form} In the $\alpha$ cluster model, the $\alpha$ particle is already formed in the parent nucleus and penetrates the potential barrier to cause the $\alpha$ decay process. Therefore, the estimation of lifetimes requires information on the potential of the $\alpha$ cluster created by the core nucleus, i.e., the daughter nucleus after the decay. The $\alpha$ cluster potential can be decomposed as \begin{equation} V = V_N + V_C + V_L, \label{eq:V_alpha} \end{equation} where $V_N$ is the nuclear potential for the $\alpha$ cluster, $V_C$ is the Coulomb potential provided by the protons of the core nucleus, and $V_L$ is the centrifugal potential arising from the relative orbital angular momentum between the $\alpha$ particle and the core nucleus. 
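As an aside, the local formula of Eq.~\eqref{eq:qform} with the parameters of Table~\ref{tab2} is simple enough to check directly. The sketch below (Python; not the authors' code) evaluates it for \nuclide[294][118]{Og}, whose measured $Q$ value is $11.81 \pm 0.06$~MeV; the formula gives roughly $11.7$~MeV, i.e., within the quoted RMSD of $0.255$~MeV:

```python
# Local Q-value formula, Eq. (qform), with the best-fit parameters of
# Table tab2 (all in MeV).  A sketch only, not part of the original work.
a, b, c, d, e = 0.90753, -97.84028, 16.15924, -18.95722, -26.16600

def q_alpha_local(Z, A):
    """Q value (MeV) for the alpha decay of the nucleus (Z, A)."""
    N = A - Z
    return (a * Z / A**(4 / 3) * (3 * A - Z)
            + b * ((N - Z) / A)**2
            + c * (abs(N - 152) / N - abs(N - 154) / (N - 2))
            + d * (abs(Z - 110) / Z - abs(Z - 112) / (Z - 2))
            + e)

# 294Og: roughly 11.7 MeV, versus the measured 11.81 +- 0.06 MeV.
print(round(q_alpha_local(118, 294), 2))
```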
In principle, the nuclear potential of the $\alpha$ particle could be computed if the interactions between nucleons inside a nucleus were completely known. However, this is beyond the scope of the present work, and we instead invoke the Skyrme force model to obtain the form of $V_N$. Then, as described in Ref.~\cite{SLHO15}, $V_N$ takes the form of \begin{eqnarray} V_N & = & \alpha \rho + \beta(\rho_n^{5/3} + \rho_p^{5/3}) + \gamma \rho^{\epsilon}(\rho^2 + 2\rho_n^{} \rho_p^{}) \nonumber \\ && \mbox{} + \delta \frac{1}{r}\frac{d \rho}{dr} + \eta\frac{d^2 \rho}{dr^2} \,, \label{eq:V_N} \end{eqnarray} where $\rho = \rho_n^{} + \rho_p^{}$ with $\rho_n^{}$ ($\rho_p^{}$) being the density distribution of neutrons (protons). This model contains six parameters, namely, $\alpha$, $\beta$, $\gamma$, $\delta$, $\eta$, and $\epsilon$. These parameters will be determined by fitting to the empirical data for $\alpha$ decay half-lives of heavy nuclei, as discussed in the next subsection. Furthermore, the nuclear potential in Eq.~\eqref{eq:V_N} is controlled by the density distribution of nucleons, which should be provided by microscopic models of nuclear structure. Once the nucleon distribution is known, the Coulomb potential term $V_C$ can be calculated through \begin{equation} V_C = 8\pi e^2 \left[ \frac{1}{r} \int_0^r \rho_p^{}(r^\prime) r^{\prime2} dr^\prime + \int_r^\infty \rho_p^{}(r^\prime) r^\prime dr^\prime \right]. \end{equation} The centrifugal potential $V_L$ is written as \begin{equation} V_L = \frac{\hbar^2}{2m_\mu r^2} \left(\ell +\frac{1}{2}\right)^2 , \end{equation} where $m_\mu$ is the reduced mass, and the Langer modification factor~\cite{Langer37} is adopted. \subsection{Nucleon density profiles} Since the $\alpha$ cluster potential of Eq.~\eqref{eq:V_N} requires information on the density profile of the daughter nucleus, we rely on microscopic models for nuclear structure. 
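The $V_C$ expression above already carries the charge $2e$ of the $\alpha$ particle in its prefactor, so for a uniformly charged sphere it must reduce to the familiar analytic result. The following sketch (Python; not the authors' code, with $Z=116$ and $R=7$~fm as illustrative values only) verifies this numerically:

```python
import math

# Sanity check of the V_C integral for a uniformly charged sphere of
# radius R.  Since the 8*pi*e^2 prefactor includes the alpha charge 2e,
#   V_C(r) = Z e^2 (3 - r^2/R^2)/R  for r <= R,
#   V_C(r) = 2 Z e^2 / r            for r >  R.
E2 = 1.43996          # e^2 in MeV fm
Z, R = 116, 7.0       # illustrative daughter charge and radius (fm)
RHO0 = 3 * Z / (4 * math.pi * R**3)   # uniform proton density, integral = Z

def rho_p(r):
    return RHO0 if r <= R else 0.0

def v_coulomb(r, n=40000):
    """Midpoint-rule evaluation of
    V_C = 8 pi e^2 [ (1/r) int_0^r rho r'^2 dr' + int_r^inf rho r' dr' ]."""
    rmax = max(r, R)          # the density vanishes beyond R
    h = rmax / n
    inner = outer = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        if x < r:
            inner += rho_p(x) * x * x * h
        else:
            outer += rho_p(x) * x * h
    return 8 * math.pi * E2 * (inner / r + outer)

def v_exact(r):
    return Z * E2 * (3 - (r / R)**2) / R if r <= R else 2 * Z * E2 / r

for r in (2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} fm:  numeric {v_coulomb(r):8.3f}  "
          f"analytic {v_exact(r):8.3f} MeV")
```

In the actual calculation the proton density of the daughter nucleus replaces the uniform profile, but the same quadrature applies.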
In the present work, we consider the Skyrme SLy4 (zero-range)~\cite{CBHMS98} and the Gogny D1S (finite-range)~\cite{BGG91} models as non-relativistic approaches and the relativistic mean-field DD-ME2 model of Ref.~\cite{LNVR05} as a relativistic approach. The Skyrme force model is based on a zero-range nucleon-nucleon interaction with momentum and density dependence, which reads \begin{eqnarray} \label{eq:skyint} v_{ij}^{} &=& t_0^{} \left(1 + x_0^{} P_\sigma \right) \delta \left(\mathbf{r}_i^{} - \mathbf{r}_j^{} \right) \nonumber \\ && \mbox{} + \frac{t_1^{}}{2} \left(1+x_1^{} P_\sigma \right) \nonumber \\ && \mbox{} \quad \times \left[ \delta \left(\mathbf{r}_i^{} - \mathbf{r}_j^{} \right) \mathbf{k}^2 + \mathbf{k}^{\prime2} \delta \left(\mathbf{r}_i^{} - \mathbf{r}_j^{} \right) \right] \nonumber \\ && \mbox{} + t_2^{} (1+ x_2^{} P_\sigma)\, \mathbf{k}^\prime \cdot \delta (\mathbf{r}_i^{} - \mathbf{r}_j^{} ) \mathbf{k} \nonumber \\ && \mbox{} + \frac{t_3^{}}{6} \left(1+x_3^{} P_\sigma \right)\rho^{\alpha}\delta(\mathbf{r}_i^{} -\mathbf{r}_j^{}) \nonumber \\ && \mbox{} + i \, W_0^{}\, \mathbf{k}^{\prime} \delta(\mathbf{r}_i^{} -\mathbf{r}_j^{})\times \mathbf{k} \cdot (\bm{\sigma}_i + \bm{\sigma}_j) , \end{eqnarray} where $P_\sigma$ is the spin exchange operator, and $\bm{\sigma}_i$ are the Pauli spin matrices. Here, $\mathbf{k}$ and $\mathbf{k}^\prime$ are the relative momenta of two nucleons before and after the interaction, respectively, and $W_0^{}$ is the strength of the spin-orbit coupling. There are many versions of the parameter set $(t_i,x_i,W_0)$ and, in the present work, we use the SLy4 model compiled in Ref.~\cite{CBHMS98}. 
Compared with the Skyrme force model, the Gogny force assumes finite-range nucleon-nucleon interactions and a zero-range multi-body force, which leads to~\cite{DG80} \begin{eqnarray} v_{12}^{} &=& \sum_{j=1,2} \exp \left\{-\frac{ \left(\mathbf{r}_1^{} - \mathbf{r}_2^{} \right)^2}{\mu_j^2} \right\} \nonumber \\ && \mbox{} \qquad \times \left( W_j + B_j P_\sigma - H_j P_\tau - M_j P_\sigma P_\tau \right) \nonumber \\ && \mbox{} + t_0^{} \left(1 + x_0^{} P_\sigma \right) \rho^{\alpha} \left( \frac{\mathbf{r}_1^{} + \mathbf{r}_2^{}}{2} \right) \delta \left( \mathbf{r}_1^{} - \mathbf{r}_2^{} \right) \nonumber \\ && \mbox{} + i W_{LS} \,\mathbf{k}^\prime \delta \left(\mathbf{r}_1^{} - \mathbf{r}_2^{} \right) \times \mathbf{k} \cdot \left( \bm{\sigma}_1^{} + \bm{\sigma}_2^{} \right) , \end{eqnarray} where $P_\tau$ is the isospin exchange operator. We use the parameter values known as the D1S model in Ref.~\cite{BGG91}. For the nucleon density distribution, we also use the relativistic mean-field model of Refs.~\cite{LNVR05,NPVR14}, which gives a satisfactory description of the properties of finite nuclei. 
In this model, the relativistic Lagrangian density is given by \begin{eqnarray} \mathcal{L} &=& \bar{\psi} \left( i \slashed{\partial} -m \right) \psi + \frac{1}{2} \partial^\mu \sigma \partial_\mu \sigma - \frac{1}{2} m_\sigma^2 \sigma^2 - g_\sigma \bar{\psi} \sigma \psi \nonumber \\ && \mbox{} - \frac{1}{4} \Omega^{\mu\nu} \Omega_{\mu\nu} + \frac{1}{2}m_\omega^2\, \omega^\mu \omega_\mu - g_\omega \bar{\psi} \gamma^\mu \omega_\mu \psi \nonumber \\ && \mbox{} - \frac{1}{4}\vec{R}^{\mu\nu} \cdot \vec{R}_{\mu\nu} + \frac{1}{2}m_\rho^2\, \vec{\rho}^{\,\mu} \cdot \vec{\rho}_\mu - g_\rho \bar{\psi} \gamma^\mu \vec{\rho}_\mu \cdot \vec{\tau} \psi \nonumber \\ && \mbox{} -\frac{1}{4} F^{\mu\nu} F_{\mu\nu} - e\bar{\psi} \gamma^\mu A_\mu \frac{(1-\tau_3^{})}{2}\psi\,, \end{eqnarray} where $\Omega^{\mu\nu}$, $\vec{R}^{\mu\nu}$, and $F^{\mu\nu}$ are the field strength tensors of the $\omega$ vector meson field $\omega_\mu$, the isovector $\rho$ vector meson field $\vec{\rho}_\mu$, and the photon field $A_\mu$, respectively. Note that the coupling constants of the mesons to the nucleon are density-dependent so as to reproduce the properties of nuclear matter and finite nuclei. In the present work, we adopt the parameter set given as the DD-ME2 model in Ref.~\cite{LNVR05}. Within the Skyrme and Gogny force models, we solve Schr\"{o}dinger-like equations to obtain the density profile of a nucleus. On the other hand, in the relativistic mean-field model, we solve the Dirac equation to get the density profile for a given nucleus. Once the density profile is known, one can find the $\alpha$ potential for each nucleus, and the $\alpha$ decay lifetime can be computed. Since the nuclear potential in Eq.~\eqref{eq:V_N} contains six parameters, we determine these parameters by fitting to the experimental data for the $\alpha$ decays of even-even nuclei ($\ell=0$) as we have done in Ref.~\cite{SLHO15}. Table~\ref{tb:param} shows the parameters for the nuclear $\alpha$ potential determined in this manner. 
The potential parameters for each model are found to have similar magnitudes except for $\gamma$, which is correlated with the value of $\epsilon$. The $\gamma$ term is related to the multi-body force, and we choose $\epsilon = \frac{1}{3}$ in the Gogny D1S model reflecting the original $\epsilon$ value in the Gogny $NN$ interaction. \begin{table}[t] \begin{tabular}{ccccl} \hline \hline Parameter & SLy4 & D1S & DD-ME2 & ~~Unit \\ \hline $\alpha$ & $-1484.58$ & $-1499.04$ & $-1524.24$ & ~~MeV fm$^{3}$ \\ $\beta$ & $1355.57$ & $1248.80$ & $1289.04$ & ~~MeV fm$^{5}$ \\ $\gamma$ & $1005.48$ & $242.28$ & $1137.21$ & ~~MeV fm$^{6+\epsilon}$ \\ $\delta$ & $53.87$ & $30.75$ &$-41.84$ & ~~MeV fm$^{5}$ \\ $\eta$ & $-210.15$ & $-178.12$ &$-184.09$ & ~~MeV fm$^{5}$ \\ $\epsilon$ & $1/6$ & $1/3$ & $1/6$ & \\ \hline \end{tabular} \caption{Parameters for the $\alpha$-particle potential in Eq.~\eqref{eq:V_N}.} \label{tb:param} \end{table} \section{Results}\label{sec:results} Equipped with the $\alpha$ potential obtained in the previous section, the $\alpha$-decay half-lives of heavy nuclei can be estimated in the standard way by using the WKB approximation. The half-life of the nuclear $\alpha$ decay is related to the decay width $\Gamma$ by \begin{equation} T_{1/2}^{} = \frac{\hbar\ln 2}{\Gamma} , \end{equation} where the decay width is given by \begin{equation} \Gamma = \mathcal{PF} \frac{\hbar^2}{4m_\mu}\exp \left[-2\int_{r_2}^{r_3} dr\, k(r) \right]. \end{equation} Here, $\mathcal{P}$ is the preformation factor, which represents the probability that the $\alpha$ particle is preformed in the parent nucleus, and $\mathcal{F}$ is the assault frequency of the trapped $\alpha$ particle between the two turning points $r_1^{}$ and $r_2^{}$. In this calculation, we use $\mathcal{P} = 1$, and the explicit expression for $\mathcal{F}$ can be found, for example, in Ref.~\cite{SLHO15}. 
The distance between $r_2^{}$ and $r_3^{}$, i.e., $\abs{r_2^{} - r_3^{}}$, represents the penetration width of the barrier through which the $\alpha$ particle passes. $k(r)$ corresponds to the wave number of the $\alpha$ particle inside the potential barrier, \begin{equation} k(r) = \sqrt{\frac{2m_\mu}{\hbar^2} \abs{Q - V(r)} } \end{equation} with $m_\mu$ being the reduced mass of the system. \begin{table*}[t] \caption{Observed $\alpha$ decay half-lives of heavy nuclei and the results of the present work. Unless specified, $\ell = 0$ is understood.} \label{tb:heavy} \begin{tabular}{c|cccccc} \hline\hline $(Z,A)$ & $Q_{\alpha}^{\text{Expt}}$ (MeV) & $T_{1/2}^{\text{Expt}}$ & $T_{1/2}^{\text{SLy4}} ~[\ell]$ & $T_{1/2}^{\text{D1S}} ~[\ell]$ & $T_{1/2}^{\text{DD-ME2}} ~[\ell]$ & Reference \\ \hline ~$(118, 294)$~ & ~$11.81 \pm 0.06$~ & ~$0.89_{-0.31}^{+1.07}$ ms~ & ~$0.50^{+0.18}_{-0.13}$ ms~ & ~$0.61^{+0.22}_{-0.16}$ ms~ & ~$0.43^{+0.15}_{-0.11}$ ms~ & \cite{OULA06} \\ $(116,293)$ & $10.67 \pm 0.06$ & $53_{-19}^{+62}$ ms & ~$65^{+28}_{-20}$ ms & $78^{+33}_{-23}$ ms & $54^{+24}_{-16}$ ms & \cite{OULA04b} \\ $(116,292)$ & $10.80 \pm 0.07$ & $18_{-6}^{+16}$ ms & ~$31^{+16}_{-10}$ ms~ & $38^{+19}_{-13}$ ms & $26^{+13}_{-9}$ ms & \cite{OULA04b} \\ $(116,291)$ & $10.89 \pm 0.07$ & $18_{-6}^{+22}$ ms & $19^{+9}_{-6}$ ms & $23^{+11}_{-7}$ ms & $16^{+8}_{-5}$ ms & \cite{OULA06} \\ $(116,290)$ & $11.00 \pm 0.08$ & $7.1_{-1.7}^{+3.2}$ ms & ~$10.6^{+6.1}_{-3.8}$ ms~ & $12.5^{+7.2}_{-4.5}$ ms & $8.6^{+5.0}_{-3.1}$ ms & \cite{OULA06} \\ $(115,288)$ & $10.61 \pm 0.06$ & $87_{-30}^{+105}$ ms & $51^{+21}_{-15}$ ms & $57^{+25}_{-17}$ ms & $42^{+19}_{-13}$ ms & \cite{OULA04,OUDL05} \\ $(115,287)$ & $10.74 \pm 0.09$ & $32_{-14}^{+155}$ ms &~$25^{+17}_{-10}$ ms~ & $28^{+20}_{-12}$ ms & $21^{+15}_{-9}$ ms & \cite{OULA04,OUDL05} \\ $(114,289)$ & $9.96 \pm 0.06$ & $2.7_{-0.7}^{+1.4}$ s & $1.3^{+0.6}_{-0.4}$ s & $1.5^{+0.7}_{-0.5}$ s & $1.0^{+0.5}_{-0.3}$ s & \cite{OULA04b} \\ $(114,288)$ & 
$10.09 \pm 0.07$ & $0.8_{-0.18}^{+0.32}$ s & ~$0.56^{+0.31}_{-0.20}$ s~ & $0.65^{+0.37}_{-0.23}$ s & $0.46^{+0.26}_{-0.16}$ s & \cite{OULA04b} \\ $(114,287)$ & $10.16 \pm 0.06$ & $0.48_{-0.09}^{+0.16}$ s & $0.37^{+0.17}_{-0.12}$ s & $0.42^{+0.20}_{-0.13}$ s & $0.31^{+0.15}_{-0.10}$ s & \cite{OULA06} \\ $(114,286)$ & $10.33 \pm 0.06$ & $0.13_{-0.02}^{+0.04}$ s & ~$0.14^{+0.06}_{-0.04}$ s~ & $0.15^{+0.07}_{-0.05}$ s & $0.12^{+0.05}_{-0.04}$ s & \cite{OULA06} \\ $(113,284)$ & $10.15 \pm 0.06$ & $0.48_{-0.17}^{+0.58}$ s & $0.20^{+0.09}_{-0.06}$ s & $0.23^{+0.10}_{-0.07}$ s & $0.28^{+0.13}_{-0.09}$ s [$\ell = 2$] & \cite{OULA04,OUDL05} \\ $(113,283)$ & $10.26 \pm 0.09$ & $100_{-45}^{+490}$ ms &~$106^{+77}_{-45}$ ms~ & $120^{+89}_{-51}$ ms & $94^{+70}_{-40}$ ms & \cite{OULA04,OUDL05} \\ $(113,282)$ & $10.83 \pm 0.08$ & $73_{-29}^{+134}$ ms & $106^{+62}_{-38}$ ms [$\ell = 6$] & $121^{+73}_{-45}$ ms [$\ell = 6$] & $93^{+55}_{-34}$ ms [$\ell = 6$] & \cite{OULA07} \\ $(112,285)$ & $9.29 \pm 0.06$ & $34_{-9}^{+17}$ s & $27^{+14}_{-10}$ s & $30^{+16}_{-10}$ s & $22^{+13}_{-8}$ s & \cite{OULA04b} \\ $(112,283)$ & $9.67 \pm 0.06$ & $3.8_{-0.7}^{+1.2}$ s & $2.0^{+1.0}_{-0.7}$ s & $2.3^{+1.2}_{-0.8}$ s & $1.8^{+0.9}_{-0.6}$ s & \cite{OULA06} \\ $(111,280)$ & $9.87 \pm 0.06$ & $3.6_{-1.3}^{+4.3}$ s & $1.4^{+0.7}_{-0.4}$ s [$\ell = 4$] & $1.6^{+0.8}_{-0.5}$ s [$\ell = 4$] & $7.2^{+3.4}_{-2.3}$ s [$\ell = 6$] & \cite{OULA04,OUDL05} \\ $(111,279)$ & $10.52 \pm 0.16$ & $170_{-80}^{+810}$ ms & ~$157^{+251}_{-95}$ ms [$\ell = 6$] & $176^{+276}_{-106}$ ms [$\ell = 6$] & $138^{+219}_{-83}$ ms [$\ell = 6$] & \cite{OULA04,OUDL05} \\ $(111,278)$ & $10.89 \pm 0.08$ & $4.2_{-1.7}^{+7.5}$ ms & $3.5^{+1.9}_{-1.3}$ ms [$\ell = 4$] & $3.9^{+2.2}_{-1.4}$ ms [$\ell = 4$] & $3.2^{+1.8}_{-1.1}$ ms [$\ell = 4$] & \cite{OULA07} \\ $(110,279)$ & $9.84 \pm 0.06$ & $0.20_{-0.04}^{+0.05}$ s & $0.15^{+0.07}_{-0.05}$ s & $0.17^{+0.08}_{-0.05}$ s & $0.13^{+0.06}_{-0.04}$ s & \cite{OULA06} \\ $(109,276)$ & $9.85 \pm 0.06$ & $0.72_{-0.25}^{+0.97}$ s & $0.37^{+0.17}_{-0.12}$ s [$\ell = 4$] & $0.41^{+0.19}_{-0.13}$ s [$\ell = 4$] & $0.33^{+0.16}_{-0.10}$ s [$\ell = 4$] & \cite{OULA04,OUDL05} \\ $(109,275)$ & $10.48 \pm 0.09$ & $9.7_{-4.4}^{+46}$ ms & $8.7^{+5.9}_{-3.5}$ ms [$\ell = 4$] & $9.4^{+6.6}_{-3.8}$ ms [$\ell = 4$] & $7.9^{+5.4}_{-3.2}$ ms [$\ell = 4$] & \cite{OULA04,OUDL05} \\ $(109,274)$ & $9.95 \pm 0.10$ & $440_{-170}^{+810}$ ms & $220^{+195}_{-99}$ ms [$\ell = 4$] & $242^{+211}_{-112}$ ms [$\ell = 4$] & $200^{+170}_{-94}$ ms [$\ell = 4$] &\cite{OULA07} \\ $(108,275)$ & $9.44 \pm 0.06$ & $0.19_{-0.07}^{+0.22}$ s & $0.46^{+0.23}_{-0.15}$ s & $0.51^{+0.25}_{-0.17}$ s & $0.42^{+0.21}_{-0.14}$ s & \cite{OULA06} \\ $(107,272)$ & $9.15 \pm 0.06$ & $9.8_{-3.5}^{+11.7}$ s & $9.0^{+4.7}_{-3.1}$ s [$\ell = 4$] & $9.7^{+5.1}_{-3.3}$ s [$\ell = 4$] & $7.9^{+4.1}_{-2.7}$ s [$\ell = 4$] & \cite{OULA04,OUDL05} \\ $(107,270)$ & $9.11 \pm 0.08$ & $61_{-28}^{+292}$ s & $73^{+58}_{-30}$ s [$\ell = 6$] & $84^{+64}_{-36}$ s [$\ell = 6$] & $70^{+54}_{-30}$ s [$\ell = 6$] & \cite{OULA07} \\ $(106,271)$ & $8.67 \pm 0.08$ & $1.9_{-0.6}^{+2.4}$ min & ~$2.10^{+1.77}_{-0.95}$ min [$\ell = 4$] & ~$2.27^{+1.99}_{-1.02}$ min [$\ell = 4$] & ~$1.83^{+1.54}_{-0.83}$ min [$\ell = 4$] & \cite{OULA06} \\ \hline RMSD & - & - & $0.209$ & $0.198$ & $0.218$ & \\ \hline\hline \end{tabular} \end{table*} The heavy nuclei under study in the present work are neutron-rich but are located on the neutron-deficient side of the $\beta$-stability line. Thus, $\beta$ decay does not occur for these nuclei. Table~\ref{tb:heavy} shows our results on the observed $\alpha$ decay half-lives of heavy nuclei. Our results are obtained with the three models for nuclear density profiles and are compared with experimental data. The theoretical uncertainties shown in the table come from those of the experimental $Q$ values. The obtained half-lives depend on the relative orbital angular momentum $\ell$. 
We assume $\ell=0$ for even-even decay cases but allow the variation of $\ell$ in other types of decay processes. The value of $\ell$ that minimizes the difference with the experimental half-life is explicitly shown in Table~\ref{tb:heavy}; results without a quoted value of $\ell$ are obtained with $\ell = 0$. Compared with the previous results of Ref.~\cite{SLHO15}, which used a simple Fermi density profile, using realistic proton distributions improves the root-mean-square deviation (RMSD) of the $\alpha$ decay lifetimes shown in the table, where the RMSD is defined by \begin{equation} \mbox{RMSD} = \sqrt{\frac{1}{N-1} \sum_i \left( \log_{10} \left[\frac{T_{i}^{\rm expt.}}{T_i^{\rm cal.}} \right] \right)^2 }, \end{equation} where $N$ is the total number of data points. This indicates that the density profiles of neutron-rich heavy nuclei deviate from the simple Fermi density profile and that this effect should be considered to obtain more realistic results. Presented in Table~\ref{tb:pre} are our predictions for the half-lives of unobserved $\alpha$ decays of superheavy elements. In this case, the $Q$ values are estimated by using the LDM and the local formula as described in Sec.~\ref{sec:qval}. We assume $\ell=0$ for simplicity as there is no information on these processes.% \footnote{If $\ell \neq 0$, the potential barrier width becomes larger than in the case of $\ell=0$ and the lifetime becomes longer. For example, when $Q = 11 \sim 14$~MeV, if we use the Gogny D1S model, the enhancement factors for the half-life become $1.06$, $1.61$, $2.16$, $4.40$, and $8.08$ as we increase the value of $\ell$ from $1$ to $5$. Other models give similar results.} Note that the half-lives from the D1S calculation are longer than those from the SLy4 and DD-ME2 calculations. We found that this is mostly caused by the differences in the parameters given in Table~\ref{tb:param}. 
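The WKB penetration integral $\int_{r_2}^{r_3} k(r)\,dr$ entering the decay width can be validated against the closed form available for a pure Coulomb barrier $V(r) = 2Z_d e^2/r$. The sketch below (Python; not the authors' code, and the daughter charge $Z_d = 116$, $Q = 11.81$~MeV, and the inner turning point fixed at $r_2 = 9$~fm are illustrative numbers only, whereas the actual calculation uses the full $V_N + V_C + V_L$) compares a direct numerical integration with the analytic expression:

```python
import math

# WKB penetration integral int_{r2}^{r3} k(r) dr for a *pure Coulomb*
# barrier V(r) = 2 Zd e^2 / r, where a closed form exists.
HBARC = 197.327   # hbar c in MeV fm
E2 = 1.43996      # e^2 in MeV fm

def gamow_numeric(Zd, Q, r2, mu_c2, n=200000):
    """Midpoint-rule integration of k(r) = sqrt(2 mu (V - Q))/hbar."""
    zze2 = 2 * Zd * E2
    r3 = zze2 / Q                    # outer turning point, V(r3) = Q
    h = (r3 - r2) / n
    s = 0.0
    for i in range(n):
        r = r2 + (i + 0.5) * h
        s += math.sqrt(2 * mu_c2 * (zze2 / r - Q)) / HBARC * h
    return s

def gamow_exact(Zd, Q, r2, mu_c2):
    """Closed form: sqrt(2 mu Q)/hbar * r3 * [acos(sqrt(x)) - sqrt(x(1-x))],
    with x = r2/r3 and r3 = 2 Zd e^2 / Q."""
    r3 = 2 * Zd * E2 / Q
    x = r2 / r3
    return (math.sqrt(2 * mu_c2 * Q) / HBARC * r3
            * (math.acos(math.sqrt(x)) - math.sqrt(x * (1 - x))))

# Illustrative: 294Og -> 290Lv + alpha with Zd = 116, Q = 11.81 MeV,
# reduced mass ~ (4*290/294) u, and r2 fixed at 9 fm.
mu_c2 = 4.0 * 290 / 294 * 931.494    # reduced mass in MeV
print(gamow_numeric(116, 11.81, 9.0, mu_c2))
print(gamow_exact(116, 11.81, 9.0, mu_c2))
```

Both evaluations agree, and doubling the integral in the exponent gives the strong barrier suppression responsible for the millisecond-scale half-lives in Table~\ref{tb:heavy}.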
\begin{table*}[t] \caption{Predictions on the $\alpha$ decay lifetimes for unobserved superheavy elements with $Q$ values from the LDM (case II) and from the local formula.} \begin{tabular}{c|cccc|cccc} \hline\hline \multirow{2}{*}{~Nuclei $(Z,A)$~} & ~$Q$ (MeV)~ & \multirow{2}{*}{$T_{1/2}^{\text{SLy4}}$ (s)} & \multirow{2}{*}{~$T_{1/2}^{\text{D1S}}$ (s)~} & \multirow{2}{*}{~$T_{1/2}^{\text{DD-ME2}}$ (s)~} & $Q$ (MeV)~ & \multirow{2}{*}{$T_{1/2}^{\text{SLy4}}$ (s)} & \multirow{2}{*}{~$T_{1/2}^{\text{D1S}}$ (s)~} & \multirow{2}{*}{ ~$T_{1/2}^{\text{DD-ME2}}$ (s)~} \\ & LDM & & & & Local formula & & & \\ \hline (122, 307) & 12.594 & $ 9.467\times 10^{-5}$ & $ 9.982\times 10^{-5}$ & $ 6.999\times 10^{-5}$ & 12.289 & $ 4.340\times 10^{-4}$ & $ 4.514\times 10^{-4}$ & $ 3.194\times 10^{-4}$ \\ (122, 306) & 12.729 & $ 5.649\times 10^{-5}$ & $ 5.836\times 10^{-5}$ & $ 4.183\times 10^{-5}$ & 12.420 & $ 2.517\times 10^{-4}$ & $ 2.688\times 10^{-4}$ & $ 1.891\times 10^{-4}$ \\ (122, 305) & 12.853 & $ 3.334\times 10^{-5}$ & $ 3.607\times 10^{-5}$ & $ 2.525\times 10^{-5}$ & 12.550 & $ 1.402\times 10^{-4}$ & $ 1.539\times 10^{-4}$ & $1.073\times 10^{-4}$ \\ (122, 304) & 12.986 & $ 1.931\times 10^{-5}$ & $ 2.100\times 10^{-5}$ & $ 1.480\times 10^{-5}$ & 12.679 & $ 7.919\times 10^{-5}$ & $ 8.911\times 10^{-5}$ & $ 6.193\times 10^{-5}$ \\ (122, 303) & 13.108 & $ 1.145\times 10^{-5}$ & $ 1.300\times 10^{-5}$ & $ 9.047\times 10^{-6}$ & 12.807 & $ 4.646\times 10^{-5}$ & $ 5.237\times 10^{-5}$ & $3.593\times 10^{-5}$ \\ (122, 302) & 13.239 & $ 6.692\times 10^{-6}$ & $ 7.539\times 10^{-6}$ & $ 5.339\times 10^{-6}$ & 12.935 & $ 2.646\times 10^{-5}$ & $ 3.000\times 10^{-5}$ & $2.099\times 10^{-5}$ \\ \hline (121, 306) & 12.114 & $ 5.360\times 10^{-4}$ & $ 5.522\times 10^{-4}$ & $ 3.846\times 10^{-4}$ & 11.853 & $ 2.104\times 10^{-3}$ & $ 2.175\times 10^{-3}$ & $1.509\times 10^{-3}$ \\ (121, 305) & 12.250 & $ 2.948\times 10^{-4}$ & $ 3.093\times 10^{-4}$ & $ 2.170\times 10^{-4}$ & 11.985 & 
$ 1.143\times 10^{-3}$ & $ 1.212\times 10^{-3}$ & $8.467\times 10^{-4}$ \\ (121, 304) & 12.367 & $ 1.664\times 10^{-4}$ & $ 1.831\times 10^{-4}$ & $ 1.274\times 10^{-4}$ & 12.117 & $ 6.082\times 10^{-4}$ & $ 6.787\times 10^{-4}$ & $4.700\times 10^{-4}$ \\ (121, 303) & 12.511 & $ 9.077\times 10^{-5}$ & $ 1.030\times 10^{-4}$ & $ 7.119\times 10^{-5}$ & 12.248 & $ 3.317\times 10^{-4}$ & $ 3.794\times 10^{-4}$ & $2.593\times 10^{-4}$ \\ (121, 302) & 12.636 & $ 5.323\times 10^{-5}$ & $ 6.026\times 10^{-5}$ & $ 4.191\times 10^{-5}$ & 12.378 & $ 1.834\times 10^{-4}$ & $ 2.093\times 10^{-4}$ & $1.439\times 10^{-4}$ \\ (121, 301) & 12.769 & $ 2.976\times 10^{-5}$ & $ 3.401\times 10^{-5}$ & $ 2.378\times 10^{-5}$ & 12.508 & $ 1.027\times 10^{-4}$ & $ 1.169\times 10^{-4}$ & $8.201\times 10^{-5}$ \\ \hline (120, 304) & 11.790 & $ 1.567\times 10^{-3}$ & $ 1.650\times 10^{-3}$ & $ 1.167\times 10^{-3}$ & 11.546 & $ 5.792\times 10^{-3}$ & $ 6.146\times 10^{-3}$ & $4.349\times 10^{-3}$ \\ (120, 303) & 11.918 & $ 8.584\times 10^{-4}$ & $ 9.358\times 10^{-4}$ & $ 6.494\times 10^{-4}$ & 11.679 & $ 2.987\times 10^{-3}$ & $ 3.331\times 10^{-3}$ & $2.289\times 10^{-3}$ \\ (120, 302) & 12.055 & $ 4.456\times 10^{-4}$ & $ 5.025\times 10^{-4}$ & $ 3.459\times 10^{-4}$ & 11.812 & $ 1.561\times 10^{-3}$ & $ 1.761\times 10^{-3}$ & $1.217\times 10^{-3}$ \\ (120, 301) & 12.181 & $ 2.491\times 10^{-4}$ & $ 2.816\times 10^{-4}$ & $ 1.959\times 10^{-4}$ & 11.944 & $ 8.288\times 10^{-4}$ & $ 9.395\times 10^{-4}$ & $ 6.575\times 10^{-4}$ \\ (120, 300) & 12.317 & $ 1.342\times 10^{-4}$ & $ 1.523\times 10^{-4}$ & $ 1.068\times 10^{-4}$ & 12.076 & $ 4.465\times 10^{-4}$ & $ 5.053\times 10^{-4}$ & $ 3.520\times 10^{-4}$ \\ (120, 299) & 12.442 & $ 7.735\times 10^{-5}$ & $ 8.978\times 10^{-5}$ & $ 6.175\times 10^{-5}$ & 12.207 & $ 2.436\times 10^{-4}$ & $ 2.817\times 10^{-4}$ & $ 1.957\times 10^{-4}$ \\ \hline (119, 298) & 11.973 & $ 4.022\times 10^{-4}$ & $ 4.688\times 10^{-4}$ & $ 3.243\times 10^{-4}$ & 
11.772 & $ 1.131\times 10^{-3}$ & $ 1.322\times 10^{-3}$ & $ 8.986\times 10^{-4}$ \\ (119, 297) & 12.109 & $ 2.119\times 10^{-4}$ & $ 2.415\times 10^{-4}$ & $ 1.706\times 10^{-4}$ & 11.904 & $ 5.932\times 10^{-4}$ & $ 1.610\times 10^{-3}$ & $ 4.795\times 10^{-4}$ \\ (119, 296) & 12.234 & $ 1.181\times 10^{-4}$ & $ 1.340\times 10^{-4}$ & $ 9.719\times 10^{-5}$ & 12.036 & $ 3.147\times 10^{-4}$ & $ 3.587\times 10^{-4}$ & $ 2.593\times 10^{-4}$ \\ (119, 295) & 12.368 & $ 6.172\times 10^{-5}$ & $ 7.814\times 10^{-5}$ & $ 5.316\times 10^{-5}$ & 12.167 & $ 1.643\times 10^{-4}$ & $ 1.913\times 10^{-4}$ & $ 1.405\times 10^{-4}$ \\ (119, 294) & 12.492 & $ 3.425\times 10^{-5}$ & $ 4.112\times 10^{-5}$ & $ 2.983\times 10^{-5}$ & 12.297 & $ 8.668\times 10^{-5}$ & $ 1.044\times 10^{-4}$ & $ 7.549\times 10^{-5}$ \\ (119, 293) & 12.625 & $ 1.874\times 10^{-5}$ & $ 2.264\times 10^{-5}$ & $ 1.646\times 10^{-5}$ & 12.427 & $ 4.775\times 10^{-5}$ & $ 5.767\times 10^{-5}$ & $ 4.168\times 10^{-5}$ \\ \hline (118, 298) & 11.393 & $ 4.077\times 10^{-3}$ & $ 4.600\times 10^{-3}$ & $ 3.215\times 10^{-3}$ & 11.197 & $ 1.206\times 10^{-2}$ & $ 1.373\times 10^{-2}$ & $ 9.535\times 10^{-3}$ \\ (118, 297) & 11.522 & $ 2.126\times 10^{-3}$ & $ 2.488\times 10^{-3}$ & $ 1.699\times 10^{-3}$ & 11.332 & $ 5.977\times 10^{-3}$ & $ 7.008\times 10^{-3}$ & $ 4.774\times 10^{-3}$ \\ (118, 296) & 11.660 & $ 1.068\times 10^{-3}$ & $ 1.238\times 10^{-3}$ & $ 8.599\times 10^{-4}$ & 11.466 & $ 3.013\times 10^{-3}$ & $ 3.481\times 10^{-3}$ & $ 2.423\times 10^{-3}$ \\ (118, 295) & 11.787 & $ 5.640\times 10^{-4}$ & $ 6.577\times 10^{-4}$ & $ 4.692\times 10^{-4}$ & 11.600 & $ 1.500\times 10^{-3}$ & $ 1.762\times 10^{-3}$ & $ 1.244\times 10^{-3}$ \\ (118, 294) & 11.924 & $ 2.824\times 10^{-4}$ & $ 8.069\times 10^{-4}$ & $ 2.412\times 10^{-4}$ & 11.733 & $ 7.515\times 10^{-4}$ & $ 9.050\times 10^{-4}$ & $ 6.387\times 10^{-4}$ \\ (118, 293) & 12.050 & $ 1.516\times 10^{-4}$ & $ 1.835\times 10^{-4}$ & $ 1.305\times 
10^{-4}$ & 11.865 & $ 3.832\times 10^{-4}$ & $ 4.644\times 10^{-4}$ & $ 3.289\times 10^{-4}$ \\ \hline (117, 298) & 10.779 & $ 6.202\times 10^{-2}$ & $ 7.032\times 10^{-2}$ & $ 4.795\times 10^{-2}$ & 10.920 & $ 1.678\times 10^{-1}$ & $ 1.916\times 10^{-1}$ & $ 1.311\times 10^{-1}$ \\ (117, 297) & 10.920 & $ 2.837\times 10^{-2}$ & $ 3.274\times 10^{-2}$ & $ 2.236\times 10^{-2}$ & 10.749 & $ 7.769\times 10^{-2}$ & $ 9.001\times 10^{-2}$ & $ 6.129\times 10^{-2}$ \\ (117, 296) & 11.051 & $ 1.409\times 10^{-2}$ & $ 1.666\times 10^{-2}$ & $ 1.126\times 10^{-2}$ & 10.886 & $ 3.620\times 10^{-2}$ & $ 4.330\times 10^{-2}$ & $ 2.903\times 10^{-2}$ \\ (117, 295) & 11.192 & $ 6.660\times 10^{-3}$ & $ 7.806\times 10^{-3}$ & $ 5.400\times 10^{-3}$ & 11.023 & $ 1.735\times 10^{-2}$ & $ 2.035\times 10^{-2}$ & $ 1.396\times 10^{-2}$ \\ (117, 294) & 11.321 & $ 3.310\times 10^{-3}$ & $ 3.965\times 10^{-3}$ & $ 6.634\times 10^{-3}$ & 11.158 & $ 8.146\times 10^{-3}$ & $ 9.736\times 10^{-3}$ & $ 6.779\times 10^{-3}$ \\ (117, 293) & 11.460 & $ 1.584\times 10^{-3}$ & $ 1.941\times 10^{-3}$ & $ 1.325\times 10^{-3}$ & 11.293 & $ 3.885\times 10^{-3}$ & $ 4.752\times 10^{-3}$ & $ 3.244\times 10^{-3}$ \\ \hline\hline \end{tabular}\label{tb:pre} \end{table*} Figure~\ref{fig:chain} shows one of the most important $\alpha$ decay chains of superheavy nuclei, namely, the decay chains of \nuclide[294][118]{Og} and \nuclide[296][118]{Og}. Our results reproduce well the measured $\alpha$ decay lifetimes along these two decay chains. The $\alpha$ decay of \nuclide[296][118]{Og} is yet to be discovered, and the half-lives for this decay given in Fig.~\ref{fig:chain} are our predictions. Note that the half-lives shown in Fig.~\ref{fig:chain} are calculated for the nuclear $\alpha$ decay alone; the actual half-lives are determined through the competition with the spontaneous fission process. 
For example, in the case of \nuclide[286]{Fl}, although the measured half-life is $T^{\rm Exp.} \approx 0.13$~s, the branching ratio of the $\alpha$ decay is about 60\%~\cite{OU15,OU15b}, which makes the partial $\alpha$-decay half-life close to 0.22~s. \begin{figure*}[t] \includegraphics[scale=0.4]{fig2.pdf} \caption{ Flow charts for the $\alpha$ decay chains of \nuclide[294][118]{Og} and \nuclide[296][118]{Og}. The measured half-life of \nuclide[286]{Fl} is about 0.13~s. Since the branching ratio of its $\alpha$ decay is about 60\%~\cite{OU15,OU15b}, however, the half-life of its $\alpha$ decay is about 0.22~s.} \label{fig:chain} \end{figure*} \begin{figure} \includegraphics[scale=0.45]{fig3.pdf} \caption{ The $\alpha$ nuclear and Coulomb potentials, $V_N + V_C$, for \nuclide[296][118]{Og} in the models of the present work. The double folding potential for \nuclide[296][118]{Og} of Ref.~\cite{Mohr16} is also presented for comparison. } \label{fig:pot} \end{figure} Figure~\ref{fig:pot} shows the $\alpha$ potentials, $V_N + V_C$, used to calculate the half-life of \nuclide[296][118]{Og} in this work. The dotted line indicates the $Q$ values obtained in this work. The double folding potential is shown by the dashed line for comparison~\cite{Mohr16}. This shows that, although the details of the potentials in each model are quite different inside the nucleus, the barrier widths corresponding to the obtained $Q$ values are relatively close to each other. The slightly lower barrier in Ref.~\cite{Mohr16} is compensated by a preformation factor of 0.09, finally leading to half-lives close to each other. \section{Summary and Conclusion}\label{sec:conclusion} In this paper, we have investigated the nuclear $\alpha$ decays of heavy nuclei based on nuclear energy density functionals. We use a Skyrme-type force model to express the nuclear potential of the $\alpha$ particle inside a nucleus as a functional of the proton and neutron density profiles of the daughter nucleus. 
These nucleon density profiles are obtained from the Skyrme SLy4, Gogny D1S, and relativistic mean-field DD-ME2 models. The parameters of the nuclear potential of the $\alpha$ particle are fitted for each density-profile model to the measured $\alpha$ decay half-lives of heavy nuclei. The results show that this approach improves the previous results reported in Ref.~\cite{SLHO15}, reducing the RMS deviation from 0.238 to $0.198 \sim 0.218$. In particular, we found that the Gogny D1S model gives the best description among the models considered in the present work. Once all the parameters are fixed, we apply the model to predict the half-lives of unobserved $\alpha$ decays, with the results shown in Table~\ref{tb:pre}. One particularly interesting quantity is the half-life of \nuclide[296][118]{Og}, as there are attempts to synthesize this nuclide~\cite{Sobiczewski16}. Our predictions for this decay are also shown in Fig.~\ref{fig:chain}, together with our estimates of the $Q$ value, $Q^{\text{LDM}}=11.66$~MeV and $Q^{\text{Local}} =11.47$~MeV. Our predictions for the half-life of the $\alpha$ decay of this nuclide are in the range of $0.86~\mbox{ms} \sim 3.48~\mbox{ms}$, which is in good agreement with the predictions of Ref.~\cite{Sobiczewski16}, which gives $0.5~\mbox{ms} \sim 4.8~\mbox{ms}$ based on realistic mass formulas, and with the prediction of Ref.~\cite{Mohr16}, which obtained 0.825~ms using the double-folding potential model. (See also Refs.~\cite{SPN16,Manjunatha16}.) In the present work, we assumed that the potential for the $\alpha$ particle is isotropic. However, in the case of heavy nuclei, deformation effects should be included, in particular, to understand the fine structure of the decay~\cite{DIL92,NR10}. Therefore, improving the present model by including deformation and other microscopic effects would be desirable for a better understanding of the nuclear $\alpha$ decays of superheavy nuclei. \acknowledgments We are grateful to P. 
Papakonstantinou for providing us with density profiles of nuclei obtained in the Gogny force model. We also thank P. Mohr for providing his double folding potential for $\alpha$ decay and many suggestions for this work. The work of Y.O. was supported by Kyungpook National University Bokhyeon Research Fund, 2015.
\section{Introduction} With the broad interest and accelerating development of superhydrophobic surfaces for a variety of applications including self-cleaning, condensation heat transfer enhancement, and anti-icing, the need for more detailed insight into droplet interactions on these surfaces has emerged. Specifically, when two or more droplets coalesce, they can spontaneously jump away from a superhydrophobic surface independent of gravity due to the release of excess surface energy \cite{Kollera, Boreyko}. To date, researchers have focused on creating superhydrophobic surfaces showing rapid droplet removal and experimentally analyzing \cite{Miljkovic1} the merging and jumping behavior before and immediately after coalescence \cite{Enright}. However, aspects related to the droplet dynamics after departure from the surface remain to be investigated. Here, using high speed visualization, we show that jumping droplets 1) can undergo multiple jumps after departing the surface, 2) return to the surface against the force of gravity due to condensing vapor entrainment, and 3) gain a net positive charge that causes them to repel each other mid-flight \cite{Miljkovic2}. \section{Surface Fabrication} To create the CuO nanostructures (Superhydrophobic CuO), commercially available oxygen-free Cu tubes were used (99.9{\%} purity) with outer diameters, {\it D}$_{OD}$ = 6.35 mm, inner diameters, {\it D}$_{ID}$ = 3.56 mm, and lengths, {\it L} = 131 mm, as the test samples for the experiments. Each Cu tube was cleaned in an ultrasonic bath with acetone for 10 minutes and rinsed with ethanol, isopropyl alcohol and de-ionized (DI) water. The tubes were then dipped into a 2.0 M hydrochloric acid solution for 10 minutes to remove the native oxide film on the surface, then triple-rinsed with DI water and dried with clean nitrogen gas.
Nanostructured CuO films were formed by immersing the cleaned tubes (with ends capped) into a hot (96 $\pm$ 3$^\circ$C) alkaline solution composed of NaClO$_{2}$, NaOH, Na$_{3}$PO$_{4}\cdot$12H$_{2}$O, and DI water (3.75 : 5 : 10 : 100 wt.{\%}). During the oxidation process, a thin ($\approx$300 nm) Cu$_{2}$O layer was formed that then re-oxidized to form sharp, knife-like CuO oxide structures with heights of {\it h} $\approx$ 1 $\mu$m, solid fraction $\phi$ $\approx$ 0.023 and roughness factor {\it r} $\approx$ 10. Carbon nanotubes (Superhydrophobic CNTs) were grown by chemical vapor deposition (CVD). Silicon growth substrates were prepared by sequentially depositing a 20 nm thick Al$_{2}$O$_{3}$ diffusion barrier and a 5 nm thick film of Fe catalyst layer using electron-beam deposition. Growth was performed in a 2.54 cm quartz furnace tube. Following a 15 min purge in a H$_{2}$/He atmosphere, the growth substrate was annealed by ramping the furnace temperature to 750$^\circ$C followed by a 3 minute anneal at temperature, while maintaining a flow of H$_{2}$ and He at 400 sccm and 100 sccm, respectively. CNT growth was then initiated by flowing C$_{2}$H$_{4}$ at 200 sccm. The flow of C$_{2}$H$_{4}$ was stopped after a period of 1 minute. The thermally-grown CNT had a typical outer diameter of {\it d} $\approx$ 7 nm. Due to the short growth time ($\approx$5 min.) the CNT did not form a well-aligned forest, but rather a tangled turf. To functionalize the surfaces, a proprietary fluorinated polymer (P2i) was deposited using plasma enhanced vapor deposition. The process occurs under low pressure within a vacuum chamber at room temperature. The coating is introduced as a vapor and ionized. This process allows for the development of a highly conformal ($\approx$30 nm thick) polymer layer, which forms a covalent bond with the surface, making it extremely durable.
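The solid fraction $\phi \approx 0.023$ quoted above already suggests superhydrophobicity: a Cassie-Baxter estimate of the apparent contact angle, $\cos\theta_{CB} = \phi(\cos\theta + 1) - 1$, gives close to $180^\circ$ when the intrinsic angle on the flat coating is taken as the goniometric value reported below ($\theta \approx 124^\circ$). This is only a back-of-the-envelope sketch, not a measurement from the text:

```python
import math

def cassie_baxter_angle(theta_deg, phi):
    """Apparent contact angle on a composite (air-trapping) surface:
    cos(theta_CB) = phi * (cos(theta) + 1) - 1."""
    cos_cb = phi * (math.cos(math.radians(theta_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_cb))

# phi ~ 0.023 (CuO nanostructure), theta ~ 124.3 deg (smooth P2i coating)
theta_cb = cassie_baxter_angle(124.3, 0.023)   # ~172 deg
```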
Goniometric measurements (MCA-3, Kyowa Interface Science) of $\approx$100 nL droplets on a smooth P2i coated silicon wafer surface showed advancing and receding contact angles of $\theta_{a} = 124.3 \pm 3.1^\circ$ and $\theta_{r} = 112.6 \pm 2.8^\circ$, respectively. \section{Experimental Setup and Conditions} All experiments were carried out under saturated conditions in an environmental chamber. The droplet ejection process was captured using a single-camera set-up \cite{Miljkovic1}. The out-of-plane trajectory of the ejected droplets was captured using a high-speed camera (Phantom v7.1, Vision Research). The camera was mounted outside the environmental chamber and fitted with an extended macro lens assembly. The lens assembly consisted of a fully extended 5X optical zoom macro lens (MP-E 65mm, Canon), connected in series with 3 separate 68mm extension tubes (Auto Extension Tube Set DG, Kenko). The DG extension tubes have no optics. They are mounted in between the camera body and lens to create more distance between the lens and film plane. By moving the lens further away from the film or CCD sensor in the camera, the lens is forced to focus much closer than normal. The greater the length of the extension tube, the closer the lens can focus. Illumination was supplied by light emitting diodes installed inside the chamber and providing back lighting to the sample. The experiments were initiated by first evacuating the environmental chamber to medium-vacuum levels (0.5 $\pm$ 0.025 Pa). Flat samples (jumping droplet videos (Superhydrophobic CuO and CNTs), and multi-jump videos) were mounted to a flattened copper tube connected to an external cooling loop and maintained at a temperature of {\it T}$_{w}$ $\approx$ 26$^\circ$C ({\it p}$_{w}$ $\approx$ 3.33 kPa). The water vapor supply was vigorously boiled before the experiments to remove non-condensable gases.
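For the lens assembly above, a rough thin-lens estimate is that extension tubes add about (extension length)/(focal length) to the magnification; with three 68 mm tubes behind the 65 mm macro lens at its native 5X, this suggests roughly 8X total. This is only an idealized sketch; internally focusing optics deviate from the thin-lens rule:

```python
def total_magnification(base_mag, extension_mm, focal_length_mm):
    """Thin-lens approximation: added magnification = extension / focal length."""
    return base_mag + extension_mm / focal_length_mm

# 5X base (MP-E 65mm fully extended), 3 x 68 mm extension tubes
m = total_magnification(5.0, 3 * 68.0, 65.0)   # roughly 8x
```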
Water vapor was introduced into the environmental chamber {\it via} a metering valve set to maintain the chamber pressure. \subsection{Droplet Return Video} Droplet return to the surface due to vapor flow entrainment was studied by observing steady state condensation on the nanostructured CuO tube captured with the high speed camera \cite{Miljkovic1}. The tube is oriented in the horizontal direction with cooling water flowing inside the tube at 5 L/min. The vapor pressure is $\approx$2.7 kPa. Droplet removal {\it via} coalescence-induced ejection occurs once droplets reach sizes large enough to begin coalescing. The video was captured at 90 fps and is played back at 30 fps. The field of view is 16.0 mm $\times$ 12.0 mm. \subsection{Droplet-Droplet Repulsion Videos} Droplet-droplet repulsion was observed during steady state condensation on the nanostructured CuO tube and captured with the high speed camera \cite{Miljkovic2}. The tube was oriented in the horizontal direction with the bottom surface seen on the top of the frame and cooling water flowing inside the tube at 5 L/min. The vapor pressure was $\approx$2.7 kPa. These videos were captured at 1000 fps and are played back at 5 fps. The fields of view for the two movies in succession are 2.8 mm $\times$ 2.9 mm and 3.3 mm $\times$ 2.6 mm, respectively. \subsection{Charging Effects Videos} To study the effect of droplet charging, a 350 $\mu$m diameter copper wire electrode was placed beneath the superhydrophobic surface \cite{Miljkovic2}. The electrode was connected to a 600 V DC power supply (N5752A, Agilent Technologies). The video shows a typical view from the side port of the tube-electrode setup after condensation initiated ($\Delta$V = 0 V). With an applied constant electrical bias ($\Delta$V), an electric field between the electrode and grounded tube was established, inducing droplet motion toward or away from the electrode. High speed videos show droplet motion in the presence of the electrode.
When a negative bias was applied to the electrode ($\Delta$V = -15, -30 V), significant droplet-electrode attraction was observed. To eliminate the possibility of induced electrical effects, {\it i.e.}, droplet motion due to dielectrophoresis, we reversed the polarity of the electrode ($\Delta$V = +15, +30 V) and observed a significant droplet-electrode repulsion. The repulsion and attraction observed under positive and negative electrode bias, respectively, indicate that all of the droplets were positively charged after jumping from the surface.\\ The visualizations provide insight into complex droplet-vapor, droplet-surface, and droplet-droplet phenomena, and open a range of new possibilities for the study and control of jumping droplets.
\section{Introduction} Let $R$ be an associative ring with an identity. The commutant of $a\in R$ is defined by $comm(a)=\{x\in R~|~xa=ax\}$, and the double commutant of $a\in R$ by $comm^2(a)=\{x\in R~|~xy=yx~\mbox{for all}~y\in comm(a)\}$. An element $a\in R$ has a Drazin inverse in case there exists $b\in R$ such that $$b=bab, b\in comm^2(a), a-a^2b\in R^{nil}.$$ The preceding $b$, if it exists, is unique; we denote it by $a^D$. Let $a,b\in R$. Then $ab$ has a Drazin inverse if and only if $ba$ has a Drazin inverse, and in this case $(ba)^{D}=b((ab)^{D})^2a$. This is known as Cline's formula for Drazin inverses (see \cite{CC}). An element $a\in R$ has a g-Drazin inverse (i.e., generalized Drazin inverse) in case there exists $b\in R$ such that $$b=bab, b\in comm^2(a), a-a^2b\in R^{qnil}.$$ The preceding $b$, if it exists, is unique; we denote it by $a^d$. Let $a,b\in R$. Then $ab$ has a g-Drazin inverse if and only if $ba$ has a g-Drazin inverse, and in this case $(ba)^{d}=b((ab)^{d})^2a$. This is known as Cline's formula for g-Drazin inverses (see \cite{C}). Following Wang and Chen, an element $a$ in a ring $R$ has a p-Drazin inverse if there exists $b\in comm^2(a)$ such that $b=b^2a$ and $(a-a^2b)^k\in J(R)$ for some $k\in {\Bbb N}$. The p-Drazin inverse $b$ is also unique, and we denote it by $a^{pD}$ (see \cite{WC}). We shall extend Cline's formula for the Drazin and generalized Drazin inverses: $ba$ has such an inverse whenever $ac$ does, provided that $a(ba)^2=abaca=acaba=(ac)^2a$. This also recovers some recent results (see \cite{L}). In Section 2, we extend Cline's formula for the generalized Drazin inverse. We prove that, for a ring $R$, if $a(ba)^2=abaca=acaba=(ac)^2a$ for some $a, b, c\in R$, then $ac\in R^{d}$ if and only if $ba\in R^{d}$. In Section 3, we generalize Jacobson's Lemma and prove that if $a(ba)^2=abaca=acaba=(ac)^2a$ in a ring $R$, then $$1-ac\in U(R)\Longleftrightarrow 1-ba\in U(R).$$ We also study common spectral properties of bounded linear operators.
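The classical Jacobson's Lemma that Section 3 generalizes states that $1-ba$ is invertible whenever $1-ab$ is, with $(1-ba)^{-1}=1+b(1-ab)^{-1}a$. The identity is easy to confirm numerically; a purely illustrative sketch with random real matrices:

```python
import numpy as np

# Jacobson's Lemma: if 1 - ab is invertible, so is 1 - ba, and
# (1 - ba)^{-1} = 1 + b (1 - ab)^{-1} a.
rng = np.random.default_rng(0)
a = rng.standard_normal((3, 5))
b = rng.standard_normal((5, 3))

lhs = np.linalg.inv(np.eye(5) - b @ a)
rhs = np.eye(5) + b @ np.linalg.inv(np.eye(3) - a @ b) @ a
max_err = np.abs(lhs - rhs).max()   # numerically ~0
```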
Throughout the paper, all rings are associative with an identity. We use $R^{nil}, R^{qnil}$ and $R^{rad}$ to denote the sets of all nilpotent elements, all quasinilpotent elements, and the Jacobson radical of the ring $R$, respectively. $U(R)$ is the set of all units in $R$. $R^{D}$ and $R^{d}$ denote the sets of all elements in $R$ which have Drazin and g-Drazin inverses. ${\Bbb N}$ stands for the set of all natural numbers. \section{Cline's Formula} In \cite[Lemma 2.2]{L} it was proved that $ab\in R^{qnil}$ if and only if $ba\in R^{qnil}$ for any elements $a,b$ in a ring $R$. We extend this fact as follows. \begin{lem} Let $R$ be a ring, and let $a,b,c\in R$. If $a(ba)^2=abaca=acaba=(ac)^2a$, then the following are equivalent:\end{lem} \begin{enumerate} \item [(1)]{\it $ac\in R^{qnil}$.} \vspace{-.5mm} \item [(2)]{\it $ba\in R^{qnil}$.} \end{enumerate}\begin{proof} $\Longrightarrow$ By hypothesis, $a(ba)^2=(ac)^2a$ and $a(ba)^3=(ac)^3a$. Suppose that $ac\in R^{qnil}$. Let $y\in comm(ba)$. Then $(1+yba)(1-yba+y^2baba)=1+y^3bababa$, and so $$\begin{array}{ll} &(1+yba)(1-yba+y^2baba)(1-y^3bababa)\\ =&1-y^6babababababa\\ =&1-y^6b(acaca)bababa\\ =&1-y^6bacac(ababa)ba\\ =&1-y^6bacac(acaca)ba. \end{array}$$ In view of Jacobson's Lemma (see \cite[Theorem 2.2]{C}), it suffices to prove $$1-abay^6bacacac(ac)\in U(R).$$ As $ac\in R^{qnil}$, it suffices to check that $$abay^6bacacac(ac)=(ac)abay^6bacacac.$$ One easily checks that $$\begin{array}{lll} abay^6bacacac(ac)&=&abay^6b(acacac)ac\\ &=&ay^6bababababac;\\ (ac)abay^6bacacac&=&(ac)ababay^6cacac\\ &=&(acacaca)y^6cacac\\ &=&(abababa)y^6cacac\\ &=&ay^6bababababac. \end{array}$$ Hence $1+yba\in U(R)$. This shows that $ba\in R^{qnil}.$ $\Longleftarrow$ If $ba\in R^{qnil}$, then by the preceding discussion we see that $ab\in R^{qnil}$. With the same argument as above we get $ca\in R^{qnil}$, and therefore $ac\in R^{qnil}$.\end{proof} We come now to the main result of this paper. \begin{thm} Let $R$ be a ring, and let $a,b,c\in R$.
If $a(ba)^2=abaca=acaba=(ac)^2a$, then the following are equivalent:\end{thm} \begin{enumerate} \item [(1)]{\it $ac\in R^{d}$.} \vspace{-.5mm} \item [(2)]{\it $ba\in R^{d}$.} \end{enumerate} In this case, $(ac)^{d}=a((ba)^{d})^2c$ and $(ba)^{d}=b((ac)^{d})^2a$. \begin{proof} Suppose that $ac$ has a g-Drazin inverse $d=(ac)^{d}$. Let $e=bd^2a$ and $f\in comm(ba)$. Then $$fe=fb((ac)^2d^3)^2a=fb(ac)^4d^6a=(ba)^4fcd^6a=b((ac)^3afc)d^6a.$$ Also we have $$\begin{array}{lll} ac((ac)^3afc)&=&(ac)^4afc=af(ba)^4c=af(ba)^3cac\\ &=&((ab)^3afc)ac=((ac)^3afc)ac. \end{array}$$ Since $d\in comm^2(ac)$, we get $((ac)^3afc)d=d((ac)^3afc)$. Thus, we conclude that $$\begin{array}{lll} fe&=&b((ac)^3afc)d^6a=bd^6((ac)^3afc)a\\ &=&bd^6(ab)^3afc=bd^6af(ba)^3ca\\ &=&bd^6af(ba)^4=bd^6a(ba)^4f\\ &=&bd^6a(ca)^4f=bd^2af=ef. \end{array}$$ This implies that $e\in comm^2(ba)$. We have $$\begin{array}{lll} e(ba)e&=&bd^2a(ba)bd^2a=bd^2ababacd^3a\\ &=&bd^2(ac)^3d^3a=bd^2a=e. \end{array}$$ Let $p=1-acd$; then $$pac=ac-acdac=ac-(ac)^2d,$$ which is contained in $R^{qnil}$. Moreover, we have $$\begin{array}{lll} ba-(ba)^2e&=&ba-bababd^2a=ba-bababacd^2da\\ &=&ba-bacacacd^2da=b(1-acd)a=bpa. \end{array}$$ One easily checks that $$\begin{array}{lll} abpabpa&=&ab(1-acd)ab(1-acd)a\\ &=&ab(1-dac)aba(1-cda)\\ &=&(ababa-abdacaba)(1-cda)\\ &=&(abaca-abdacaca)(1-cda)\\ &=&ab(1-dac)aca(1-cda)\\ &=&ab(1-dac)ac(1-acd)a\\ &=&abpacpa, \end{array}$$ and so $$(pa)b(pa)b(pa)=(pa)b(pa)c(pa).$$ Likewise, we verify $$\begin{array}{c} (pa)b(pa)b(pa)=(pa)c(pa)b(pa)=(pa)c(pa)c(pa). \end{array}$$ Then by Lemma 2.1, $bpa\in R^{qnil}$. Hence $ba$ has a g-Drazin inverse; that is, $e=bd^2a=(ba)^{d}.$ Moreover, we check $$\begin{array}{lll} a((ba)^d)^2c&=&abd^2abd^2(ac)\\ &=&abd^3(acabac)d^2\\ &=&abd^3(acacac)d^2\\ &=&ab(acacac)d^5\\ &=&(ac)^4d^5\\ &=&(ac)^d, \end{array}$$ as required.\end{proof} \begin{cor} Let $R$ be a ring, let $k\in {\Bbb N}$, and let $a,b,c\in R$.
If $a(ba)^2=abaca=acaba=(ac)^2a$, then $(ac)^k$ has a g-Drazin inverse if and only if $(ba)^k$ has a g-Drazin inverse. \end{cor} \begin{proof} Case 1. $k=1$. This is obvious by Theorem 2.2. Case 2. $k=2$. We easily check that $$\begin{array}{lll} a(bab)a(bab)a&=&a(bab)a(cac)a\\ &=&a(cac)a(bab)a\\ &=&a(cac)a(cac)a. \end{array}$$ The result follows by Theorem 2.2. Case 3. $k\geq 3$. Then $(ac)^{k}=(ab)^{k-1}ac$. Hence, $(ac)^k$ has a g-Drazin inverse if and only if $(ab)^{k}=(ac)(ab)^{k-1}$ has a g-Drazin inverse. This completes the proof.\end{proof} \begin{cor} Let $R$ be a ring, and let $a,b,c\in R$. If $aba=aca$, then $ac\in R^{d}$ if and only if $ba\in R^{d}$. In this case, $(ba)^dc=b(ac)^d$.\end{cor} \begin{proof} In view of Theorem 2.2, $ac\in R^{d}$ if and only if $ba\in R^{d}$. Moreover, $(ac)^{d}=a((ba)^{d})^2c$ and $(ba)^{d}=b((ac)^{d})^2a$. Therefore $(ba)^dc=b((ac)^{d})^2ac=b(ac)^d$, as required.\end{proof} \begin{lem} Let $R$ be a ring, and let $a\in R$. If $a\in R^{D}$, then $a\in R^{d}$ and $a^{D}=a^d$.\end{lem} \begin{proof} This is obvious, as the g-Drazin inverse of $a$ is unique.\end{proof} \begin{lem} Let $R$ be a ring, and let $a,b,c\in R$. If $a(ba)^2=abaca=acaba=(ac)^2a$, then $ac\in R^{nil}$ if and only if $ba\in R^{nil}$.\end{lem} \begin{proof} $\Longrightarrow$ Let $ac\in R^{nil}$; then there exists some $n\in {\Bbb N}$ such that $(ac)^n=0$. We may assume that $n$ is even. Hence $(ac)^na=(ac)^{n-2}(ac)^2a=(ac)^{n-2}a(ba)^2=(ac)^{n-4}(ac)^2a(ba)^2=(ac)^{n-4}a(ba)^{4}$ $=\cdots =(ac)^2a(ba)^{n-2}=a(ba)^n=0$, and so $(ba)^{n+1}=0$. $\Longleftarrow$ This can be proved in a similar way. \end{proof} \begin{thm} Let $R$ be a ring, and let $a,b,c\in R$. If $a(ba)^2=abaca=acaba=(ac)^2a$, then $ac\in R^{D}$ if and only if $ba\in R^{D}$. In this case, we have $$(ba)^D=b(((ac)^D)^2)a, (ac)^D=a(((ba)^D)^2)c.$$\end{thm} \begin{proof} Suppose that $ac\in R^{D}$. Then $ac\in R^{d}$ by Lemma 2.5.
In view of Theorem 2.2, we see that $ba\in R^{d}$, and $(ba)^d=b((ac)^d)^2a$. Let $p=1-(ac)(ac)^d$. As in the proof of Theorem 2.2, we have $$\begin{array}{c} (pa)b(pa)b(pa)=(pa)b(pa)c(pa)=(pa)c(pa)b(pa)=(pa)c(pa)c(pa);\\ (pa)c=ac-(ac)^2(ac)^D\in R^{nil}.\end{array}$$ In light of Lemma 2.6, $bpa\in R^{nil}$. Therefore $$\begin{array}{lll} ba-(ba)^2(ba)^d&=&ba-babab((ac)^d)^2a\\ &=&ba-bababac((ac)^d)^3a\\ &=&ba-bacacac((ac)^d)^3a\\ &=&ba-b(ac)(ac)^da\\ &=&bpa\in R^{nil}. \end{array}$$ Therefore $ba\in R^{D}$ and $(ba)^{D}=b(((ac)^{D})^2)a.$ Moreover, $(ac)^D=a(((ba)^D)^2)c.$ Conversely, if $ba\in R^{D}$, then by \cite[Theorem 2.1]{LC}, $ab\in R^{D}$. With the same argument we get $ca\in R^D$, and so $ac\in R^{D}$.\end{proof} Recall that $a$ has a group inverse if $a$ has a Drazin inverse with index $1$; we denote the group inverse by $a^{\#}$. As an immediate consequence of Theorem 2.7, we now derive \begin{cor} Let $R$ be a ring, and let $a,b,c\in R$. If $a(ba)^2=abaca=acaba=(ac)^2a$, then $ac$ has a group inverse if and only if\end{cor} \begin{enumerate} \item [(1)]{\it $ba\in U(R)$; or} \vspace{-.5mm} \item [(2)]{\it $ba$ has a group inverse and $(ba)^{\#}=b((ac)^{\#})^2a$; or} \vspace{-.5mm} \item [(3)]{\it $ba\in R^{D}$ and $(ba)^{D}=b((ac)^{\#})^2a$.} \end{enumerate} We note that if $aba=aca$ in a ring $R$, then $a(ba)^2=abaca=acaba=(ac)^2a$, but the converse is not true. \begin{exam} Let $S=M_3({\Bbb Z}_2)$ and $R=M_2(S)$, and let $x=\left( \begin{array}{ccc} 0&1&0\\ 0&0&1\\ 0&0&0 \end{array} \right)\in S$. Then $x^2\neq 0$ and $x^3=0$. Choose $$a= \left( \begin{array}{cc} 0&x\\ 0&0 \end{array} \right), b=\left( \begin{array}{cc} 1&0\\ 0&0 \end{array} \right), c=\left( \begin{array}{cc} 1&0\\ 1&1 \end{array} \right)\in R,$$ where $0$ and $1$ denote the zero and identity of $S$. Then $a(ba)^2=abaca=acaba=(ac)^2a$, but $aba\neq aca$. In this case, $ac\in R^{D}$.\end{exam} \section{Common spectral properties of bounded linear operators} Let $A$ be a complex Banach algebra with unity $1$, and let $a\in A$.
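Example 2.9 can also be checked by direct computation over ${\Bbb Z}_2$; the sketch below builds $a,b,c$ as $6\times 6$ integer matrices ($3\times 3$ blocks, with all products reduced mod 2) and verifies the chain condition together with $aba\neq aca$:

```python
import numpy as np

# Example 2.9: over Z_2, x is the 3x3 shift (x^2 != 0, x^3 = 0); a, b, c
# are 2x2 block matrices with 3x3 blocks (1 = identity block, 0 = zero block).
x = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=int)
O, I = np.zeros((3, 3), dtype=int), np.eye(3, dtype=int)

a = np.block([[O, x], [O, O]])
b = np.block([[I, O], [O, O]])
c = np.block([[I, O], [I, I]])

def mul(*ms):
    """Product of 6x6 matrices with entries reduced modulo 2."""
    out = np.eye(6, dtype=int)
    for m in ms:
        out = (out @ m) % 2
    return out

# chain condition a(ba)^2 = abaca = acaba = (ac)^2 a ...
t1, t2 = mul(a, b, a, b, a), mul(a, b, a, c, a)
t3, t4 = mul(a, c, a, b, a), mul(a, c, a, c, a)
chain_ok = all((t1 == t).all() for t in (t2, t3, t4))
# ... while aba != aca
aba_ne_aca = not (mul(a, b, a) == mul(a, c, a)).all()
```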
The Drazin spectrum $\sigma_D(a)$ and the g-Drazin spectrum $\sigma_{d}(a)$ are defined by $$\begin{array}{c} \sigma_D(a)=\{ \lambda\in {\Bbb C}~|~\lambda-a\not\in A^{D}\};\\ \sigma_{d}(a)=\{ \lambda\in {\Bbb C}~|~\lambda-a\not\in A^{d}\}. \end{array}$$ Let $X$ be a Banach space, and let $L(X)$ denote the set of all bounded linear operators from $X$ to itself. The goal of this section is to study common spectral properties of operators in $L(X)$. The following lemma is crucial. \begin{lem} Let $R$ be a ring, and let $a,b,c\in R$. If $a(ba)^2=abaca=acaba=(ac)^2a$, then $$1-ac\in U(R)\Longleftrightarrow 1-ba\in U(R).$$ \end{lem} \begin{proof} $\Longrightarrow$ Write $s(1-ac)=(1-ac)s=1$ for some $s\in R$. Then $sac=s-1$. We see that $$\begin{array}{ll} &\big((1+bsa)(1+ba)-bsa\big)(1-ba)\\ =&(1+bsa)(1-baba)-bsa(1-ba)\\ =&1-baba+bsa-bsababa-bsa(1-ba)\\ =&1-baba+bsa-bsacaba-bsa(1-ba)\\ =&1-baba+bsa-b(s-1)aba-bsa(1-ba)\\ =&1. \end{array}$$ Thus, $1-ba\in R$ is left invertible. Likewise, we see that it is right invertible. Therefore $$(1-ba)^{-1}=\big(1+b(1-ac)^{-1}a\big)(1+ba)-b(1-ac)^{-1}a,$$ as asserted. $\Longleftarrow$ This is symmetric.\end{proof} \begin{thm} Let $A,B,C\in L(X)$ such that $A(BA)^2=ABACA=ACABA=(AC)^2A$. Then $$\sigma_d(AC)=\sigma_d(BA).$$\end{thm} \begin{proof} Case 1. $0\in \sigma_d(AC)$. Then $AC\not\in L(X)^{d}$. In view of Theorem 2.2, $BA\not\in L(X)^{d}$. Thus $0\in \sigma_d(BA)$. Case 2. $0\neq\lambda\in\sigma_d(AC)$. Then $\lambda\in \mathrm{acc}\,\sigma(AC)$. Thus, there exist $\lambda_n$ with $$\lambda=\lim\limits_{n\to \infty}\lambda_n, \quad \mbox{where}~\lambda_n I-AC\not\in L(X)^{-1}.$$ For $\lambda_n\neq 0$, it follows from Lemma 3.1 that $I-(\frac{1}{\lambda_n} A)C\in L(X)^{-1}$ if and only if $I-B(\frac{1}{\lambda_n} A)\in L(X)^{-1}$.
Therefore $$\lambda=\lim\limits_{n\to \infty}\lambda_n\in \mathrm{acc}\,\sigma(BA)=\sigma_d(BA), \quad \mbox{where}~\lambda_n I-BA\not\in L(X)^{-1}.$$ Therefore $\sigma_d(AC)\subseteq \sigma_d(BA).$ Likewise, $\sigma_d(BA)\subseteq \sigma_d(AC)$, as required.\end{proof} \begin{cor} Let $A,B,C\in L(X)$ such that $ABA=ACA$. Then $$\sigma_d(AC)=\sigma_d(BA).$$\end{cor} \begin{proof} This is obvious by Theorem 3.2.\end{proof} \begin{exam} Let $A,B,C$ be operators, acting on the separable Hilbert space $l_2({\Bbb N})$, defined as follows: $$\begin{array}{lll} A(x_1,x_2,x_3,x_4,\cdots )&=&(0,x_2,0,x_4,\cdots ),\\ B(x_1,x_2,x_3,x_4,\cdots )&=&(0,x_1,x_2,x_4,\cdots ),\\ C(x_1,x_2,x_3,x_4,\cdots )&=&(0,0,x_1,x_4,\cdots ). \end{array}$$ Then $ABA=ACA$, and so $\sigma_d(AC)=\sigma_d(BA)$ by Corollary 3.3.\end{exam} For the Drazin spectrum $\sigma_D(a)$, we now derive \begin{thm} Let $A,B,C\in L(X)$ such that $A(BA)^2=ABACA=ACABA=(AC)^2A$. Then $$\sigma_D(AC)=\sigma_D(BA).$$\end{thm} \begin{proof} In view of Theorem 2.7, $AC\in L(X)^{D}$ if and only if $BA\in L(X)^{D}$, and therefore we complete the proof by~\cite[Theorem 3.1]{Y}.\end{proof} A bounded linear operator $T\in L(X)$ is a Fredholm operator if $\dim N(T)$ and ${\rm codim}\, R(T)$ are finite, where $N(T)$ and $R(T)$ are the null space and the range of $T$, respectively. If, furthermore, the Fredholm index $ind(T)=0$, then $T$ is said to be a Weyl operator. For each nonnegative integer $n$, define $T_{|n|}$ to be the restriction of $T$ to $R(T^n)$. If for some $n$, $R(T^n)$ is closed and $T_{|n|}$ is a Fredholm operator, then $T$ is called a $B$-Fredholm operator. $T$ is said to be a $B$-Weyl operator if $T_{|n|}$ is a Fredholm operator of index zero (see \cite{Ba}). The $B$-Fredholm and $B$-Weyl spectra of $T$ are defined by $$\begin{array}{c} \sigma_{BF}(T)=\{ \lambda\in {\Bbb C}~|~T-\lambda I~\mbox{is not a $B$-Fredholm operator}\};\\ \sigma_{BW}(T)=\{ \lambda\in {\Bbb C}~|~T-\lambda I~\mbox{is not a $B$-Weyl operator}\}.
\end{array}$$ \begin{cor} Let $A,B,C\in L(X)$ such that $A(BA)^2=ABACA=ACABA=(AC)^2A$. Then $$\sigma_{BF}(AC)=\sigma_{BF}(BA).$$\end{cor} \begin{proof} Let $\pi: L(X)\to L(X)/F(X)$ be the canonical map, where $F(X)$ is the ideal of finite rank operators in $L(X)$. As is well known, $T\in L(X)$ is $B$-Fredholm if and only if $\pi(T)$ has a Drazin inverse. By hypothesis, we see that $$\begin{array}{lll} \pi(A)(\pi(B)\pi(A))^2&=&\pi(A)\pi(B)\pi(A)\pi(C)\pi(A)\\ &=&\pi(A)\pi(C)\pi(A)\pi(B)\pi(A)\\ &=&(\pi(A)\pi(C))^2\pi(A). \end{array}$$ According to Theorem 3.5, for every scalar $\lambda$, we have $$\lambda I-\pi(AC)~\mbox{has a Drazin inverse} ~\Longleftrightarrow~ \lambda I-\pi(BA)~\mbox{has a Drazin inverse}.$$ This completes the proof.\end{proof} \begin{cor} Let $A,B,C\in L(X)$ such that $A(BA)^2=ABACA=ACABA=(AC)^2A$. Then $$\sigma_{BW}(AC)=\sigma_{BW}(BA).$$\end{cor} \begin{proof} If $T$ is $B$-Fredholm, then for $\lambda\neq 0$ small enough, $T-\lambda I$ is Fredholm and $ind(T)=ind(T-\lambda I)$. As in the proof of \cite[Lemma 2.3, Lemma 2.4]{Y}, we see that $I-AC$ is Fredholm if and only if $I-BA$ is Fredholm, and in this case, $ind(I-AC)=ind(I-BA)$. Therefore we complete the proof by Corollary 3.6.\end{proof} \vskip10mm
\section{Introduction} In the current era of big data, it is common for researchers to collect super-large sample data ranging from hundreds of thousands to hundreds of millions of observations. The ambitious BRAIN Initiative of NIH is expected to bring a torrent of data, e.g., 100 terabytes of data per day from a single brain lab. These super-large datasets provide a wealth of information. To effectively extract the information, numerous data-oriented statistical learning methods have been developed. Among these methods, data-driven nonparametric regression models \citep[see][]{Ruppert+EtAl:2003,Silverman:1985} have achieved remarkable success in identifying subtle patterns and discovering functional relationships in large noisy data; such models require few assumptions about the observed data, but produce a powerful prediction. For example, smoothing splines \citep[see][]{Silverman:1985,Wahba:1990} offer a powerful and flexible framework for nonparametric modeling. Smoothing spline analysis of variance (SSANOVA) models \citep{Gu:2013} further expand the research horizon of the smoothing spline; SSANOVAs can model multivariate data and provide nice interpretability of the modeling and prediction outcome. Furthermore, assuming that the smoothing parameters are selected via cross-validation, SSANOVA models have been shown to have desirable asymptotic properties \citep[see][]{Gu:2013,Li:1987,Wahba:1990}. The main drawback of the SSANOVA approach is its computational expense: the computational complexity of SSANOVA is $O(n^3)$, where $n$ is the sample size. Over the years, many efforts have been made to design scalable algorithms for SSANOVA. Generalized additive models \citep[GAMs;][]{Hastie+Tibshirani:1990, Wood:2006} provide scalable computation at the price of eliminating or reparameterizing all interaction terms of an SSANOVA model.
By collapsing similar subspaces, \citet*{Helwig+Ma:2015} provide an algorithm for modeling all interactions with affordable computational complexity. However, even using the most efficient SSANOVA approximation \citep{Kim+Gu:2004,Ma+EtAl:2015} and algorithm \citep{Helwig+Ma:2015}, the computational burden grows linearly with the sample size, which makes the approach impractical for analyzing super-large datasets. One possibility is to fit the model to a subset of the observed data. For example, when analyzing ultra large datasets, \citet{Ma+EtAl:2014} suggest fitting regression models to a randomly selected influential sample of the full dataset. This sort of smart-sampling approach works well, as long as a representative sample of observations is selected for analysis; however, the fitted model varies from time to time as the subsample is randomly taken. Furthermore, determining the appropriate size of the subsample could be difficult in some situations. In this paper, we propose a new approach for fitting SSANOVA models to super-large samples. Specifically, we introduce user-tunable rounding parameters in the SSANOVA model, which makes it possible to control the precision of each predictor. As we demonstrate, fitting a nonparametric regression model to the rounded data can result in substantial computational savings without introducing much bias to the resulting estimate. In the following sections, we provide a brief introduction to SSANOVA (Section~\ref{splines}), develop the concept of rounding parameters for nonparametric regression (Section~\ref{rparms}), present finite-sample and asymptotic results concerning the quality of the rounded SSANOVA estimator (Section~\ref{rqual}), demonstrate the benefits of the rounding parameters with a simulation study (Section~\ref{sim}), and provide an example with real data to reveal the practical potential of the rounding parameters (Section~\ref{ex}). 
\section{Smoothing Splines} \label{splines} \subsection{Overview} A typical (Gaussian) nonparametric regression model has the form \begin{equation} \label{ssa} y_{i}=\eta(\mathbf{x}_{i})+e_{i} \end{equation} where $y_{i}\in\mathbb{R}$ is the response variable, $\mathbf{x}_{i}\equiv(x_{i1},\ldots,x_{ip})$ is the predictor vector, $\eta$ is the unknown smooth function relating the response and predictors, and $e_{i}\stackrel{\mathrm{iid}}{\sim}\mathrm{N}(0,\sigma^{2})$ is unknown, normally-distributed measurement error \citep[see][]{Gu:2013,Ruppert+EtAl:2003,Wahba:1990}. Typically, $\eta$ is estimated by minimizing the penalized least-squares functional \begin{equation} \label{penfun} (1/n)\sum_{i=1}^{n}(y_{i}-\eta(\mathbf{x}_{i}))^{2} + \lambda J(\eta) \end{equation} where the nonnegative penalty functional $J$ quantifies the roughness of $\eta$, and the smoothing parameter $\lambda\in(0,\infty)$ balances the trade-off between fitting the data and smoothing $\eta$. Given fixed smoothing parameters and a set of selected knots $\{\breve{\mathbf{x}}_{h}\}_{h=1}^{q}$, the $\eta_{\lambda}$ minimizing Equation~(\ref{penfun}) can be approximated using \begin{equation} \label{ssarep} \eta_{\lambda}(\mathbf{x}) = \sum_{v=1}^{m}d_{v}\phi_{v}(\mathbf{x}) + \sum_{h=1}^{q}c_{h}\rho_{\mathrm{c}}(\mathbf{x},\breve{\mathbf{x}}_{h}) \end{equation} where $\{\phi_{v}\}_{v=1}^{m}$ are functions spanning the null space (i.e., $J(\phi_{v})=0$), $\rho_{\mathrm{c}}$ is the reproducing kernel (RK) of the contrast space (i.e., $J(\rho_{\mathrm{c}})>0$), and $\mathbf{d}=\{d_{v}\}_{m\times1}$ and $\mathbf{c}=\{c_{h}\}_{q\times1}$ are the unknown function coefficients \citep[see][]{Helwig+Ma:2015,Kim+Gu:2004,Gu+Wahba:1991}. Note that $\rho_{\mathrm{c}}=\sum_{k=1}^{s}\theta_{k}\rho_{k}^{*}$, where $\rho_{k}^{*}$ denotes the RK of the $k$-th orthogonal contrast space, and $\boldsymbol\theta=(\theta_{1},\ldots,\theta_{s})'$ are additional smoothing parameters with $\theta_{k}\in(0,\infty)$. 
\subsection{Estimation} Inserting the optimal representation in Equation~(\ref{ssarep}) into the penalized least-squared functional in Equation~(\ref{penfun}) produces \begin{equation} \label{penloss2} (1/n)\|\mathbf{y}-\mathbf{K}\mathbf{d}-\mathbf{J}_{\boldsymbol\theta}\mathbf{c}\|^{2} + \lambda\mathbf{c}'\mathbf{Q}_{\boldsymbol\theta}\mathbf{c} \end{equation} where $\|\cdot\|^{2}$ denotes the squared Frobenius norm, $\mathbf{y}\equiv\{y_{i}\}_{n\times 1}$, $\mathbf{K}\equiv\{\phi_{v}(\mathbf{x}_{i})\}_{n\times m}$ for $i\in\{1,\ldots,n\}$ and $v\in\{1,\ldots,m\}$, $\mathbf{J}_{\boldsymbol\theta}=\sum_{k=1}^{s}\theta_{k}\mathbf{J}_{k}$ with $\mathbf{J}_{k}\equiv\{\rho_{k}^{*}(\mathbf{x}_{i},\breve{\mathbf{x}}_{h})\}_{n \times q}$ for $i\in\{1,\ldots,n\}$ and $h\in\{1,\ldots,q\}$, and $\mathbf{Q}_{\boldsymbol\theta}=\sum_{k=1}^{s}\theta_{k}\mathbf{Q}_{k}$ where $\mathbf{Q}_{k}\equiv\{\rho_{k}^{*}(\breve{\mathbf{x}}_{g},\breve{\mathbf{x}}_{h})\}_{q \times q}$ for $g,h\in\{1,\ldots,q\}$. Given a choice of $\boldsymbol\lambda\equiv(\lambda/\theta_{1},\ldots,\lambda/\theta_{s})$, the optimal function coefficients are given by \begin{equation} \label{coefs} \begin{split} \left(\begin{matrix} \hat{\mathbf{d}} \\ \hat{\mathbf{c}} \end{matrix}\right) &= \left(\begin{matrix} \mathbf{K'K} & \mathbf{K}'\mathbf{J}_{\boldsymbol\theta} \\ \mathbf{J}_{\boldsymbol\theta}'\mathbf{K} & \mathbf{J}_{\boldsymbol\theta}'\mathbf{J}_{\boldsymbol\theta} + \lambda n\mathbf{Q}_{\boldsymbol\theta} \end{matrix}\right)^{\dagger} \left(\begin{matrix} \mathbf{K}' \\ \mathbf{J}_{\boldsymbol\theta}' \end{matrix}\right)\mathbf{y} \end{split} \end{equation} where $(\cdot)^{\dagger}$ denotes the Moore-Penrose pseudoinverse. 
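Given the basis matrices, solving for the coefficients in Equation~(\ref{coefs}) is a single pseudoinverse computation. A minimal NumPy sketch, assuming $\mathbf{K}$, $\mathbf{J}_{\boldsymbol\theta}$, and $\mathbf{Q}_{\boldsymbol\theta}$ have already been formed for a fixed $\boldsymbol\lambda$ (the function name and the toy data are illustrative, not from the paper):

```python
import numpy as np

def ssanova_coefs(y, K, J, Q, lam):
    """Solve Eq. (coefs): penalized least-squares coefficients (d, c)."""
    n, m = K.shape
    q = Q.shape[0]
    X = np.hstack([K, J])                 # stacked design [K, J_theta]
    P = np.zeros((m + q, m + q))
    P[m:, m:] = n * lam * Q               # penalty acts on c only
    coefs = np.linalg.pinv(X.T @ X + P) @ (X.T @ y)
    return coefs[:m], coefs[m:]

# toy illustration with random "basis" matrices
rng = np.random.default_rng(1)
y = rng.standard_normal(6)
K = np.ones((6, 1))                       # null-space basis (intercept)
J = rng.standard_normal((6, 3))           # contrast-space basis at 3 knots
Q = np.eye(3)                             # penalty matrix
d, c = ssanova_coefs(y, K, J, Q, lam=0.1)
```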
The fitted values are given by $\hat{\mathbf{y}} = \mathbf{K}\hat{\mathbf{d}}+\mathbf{J}_{\boldsymbol\theta}\hat{\mathbf{c}} = \mathbf{S}_{\boldsymbol\lambda}\mathbf{y}$, where \begin{equation} \label{smooth} \begin{split} \mathbf{S}_{\boldsymbol\lambda} &= \left(\begin{matrix} \mathbf{K} & \mathbf{J}_{\boldsymbol\theta} \end{matrix}\right) \left(\begin{matrix} \mathbf{K'K} & \mathbf{K}'\mathbf{J}_{\boldsymbol\theta} \\ \mathbf{J}_{\boldsymbol\theta}'\mathbf{K} & \mathbf{J}_{\boldsymbol\theta}'\mathbf{J}_{\boldsymbol\theta} + \lambda n\mathbf{Q}_{\boldsymbol\theta} \end{matrix}\right)^{\dagger} \left(\begin{matrix} \mathbf{K}' \\ \mathbf{J}_{\boldsymbol\theta}' \end{matrix}\right) \end{split} \end{equation} is the smoothing matrix, which depends on $\boldsymbol\lambda$. The smoothing parameters are typically selected by minimizing \citeApos{Craven+Wahba:1979} generalized cross-validation (GCV) score: \begin{equation} \label{gcv} \mathrm{GCV}(\boldsymbol\lambda) = \{n\|(\mathbf{I}_{n}-\mathbf{S}_{\boldsymbol\lambda})\mathbf{y}\|^{2}\}/\{[n-\mathrm{tr}(\mathbf{S}_{\boldsymbol\lambda})]^{2}\}. \end{equation} The estimates $\hat{\lambda}$ and $\hat{\boldsymbol\theta}$ that minimize the GCV score have desirable properties \citep[see][]{Craven+Wahba:1979,Gu:2013,Gu+Wahba:1991,Li:1987}. \section{Rounding Parameters} \label{rparms} \subsection{Overview} When fitting a nonparametric regression model to ultra large samples, we propose including user-tunable rounding parameters in the model \citep[see][for preliminary work]{Helwig:Phd}. 
Assuming that all (continuous) predictors have been transformed to the interval [0,1], the rounding parameters $r_{j}\in(0,1]$ are used to create locally-smoothed versions of the (continuous) predictor variables, such as \begin{equation} \label{round} z_{ij}=\mathrm{rd}(x_{ij}/r_{j})r_{j} \end{equation} for $i\in\{1,\ldots,n\}$ and $j\in\{1,\ldots,p\}$, where the rounding function $\mathrm{rd}(\cdot)$ rounds the input value to the nearest integer. Note that the $z_{ij}$ scores are formed simply by rounding the original $x_{ij}$ scores to the precision defined by the rounding parameter for the $j$-th predictor variable, e.g., if $r_{j}=.02$, then each $x_{ij}$ value is rounded to the nearest .02 to form $z_{ij}$. Let $\mathbf{z}_{i}\equiv(z_{i1},\ldots,z_{ip})'$ with $z_{ij}$ defined according to Equation~(\ref{round}), and let $\{\breve{\mathbf{z}}_{h}\}_{h=1}^{q}$ denote the rounded knots; then, the penalized least-squares function in Equation~(\ref{penloss2}) can be approximated as $(1/n)\|\mathbf{y}-\mathbf{K}_{\star}\mathbf{d}_{\star}-\mathbf{J}_{\boldsymbol\theta}^{\star}\mathbf{c}_{\star}\|^{2} + \lambda\mathbf{c}_{\star}'\mathbf{Q}^{\star}_{\boldsymbol\theta}\mathbf{c}_{\star}$, where $\mathbf{K}_{\star}$, $\mathbf{J}_{\boldsymbol\theta}^{\star}$, and $\mathbf{Q}^{\star}_{\boldsymbol\theta}$ are defined according to Equation~(\ref{penloss2}) with $\mathbf{z}_{i}$ replacing $\mathbf{x}_{i}$. Similarly, the optimal basis function coefficients corresponding to the rounded data (i.e., $\hat{\mathbf{d}}_{\star}$ and $\hat{\mathbf{c}}_{\star}$) can be defined according to Equation~(\ref{coefs}) with $\mathbf{z}_{i}$ replacing $\mathbf{x}_{i}$. Finally, the smoothing matrix corresponding to these coefficients (denoted by $\mathbf{S}_{\boldsymbol\lambda,r}$) can be defined according to Equation~(\ref{smooth}) with $\mathbf{z}_{i}$ replacing $\mathbf{x}_{i}$.
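The rounding rule in Equation~(\ref{round}) is easy to sketch numerically. The following Python fragment is our own construction (not part of the paper's software): it rounds a uniform predictor with $r=.02$ and confirms both the per-observation error bound $r/2$ and the collapse to at most $1/r + 1 = 51$ distinct values.

```python
import numpy as np

# Sketch (our construction) of Equation (round): z_ij = rd(x_ij / r_j) * r_j,
# where rd() rounds to the nearest integer (np.rint here).
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, size=10000)   # one predictor transformed to [0, 1]
r = 0.02
z = np.rint(x / r) * r

# Each observation moves by at most r/2, and the 10000 predictor scores
# collapse to at most 1/r + 1 = 51 distinct rounded values.
max_err = np.max(np.abs(z - x))
u = np.unique(np.round(z, 10)).size
```

With $r=.01$ the same fragment yields $u \leq 101$, matching the upper bounds on $u$ discussed in the next subsection.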
One could calculate the fitted values using $\mathbf{S}_{\boldsymbol\lambda,r}\mathbf{y}$ (and this is what we recommend for smoothing parameter estimation); however, this could introduce a small bias into each predicted score. Thus, when interpreting specific $\hat{y}_{i}$ scores, we recommend using the $\hat{\mathbf{d}}_{\star}$ and $\hat{\mathbf{c}}_{\star}$ coefficients and basis function matrices with unrounded predictor variable scores \begin{equation} \label{fit2} \begin{split} \hat{\mathbf{y}}_{\star} &= \mathbf{K}\hat{\mathbf{d}}_{\star} + \mathbf{J}_{\boldsymbol\theta}\hat{\mathbf{c}}_{\star} \end{split} \end{equation} where $\mathbf{K}$ and $\mathbf{J}_{\boldsymbol\theta}$ are defined according to Equation~(\ref{penloss2}). \subsection{Computational Benefits} Let $\{\tilde{\mathbf{z}}_{t}\}_{t=1}^{u}$ denote the set of unique observed $\mathbf{z}_{i}$ vectors with $u \geq q$, and note that $u$ has an upper bound that is determined by the rounding parameters and the predictor variables. For example, suppose that $\tilde{\mathbf{z}}_{t}\equiv(\tilde{z}_{t1},\tilde{z}_{t2})$ with $\tilde{z}_{t1}\in[0,1]$ and $\tilde{z}_{t2}\in\{1,\ldots,f\}$; then, defining $r_{1}=.01$, it is evident that $u \leq 101f$, given that $z_{ij}$ can have a maximum of 101 unique values for the first predictor, and a maximum of $f$ unique values for the second predictor. As a second example, suppose that $\tilde{\mathbf{z}}_{t}\equiv(\tilde{z}_{t1},\tilde{z}_{t2})$ with $\tilde{z}_{t1},\tilde{z}_{t2}\in[0,1]$; then, defining $r_{1}=r_{2}=.01$, it is evident that $u \leq 101^{2}$, given that $z_{ij}$ can have a maximum of 101 unique values for each predictor. Similar reasoning can be used to place an upper bound on $u$ for different combinations of rounding parameters and predictor variable types.
Note that the inner portion of $\mathbf{S}_{\boldsymbol\lambda,r}$ can be written as \begin{equation} \begin{split} \label{usmooth} &\left(\begin{matrix} \mathbf{K}_{\star}'\mathbf{K}_{\star} & \mathbf{K}_{\star}'\mathbf{J}_{\boldsymbol\theta}^{\star} \\ (\mathbf{J}_{\boldsymbol\theta}^{\star})'\mathbf{K}_{\star} & (\mathbf{J}_{\boldsymbol\theta}^{\star})'\mathbf{J}_{\boldsymbol\theta}^{\star} + \lambda n \mathbf{Q}^{\star}_{\boldsymbol\theta} \end{matrix}\right)^{\dagger} = \\ & \qquad \left(\begin{matrix} \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{K}}_{\star} & \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta} \\ (\tilde{\mathbf{J}}_{\boldsymbol\theta}^{\star})'\mathbf{W}\tilde{\mathbf{K}}_{\star} & (\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta})'\mathbf{W}\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta} + \lambda n\mathbf{Q}^{\star}_{\boldsymbol\theta} \end{matrix}\right)^{\dagger} \end{split} \end{equation} where $\tilde{\mathbf{K}}_{\star}\equiv\{\phi_{v}(\tilde{\mathbf{z}}_{t})\}_{u\times m}$ for $t\in\{1,\ldots,u\}$ and $v\in\{1,\ldots,m\}$, $\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta}=\sum_{k=1}^{s}\theta_{k}\tilde{\mathbf{J}}^{\star}_{k}$ where $\tilde{\mathbf{J}}^{\star}_{k}\equiv\{\rho_{k}^{*}(\tilde{\mathbf{z}}_{t},\breve{\mathbf{z}}_{h})\}_{u \times q}$ for $t\in\{1,\ldots,u\}$ and $h\in\{1,\ldots,q\}$, and $\mathbf{W}\equiv\mathrm{diag}(w_{1},\ldots,w_{u})$ with $w_{t}$ denoting the number of $\mathbf{z}_{i}$ that are equal to $\tilde{\mathbf{z}}_{t}$ (for $t\in\{1,\ldots,u\}$).
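The equality of the two pseudoinverses above rests on the identity $\mathbf{K}_{\star}'\mathbf{K}_{\star}=\tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{K}}_{\star}$ (and its analogues for the other blocks), which the following Python sketch verifies numerically; it is our own construction, with random matrices standing in for the basis function evaluations.

```python
import numpy as np

# Check (our construction) of the identity behind Equation (usmooth):
# cross-products over the n rounded rows equal weighted cross-products
# over the u unique rows, with W holding the replicate counts w_t.
rng = np.random.default_rng(2)
n, m, u = 200, 3, 7
labels = rng.integers(0, u, size=n)      # which unique z~_t each observation maps to
K_tilde = rng.standard_normal((u, m))    # stands in for {phi_v(z~_t)}, u x m
K_star = K_tilde[labels]                 # full n x m rounded basis matrix
w = np.bincount(labels, minlength=u)     # replicate counts (sum to n)
W = np.diag(w.astype(float))

lhs = K_star.T @ K_star                  # O(n m^2) cross-product
rhs = K_tilde.T @ W @ K_tilde            # O(u m^2) weighted cross-product
```

The two cross-products agree to machine precision, so the $u$-row form can replace the $n$-row form throughout.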
Next, define $\tilde{\mathbf{X}}=(\tilde{\mathbf{K}}_{\star}, \tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta})$ and define the reduced smoothing matrix $\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}$, such as \begin{equation} \label{rsmooth} \tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star} = \tilde{\mathbf{X}} \left(\begin{matrix} \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{K}}_{\star} & \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta} \\ (\tilde{\mathbf{J}}_{\boldsymbol\theta}^{\star})'\mathbf{W}\tilde{\mathbf{K}}_{\star} & (\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta})'\mathbf{W}\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta} + \lambda n\mathbf{Q}^{\star}_{\boldsymbol\theta} \end{matrix}\right)^{\dagger} \tilde{\mathbf{X}}'. \end{equation} Note that $\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}$ is a $u \times u$ matrix, and note that $u<n$ if there are replicate predictor vectors after the rounding (which is guaranteed if $n$ is larger than $u$'s upper bound). Next, suppose that the $(y_{i},\mathbf{z}_{i})$ scores are ordered such that observations $1,\ldots,w_{1}$ have predictor scores $\tilde{\mathbf{z}}_{1}$, observations $w_{1}+1,\ldots,w_{1}+w_{2}$ have predictor scores $\tilde{\mathbf{z}}_{2}$, and so on. 
Then $\mathbf{S}_{\boldsymbol\lambda,r}$ can be written in terms of $\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}$, such as \begin{equation} \label{r2fsmooth} \mathbf{S}_{\boldsymbol\lambda,r} = \left(\begin{matrix} (\mathbf{e}_{1}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{1})\mathbf{1}_{w_{1}}\mathbf{1}_{w_{1}}' & \cdots & (\mathbf{e}_{1}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{u})\mathbf{1}_{w_{1}}\mathbf{1}_{w_{u}}' \\ (\mathbf{e}_{2}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{1})\mathbf{1}_{w_{2}}\mathbf{1}_{w_{1}}' & \cdots & (\mathbf{e}_{2}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{u})\mathbf{1}_{w_{2}}\mathbf{1}_{w_{u}}' \\ \vdots & \ddots & \vdots \\ (\mathbf{e}_{u}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{1})\mathbf{1}_{w_{u}}\mathbf{1}_{w_{1}}' & \cdots & (\mathbf{e}_{u}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{u})\mathbf{1}_{w_{u}}\mathbf{1}_{w_{u}}' \\ \end{matrix}\right) \end{equation} where $\mathbf{e}_{t}$ denotes a $u \times 1$ vector with a one in the $t$-th position and zeros elsewhere, and $\mathbf{1}_{w_{t}}$ denotes a $w_{t} \times 1$ vector of ones (for $t\in\{1,\ldots,u\}$). 
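Equation~(\ref{r2fsmooth}) says the full smoothing matrix is just the reduced one expanded over the replicate groups. A small Python check, our own construction in which a generic ridge-type system stands in for the spline system (the penalty $\mathbf{P}$ plays the role of $\lambda n\mathbf{Q}^{\star}_{\boldsymbol\theta}$), confirms that the $(i,j)$ entry of $\mathbf{S}_{\boldsymbol\lambda,r}$ equals the $(t(i),t(j))$ entry of $\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}$.

```python
import numpy as np

# Check (our construction) of Equation (r2fsmooth): expanding the u x u
# reduced smoother over the replicate groups reproduces the full n x n
# smoothing matrix. A ridge penalty P stands in for lambda*n*Q.
rng = np.random.default_rng(7)
n, u, p = 60, 5, 3
labels = np.sort(rng.integers(0, u, size=n))  # observations ordered by group
X_tilde = rng.standard_normal((u, p))         # unique-row design, u x p
X_full = X_tilde[labels]                      # full design, n x p
W = np.diag(np.bincount(labels, minlength=u).astype(float))
P = 0.1 * n * np.eye(p)

S_tilde = X_tilde @ np.linalg.pinv(X_tilde.T @ W @ X_tilde + P) @ X_tilde.T
S_full = X_full @ np.linalg.pinv(X_full.T @ X_full + P) @ X_full.T
```

Indexing `S_tilde` by the group labels on both axes reproduces `S_full`, so only the $u \times u$ matrix ever needs to be stored.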
Furthermore, note that the fitted values corresponding to $\mathbf{S}_{\boldsymbol\lambda,r}$ can be written as \begin{equation} \begin{split} \label{fit4} \mathbf{S}_{\boldsymbol\lambda,r}\mathbf{y} = \left(\begin{matrix}(\mathbf{e}_{1}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{1})\mathbf{1}_{w_{1}} & \cdots & (\mathbf{e}_{1}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{u})\mathbf{1}_{w_{1}} \\ (\mathbf{e}_{2}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{1})\mathbf{1}_{w_{2}} & \cdots & (\mathbf{e}_{2}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{u})\mathbf{1}_{w_{2}} \\ \vdots & \ddots & \vdots \\ (\mathbf{e}_{u}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{1})\mathbf{1}_{w_{u}} & \cdots & (\mathbf{e}_{u}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{e}_{u})\mathbf{1}_{w_{u}} \\ \end{matrix}\right)\tilde{\mathbf{y}} \end{split} \end{equation} where $\tilde{\mathbf{y}}\equiv\{\tilde{y}_{t}\}_{u \times 1}$ with $\tilde{y}_{t}=\sum_{\mathcal{I}_{t}}y_{i}$ and $\mathcal{I}_{t}\subset\{1,\ldots,n\}$ denoting the set of indices such that $\mathbf{z}_{i}$ is equal to $\tilde{\mathbf{z}}_{t}$. 
Now, let $\hat{y}_{t}^{\star} = \mathbf{e}_{t}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\tilde{\mathbf{y}}$ denote the fitted value corresponding to $\tilde{\mathbf{z}}_{t}$ (for $t\in\{1,\ldots,u\}$), and note that the numerator of the GCV score in Equation~(\ref{gcv}) can be written as \begin{equation} \label{GCV1} \begin{split} n\sum_{t=1}^{u}\sum_{\mathcal{I}_{t}}(y_{i}-\hat{y}_{t}^{\star})^{2} &= n\sum_{i=1}^{n}y_{i}^{2} - 2n\sum_{t=1}^{u}\tilde{y}_{t}\hat{y}_{t}^{\star} + n\sum_{t=1}^{u}w_{t}(\hat{y}_{t}^{\star})^{2}\\ &= n\left[\|\mathbf{y}\|^{2} - 2\tilde{\mathbf{y}}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\tilde{\mathbf{y}} + \tilde{\mathbf{y}}'\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\mathbf{W}\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}\tilde{\mathbf{y}}\right] \end{split} \end{equation} In addition, note that the denominator of the GCV score can be written as $[n-\mathrm{tr}(\mathbf{S}_{\boldsymbol\lambda,r})]^{2} = [n - \mathrm{tr}(\mathbf{W}\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star})]^{2}$ using the relation in Equation~(\ref{r2fsmooth}). The above formulas imply that, after initializing $\tilde{\mathbf{y}}$, $\|\mathbf{y}\|^{2}$, and $\mathbf{W}$, it is only necessary to calculate the reduced smoothing matrix $\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}$ to evaluate the GCV score. 
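The reduction of the GCV numerator in Equation~(\ref{GCV1}) can be verified directly; in the Python sketch below (our own construction), any symmetric $u \times u$ matrix plays the role of $\tilde{\mathbf{S}}_{\boldsymbol\lambda}^{\star}$.

```python
import numpy as np

# Check (our construction) of Equation (GCV1): the GCV numerator computed
# from the n raw responses equals the reduced expression using only
# y~ (groupwise sums of y), ||y||^2, W, and the u x u matrix S~.
rng = np.random.default_rng(3)
n, u = 300, 6
labels = rng.integers(0, u, size=n)
y = rng.standard_normal(n)
w = np.bincount(labels, minlength=u).astype(float)
W = np.diag(w)
A = rng.standard_normal((u, u))
S = (A @ A.T) / u                              # any symmetric smoother works here
y_tilde = np.bincount(labels, weights=y, minlength=u)
yhat_star = S @ y_tilde                        # fitted value for each unique z~_t

lhs = n * np.sum((y - yhat_star[labels]) ** 2)
rhs = n * (y @ y - 2 * y_tilde @ S @ y_tilde + y_tilde @ S @ W @ S @ y_tilde)
```

The agreement of `lhs` and `rhs` shows that, once $\tilde{\mathbf{y}}$, $\|\mathbf{y}\|^{2}$, and $\mathbf{W}$ are formed in a single pass over the data, the GCV score never touches the $n$ raw responses again.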
Furthermore, note that the optimal function coefficients can be estimated from the reduced smoothing matrix using \begin{equation} \label{cdhatROUND2} \left(\begin{matrix} \hat{\mathbf{d}}_{\star} \\ \hat{\mathbf{c}}_{\star} \end{matrix}\right) = \left(\begin{matrix} \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{K}}_{\star} & \tilde{\mathbf{K}}_{\star}'\mathbf{W}\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta} \\ (\tilde{\mathbf{J}}_{\boldsymbol\theta}^{\star})'\mathbf{W}\tilde{\mathbf{K}}_{\star} & (\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta})'\mathbf{W}\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta} + \lambda n\mathbf{Q}^{\star}_{\boldsymbol\theta} \end{matrix}\right)^{\dagger} \left(\begin{matrix} \tilde{\mathbf{K}}_{\star}'\\ (\tilde{\mathbf{J}}^{\star}_{\boldsymbol\theta})' \end{matrix}\right) \tilde{\mathbf{y}} \end{equation} which implies that it is never necessary to construct the full $n \times n$ smoothing matrix to estimate $\eta$ when using the rounding parameters. \subsection{Choosing Rounding Parameters} In many situations, a rounding parameter can be determined by the measurement precision of the predictor variable. For example, suppose we have one predictor $x_{i}$ recorded with the precision of two decimals on the interval [0,1], i.e., $x_{i}\in\{0,0.01,0.02,\ldots,0.99,1\}$ for $i\in\{1,\ldots,n\}$. In this case, setting $r=0.01$ will produce the exact same solution as using the unrounded predictors (i.e., $z_{i} = x_{i} \forall i$) and can immensely reduce the computational burden. Note that $u \leq 101$ even if $n$ is very large, and it is only necessary to evaluate the functions $\{\phi_{v}\}_{v=1}^{m}$ and $\rho_{\mathrm{c}}$ for the $u \ll n $ unique predictor scores to estimate $\eta$. Now, for large $n$, note that a cubic smoothing spline is approximately a weighted moving average smoother \citep[see][Section~3]{Silverman:1985}. 
In particular, let $s_{i_{1}i_{2}}(\lambda)$ denote the entry in the $i_{1}$-th row and $i_{2}$-th column of $\mathbf{S}_{\lambda}$, and note that $s_{i_{1}i_{2}}(\lambda)$ asymptotically depends on a kernel function whose influence decreases exponentially as $|x_{i_{1}}-x_{i_{2}}|$ increases \citep[see][equations~3.1--3.4]{Silverman:1985}. Also, note that the rounding parameter proposed in this paper widens the peak of the kernel (see Figure~\ref{fig1}). \begin{figure*} \centering \scalebox{.5}{\includegraphics{fig1-akern.pdf}} \caption{\label{fig1} Asymptotic cubic spline kernel function for $z_{i}\in[0,1]$ and $\breve{z}=0.5$.} \end{figure*} For relatively smooth functions (e.g., $\lambda \geq 10^{-3}$), the shape of the asymptotic kernel function is stable for $r\leq0.05$; however, for more jagged functions (e.g., $\lambda \leq 10^{-7}$), the rounding parameter will need to be set smaller (e.g., $r=0.01$) for the rounded kernel function to resemble the true asymptotic kernel (see Figure~\ref{fig1}). \section{Quality of Rounded Solution} \label{rqual} \subsection{A Taylor Heuristic} Note that the rounded predictor $z_{ij}$ can be written as \begin{equation} z_{ij} = x_{ij} + r_{j}v_{ij} \end{equation} where $v_{ij} = (z_{ij}-x_{ij})/r_{j}$ by definition and $|z_{ij}-x_{ij}| \leq r_{j}/2$ so that $|v_{ij}| \leq 1/2$. This implies $\mathbf{z}_{i} = \mathbf{x}_{i} + \mathbf{R}\mathbf{v}_{i}$ where $\mathbf{v}_{i}=(v_{i1},\ldots,v_{ip})'$ and $\mathbf{R} = \mathrm{diag}(r_{1},\ldots,r_{p})$. Consider the linear approximation of $\eta(\mathbf{z}_{i})$ at the point $\mathbf{x}_{i}$: \[ \eta(\mathbf{z}_{i}) = \eta(\mathbf{x}_{i}) + [\nabla \eta(\mathbf{x}_{i})]' \mathbf{R}\mathbf{v}_{i} + o(\|\mathbf{R}\mathbf{v}_{i}\|) \] where $\nabla \eta$ denotes the gradient of $\eta$.
If the gradient of $\eta$ were known, we could approximate the rounding error using \begin{equation} \begin{split} n^{-1}\sum_{i=1}^{n}[\eta(\mathbf{x}_{i}) - \eta(\mathbf{z}_{i})]^{2} &\approx n^{-1}\sum_{i=1}^{n} \{ [\nabla \eta(\mathbf{x}_{i})]' \mathbf{R}\mathbf{v}_{i} \}^{2}\\ & \leq n^{-1}\sum_{i=1}^{n} \| \nabla \eta (\mathbf{x}_{i}) \|^{2} \|\mathbf{R}\mathbf{v}_{i}\|^{2}\\ & \leq (4n)^{-1}\sum_{i=1}^{n} \| \nabla \eta(\mathbf{x}_{i}) \|^{2} \|\mathbf{r}\|^{2} \end{split} \end{equation} where $\mathbf{r}=(r_{1},\ldots,r_{p})'$; note that the last line is due to the fact that $|v_{ij}| \leq 1/2$. For example, using an $m$-th order polynomial smoothing spline with $x_{i}\in[0,1]$ \citep[see][]{Craven+Wahba:1979,Gu:2013} we have \[ \eta_{\lambda}(x) = \sum_{v=0}^{m-1}d_{v}k_{v}(x) + \sum_{h=1}^{q}c_{h} \rho_{\breve{x}_{h}}(x) \] where $k_{v}(\cdot)$ are scaled Bernoulli polynomials, $\{\breve{x}_{h}\}_{h=1}^{q} \subset \{x_{i}\}_{i=1}^{n}$ are the selected knots, and \[ \rho_{\breve{x}_{h}}(x) = k_{m}(x)k_{m}(\breve{x}_{h}) + (-1)^{m-1}k_{2m}(|x-\breve{x}_{h}|) \] is the reproducing kernel of the contrast space. Using the properties of Bernoulli polynomials we have \[ \eta_{\lambda}'(x) = \frac{\partial \eta_{\lambda}(x)}{\partial x} = \sum_{v=1}^{m-1}d_{v}k_{v-1}(x) + \sum_{h=1}^{q}c_{h} \rho_{\breve{x}_{h}}'(x) \] where \[ \rho_{\breve{x}_{h}}'(x) = k_{m-1}(x)k_{m}(\breve{x}_{h}) + (-1)^{m-1}s_{h}k_{2m-1}(x-\breve{x}_{h}) \] with $s_{h}=1$ if $x\geq \breve{x}_{h}$ and $s_{h}=-1$ otherwise \citep[see][]{Craven+Wahba:1979,Gu:2013}. 
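For a concrete function with known gradient, the heuristic bound can be checked numerically. The Python fragment below is our own construction; it uses $\eta(x)=\sin(2\pi x)$, which is not one of the paper's test functions, and confirms that the mean squared rounding error sits below $r^{2}(4n)^{-1}\sum_{i}\|\nabla\eta(x_{i})\|^{2}$.

```python
import numpy as np

# Check (our construction) of the Taylor heuristic with eta(x) = sin(2*pi*x),
# whose gradient 2*pi*cos(2*pi*x) is known in closed form.
rng = np.random.default_rng(5)
n, r = 100000, 0.02
x = rng.uniform(0, 1, size=n)
z = np.rint(x / r) * r                        # rounded predictors, Equation (round)

err = np.mean((np.sin(2 * np.pi * x) - np.sin(2 * np.pi * z)) ** 2)
bound = (r ** 2 / 4) * np.mean((2 * np.pi * np.cos(2 * np.pi * x)) ** 2)
# err falls well below the bound because |v_i| <= 1/2 is a worst case;
# on average v_i^2 is closer to 1/12 for uniformly scattered predictors.
```

The observed error is roughly a third of the bound here, consistent with the bound being a conservative worst case over $|v_{ij}| \leq 1/2$.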
Consequently, for polynomial splines we can approximate the rounding error using \[ \begin{split} n^{-1}\sum_{i=1}^{n}[\eta(x_{i}) - \eta(z_{i})]^{2} &\approx n^{-1}\sum_{i=1}^{n}(rv_{i})^{2} [\eta_{\lambda}'(x_{i})]^{2}\\ & \leq r^{2}(4n)^{-1} \|\mathbf{X}\mathbf{b}\|^{2}\\ \end{split} \] where $\mathbf{X} = [\tilde{\mathbf{K}},\tilde{\mathbf{J}}]$ with $\tilde{\mathbf{K}} = \{k_{v}(x_{i})\}_{n \times m-1}$ for $v\in\{0,\ldots,m-2\}$ and $\tilde{\mathbf{J}} = \{\rho_{\breve{x}_{h}}'(x_{i})\}_{n \times q}$ for $h\in\{1,\ldots,q\}$, and $\mathbf{b}=(d_{1},\ldots,d_{m-1}, c_{1},\ldots,c_{q} )'$. Note that the contrast space reproducing kernel $\rho_{\breve{x}_{h}}(x)$ is rather smooth for the classic cubic smoothing spline, and the magnitudes of the derivatives are rather small (see Figure~\ref{fig2}). This implies that setting $r\in\{0.01,0.02,0.05\}$ will not introduce much rounding error to the contrast kernel evaluation when using cubic smoothing splines on $x_{i}\in[0,1]$. The rounding error depends on the norm $\|\mathbf{X}\mathbf{b}\|$, so the relative impact of a particular choice of rounding parameters will depend on the (unknown) function coefficients $\mathbf{b}$. For practical use, we can approximate the rounding error relative to the norm of the coefficients, such as \[ \begin{split} \frac{1}{n\|\mathbf{b}\|^{2}}\sum_{i=1}^{n}[\eta(x_{i}) - \eta(z_{i})]^{2} &\approx \frac{1}{n\|\mathbf{b}\|^{2}}\sum_{i=1}^{n}(rv_{i})^{2} [\eta_{\lambda}'(x_{i})]^{2}\\ & \leq r^{2}(4n)^{-1} \lambda_{1}^{*} \\ \end{split} \] where $\lambda_{1}^{*}$ is the largest eigenvalue of $\mathbf{X}'\mathbf{X}$; note that we have $\|\mathbf{X}\mathbf{b}\|^{2} \leq \|\mathbf{X}\|^{2} \|\mathbf{b}\|^{2}$ and $\|\mathbf{X}\|^{2}=\lambda_{1}^{*}$ by definition.
For practical computation, it is possible to estimate $\lambda_{1}^{*}/n$ by taking a random sample of $\tilde{n} \ll n$ observations, and then approximate the relative rounding error as $r^{2}(4\tilde{n})^{-1} \hat{\lambda}_{1}^{*}$. Clearly, this sort of approach can be extended to assess the relative rounding error for tensor product smoothing splines, but the gradient formulas become a bit more complicated. \begin{figure*} \centering \scalebox{.7}{\includegraphics{fig2-dkern.pdf}} \caption{\label{fig2} Top: contrast reproducing kernel $\rho_{z}(x)$ for linear spline ($m=1$), cubic spline ($m=2$), and quintic spline ($m=3$) with $z=0.5$ as the knot. Bottom: contrast reproducing kernel derivative $\rho_{z}'(x)$ for $m$-th order polynomial splines.} \end{figure*} \subsection{Finite Sample Performance} To quantify the finite-sample error introduced by rounding, define the loss function \begin{equation} \begin{split} L(r) &= \frac{1}{n}\sum_{i=1}^{n} \left( \hat{\eta}_{\lambda}(\mathbf{x}_{i}) - \hat{\eta}_{\lambda,r}(\mathbf{z}_{i}) \right)^2 \\ &= n^{-1} \| (\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r})\mathbf{y} \|^{2} \end{split} \end{equation} where $\mathbf{S}_{\boldsymbol\lambda}$ and $\mathbf{S}_{\boldsymbol\lambda,r}$ are the smoothing matrices corresponding to the unrounded and rounded predictors (i.e., $\mathbf{x}_{i}$ and $\mathbf{z}_{i}$, respectively). Denote the risk function as \begin{equation} \begin{split} R(r) &= E[L(r)]\\ &= n^{-1} \| (\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r})\boldsymbol\eta \|^{2} + n^{-1}\sigma^{2}\mathrm{tr}\{(\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r})^{2}\} \end{split} \end{equation} where $\boldsymbol\eta=\{\eta(\mathbf{x}_{i})\}_{n\times1}$ contains the realizations of the (unknown) true function $\eta$.
Note that the first term of $R(r)$ corresponds to the (squared) bias difference between $\hat{\eta}_{\lambda}$ and $\hat{\eta}_{\lambda,r}$, and the second term is related to (but not equal to) the variance difference. Also note that we can write \begin{equation} \begin{split} R(r) &\leq n^{-1} \|\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r} \|^{2} \| \boldsymbol\eta \|^{2} + n^{-1}\sigma^{2}\sum_{i=1}^{n}\lambda_{i,r}\\ & \leq \lambda_{1,r} \left( n^{-1} \| \boldsymbol\eta \|^{2} + \sigma^{2} \right) \end{split} \end{equation} where $\lambda_{1,r} \geq \cdots \geq \lambda_{n,r}$ are the eigenvalues of $(\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r})^{2}$. The risk $R(r)$ depends on the squared norm of the unknown function $\eta$, so the practical relevance of a particular value of $R(r)$, e.g., $R(r) = 0.1$, differs depending on the situation, i.e., on the unknown true function. To overcome this practical issue, we can examine the risk relative to the squared norm of the unknown function, such as \begin{equation} \label{relrisk} \begin{split} U(r) &= R(r)\| \boldsymbol\eta \|^{-2}\\ &\leq n^{-1} \lambda_{1,r} \left( 1 + n\sigma^{2}\| \boldsymbol\eta \|^{-2} \right) \end{split} \end{equation} where $n\sigma^{2}\| \boldsymbol\eta \|^{-2} = \sigma^{2}/(\|\boldsymbol\eta \|^{2}/n)$ relates to the noise-to-signal ratio, i.e., inverse of signal-to-noise ratio (SNR). Furthermore, for a fixed SNR and a large enough $n$, the second term in the upper-bound of the relative risk is negligible, and we have that $U(r) \lesssim n^{-1} \lambda_{1,r}$. Consequently, it is only necessary to know the largest eigenvalue of $\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r}$ to understand the expected performance of a given set of rounding parameters for a large sample size $n$.
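The upper bound on $R(r)$ uses only the symmetry of $\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r}$. The Python sketch below, our own construction, checks the bound for an arbitrary symmetric difference matrix, with $\lambda_{1,r}$ computed from an eigendecomposition.

```python
import numpy as np

# Check (our construction) of the risk bound: for symmetric D = S_lam - S_lam_r,
# R(r) = n^{-1}||D eta||^2 + n^{-1} sigma^2 tr(D^2)
#     <= lam1 * (n^{-1}||eta||^2 + sigma^2),
# where lam1 is the largest eigenvalue of D^2.
rng = np.random.default_rng(6)
n, sigma2 = 40, 0.25
A = rng.standard_normal((n, n))
D = (A + A.T) / (2 * n)                   # stands in for S_lam - S_lam_r
eta = rng.standard_normal(n)

risk = (np.sum((D @ eta) ** 2) + sigma2 * np.trace(D @ D)) / n
lam1 = np.max(np.linalg.eigvalsh(D) ** 2)
```

The two inequalities used are $\|\mathbf{D}\boldsymbol\eta\|^{2} \leq \lambda_{1,r}\|\boldsymbol\eta\|^{2}$ and $\mathrm{tr}(\mathbf{D}^{2}) \leq n\lambda_{1,r}$, both immediate from the eigendecomposition.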
In practice, calculating $\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r}$ and $\lambda_{1,r}$ for various values of $r$ is a computational challenge for large $n$. For practical computation, we recommend examining $R(r)$ and/or $U(r)$ using a random sample of $\tilde{n} \ll n$ observations. Using this approach, the unknown parameters (i.e., $\boldsymbol\eta$ and $\sigma^{2}$) can be estimated using the results of the unrounded solution. For example, the SNR can be estimated as $(\| \hat{\boldsymbol\eta} \|^{2}/\tilde{n})/\hat{\sigma}^{2}$ where $\hat{\boldsymbol\eta}$ and $\hat{\sigma}^{2}$ are the estimated function and error variance using the $\tilde{n}$ observations with unrounded predictors. Or, if the approximate SNR is known, Equation~(\ref{relrisk}) can be used to place an upper-bound on the relative risk $U(r)$. We demonstrate this approach in Figures~\ref{fig3}--\ref{fig4}, which plot functions with various degrees of smoothness (Figure~\ref{fig3}) and the median estimated rounding risk $\hat{R}(r)$ across five samples of $\tilde{n}=500$ observations (Figure~\ref{fig4}). Note that Figure~\ref{fig4} illustrates that the expected difference between the unrounded and rounded solutions increases as the error variance increases. Furthermore, note that Figure~\ref{fig4} affirms that for $x\in[0,1]$ setting $r=0.01$ can be expected to introduce minimal rounding error for a variety of functions and SNRs. Finally, Figure~\ref{fig4} reveals that setting $r\in\{0.01,0.02,0.05\}$ will not introduce much rounding error whenever the underlying function $\eta$ is relatively smooth. For example, for the functions $\eta_{A1}$ and $\eta_{B1}$, we should expect a negligible difference between the unrounded and rounded solutions using $r=0.05$ for a variety of different SNRs. \begin{figure*} \centering \scalebox{.65}{\includegraphics{fig3-funcs.pdf}} \caption{\label{fig3} Functions with various degrees of smoothness.
$\eta_{Ak}(x) = x - 0.5 + \sin(2 k \pi x)$ for $x\in[0,1]$ and $\eta_{Bk}(x_{1},x_{2}) = x_{1} + x_{2} - 1 + [ \sin(2 k \pi x_{1}) + \cos(2 k \pi x_{2}) + 2\sin(2 \pi (x_{1}-x_{2}) ) ]/4$ for $x_{1},x_{2}\in[0,1]$.} \end{figure*} \begin{figure*} \centering \scalebox{.65}{\includegraphics{fig4-rprisk.pdf}} \caption{\label{fig4} Median estimated risk $\hat{R}(r) = \tilde{n}^{-1} \| (\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r})\hat{\boldsymbol\eta} \|^{2} + \tilde{n}^{-1}\hat{\sigma}^{2}\mathrm{tr}\{(\mathbf{S}_{\boldsymbol\lambda} - \mathbf{S}_{\boldsymbol\lambda,r})^{2}\}$ for various functions, rounding parameters, and error variances using five random samples of $\tilde{n}=500$ observations.} \end{figure*} \subsection{Asymptotic Bias and Variance} To establish the asymptotic properties of the proposed estimate, we employ an equivalent kernel approach developed in \cite{Nychka:1995}. The key idea is that a smoothing spline estimate can be written as a kernel estimate \begin{equation} \hat{\eta}_{\lambda}(x)= \frac{1}{n}\sum_{i=1}^{n}w(x_i, x)y_i \end{equation} where the kernel function $w(x_i, x)$ can be well approximated by a Green's function. Then the asymptotic properties of $\hat{\eta}_{\lambda}$ can be established via the analytical properties of the Green's function. Following \cite{Nychka:1995}, we establish the asymptotic properties of our rounding estimate for the one dimensional case. In addition, we assume that we use a full basis where all distinct rounded data are used as knots, i.e., $q=u$. Then our estimate $\hat{\eta}_{\lambda,r}$ is the minimizer of \begin{equation} \label{penfunround} (1/n)\sum_{i=1}^{n}(y_{i}-\eta(z_{i}))^{2} + \lambda \int_0^1 (\eta^{(m)})^2dx.
\end{equation} Let $F_{n,r}$ denote the empirical distribution function for the rounded predictor $z_i$, $ i=1, \ldots, n$, let $F$ be the limiting distribution of the original predictor $x$ with a continuous and strictly positive density function $f$ on $[0, 1]$ and let \[ D_{n,r} = \sup_{x \in [0,1]}| F_{n,r}(x) -F(x) |, \] and $\rho=\lambda^{1/2m}$. Then we have the following theorem. \begin{theorem} Assume that $\hat{\eta}_{\lambda,r}$ is a smoothing spline estimate of (\ref{penfunround}) with $m=1$ and $z_i$ are not equally spaced. Suppose that $\eta \in C_2[0,1]$ and satisfies the H\"{o}lder condition $|\eta^{(2)}(x)-\eta^{(2)}(x^{\prime})| \le M |x-x^{\prime}|^{\beta}$ for some $\beta > 0$ and some $M < \infty$. Assume that $f$ has a uniformly continuous derivative and $D_{n,r} \rightarrow 0$ as $n \rightarrow \infty$. Choose $0 < \Delta < 1$ and let $\lambda_n \rightarrow 0$ and $\Lambda_n \rightarrow 0$ as $ n \rightarrow \infty$. Then \[ \begin{split} E[\hat{\eta}_{\lambda,r}(x)]- \eta(x) &= -\frac{\lambda}{f(x)}\eta^{(2)}(x) +o(\lambda) + O(\frac{D_{n,r}}{\rho}),\\ \text{Var}[\hat{\eta}_{\lambda,r}(x)] &= \frac{\sigma^2}{8nf(x)} (\frac{f(x)}{\lambda} )^{1/2} + \sigma^2 O(\frac{D_{n,r}}{\rho}), \end{split} \] uniformly for $\lambda \in [\lambda_n, \Lambda_n]$ and $x \in [\Delta, 1-\Delta]$ as $ n \rightarrow \infty$. \end{theorem} \noindent The theorem is a direct result of Theorem 2.2 of \cite{Nychka:1995}. For $m > 1$, a slightly more complicated version of our theorem can be shown using Theorem 2 of \cite{Wang+EtAl:2013}. The theorem states that both the bias and variance of our estimate $\hat{\eta}_{\lambda,r}$ depend on $D_{n,r}$, which is required to be sufficiently small relative to $\rho$ as $n \rightarrow \infty$.
Consequently, the theorem reveals that the rounding parameter $r$ will have to be set smaller when \begin{itemize} \item[(a)] the true function $\eta$ is rougher, \item[(b)] the spline order $m$ is larger, \item[(c)] the predictor distribution $f$ is rougher, or \item[(d)] the sample size $n$ is larger. \end{itemize} These conclusions derive directly from the requirement that $D_{n,r}$ be sufficiently small relative to $\rho$ as $n \rightarrow \infty$. \section{Simulation Study} \label{sim} \subsection{Design and Analyses} We conducted a simulation study to demonstrate the benefits of the rounding parameters. As a part of the simulation, we manipulated two conditions: (a)~the function smoothness (8~levels: see Figure~\ref{fig3}), and (b)~the number of observations (3~levels: $n=1000k$ for $k\in\{100,200,500\}$). Note that the functions are defined such that $J(\eta_{Aj}) < J(\eta_{Ak})$ and $J(\eta_{Bj}) < J(\eta_{Bk})$ for $j<k\in\{1,2,3,4\}$, so the function smoothness is systematically manipulated. We generated $y_{i}$ by (a)~independently sampling the predictor(s) from a uniform distribution, (b)~independently sampling $e_{i}$ from a standard normal distribution, and (c)~defining the observed response as $y_{i}=\eta(\mathbf{x}_{i})+e_{i}$ for $i\in\{1,\ldots,n\}$. Then, we fit a nonparametric regression model using six different methods: Method~1 is an SSANOVA using unrounded data \citep[see][]{Helwig+Ma:2015}, Method~2 is an SSANOVA with $r=.01$, Method~3 is an SSANOVA with $r=.02$, Method~4 is an SSANOVA with $r=.05$, Method~5 is a standard GAM implemented through \citeapos{mgcv} \texttt{gam.R} function, and Method~6 is a batch-processed GAM implemented through \citeapos{mgcv} \texttt{bam.R} function. Methods 1--4 are implemented through \citeapos{bigsplines} \texttt{bigspline.R} function (for $\eta_{Ak}$) and \texttt{bigssa.R} function (for $\eta_{Bk}$).
For the $\eta_{Ak}$ functions we used $q=21$ knots to fit the model, and for the $\eta_{Bk}$ functions we used $q=100$ knots. For Methods~1--4, we used a bin-sampling approach to select knots spread throughout the covariate domain \citep{Helwig+Ma:prep}; for Methods~5 and 6, we used the default \texttt{gam.R} and \texttt{bam.R} knot-selection algorithm \citep[see][]{mgcv}. For each method, we used cubic splines and selected the smoothing parameters that minimized the GCV score. Given the optimal smoothing parameters, we calculated the fitted values, and then defined the true mean squared error (MSE) as $(1/n)\sum_{i=1}^{n}(\eta(\mathbf{x}_{i}) - \hat{y}_{i})^{2}$. Finally, we used 100 replications of the above procedure within each cell of the simulation design. \subsection{Results} The true MSE for each combination of simulation conditions is plotted in Figure~\ref{fig5}. \begin{figure*} \centering \scalebox{.55}{\includegraphics{fig5-sim.pdf}} \caption{\label{fig5} Simulation true MSEs on log-10 scale. Within each sample size, the six boxes correspond to Methods 1--6. Method~1 is SSANOVA with no rounding, Method~2 is SSANOVA with $r=.01$, Method~3 is SSANOVA with $r=.02$, Method~4 is SSANOVA with $r=.05$, Method~5 is \texttt{gam.R}, and Method~6 is \texttt{bam.R}.} \end{figure*} First, note that for each method, the true MSE decreased as $n$ increased, which was expected. Next, note that all of the methods recovered $\eta$ quite well (i.e., all MSEs smaller than 0.01). Comparing Methods 1--4, it is evident that setting $r\in\{.01,.02\}$ introduced minimal bias to the resulting solution. In contrast, setting $r=.05$ produced a more noticeable bias, particularly when analyzing the more jagged $\eta_{Ak}$ and $\eta_{Bk}$ functions, i.e., those with larger $k$. However, the bias introduced with $r=.05$ was small relative to the norm of $\eta$, so there is little practical difference between the solutions with $r\in\{.01,.02,.05\}$.
Examining the true MSEs of Methods 5 and 6, it is apparent that the standard GAM performed almost identically to the batch-processed GAM throughout the simulation. Comparing the true MSEs of Methods 1--4 to those of Methods~5 and 6, it is apparent that the SSANOVAs performed similarly to the GAMs in every simulation condition. In the one-dimensional case ($\eta_{Ak}$ functions), the GAMs have slightly smaller true MSEs for $k\in\{3,4\}$, but the difference is trivial compared to the norm of the $\eta_{Ak}$ functions. In the two-dimensional case ($\eta_{Bk}$ functions), the SSANOVAs have slightly smaller true MSEs for $k\in\{3,4\}$. Differences between the SSANOVA and GAM solutions are most pronounced when analyzing the $\eta_{B4}$ function; in this case, the median true MSE of the GAM solutions is over 10 times larger than the corresponding median of the SSANOVA solutions with $r\in\{\mathrm{NA},0.01,0.02\}$. However, the difference is still quite small compared to the norm of the $\eta_{B4}$ function. The median analysis runtimes (in seconds) for each simulation condition are displayed in Tables~\ref{tab1} and \ref{tab2}. First, note that for each method, the runtime increased as $n$ increased, which was expected. Next, note that the runtimes for Methods~1, 5, and 6 were substantially larger than the corresponding runtimes of Methods~2--4. When analyzing the $\eta_{Ak}$ functions, the median runtimes for Methods 2--4 were less than one-tenth of a second for all examined $n$, and were anywhere from 40--60 times faster than the median runtimes for Methods 5 and 6. When analyzing the $\eta_{Bk}$ functions, the median runtimes for Methods 3--4 were less than one second for all examined $n$, and were anywhere from 10--20 times faster than the median runtimes for Methods 5 and 6.
\begin{table*} \caption{\label{tab1} Median runtimes (seconds) for $\eta_{Ak}$ functions.} {\footnotesize \begin{tabular}{|l|rrr|rrr|rrr|rrr|} \hline & \multicolumn{3}{c}{$\eta_{A1}$} & \multicolumn{3}{|c}{$\eta_{A2}$} & \multicolumn{3}{|c}{$\eta_{A3}$} & \multicolumn{3}{|c|}{$\eta_{A4}$} \\ & 100 & 200 & 500 & 100 &200 & 500 &100 & 200 & 500 &100 &200 & 500 \\ \hline Method 1 ($r=$ NA) & 0.35 & 0.64 & 1.31 & 0.37 & 0.64 & 1.28 & 0.30 & 0.64 & 1.31 & 0.36 & 0.64 & 1.31 \\ Method 2 ($r=0.01$) & 0.02 & 0.03 & 0.07 & 0.02 & 0.03 & 0.07 & 0.02 & 0.03 & 0.07 & 0.02 & 0.03 & 0.07 \\ Method 3 ($r=0.02$) & 0.02 & 0.03 & 0.06 & 0.02 & 0.03 & 0.06 & 0.01 & 0.03 & 0.06 & 0.01 & 0.03 & 0.06 \\ Method 4 ($r=0.05$) & 0.01 & 0.03 & 0.06 & 0.02 & 0.02 & 0.06 & 0.01 & 0.02 & 0.06 & 0.01 & 0.02 & 0.06 \\ Method 5 (GAM) & 1.44 & 2.24 & 4.05 & 1.40 & 2.11 & 4.03 & 1.47 & 2.12 & 4.06 & 1.40 & 2.11 & 4.06 \\ Method 6 (BAM) & 1.35 & 2.02 & 4.26 & 1.37 & 2.05 & 4.30 & 1.32 & 2.05 & 4.28 & 1.38 & 2.05 & 4.29 \\ \hline \end{tabular} } \end{table*} \begin{table*} {\footnotesize \caption{\label{tab2} Median runtimes (seconds) for $\eta_{Bk}$ functions.} \begin{tabular}{|l|rrr|rrr|rrr|rrr|} \hline & \multicolumn{3}{c}{$\eta_{B1}$} & \multicolumn{3}{|c}{$\eta_{B2}$} & \multicolumn{3}{|c}{$\eta_{B3}$} & \multicolumn{3}{|c|}{$\eta_{B4}$} \\ & 100 & 200 & 500 & 100 &200 & 500 &100 & 200 & 500 &100 &200 & 500 \\ \hline Method 1 ($r=$ NA) & 3.80 & 6.60 & 14.84 & 3.80 & 6.60 & 14.82 & 3.81 & 6.61 & 14.85 & 3.81 & 6.60 & 14.85 \\ Method 2 ($r=0.01$) & 0.85 & 0.80 & 1.35 & 0.85 & 0.80 & 1.34 & 0.85 & 0.80 & 1.35 & 0.85 & 0.80 & 1.35 \\ Method 3 ($r=0.02$) & 0.34 & 0.51 & 0.99 & 0.34 & 0.51 & 0.99 & 0.34 & 0.51 & 0.99 & 0.34 & 0.51 & 0.99 \\ Method 4 ($r=0.05$) & 0.28 & 0.43 & 0.90 & 0.28 & 0.43 & 0.90 & 0.28 & 0.43 & 0.90 & 0.28 & 0.43 & 0.90 \\ Method 5 (GAM) & 4.48 & 9.16 & 22.31 & 4.45 & 9.12 & 22.29 & 4.45 & 9.16 & 22.38 & 4.50 & 9.20 & 22.43 \\ Method 6 (BAM) & 4.75 & 7.81 & 18.55 & 4.73 & 7.78 & 
18.55 & 4.74 & 7.80 & 18.61 & 4.77 & 7.85 & 18.65 \\ \hline \end{tabular} } \end{table*} \section{Real Data Example} \label{ex} \subsection{Data and Analyses} To demonstrate the practical benefits of the rounding parameters when working with real data, we use electroencephalography (EEG) data obtained from \citet*{Bache+Lichman:2013}. Note that EEG data consist of electrical activities that are recorded from various electrodes on the scalp, and EEG patterns are used to infer information about mental processing. The EEG data used in this example were recorded from both control and alcoholic subjects participating in an experiment at the Henri Begleiter Neurodynamic Lab at SUNY Brooklyn. The data were recorded during a standard visual stimulus event-related potential (ERP) experiment using a 61-channel EEG cap (see Figure~\ref{fig6}). The data were recorded at a frequency of 256 Hz for one second following the presentation of the visual stimulus. \begin{figure} \centering \scalebox{.75}{\includegraphics{fig6-eegcap.pdf}} \caption{\label{fig6} Depiction of the 61-channel EEG cap. The Pz electrode is highlighted in red. Created using the \texttt{eegcap} function in the \texttt{eegkit} R package \citep{eegkit}.} \end{figure} For the example, we analyzed data from the Pz electrode of 120 subjects (44 controls and 76 alcoholics), and we used 10 replications of the ERP experiment for each subject.\footnote{Note that data from subjects \texttt{co2a0000425} and \texttt{co2c0000391} were excluded from the analysis due to small amounts of data, and we used the first 10 replications for each subject.} This resulted in $n=$ 307,200 data points (120 subjects $\times$ 256 time points $\times$ 10 replications). 
We analyzed the data using a two-way SSANOVA on the domain $[0,1]\times\{1,2\}$, where the first predictor is the time effect and the second predictor is the group effect (control vs.\ alcoholic); see the Appendix for an explanation of how the rounding parameter can be applied when working with continuous and nominal predictors. We used a cubic spline for the time effect, a nominal spline for the group effect, and $q=50$ bin-sampled knots. Finally, we fit the model both with the unrounded data and with the time covariate rounded to the nearest .01 second (i.e., $r=.01$ on the interval [0,1]); note that setting $r=.01$ for the time covariate results in $u=202$ unique covariate vectors, which is substantially fewer than the original $n=$ 307,200 data points. \subsection{Results} The predicted ERPs for the unrounded and rounded data are plotted in Figure~\ref{fig7}. \begin{figure*} \centering \scalebox{.55}{\includegraphics{fig7-eegfig.pdf}} \caption{\label{fig7} Predicted ERPs using the unrounded data (a) and rounded data (b). Shaded regions give a 99\% Bayesian confidence interval around $\hat{\eta}$. Created using the \texttt{eegtime} function in the \texttt{eegkit} R package \citep{eegkit}.} \end{figure*} Note that there are no practical differences between the two solutions (cf.\ Figure~\ref{fig7}a,b). Furthermore, note that both solutions produced a GCV score of 85.96 and a variance-accounted-for value of $R^{2}=0.03$, suggesting that the rounded solution fits the data as well as the unrounded solution. It is also worth noting that the unrounded solution took over five times longer to fit compared to the rounded solution; furthermore, the unrounded solution required a substantial amount of RAM to fit the model, whereas the rounded solution can easily be fit on a standard laptop or tablet. Comparing the estimated ERPs of the controls and alcoholics, there are obvious differences (see Figure~\ref{fig7}). 
In particular, the alcoholic subjects are missing the P300 component of the ERP waveform (i.e., a large positive peak occurring about 300 ms after the stimulus). Note that the P300 component is thought to relate to a subject's internalization and/or categorization of stimuli, so these results suggest that alcoholic subjects have different information processing patterns for standard visual stimuli. This finding is consistent with previous reports regarding EEG patterns of alcoholic subjects \citep[see][]{Porjesz+EtAl:1980,Porjesz+EtAl:1987}, and some research suggests that this sort of EEG pattern may predispose individuals to alcoholism \citep[see][]{Porjesz+Begleiter:1990a,Porjesz+Begleiter:1990b}. \section{Discussion} This paper proposes the use of rounding parameters to overcome the computational burden of fitting nonparametric regression models to super-large samples of data. By rounding each predictor to a given precision (e.g., 0.01), it is possible to estimate $\eta$ using the $u \ll n$ unique rounded predictor variables. We have provided a simple Taylor heuristic that justifies the use of a small rounding parameter (e.g., $r=.01$) when using cubic smoothing splines for $x\in[0,1]$. Furthermore, we have provided methods for assessing the finite sample and asymptotic performance of the rounded SSANOVA estimator in various situations. The simulation study and EEG example clearly demonstrate the benefits of the proposed rounding parameters. When fitting nonparametric regression models with large $n$, the simulation results reveal that setting $r_{j}\leq.05$ can result in substantial computational savings without introducing much bias to the solution. Furthermore, the EEG data example reveals that there are no practical differences between the unrounded and rounded solutions (using $r=.01$) when analyzing real data. Thus, the rounding parameters offer a fast and stable method for fitting nonparametric regression models to very large samples. 
In addition to providing a fast method for smoothing large datasets, the rounding parameters are also quite memory efficient. Because the rounding approach only uses the unique rounded-covariate values, it is never necessary to construct the full $n\times q$ model design matrix (or the $n\times n$ smoothing matrix). So, using the rounding parameters, it is possible to fit nonparametric regression models to very large samples using a standard laptop or tablet, e.g., all of the rounded SSANOVA models in this paper can easily be fit on a laptop with 4 GB of RAM. As a result, typical researchers now have the ability to discover functional relationships in super-large datasets without needing access to supercomputers or computing clusters. As a final point, it should be noted that in some cases (e.g., large $p$) the number of unique rounded-covariate values may be very large. In such cases, forming the $u\times q$ model design matrix may require a substantial amount of memory (because $u$ is so large). However, as is noted in \citet*{Helwig:Phd} and \citet*{Helwig+Ma:2015}, fitting an SSANOVA model only depends on various crossproduct vectors and matrices. So, if $u$ is too large to form the full $u\times q$ model design matrix, then the needed crossproduct statistics can be formed in a batch-processing manner similar to the approach used by \citeapos{mgcv} \texttt{bam.R} function. \newpage \section*{Appendix: Rounding Algorithm} In this section, we provide algorithms for rounding SSANOVA predictors and obtaining the sufficient statistics for the SSANOVA estimation. The first algorithm assumes that all of the covariates are continuous; extensions for nominal covariates will be discussed after the presentation of the initial algorithm. 
First, let $r_{j}\in(0,1]$ denote the rounding parameter for the $j$-th predictor, let $\tilde{\mathbf{x}}_{j}$ denote the $n \times 1$ vector containing the $j$-th predictor's scores, and let $x_{(i)j}$ denote the $i$-th order statistic of the $j$-th predictor. Next, initialize $\mathbf{g}\equiv\{1\}_{n\times1}$ and $h\equiv1$, and then calculate \[ \begin{split} &\mathrm{for} \ j \in\{1,\ldots,p\}\\ &\qquad 1. \ \ \mathbf{g} \leftarrow \mathbf{g} + h[\mathrm{rd}\{(1/r_{j})(\tilde{\mathbf{x}}_{j}-x_{(1)j})/(x_{(n)j}-x_{(1)j})\}]\\ &\qquad 2. \ \ h \leftarrow \mathrm{rd}(1+1/r_{j})h\\ &\mathrm{end} \end{split} \] where the rounding function $\mathrm{rd}\{\cdot\}$ rounds the input to the nearest integer. After running the for loop, we have $g_{i}\in\{1,\ldots,u\}$, where $g_{i}$ denotes the $i$-th element of $\mathbf{g}$, and $u$ is the total possible number of unique covariate vectors; thus, the vector $\mathbf{g}$ indexes the multi-dimensional rounded-covariate score for each observation. The above result implies that the unique rounded-covariate scores (i.e., $\tilde{\mathbf{z}}_{t}$) can be obtained by sorting the predictors according to the $g_{i}$ values, and then sampling one observation's covariate vector from each unique $g_{i}$ value. Similarly, once the data is sorted according to the $g_{i}$ values, the sum of the response at each unique covariate (i.e., $\tilde{y}_{t}$) and the number of observations at each unique covariate (i.e., $w_{t}$) can be easily calculated. Lastly, after calculating $\|\mathbf{y}\|$, the SSANOVA model can be fit using the sufficient statistics from the rounded solution, i.e., $\tilde{\mathbf{z}}_{t}$, $\tilde{y}_{t}$, and $w_{t}$. As we previously mentioned, the above algorithm can be modified to include nominal covariates as well. 
When working with nominal covariates, the algorithm assumes that all nominal covariates are of the form $x_{ij}\in\{1,\ldots,f_{j}\}$, where $f_{j}$ is the number of factor levels of the $j$-th covariate. Under this assumption, both steps of the rounding algorithm need to be slightly modified: \[ \begin{split} &\mathrm{for} \ j \in\{1,\ldots,p\}\\ &\quad \mbox{If } x_{ij} \mbox{ is continuous} \\ &\qquad 1. \ \ \mathbf{g} \leftarrow \mathbf{g} + h[\mathrm{rd}\{(1/r_{j})(\tilde{\mathbf{x}}_{j}-x_{(1)j})/(x_{(n)j}-x_{(1)j})\}]\\ &\qquad 2. \ \ h \leftarrow \mathrm{rd}(1+1/r_{j})h\\ &\quad \mbox{Else if } x_{ij} \mbox{ is nominal} \\ &\qquad 1. \ \ \mathbf{g} \leftarrow \mathbf{g} + h(\tilde{\mathbf{x}}_{j}-1) \\ &\qquad 2. \ \ h \leftarrow f_{j}h\\ &\mathrm{end} \end{split} \] Using this simple modification, the rounding algorithm can be efficiently applied to any combination of continuous and nominal covariates.
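For concreteness, the combined algorithm can be sketched in vectorized form. The code below is an illustrative translation, not the authors' implementation; the names are ours, and the number of factor levels $f_{j}$ is inferred from the observed maximum level (an assumption that holds only when every level is observed).

```python
import numpy as np

def round_and_compress(X, y, r, nominal):
    """Round covariates and collapse observations to unique rounded vectors.

    X : (n, p) array of covariates; y : (n,) responses.
    r : length-p rounding parameters (entries for nominal columns are ignored).
    nominal : length-p booleans; nominal columns hold levels 1..f_j.
    Returns one representative covariate vector per cell (z_t), the
    per-cell response sums (y_t), and the per-cell counts (w_t).
    """
    n, p = X.shape
    g = np.ones(n, dtype=np.int64)   # group index per observation
    h = 1
    for j in range(p):
        xj = X[:, j]
        if nominal[j]:
            g += h * (xj.astype(np.int64) - 1)
            h *= int(xj.max())       # f_j, assuming all levels observed
        else:
            lo, hi = xj.min(), xj.max()   # assumes a non-constant covariate
            g += h * np.rint((xj - lo) / (r[j] * (hi - lo))).astype(np.int64)
            h *= int(np.rint(1 + 1 / r[j]))
    # sort by cell index so each cell is contiguous, then reduce per cell
    order = np.argsort(g, kind="stable")
    _, first, counts = np.unique(g[order], return_index=True, return_counts=True)
    z = X[order][first]                       # one representative per cell
    ysum = np.add.reduceat(y[order], first)   # per-cell response sums
    return z, ysum, counts
```

Sorting by the cell index $g_{i}$ makes each cell contiguous, so the representative covariate vector $\tilde{\mathbf{z}}_{t}$, response sum $\tilde{y}_{t}$, and count $w_{t}$ all fall out of a single pass.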
\section*{Introduction} Motion is a major impediment to the acquisition of high-quality images in free-breathing coronary magnetic resonance angiography (CMRA) exams \cite{ingle2014nonrigid, prieto2015highly, cruz2017highly}. A variety of retrospective motion correction methods have been proposed for CMRA. For these approaches to be effective, accurate motion measurements must be obtained. Many techniques have been developed to acquire this information, including 1D navigators placed over the diaphragm and self-navigation schemes that derive the position of the heart from the imaging data \cite{ehman1989adaptive, feng2016xd}. Another class of methods collects motion information using separately acquired 2D images of the heart \cite{wu2013free, correia2018optimized, malave2019whole, bustin2019five}. 3D image-based navigators (3D iNAVs) have been proposed in recent years to directly monitor nonrigid motion in different regions of the heart \cite{keegan2007non, moghari2014three, powell2014cmra, addy20173d}. The premise underlying these approaches is to rapidly acquire a low-resolution 3D dataset in each cardiac cycle concurrently with the segmented high-resolution data that contributes to the final image over several heartbeats. Moghari \textit{et al.} initially collected such 3D iNAVs with an anisotropic resolution of 56 x 18 x 1 mm\textsuperscript{3} using a Cartesian trajectory \cite{moghari2014three}. Powell \textit{et al.} extended this approach with parallel imaging to acquire 3D iNAVs exhibiting an anisotropic resolution of 5 x 5 x 10 mm\textsuperscript{3} \cite{powell2014cmra}. By applying compressed sensing based parallel imaging alongside a variable-density (VD) cones trajectory, Addy \textit{et al.} demonstrated 3D iNAVs with 4.4 mm isotropic resolution \cite{addy20173d}. 
To track the highly local deformations of coronary vessels, these prior works have gradually augmented the spatial resolution of 3D iNAVs by increasing the associated scan acceleration factor. The resulting aliasing has been mitigated with iterative reconstruction. Residual undersampling artifacts, however, can remain in the 3D iNAVs in the case of large scan acceleration factors. Such artifacts may detract from the benefits of monitoring motion using 3D iNAVs with enhanced spatial resolution. In this work, we investigate the fidelity of motion estimates derived from 3D iNAVs collected with the accelerated cones trajectory described above. Determining the \textit{in vivo} performance of navigators is difficult because there is no reliable ground truth that can concurrently be acquired in real-time. To address this issue, we first develop a simulation framework to capture the translational displacements and nonrigid deformations of the heart introduced by the respiration cycle. Using this framework, we then examine the influence of different 3D iNAV spatial resolutions, corresponding to varying levels of necessary scan acceleration in VD cones imaging, on the accuracy of the extracted motion information. Finally, mindful of simulation results, we present a modified 3D iNAV design strategy. \textit{In vivo} nonrigid motion-corrected CMRA outcomes are utilized to compare the motion tracking capability of the proposed 3D iNAV design with that of the previously proposed approach. \section*{Methods} \subsection*{Imaging Data and 3D iNAV Acquisition} Beat-to-beat 3D iNAVs for respiratory motion tracking are collected as part of the cardiac-triggered, free-breathing 3D CMRA sequence shown in Supporting Information Figure S1 \cite{wu2013free}. Within each heartbeat, a fat saturation module is applied before the desired trigger delay point. 
Immediately following this module, imaging data is collected using a 3D cones $k$-space trajectory (28x28x14 cm\textsuperscript{3} FOV, 1.2 mm spatial resolution) \cite{gurney2006design}. An alternating-TR balanced steady state free precession (ATR bSSFP) readout is incorporated for further fat suppression and high blood signal. The overall acquisition scheme involves 9137 total interleaved cones acquired in segments of 18 every cardiac cycle. \begin{figure} \centering \includegraphics[width=\linewidth]{S1.png} \caption* {Supporting Information Figure S1: Imaging data is collected with a cardiac-triggered sequence. A fat-saturation (FS) module is followed by a 3D cones sequence, where cones interleaves are acquired in groups of 18 with a temporal resolution of 99 ms during diastole. A 3D iNAV is also collected with a temporal resolution of 176 ms each heartbeat to monitor heart motion. } \end{figure} To collect a 3D iNAV in a single heartbeat, scan acceleration is applied with a VD cones trajectory \cite{addy2015high}. Here, the sampling density ($f$) in $k$-space ($|\boldsymbol{k}|$) is modified in the following manner: \begin{equation} f(|\boldsymbol{k}|) = \begin{cases} f_1 & |\boldsymbol{k}| \in [0, k_1] \\ (f_1 - f_2)(1 - (|\boldsymbol{k}| - k_1)/(k_{max} - k_1))^{p} + f_2 & |\boldsymbol{k}| \in (k_1, k_{max}] \\ \end{cases} \end{equation} where the constant $f_1$ denotes the sampling density from the $k$-space origin to $k_1$. The transition in sampling density, from $f_1$ at $k_1$ down to $f_2$ at the maximum $k$-space extent ($k_{max}$), is governed by a $p$\textsuperscript{th} order polynomial. 
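The piecewise density above can be transcribed directly into code. The sketch below is illustrative only (the function and argument names are ours); note that the taper evaluates to $f_1$ at $k_1$ and to $f_2$ at $k_{max}$, so the density is continuous across the two branches.

```python
def sampling_density(k, f1, f2, k1, kmax, p):
    """Variable-density cones sampling density f(|k|).

    Fully sampled (density f1) from the k-space origin out to k1,
    then a p-th order polynomial taper from f1 down to f2 at kmax.
    """
    if not 0 <= k <= kmax:
        raise ValueError("|k| must lie in [0, kmax]")
    if k <= k1:
        return f1
    return (f1 - f2) * (1 - (k - k1) / (kmax - k1)) ** p + f2
```

With the 4.4 mm trajectory parameters quoted in the text ($f_1 = 1$, $f_2 = 0.27$, $p = 3.1$, $k_1$ at 1\% of $k_{max}$), the density falls smoothly from fully sampled at the center of $k$-space to roughly a quarter of full density at its edge.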
To mitigate undersampling artifacts in each 3D iNAV, reconstruction is performed with compressed sensing based parallel imaging using the state-of-the-art L\textsubscript{1}-ESPIRiT \cite{uecker2014espirit}: \begin{equation} \label{eq:problemOld} \underset{m}{\arg\min}\left\|DSm - y\right\|_2^2+\mu\left\|\Psi(m)\right\|_1 \end{equation} where $D$ is the NUFFT operator, $S$ contains the coil sensitivity maps, $m$ is the desired 3D iNAV, $y$ is the acquired non-Cartesian data, $\mu$ is the regularization parameter, and $\Psi$ is the wavelet transform. \begin{figure} \centering \includegraphics[width=\linewidth]{F1.png} \caption[] {(A) Our simulation framework begins with six different respiratory phases acquired in separate breath-holds using a fully sampled 4.4 mm cones trajectory. Motion information from this dataset is used as the ground truth. Different undersampled 3D iNAV configurations are synthesized from the fully sampled navigators, and the corresponding motion information is compared with the ground truth to determine the optimal design for the 3D iNAVs. (B) We investigate 3D iNAVs with four spatial resolutions. Acceleration factors (R) for different spatial resolutions with a variable-density cones sampling pattern range from 10.9 to 2.9. } \end{figure} \subsection*{Simulations} We previously developed 4.4 mm isotropic spatial resolution 3D iNAVs using a 32-readout VD cones trajectory generated with $f_1$ = 1 (corresponding to a fully sampled region), $k_1$ = 1.14 m\textsuperscript{-1} (1\% of $k_{max}$), $f_2$ = 0.27, and $p$ = 3.1. This corresponds to an acceleration factor of 10.9 compared to a fully sampled 3D cones trajectory requiring 349 readouts for 4.4 mm isotropic resolution. Prior work has demonstrated, however, that such a large acceleration factor when using an eight-channel coil can lead to blurring/smoothing effects or residual aliasing in compressed sensing reconstructions \cite{jaspan2015compressed}. 
The simulation framework described below aims to examine the influence of such reconstruction artifacts on the fidelity of motion estimates from 3D iNAVs. It additionally investigates potential benefits from reducing the spatial resolution of 3D iNAVs. This would decrease the necessary acceleration factor and thereby enhance the quality of compressed sensing reconstructions, which could improve the accuracy of the derived motion information. To develop the simulation framework, a volunteer was instructed to perform breath-holds in six respiratory phases from end-expiration to end-inspiration. For each respiratory phase, a fully sampled 4.4 mm spatial resolution 3D iNAV (349 readouts, TR = 5.5 ms, TE = 0.6 ms) was acquired with cardiac-gating (18 readouts per heartbeat, 99 ms temporal resolution) across 20 heartbeats. Because these six 3D iNAVs do not exhibit undersampling artifacts, the translational and nonrigid motion estimates obtained from them with respect to the end-expiration navigator serve as the ground truth (Figure 1(A)). Specifically, for each 3D iNAV, we first obtain 3D translational motion estimates. Rigid-body registration is performed with the MATLAB Image Processing Toolbox (The Mathworks, Natick, MA). Here, a 3D ellipsoidal mask covering the whole heart is prescribed and the mean-squared error within this mask is the similarity metric for registration. After the estimation of 3D translations and rigid-body alignment of 3D iNAVs, residual nonrigid motion in different respiratory phases is quantified using deformation fields, which are determined from diffeomorphic demons \cite{vercauteren2009diffeomorphic}. The choice of registration techniques for the estimation of translational and nonrigid motion mirrors those applied in the most recent work for processing 3D iNAVs \cite{luo2017nonrigid}. 
To analyze the effect of spatial resolution and undersampling, we generate a total of four VD cones trajectories with spatial resolutions of 4.4 mm, 5.4 mm, 6.4 mm, and 7.8 mm. In designing these trajectories, we fix the following parameters: FOV = 28x28x14 cm\textsuperscript{3}, number of readouts = 32, $f_1$ = 1, $p$ = 3.1, and $k_1$ = 1.14 m\textsuperscript{-1}. $k_{max}$ is prescribed to provide the desired spatial resolution and $f_2$ is modified to ensure each trajectory has 32 readouts (Figure 1(B)). Note that as the resolution of the trajectory decreases, $f_2$ (the sampling density in $k$-space periphery) increases, which results in smaller acceleration factors. $k$-space data for the different undersampled trajectories are computed from the fully sampled 4.4 mm data using a type 1 NUFFT (i.e., inverse gridding) (Figure 1(A)). Separate sensitivity maps for each respiratory phase are determined from the corresponding fully sampled acquisitions. These sensitivity maps are then used in the inverse gridding operation to generate multichannel $k$-space data for each respiratory phase. Following the synthesis of undersampled 3D iNAV data at varying spatial resolutions, reconstruction is performed with L\textsubscript{1}-ESPIRiT. For all reconstructions, the optimal regularization parameter is determined via a coarse-to-fine grid-based search that results in the lowest root-mean-squared error relative to the fully sampled image from which the non-Cartesian data was synthesized. For each of the four undersampled 3D iNAV configurations, translational and nonrigid motion estimates relative to the end-expiration phase are computed using the aforementioned approach. Note that prior to deriving motion information, zero-padding of $k$-space is performed so that all 3D iNAV configurations have an interpolated isotropic spatial resolution of 4.4 mm. 
To assess errors in translations, we evaluate the absolute difference in superior-inferior (SI), anterior-posterior (AP), and right-left (RL) displacements derived from any particular 3D iNAV configuration relative to displacements from the fully sampled 4.4 mm 3D iNAV. The voxel-by-voxel SI, AP, and RL components of the nonrigid deformation fields from the undersampled 3D iNAVs are analyzed in a similar fashion. As an example, to compare the SI components of the deformation fields corresponding to the end-inspiration phase between an undersampled 3D iNAV and the ground truth 3D iNAV, we first compute the voxel-by-voxel absolute difference. Then, we examine the mean absolute difference in SI estimates across voxels spanning the heart (as determined by an ellipsoidal mask). The same procedure is carried out for the AP and RL components, and the overall error analysis is repeated for deformation fields associated with the remaining respiratory phases. Beyond individually analyzing the nonrigid SI, AP, and RL components, we consider the voxel-by-voxel error magnitude as well. For a voxel, this is defined as the square root of the sum of squares of the voxel-level errors in SI, AP, and RL directions. For each undersampled 3D iNAV configuration, the mean error magnitude in voxels spanning the heart is computed for the different respiratory phases. Inaccuracies in nonrigid motion estimates from the synthesized undersampled data can be due to the (1) lower spatial resolution with respect to the ground truth or (2) blurring and residual aliasing following iterative reconstruction. To separate these two effects, we create fully sampled 5.4 mm, 6.4 mm, and 7.8 mm datasets by appropriately truncating the \textit{k}-space data of the fully sampled 4.4 mm 3D iNAVs. The mean error magnitude relative to the fully sampled 4.4 mm 3D iNAVs is computed as described above for each of these generated datasets. 
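The per-voxel error magnitude defined above (the root-sum-of-squares of the SI, AP, and RL errors, averaged within the heart mask) amounts to a few lines of array code. The sketch below is ours, not the authors' implementation, and the array names are hypothetical.

```python
import numpy as np

def mean_error_magnitude(d_est, d_ref, mask):
    """Mean per-voxel error magnitude between two deformation fields.

    d_est, d_ref : (nx, ny, nz, 3) arrays holding the SI, AP, and RL
    displacement components of each field; mask : boolean heart mask.
    """
    err = d_est - d_ref                        # per-component error
    mag = np.sqrt(np.sum(err ** 2, axis=-1))   # root-sum-of-squares per voxel
    return mag[mask].mean()                    # average over the heart mask
```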
By subtracting the error magnitude due to spatial resolution calculated here from the total error magnitude determined above, we isolate the influence of reconstruction artifacts in the 3D iNAVs on the extracted nonrigid motion information. \section*{Experiments} The undersampled 3D iNAV configuration that provided the highest accuracy motion estimates in simulation was compared with the previously applied 4.4 mm 3D iNAV design in six volunteer acquisitions. Each volunteer underwent two scans with the CMRA sequence shown in Supporting Information Figure S1. Respiratory motion in each scan was tracked with either the modified 3D iNAV configuration or the 4.4 mm 3D iNAV design. The order of the two scans was randomized. All scans were carried out on a 1.5 T whole-body GE scanner with maximum slew rate of 150 mT/m/ms and maximum gradient amplitude of 40 mT/m. Participants provided informed consent, and the institutional review board approved the complete scan protocol. The studies were performed using an eight-channel cardiac receive coil with cardiac triggering via a peripheral plethysmograph. A 3D cones trajectory with the following imaging parameters was utilized: TR = 5.5 ms; flip angle = 70\textdegree; bandwidth = 250 kHz; FOV = 28x28x14 cm\textsuperscript{3}; resolution = 1.2 mm isotropic. The total scan time across all subjects spanned 508 heartbeats and ranged from 7 to 10 minutes due to variations in heart rate. An autofocusing motion correction framework utilizing both translational and nonrigid estimates was applied to evaluate the motion information from the two different 3D iNAVs \cite{luo2017nonrigid}. The first step in this scheme entails the estimation of 3D translational motion from the beat-to-beat 3D iNAVs and subsequent correction of imaging data with linear phase terms. Following this, residual nonrigid motion in the rigid-body aligned 3D iNAVs is quantified using the deformation fields from diffeomorphic demons. 
k-Means clustering is then performed to group pixels with similar deformation fields over time into 32 clusters. Averaging of the deformation fields in each cluster generates a total of 32 localized 3D translational motion trajectories. For each localized translational estimate, the appropriate linear phase modulation is applied, and a candidate motion compensated image is reconstructed. From this collection of motion compensated images, a localized gradient entropy metric is used to assemble the final image on a voxel-by-voxel basis. Two board-certified cardiologists with experience in CMRA assessed variation in nonrigid motion correction outcomes using the 4.4 mm 3D iNAV and the modified 3D iNAV configuration. Thin-slab maximal intensity projection reformats of the right coronary artery (RCA) and left coronary artery (LCA) were generated with OsiriX (Pixmeo, Geneva, Switzerland). The two autofocusing reconstructions (one applying the 4.4 mm 3D iNAV and the other applying the modified 3D iNAV configuration) were randomized and presented together, and the blinded readers scored the proximal, medial, and distal segments of the RCA and LCA on a five-point scale: 5-Excellent, 4-Good, 3-Moderate, 2-Poor, 1-Non-diagnostic. Paired two-tailed Student's t-tests were applied to determine significance. \begin{figure} \centering \includegraphics[width=\linewidth]{F2.png} \caption[] {(A) The 3D translational displacements computed using the ground truth, fully sampled 4.4 mm 3D iNAV and the four undersampled 3D iNAVs exhibit similar trends across the respiratory phases (R1 = end-inspiration respiratory phase and R5 = respiratory phase closest to end-expiration). (B) The absolute difference in the RL, AP, and SI estimates from the undersampled 3D iNAVs relative to those from the fully sampled 3D iNAV presents small errors below 0.1 mm. This indicates that all the undersampled 3D iNAVs are comparable in tracking the translational motion of the heart induced by respiration. 
} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{F3.png} \caption[] {Compared to the inaccuracies observed in translational estimates, the mean errors (relative to the fully sampled 4.4 mm 3D iNAV) are larger in the RL, AP, and SI components of the nonrigid deformation fields from the four undersampled 3D iNAVs, as shown in (A) for different respiratory phases (R1 = end-inspiration respiratory phase and R5 = respiratory phase closest to end-expiration). With the exception of the RL error in respiratory phase R4, the 6.4 mm 3D iNAV consistently provides higher accuracy nonrigid motion estimates than the 4.4 mm 3D iNAV. This trend is more apparent in the error magnitude shown in (B) for the undersampled 3D iNAVs. Across all respiratory phases and undersampled 3D iNAV configurations, the 6.4 mm 3D iNAV results in the lowest errors. The errors due to spatial resolution (determined by comparing fully sampled navigators at different resolutions to the ground truth 4.4 mm 3D iNAV) and residual aliasing/blurring (computed by subtracting the spatial resolution error from the error magnitude) effects following L\textsubscript{1}-ESPIRiT combine to give the highest accuracy nonrigid estimates for the 6.4 mm 3D iNAV. } \end{figure} \section*{Results} Figure 2(A) presents 3D translations computed with the fully sampled 3D iNAV and the four undersampled 3D iNAV configurations in simulation. Absolute differences in translations from the undersampled 3D iNAVs relative to those from the ground truth 3D iNAV are shown in Figure 2(B). As is evident, the undersampled 3D iNAVs perform similarly to one another for the estimation of translational displacements. Moreover, all the errors in translations are below 0.1 mm. The average errors in the nonrigid deformation fields from the different undersampled 3D iNAVs are highlighted in Figure 3(A). 
For the RL component, with the exception of one respiratory phase, the 6.4 mm 3D iNAV presents lower errors than the 4.4 mm 3D iNAV applied in prior work. A similar trend is observed for the AP component, where the 6.4 mm 3D iNAV consistently outperforms the 4.4 mm 3D iNAV. In the SI component, for four respiratory phases, the 6.4 mm 3D iNAV yields the smallest errors among the assessed 3D iNAV configurations. The mean error magnitude combining the individual RL, AP, and SI errors accentuates the observed patterns (Figure 3(B)). Here, across all the respiratory phases, the 6.4 mm 3D iNAV exhibits the lowest error magnitude. Note that in the case of fully sampled datasets at different resolutions, it is the 4.4 mm 3D iNAV that exhibits the lowest error magnitude. However, in the case of an undersampled 4.4 mm 3D iNAV with large scan acceleration (R = 10.9), the benefit from high spatial resolution for motion tracking is offset by the error contribution from residual aliasing and blurring/smoothing effects following L\textsubscript{1}-ESPIRiT. The simulation suggests that an undersampled 6.4 mm 3D iNAV (R = 4.2) balances the tradeoff between (1) improved motion tracking with high spatial resolution navigators and (2) compromised motion tracking in the presence of reconstruction artifacts due to aggressive scan acceleration. \begin{figure} \centering \includegraphics[width=\linewidth]{S2.png} \caption* {Supporting Information Figure S2: The synthesized undersampled 4.4 mm 3D iNAV with an acceleration factor of 10.9 exhibits residual aliasing as well as blurring and smoothing effects that skew the quantification of nonrigid motion information. The undersampled 6.4 mm 3D iNAV requires a lower acceleration factor (4.2), which lessens the severity of artifacts following reconstruction. 
Dotted circles in the different respiratory phases highlight regions in which structure is similarly depicted in the ground truth 3D iNAV and undersampled 6.4 mm 3D iNAV, but poorly seen in the undersampled 4.4 mm 3D iNAV due to reconstruction artifacts. } \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{S3.png} \caption* {Supporting Information Figure S3: The voxel-by-voxel distribution in error magnitude for two respiratory phases (R1 = end-inspiration respiratory phase and R5 = respiratory phase closest to end-expiration) demonstrates the advantages of using a lower resolution 3D iNAV. For both respiratory phases, the 6.4 mm 3D iNAV has larger pixel counts near smaller errors compared to the 4.4 mm 3D iNAV. This trend is accentuated in R1, which exhibits more nonrigid deformations than R5 since the end-expiration respiratory phase is the reference frame. } \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{F4.png} \caption[] {Reformatted MIP of the RCA with autofocusing motion correction using the 4.4 mm 3D iNAV (left) and 6.4 mm 3D iNAV (right) for three subject studies. In Subject 1 and Subject 2, applying the motion information from the 6.4 mm 3D iNAV better delineates the RCA compared to utilizing the 4.4 mm 3D iNAV. Subject 6 is a case where the depiction of the different RCA segments is similar between the two approaches. White arrows indicate regions of notable differences between autofocusing using the 4.4 mm 3D iNAV and 6.4 mm 3D iNAV. } \end{figure} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{F5.png} \caption[] {LCA with autofocusing motion correction using the 4.4 mm 3D iNAV (left) and 6.4 mm 3D iNAV (right). As seen with the RCA, the 6.4 mm 3D iNAV presents the LCA in an enhanced manner relative to the 4.4 mm 3D iNAV in Subject 1 and Subject 2. The LCA is visualized with equivalent detail in Subject 6 irrespective of the 3D iNAVs used for motion tracking.
Differences in LCA sharpness are highlighted by white arrows. } \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{F6.png} \caption[] {(A) The average scores of the RCA and LCA across both readers and three coronary segments in six volunteers when using the 4.4 mm 3D iNAV and 6.4 mm 3D iNAV for autofocusing. (B) The mean RCA and LCA reader scores across all volunteers for nonrigid correction outcomes when applying the two 3D iNAVs. The statistical significance of the results for reader scores was P $<$ 0.05 for the RCA as well as the LCA when using the two-tailed Student's t-test. } \end{figure} Supporting Information Figure S2 presents all the respiratory phases for the fully sampled 4.4 mm 3D iNAV, undersampled 4.4 mm 3D iNAV, and undersampled 6.4 mm 3D iNAV. Relative to the undersampled 4.4 mm 3D iNAV, the undersampled 6.4 mm 3D iNAV exhibits improved depiction of various structures in the heart. To further substantiate the better performance of the 6.4 mm 3D iNAV compared to the 4.4 mm 3D iNAV in simulations, Supporting Information Figure S3 shows a histogram of the error magnitude in voxels spanning the heart for two sample respiratory phases. In both phases, the distribution of error magnitude is centered around smaller errors for the 6.4 mm 3D iNAV relative to the 4.4 mm 3D iNAV. The beat-to-beat 4.4 mm 3D iNAVs and 6.4 mm 3D iNAVs obtained from the six volunteer studies corroborate the trends seen in simulations (Supporting Information Video S1). Specifically, across all volunteers, the 6.4 mm 3D iNAVs exhibit less residual aliasing and fewer blurring/smoothening artifacts following L\textsubscript{1}-ESPIRiT. For example, in Subject 1, the boundary between the apex of the heart and diaphragm as well as the separation of the ventricles of the heart are best delineated with the 6.4 mm 3D iNAVs.
Such structural features can enhance the performance of nonrigid image registration techniques for deriving motion information \cite{sroubek2004registration, vandewalle2006frequency}. Reformatted oblique thin-slab MIP images depicting the RCA for three volunteer studies are shown in Figure 4. For Subject 1 and Subject 2, autofocusing correction with the 6.4 mm 3D iNAVs results in enhanced depiction of the medial RCA segment compared to correction with the 4.4 mm 3D iNAVs. In the case of Subject 6, the RCA exhibits equivalent sharpness regardless of the applied 3D iNAVs. Similar trends are observed for the LCA in Figure 5. While the medial segment of the LCA is better visualized when using the 6.4 mm 3D iNAVs in place of the 4.4 mm 3D iNAVs for Subject 1 and Subject 2, a difference in sharpness is not observed in the LCA of Subject 6. Figure 6 showcases the results of the qualitative reader studies. When using the 4.4 mm 3D iNAV for autofocusing correction, the average score for the RCA and LCA is 3.17 and 3.33, respectively, across both readers, all segments, and all subjects. Applying the 6.4 mm 3D iNAV results in scores of 3.50 and 3.81. Observed discrepancies in scores between motion correction with the 4.4 mm 3D iNAVs and 6.4 mm 3D iNAVs are statistically significant with P $<$ 0.05. The inter-reader correlation coefficients for the RCA and LCA are 0.83 and 0.61, respectively. \section*{Discussion} In this work, we examined the accuracy of translational and nonrigid motion estimates offered by undersampled 3D iNAVs acquired with a VD cones trajectory. This analysis was performed using a novel simulation framework for investigating motion estimation errors in CMRA. To obtain ground truth motion information, fully sampled, breath-held 4.4 mm 3D iNAV datasets were collected at several respiratory phases.
Then, different undersampled 3D iNAV configurations with spatial resolutions ranging from 4.4 mm to 7.8 mm and scan acceleration factors between 10.9 and 2.9 were generated from the fully sampled data. While translational motion estimates from the undersampled 3D iNAVs strongly correlated with those from the fully sampled 3D iNAVs, the nonrigid motion estimates exhibited large errors. Notably, we found that the undersampled 3D iNAV with the highest spatial resolution (4.4 mm) did not provide the most accurate motion information. This is because the undersampled 4.4 mm 3D iNAV also has the largest associated scan acceleration factor (R = 10.9), leading to artifacts such as aliasing and blurring/smoothening effects after iterative reconstruction. We demonstrated that the 6.4 mm 3D iNAV, while lower in resolution, provides higher fidelity nonrigid motion estimates, as it achieves a sufficient spatial resolution with a moderate acceleration factor (R = 4.2). The simulation framework developed in this study does not incorporate several important considerations. First, the sensitivity maps used to generate multichannel, undersampled \textit{k}-space data are the same maps applied in L\textsubscript{1}-ESPIRiT to reconstruct 3D iNAVs. In practice, errors exist in the sensitivity maps for L\textsubscript{1}-ESPIRiT, which might worsen the accuracy of motion information from 3D iNAVs. Second, note that the 3D iNAVs are acquired as part of a bSSFP sequence. Thus, higher resolution 3D iNAVs will experience more eddy current artifacts, which we do not study with our simulations \cite{malave2019whole}. This suggests, however, that the improvements of the 6.4 mm 3D iNAVs over 4.4 mm 3D iNAVs are likely even larger than indicated by our simulations. Third, the effect of subject size on 3D iNAV quality is not evaluated in the current work.
Specifically, while the VD cones trajectories for all undersampled 3D iNAVs are designed with a nominal FOV of 28x28x14 cm\textsuperscript{3}, the effective FOV is smaller for the 4.4 mm 3D iNAV than the 6.4 mm 3D iNAV. This is because of the 2.6x (= 10.9/4.2) greater scan acceleration factor for the 4.4 mm 3D iNAV compared to the 6.4 mm 3D iNAV. As a result, in the case of large subjects, the aliasing due to undersampling will be more pronounced for the 4.4 mm 3D iNAV relative to the 6.4 mm 3D iNAV. Consequently, the presence of reconstruction artifacts following L\textsubscript{1}-ESPIRiT might be more exaggerated in the higher resolution 3D iNAV. Further study is warranted to examine the influence of subject size on motion tracking accuracy with undersampled 3D iNAVs, as the simulation is based on fully sampled data from a single volunteer. Lastly, the ability of this volunteer to perform breath-holds at six respiratory phases could impact the conclusions from the simulation. However, as long as there are several unique respiratory positions, it should nevertheless serve as a sufficient proxy for evaluating the nonrigid motion estimation performance of different 3D iNAV configurations. Despite the simplifications in our simulation, the findings from it correctly guided us in our experimentation. In the six volunteer studies, the 6.4 mm 3D iNAVs improved the depiction of cardiac structure compared to the 4.4 mm 3D iNAVs. Accordingly, nonrigid autofocusing correction of free-breathing CMRA data yielded sharper coronary vessels with the lower resolution 3D iNAVs. Assessment by two cardiologists validated these trends. The L\textsubscript{1}-ESPIRiT technique applied in this work utilizes spatial regularization alone. Temporal regularizers such as total variation or low rank constraints across the navigator frames might further mitigate aliasing artifacts.
Temporal constraints may improve reconstruction quality, but their ability to retain motion information has not been studied in a quantitative manner. Therefore, we did not incorporate these additional regularizers in our approach to reconstruct 3D iNAVs. Note also that parallel imaging and compressed sensing reconstructions are performed here using an eight-channel cardiac coil. A larger number of channels will enable greater accelerations for acquiring 3D iNAVs. The simulation pipeline developed in this work can readily be applied in contexts involving coils with additional elements to understand the impact of scan acceleration on the derived motion information. By leveraging coils with several channels alongside reconstruction schemes with a combination of regularizers, 3D iNAVs can potentially be directly derived from the high-resolution imaging data. \section*{Conclusion} We have analyzed the effect of spatial resolution and scan acceleration on the fidelity of respiratory motion tracking using 3D iNAVs for CMRA. Through simulations, we determined that a higher spatial resolution 3D iNAV, if fully sampled, results in better motion estimates. However, with undersampling, the advantages associated with high spatial resolution motion tracking are offset by the presence of artifacts following iterative reconstruction. In light of this, we found that an undersampled 4.4 mm 3D iNAV (R = 10.9) yielded lower accuracy nonrigid motion information than an undersampled 6.4 mm 3D iNAV (R = 4.2). \textit{In vivo} CMRA studies presenting sharp autofocusing motion correction outcomes demonstrated a capability for monitoring motion with improved fidelity using the 6.4 mm 3D iNAV in place of the 4.4 mm 3D iNAV.
\section*{Abstract} \setlength{\parindent}{0in} Purpose: To study the accuracy of motion information extracted from beat-to-beat 3D image-based navigators (3D iNAVs) collected using a variable-density cones trajectory with different combinations of spatial resolutions and scan acceleration factors. Methods: Fully sampled, breath-held 4.4 mm 3D iNAV datasets for six respiratory phases are acquired in a volunteer. Ground truth translational and nonrigid motion information is derived from these datasets. Subsequently, the motion estimates from synthesized undersampled 3D iNAVs with isotropic spatial resolutions of 4.4 mm (acceleration factor = 10.9), 5.4 mm (acceleration factor = 7.2), 6.4 mm (acceleration factor = 4.2), and 7.8 mm (acceleration factor = 2.9) are assessed against the ground truth information. The undersampled 3D iNAV configuration with the highest accuracy motion estimates in simulation is then compared with the originally proposed 4.4 mm undersampled 3D iNAV in six volunteer studies. Results: The simulations indicate that for navigators beyond certain scan acceleration factors, the accuracy of motion estimates is compromised due to errors from residual aliasing and blurring/smoothening effects following compressed sensing reconstruction. The 6.4 mm 3D iNAV achieves an acceptable spatial resolution with a small acceleration factor, resulting in the highest accuracy motion information among all assessed undersampled 3D iNAVs. Reader scores for six volunteer studies demonstrate superior coronary vessel sharpness when applying an autofocusing nonrigid correction technique using the 6.4 mm 3D iNAVs in place of 4.4 mm 3D iNAVs. Conclusion: Undersampled 6.4 mm 3D iNAVs enable motion tracking with improved accuracy relative to previously proposed undersampled 4.4 mm 3D iNAVs. \setlength{\parindent}{0in} {\bf Key words: 3D navigators, coronary angiography, motion correction} \newpage
\section{Phonon emission rates and carrier dephasing} Here we consider carrier dephasing in Bloch-oscillating moir\'e superlattices due to phonon emission. Since the width of moir\'e bands is considerably smaller than the optical phonon energies, the electron-phonon interactions are dominated by coupling to acoustic phonons. We show that the acoustic phonon emission rates can be tuned in a wide range by two independent knobs---the width of the moir\'e band, controlled by the twist angle, and the in-plane electric field. The bandwidth impacts phonon emission through the density of states; phonon emission is suppressed when the twist angle is tuned away from the magic flat-band value. The electric field suppresses the emission rate by creating a discrete electron energy spectrum. As a result, phonon emission is suppressed as the field increases and the system enters the Bloch-oscillating state. Importantly, the threshold field for this suppression is relatively low, such that Bloch oscillations can be induced in the free-carrier regime, avoiding Wannier-Stark (WS) localization on a superlattice scale. We start by writing down the full Hamiltonian, which contains the free-particle parts for electrons and phonons, and an electron-phonon interaction term, here taken in the deformation-potential form: \begin{equation} \label{eq:H_total} H = H_{\rm el} + H_{\rm ph} + H_{\rm el-ph} . \end{equation} The electrons are described by a tight-binding model on a two-dimensional superlattice with a linear potential due to the electric field: \begin{equation} \label{eq:H_e} H_{\rm el} = -\sum_{\left\langle \vec n \vec n' \right\rangle} J c^\dagger_{\vec n} c_{\vec n'} - \sum_{\vec n} ea\vec E\cdot \vec n\, c^\dagger_{\vec n} c_{\vec n} , \end{equation} where $a$ is the superlattice period, and $\vec n=(n_x,n_y)$ with integer $n_x$ and $n_y$ are the discrete coordinates that label the superlattice sites. Here, for simplicity, we model the superlattice as a square lattice.
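As an illustrative sanity check of the spectrum generated by Eq.\eqref{eq:H_e} (a sketch in arbitrary units with $\hbar=e=1$; the parameter values below are hypothetical and not taken from the estimates in the text), diagonalizing the one-dimensional analog of this Hamiltonian produces the equally spaced WS ladder with level spacing $eEa$:

```python
import numpy as np

# Sketch: 1D tight-binding chain with hopping J in a linear potential -eEa*n.
# Bulk eigenvalues form a Wannier-Stark ladder with uniform spacing eEa.
# (Illustrative parameters; hbar = e = 1, arbitrary units.)
N, J, eEa = 201, 1.0, 0.5
H = np.diag(-eEa * np.arange(N, dtype=float))
H += np.diag(-J * np.ones(N - 1), 1) + np.diag(-J * np.ones(N - 1), -1)
levels = np.linalg.eigvalsh(H)                 # returned in ascending order
gaps = np.diff(levels[N // 4: 3 * N // 4])     # bulk levels, away from chain edges
print(gaps.mean(), gaps.std())                 # mean ~ eEa, negligible spread
```

The WS states are localized over roughly $4J/(eEa)$ sites, so for a chain much longer than this scale the bulk level spacing reproduces the ladder to machine precision.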
The electric field is applied along a general direction; the quantities $c_{\vec n}^\dagger$ and $c_{\vec n}$ are the creation and annihilation operators describing carriers on the lattice. Fermions in the 2D continuum are described by superpositions of different Wannier orbitals, \begin{equation}\label{eq:phi_W_c} \psi_{\vec r}=\sum_{\vec n}W(\vec r-\vec n a) c_{\vec n} ,\quad \psi^\dagger_{\vec r}=\sum_{\vec n}W^*(\vec r-\vec n a) c_{\vec n}^\dagger , \end{equation} where $W(\vec r-\vec n a)$ are Wannier orbitals centered at the superlattice nodes. Next we introduce the eigenstates of the electron Hamiltonian Eq.\eqref{eq:H_e}, denoting them as $\phi_{\vec n}(\vec r)$. As always for the WS ladder problem, the analysis is simplest in the momentum representation, $\phi_{\vec n}(\vec p)=\int d^2r\, e^{i\vec p\vec r} \phi_{\vec n}(\vec r)$. Indeed, in the momentum representation the Schr\"odinger equation turns into a first-order ODE which can be solved explicitly. The states in the 2D continuum are then given by a convolution of the on-lattice states and Wannier orbitals, Eq.\eqref{eq:phi_W_c}. Accordingly, carrying out the analysis yields the momentum-space wavefunctions $\phi_{\vec n}(\vec p)$ given by products of the WS ladder wavefunctions $\Psi(\vec n, \vec p)$ and the Wannier-orbital formfactors $W(\vec p)=\int d^2r\, e^{i\vec p\vec r} W(\vec r)$: \begin{equation} \phi_{\vec n}(\vec p) = \Psi (\vec n, \vec p)W(\vec p) . \label{eq:eigenstate_wavefunction} \end{equation} Below, for simplicity, we use a Gaussian model for the quantities $W(\vec p)$, \begin{equation} W(\vec r) = \frac{1}{\sqrt{\pi} \xi} e^{-\frac{r^2}{2\xi^2}},\quad W(\vec p) = \left( 2\pi\right) ^{\frac{1}{2}}\xi e^{-p^2\xi^2/2}, \end{equation} where $\xi$ defines the Wannier orbital radius.
It will be convenient to factorize the Gaussian dependence as $ W(\vec p) = w(p_x)w(p_y) $, where $\vec p = (p_x,p_y)$ and \begin{equation} w(p_{j}) = (2\pi)^{\frac{1}{4}}\xi^{\frac{1}{2}} e^{-p_{j}^2\xi^2/2},\quad j=x,y . \end{equation} Importantly, the WS ladder wavefunction $\Psi(\vec n,\vec p)$ can also be brought to a separable form for an electric field $\vec E = (E_x, E_y)$ applied in a generic incommensurate direction: \begin{equation} \Psi (\vec n, \vec p) = \psi_x (n_x, p_x)\psi_y (n_y,p_y), \end{equation} with the factors $\psi_x (n_x, p_x)$ and $\psi_y (n_y,p_y)$ given by \begin{eqnarray}\label{eq:WS_wavefunction_x} &\psi_{j} (n_{j},p_{j}) = e^{-i F^{\left( j\right)}_{n_{j}}(p_{j})}, \quad j=x,y \\ & F^{\left( j\right)}_{n_j}(p_j) = \frac{2J}{eE_{j} a} \sin p_{j} a - n_{j}p_{j} a. \label{eq:Wannier_function_xy} \end{eqnarray} This yields a separable representation for the full momentum-space wavefunctions in Eq.\eqref{eq:eigenstate_wavefunction}. We note parenthetically that a more complicated treatment is required when an electric field is applied in a commensurate direction. In this case, instead of a two-dimensional ladder, the WS problem yields a one-dimensional ladder, in which each level represents a one-dimensional band describing a particle moving perpendicular to the electric field. Here, for simplicity, we focus on the case of the field applied in a generic incommensurate direction. Next, we introduce phonons and the electron-phonon coupling. We model the acoustic phonons by the continuum Hamiltonian \begin{equation} H_{\rm ph} = \int \frac{d^2 q}{(2\pi)^2} \hbar \omega_{\vec q} a_{\vec q}^\dagger a_{\vec q}, \quad \omega_{\vec q} = sq , \end{equation} where $s$ is the speed of sound, and the momenta $\vec q$ form a continuum extending beyond the superlattice Brillouin zone. This model accounts for the presence of phonon modes with wavelengths that can be either shorter or longer than the superlattice periodicity $a$.
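As a quick numerical illustration (a sketch with $\hbar=e=1$ and arbitrary, hypothetical parameter values), one can verify that the WS factors $\psi_j(n_j,p_j)=e^{-iF^{(j)}_{n_j}(p_j)}$ are orthonormal over one Brillouin zone: in the difference $F^{(j)}_{n_j}-F^{(j)}_{n'_j}$ the $\sin$ terms cancel, leaving a pure plane-wave phase whose zone average is a Kronecker delta.

```python
import numpy as np

# Orthonormality check of the 1D Wannier-Stark factors psi_n(p) = exp(-i F_n(p)),
# F_n(p) = (2J/(eEa)) sin(p a) - n p a, over one Brillouin zone:
# (a/2pi) * int dp conj(psi_n) psi_m = delta_{nm}.  (Illustrative units, hbar = e = 1.)
J, eEa, a = 1.0, 0.3, 1.0
p = np.linspace(-np.pi / a, np.pi / a, 4096, endpoint=False)  # uniform BZ grid

def F(n, p):
    return (2 * J / eEa) * np.sin(p * a) - n * p * a

def overlap(n, m):
    # Brillouin-zone average of conj(psi_n) * psi_m = exp(i (F_n - F_m))
    return np.mean(np.exp(1j * (F(n, p) - F(m, p))))

print(abs(overlap(2, 2)))   # ~1: normalization
print(abs(overlap(2, 5)))   # ~0: orthogonality
```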
The electrons and phonons interact through the deformation potential coupling: \begin{eqnarray}\label{eq:H_el_ph_1} &H_{\rm el-ph} = \int d^2 r D \vec \nabla \cdot \vec u(\vec r) \psi_{\vec r}^\dagger \psi_{\vec r}, \\ \nonumber &u(\vec r) = \sum_k \sqrt{\frac{2\hbar}{\rho_0 \omega_{\vec k }}}\left[ a_k(t)e^{i\vec k\vec r}+ a_k^\dagger(t) e^{-i\vec k\vec r}\right], \end{eqnarray} where $\rho_0$ is the atomic mass density, $a_k(t) =a_k e^{-i\omega_{\vec k} t}$, and $\psi_{\vec r}^\dagger $ and $\psi_{\vec r}$ are the creation and annihilation operators of an electron at a continuum position $\vec r$ defined above. To proceed with the analysis we rewrite the continuum electron-phonon coupling, Eq.\eqref{eq:H_el_ph_1}, in the basis of eigenstates found above, Eq.\eqref{eq:eigenstate_wavefunction}. This gives \begin{equation}\label{eq:H_el_ph_2} H_{\rm el-ph} = \sum_{\vec n,\vec n'} \tilde c_{\vec n}^\dagger \tilde c_{\vec n'}\left\langle \vec n| D \vec \nabla \cdot \vec u|\vec n'\right\rangle , \end{equation} where $\tilde c_{\vec n}$ and $\tilde c_{\vec n}^{\dagger}$ denote fermion operators for the eigenstates in Eq.\eqref{eq:eigenstate_wavefunction}, and $|\vec n\rangle$ is a short-hand notation for these states. Accordingly, the matrix element in Eq.\eqref{eq:H_el_ph_2} equals \begin{equation} \left\langle \vec n| D \vec \nabla \cdot \vec u|\vec n'\right\rangle=\int d^2 r \phi^*_{\vec n}(\vec r) D\vec \nabla \cdot \vec u(\vec r) \phi_{\vec n'}(\vec r) . \end{equation} Starting from the electron-phonon Hamiltonian Eq.\eqref{eq:H_el_ph_2} and Fermi's golden rule, we express the phonon emission rate by a carrier transitioning from a state $\left. |\vec n\right\rangle$ to a state $\left.
|\vec n-\vec m\right\rangle$ as \begin{widetext} \begin{equation} \gamma = \frac{2\pi}{\hbar} \frac{2D^2\hbar }{\rho_0 s V} {\sum_{\vec m}}' \sum_{\vec q} |q| \left|\int d^2 \vec r e^{i\vec q \vec r}\overline \phi_{\vec n-\vec m}(\vec r)\phi_{\vec n}(\vec r)\right|^2 \delta(\hbar s|q| - ea \vec m\cdot \vec E )\label{eq:emission_rate_1} \end{equation} \end{widetext} where $\vec q$ and $\hbar s |q|$ are the phonon momenta and energies, and $\sum_{\vec m}'$ is the summation over all Bravais lattice vectors $\vec m$ that satisfy the condition for phonon emission, $\vec m\cdot \vec E > 0$. Here we ignore phonon occupation numbers, assuming that the electron temperature is much higher than the lattice temperature. Next, we evaluate the matrix elements in Eq.\eqref{eq:emission_rate_1}. In the general form given above the overlap integrals are rather cumbersome. However, the task of evaluating the overlap integrals can be simplified by employing an approximation of a small Wannier orbital radius, $\xi\ll a$. We start by plugging Eqs.\eqref{eq:eigenstate_wavefunction} and \eqref{eq:Wannier_function_xy} into Eq.\eqref{eq:emission_rate_1}, \begin{widetext} \begin{equation}\label{eq:emission_rate_2} \gamma = \frac{4\pi}{\hbar} \frac{D^2}{\rho_0 s^2 } {\sum_{\vec m}}' \int \frac{|q| dq_xdq_y}{(2\pi)^2} \left| \sum_{p_x} \overline \psi_{0} \left( p_x+\frac{q_x}{2} \right) \psi_{m_x} \left( p_x-\frac{q_x}{2} \right) \overline{w} \left( p_x+\frac{q_x}{2} \right) w \left( p_x-\frac{q_x}{2} \right) \right|^2 \times \left|\sum_{p_y}...
\right|^2 \delta(|q| - Q_{\vec m} ) \end{equation} where \begin{equation}\label{eq:Q_m} Q_{\vec m} = \frac{ea \vec m\cdot \vec E}{\hbar s}=\frac{e \pi \vec m\cdot \vec E}{\omega_*} ,\quad \omega_*=\frac{\pi \hbar s}{a} , \end{equation} \end{widetext} is the emitted phonon momentum, $\omega_*$ is the superlattice Debye frequency, and the quantity $|\sum_{p_y}...|^2$ is identical to $|\sum_{p_x}...|^2$ up to a replacement $p_x,q_x,E_x,m_x\rightarrow p_y,q_y,E_y,m_y$. Now, we evaluate the term $|\sum_{p_x}...|^2$ in this expression. Plugging Eq.\eqref{eq:WS_wavefunction_x}-\eqref{eq:Wannier_function_xy} into Eq.\eqref{eq:emission_rate_2} yields \begin{eqnarray}\label{eq:emission_rate_3} & \left|\sum_{p_x} ...\right|^2 = \left| \sum_{p_x} G(p_x) |w(p_x)|^2\right|^2 e^{-q_x^2\xi^2/2} \\ \nonumber & \approx \left| \left\langle G\right\rangle\sum_{p_x}|w(p_x)|^2\right|^2 e^{-q_x^2\xi^2/2} =|\left\langle G\right\rangle|^2 e^{-q_x^2\xi^2/2} \end{eqnarray} Here $G(p_x)$ denotes the function \begin{equation} e^{i F^{\left( x\right)}_{0}(p_x+q_x/2) -i F^{\left( x\right)}_{m_x}(p_x- q_x/2) } \end{equation} which is periodic in $p_x$ with the period $2\pi/a$. We evaluate the quantity in Eq.\eqref{eq:emission_rate_3} using that the period of $G(p_x)$ is much smaller than the width of $|w(p_x)|^2=(2\pi)^{1/2}\xi e^{-\xi^2 p_x^2}$, namely $\pi/a\ll 1/\xi$. Accordingly, we replace $G(p_x)$ by its average value over the period and carry out the integration over $p_x$ using $\sum_{p_x} |w(p_x)|^2=1$.
Evaluating the average $\left\langle G(p_x)\right\rangle =\frac{a}{2\pi}\int_{-\pi/a}^{\pi/a} dp_x G(p_x)$ gives a Bessel function \begin{equation} \left\langle G(p_x)\right\rangle = e^{-im_x q_x a/2} J_{m_x}\left( \frac{4J\sin\left( \frac{q_x a}{2}\right) }{eE_xa}\right) \end{equation} Applying the same approach to the integral over $p_y$ in Eq.\eqref{eq:emission_rate_2} yields a closed-form expression \begin{widetext} \begin{equation} \gamma \sim \frac{4\pi}{\hbar} \frac{D^2}{\rho_0 s^2 } {\sum_{\vec m}}' \int \frac{q^2 dqd\theta }{(2\pi)^2} \left|J_{m_x}\left(\frac{4J}{eE_xa}\sin\left( \frac{q a \cos\theta }{2}\right)\right) \right|^2\left|J_{m_y}\left(\frac{4J}{eE_ya}\sin\left( \frac{q a \sin\theta }{2}\right)\right) \right|^2 e^{-q^2\xi^2/2} \delta(|q| - Q_{\vec m} ) . \label{eq:emission_rate_ultimate_form} \end{equation} \end{widetext} This expression, which was derived in the limit $a\gg\xi$, is reasonably accurate for the practically interesting parameter range $a\gtrsim\xi$. \begin{figure}[b] \includegraphics[width=0.48\textwidth]{field_dependent_emission_rate.png} \centering \caption{The field dependence of phonon emission rate obtained from Eq.\eqref{eq:emission_rate_ultimate_form} for several different bandwidth values, and typical moir\'e graphene parameter values given in the text. In the green shaded region the Bloch oscillations are underdamped, $\gamma<\omega_B=eEa/\hbar$; in the white region the oscillations are overdamped, $\gamma>\omega_B$. The field orientation is incommensurate relative to the superlattice, such that $E_x/E_y=1.618$. The suppression of emission rate under increasing bandwidth and growing electric field is a generic behavior expected to remain valid for other incommensurate electric field orientations. }\label{fig:emission_rate} \end{figure} The emission rate in Eq.\eqref{eq:emission_rate_ultimate_form} shows an interesting behavior as a function of system parameters.
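The Brillouin-zone average that produces the Bessel function can be spot-checked numerically. The sketch below uses $\hbar=e=a=1$ and arbitrary, hypothetical parameter values; only the modulus is compared, since the overall phase depends on sign conventions.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# Check |<G(p_x)>| = |J_m(4J sin(q a/2)/(e E a))| for the periodic phase factor
# G(p) = exp(i [F_0(p + q/2) - F_m(p - q/2)]).  Illustrative units, hbar = e = 1.
J, Ea, a = 1.0, 0.4, 1.0      # Ea stands for the product E_x * a (hypothetical value)
m, q = 3, 1.7

def F(n, p):
    return (2 * J / Ea) * np.sin(p * a) - n * p * a

def G(p):
    return np.exp(1j * (F(0, p + q / 2) - F(m, p - q / 2)))

re = quad(lambda p: G(p).real, -np.pi / a, np.pi / a, limit=200)[0]
im = quad(lambda p: G(p).imag, -np.pi / a, np.pi / a, limit=200)[0]
avg = (a / (2 * np.pi)) * (re + 1j * im)
bessel = jv(m, (4 * J / Ea) * np.sin(q * a / 2))
print(abs(avg), abs(bessel))   # the two moduli agree
```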
Crucially, it is sharply suppressed when either the bandwidth $J$ or the electric field $E$ increases. These quantities can therefore serve as knobs to tune $\gamma$ and thereby control the Bloch-oscillating carrier dephasing. The suppression of phonon emission in these two cases is governed by very different mechanisms. The impact of the bandwidth on $\gamma$ can be understood in terms of the density of electronic states, which controls the emission rate and decreases inversely with $J$. The dependence $\gamma$ vs. $E$ is fairly complicated due to the oscillatory character of the Bessel functions. The general trend, however, is simple to understand by noting that the energy spacings in the two-dimensional WS ladder grow as $E$ increases. As a result, the energies of different WS states are tuned out of resonance; this detuning suppresses phonon-mediated transitions. The suppression of phonon emission becomes exponential at $E$ much larger than the threshold value set by the maximal energy of phonons emitted through this process, $eEa\gg\omega_{\rm max}\approx\omega_*\xi/a$. We illustrate the suppression of $\gamma$ in Fig.\ref{fig:emission_rate}, which shows the emission rate obtained from Eq.\eqref{eq:emission_rate_ultimate_form} for an electric field set to a generic direction. Numerical values for other quantities are chosen to mimic a MATBG bandstructure: the superlattice period $a=10\,\rm{nm}$, the Wannier function radius $\xi=0.5 a$. For these values, the superlattice Debye frequency in Eq.\eqref{eq:Q_m} is $\omega_{*} = 1\,\rm{meV}$. For the electron-phonon coupling we use the graphene monolayer deformation potential $D=20\,\rm{eV}$ and graphene mass density $\rho_0=7.6\times 10^{-8}\,\rm{g}/\rm{cm}^2$. The above analysis, carried out for a square lattice tight-binding model, predicts a behavior of phonon emission that we expect to remain qualitatively valid for other types of superlattices, in particular the moir\'e graphene superlattices.
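The field-induced suppression described here can be reproduced schematically by carrying out the angular integral that remains in Eq.\eqref{eq:emission_rate_ultimate_form} after the delta function fixes $q=Q_{\vec m}$. The sketch below drops all prefactors and uses $\hbar=e=a=1$ with illustrative, hypothetical parameter values (unrelated to the MATBG numbers quoted above); it only exhibits the trend that the rate drops at strong fields.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# Relative emission rate: the delta function collapses the q-integral at q = Q_m,
# leaving an angular integral for each reciprocal-lattice vector m = (mx, my).
# Illustrative units (hbar = e = a = 1); overall prefactors are dropped.
J, xi = 0.05, 0.5
hbar_s = 0.3                  # hbar times the sound velocity (hypothetical value)

def rate(E, mmax=4):
    Ex, Ey = E, E / 1.618     # incommensurate field orientation, as in the figure
    total = 0.0
    for mx in range(-mmax, mmax + 1):
        for my in range(-mmax, mmax + 1):
            Q = (mx * Ex + my * Ey) / hbar_s     # emitted phonon momentum Q_m
            if Q <= 0:                           # emission requires m . E > 0
                continue
            f = lambda th: (jv(mx, (4 * J / Ex) * np.sin(Q * np.cos(th) / 2)) ** 2
                            * jv(my, (4 * J / Ey) * np.sin(Q * np.sin(th) / 2)) ** 2)
            ang = quad(f, 0, 2 * np.pi, limit=200)[0]
            total += Q ** 2 * np.exp(-Q ** 2 * xi ** 2 / 2) * ang
    return total

print(rate(0.05), rate(5.0))   # the rate is strongly suppressed at the larger field
```

At the larger field every allowed $\vec m$ carries a large $Q_{\vec m}$, so both the Gaussian formfactor and the small Bessel arguments quench the rate.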
Namely, the large spatial periods of moir\'e superlattices and their abnormally narrow bandwidths limit phonon emission to the pathway dominated by acoustic phonons. We find, in particular, that the emission rate is quickly suppressed upon increasing the bandwidth, see Fig.\ref{fig:emission_rate}. Since the moir\'e graphene bandwidth is highly sensitive to the twist angle, becoming small near the magic values, phonon emission can be suppressed by detuning the twist angle away from these values. Likewise, the large superlattice periodicity results in a high sensitivity to the electric field. Our analysis predicts an abrupt quenching of phonon emission occurring already at moderate fields. The phonon emission rate features strong dependence on the bandwidth and field strength; these quantities can therefore serve as useful knobs allowing one to realize and control Bloch oscillations. \section{The backaction on the oscillator due to Bloch-oscillating carriers and the role of oscillator damping} Here we provide the details of the analysis of the backaction on the oscillator due to Bloch-oscillating carriers. We work with the equations of motion as given in Eqs.\eqref{eq:El_vel_eq}. We average over the randomness in the starting times $t'_i$, ignoring the associated noise. This simple approach will be sufficient to understand the synchronization effect. The role of randomness and noise will be discussed elsewhere. As a first step, we integrate Bloch dynamics of the $i$-th electron for times $t'_i<\tau<t$, which gives \begin{align} & \vec p_i(t) =e\vec E(t-t'_i)+\vec p_i(t'_i)+\alpha\int_{t'_i}^t Q(\tau')d\tau' \\ \nonumber & \vec x_i(t) =\vec x_i(t'_i)+\int_{t'_i}^t \vec v_i(\tau) d\tau , \end{align} where $\vec v_i(\tau)=\sum_l \frac{2J_l \vec a_l}{\hbar}\sin [\vec a_l\cdot\vec p_i(\tau)/\hbar]$. Averaging over the starting times $t'_i$ must be carried out using the survival probability obeying the Poisson statistics $dp=dt\gamma e^{-\gamma(t-t'_i)}$.
It is instructive to first apply these relations to the free-carrier dynamics in the absence of coupling to the oscillator, $\alpha=0$. In this case different carriers are totally decoupled and thus not synchronized. The drift velocity can be found by averaging $\vec v_i(t)$ as \begin{align}\nonumber &\left\langle \vec v_i(t)\right\rangle = \sum_{\vec k'}\int_{-\infty}^t dt' \gamma e^{-\gamma (t-t')} \vec v_i(t,t') \\ \label{eq:v(t)_ave} & = \sum_{\vec k'}\sum_l \frac{J_l \vec a_l}{i\hbar}\gamma\left[ \frac{e^{i\vec a_l\cdot \vec k'}}{\gamma- i \frac{e}{\hbar}\vec a_l\cdot \vec E} - \frac{e^{-i\vec a_l\cdot \vec k'}}{\gamma+ i \frac{e}{\hbar}\vec a_l\cdot \vec E} \right] , \end{align} where $\sum_{\vec k'}$ is a shorthand notation for averaging over the initial momentum distribution $\int \frac{d^2k'}{(2\pi)^2} f_0(\vec k')$ (here assumed to be steady-state). The quantity $\vec v_i(t,t')$ under the integral over $t'$ is a sum of harmonics with frequencies $\omega_l$, arising from the carrier velocity time dependence \begin{align} \vec v_i(t,t')=\sum_l \frac{2J_l \vec a_l}{\hbar}\sin \left[\vec a_l\cdot\left( \frac{e}{\hbar}\vec E(t-t')+\vec k'\right)\right] . \end{align} Simplifying the result in Eq.\eqref{eq:v(t)_ave} yields the drift velocity \begin{align}\label{eq:v_dc} &\vec v_{\rm DC} = \sum_{\vec k'}\sum_l \frac{2J_l \vec a_l}{\hbar}\cos(\vec a_l\cdot \vec k') \frac{\gamma \frac{e}{\hbar}\vec a_l\cdot \vec E}{\gamma^2+ (\frac{e}{\hbar}\vec a_l\cdot \vec E)^2}. \end{align} Given by a sum of the terms $\frac{\gamma\omega_l}{\gamma^2+\omega_l^2}$, the dependence $v_{\rm DC}$ vs. $E$ is nonmonotonic, growing linearly at $E\lesssim E_\gamma=\gamma\hbar/ea$ and decreasing at $E\gtrsim E_\gamma$; at weak fields it matches the Drude theory prediction. The negative differential conductivity $dI/dV<0$ is a testable signature of the Bloch-oscillation regime. The spectrum of current fluctuations, Eq.\eqref{eq:spectrum_def}, can be obtained in a similar manner.
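The structure of Eq.\eqref{eq:v_dc} can be verified with a short numeric check (a sketch in arbitrary units, with hypothetical parameter values): averaging a single velocity harmonic over exponentially distributed starting times reproduces the Lorentzian factor, and the resulting Esaki-Tsu-like drift velocity peaks at $\omega_l=\gamma$, i.e. at $E=E_\gamma$.

```python
import numpy as np
from scipy.integrate import quad

# Check: int_0^inf ds gamma e^{-gamma s} sin(omega s + phi)
#        = gamma (gamma sin(phi) + omega cos(phi)) / (gamma^2 + omega^2);
# for a symmetric momentum distribution the sin(phi) piece averages out, leaving
# the factor gamma*omega/(gamma^2 + omega^2) of the drift velocity. Arbitrary units.
gamma, omega, phi = 0.7, 2.3, 0.4

num = quad(lambda s: gamma * np.exp(-gamma * s) * np.sin(omega * s + phi),
           0, 60, limit=400)[0]
ana = gamma * (gamma * np.sin(phi) + omega * np.cos(phi)) / (gamma**2 + omega**2)
print(num, ana)   # agree

w = np.linspace(0.01, 10, 5000)           # omega_l, proportional to the field E
drude = gamma * w / (gamma**2 + w**2)     # Lorentzian factor in v_DC
print(w[np.argmax(drude)])                # peak at omega_l ~ gamma, i.e. E ~ E_gamma
```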
The velocity time dependence $\vec v_i(t,t')$ is a sum of harmonics with frequencies $\omega=\omega_l$; each harmonic producing a resonance broadened by the damping rate $\gamma$. Indeed, evaluating the Fourier components and averaging over the initial times gives \begin{align} &\int_{-\infty}^t dt' \gamma e^{-\gamma (t-t')} \vec v_i(t,t') e^{-i\omega (t-t')} \\ & =\sum_l \frac{J_l \vec a_l}{i\hbar}\gamma\left[ \frac{e^{i\vec a_l\cdot \vec k'}}{\gamma- i (\omega+\omega_l)} - \frac{e^{-i\vec a_l\cdot \vec k'}}{\gamma- i (\omega-\omega_l)} \right] . \end{align} Taking squares of the absolute values yields a fairly cumbersome expression for the noise spectrum. In the small-$\gamma$ limit, achieved at $E\gtrsim E_\gamma$, it represents a comb of sharp Lorentzians plus a background part, see Eq.\eqref{eq:spectrum_def} and Fig.\ref{fig2_asynchronous}. Next, we reinstate the coupling to the oscillator and proceed with the analysis of synchronization. For conciseness, we focus on a resonance approximation valid near one of the resonances $\omega=\omega_l$ in Eq.\eqref{eq:spectrum_def}, at $\omega_{\rm B}\gg\gamma$. In what follows, without loss of generality, we take $\vec E$ to be parallel to $\vec a_l$, and denote $\omega_l$ and $\vec a_l$ as $\omega_{\rm B}$ and $a$, respectively. Generalizing to the large-$\gamma$ case and other field orientations will be straightforward. The special cases of field orientation such that $\vec E\cdot \vec a_l\approx \vec E\cdot \vec a_{l'}$, when two resonances can be excited simultaneously, will be discussed elsewhere. 
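As a minimal check of the Lorentzian structure quoted above (a single-harmonic sketch in arbitrary units, under one particular sign convention for the Fourier factor):

```python
import numpy as np
from scipy.integrate import quad

# Averaging one velocity harmonic sin(wB s + phi) against the memory kernel
# gamma e^{-gamma s} e^{-i w s} gives resonances of width gamma. Arbitrary units.
gamma, wB, phi = 0.05, 1.0, 0.3

def response(w):
    re = quad(lambda s: gamma * np.exp(-gamma * s) * np.sin(wB * s + phi) * np.cos(w * s),
              0, 400, limit=2000)[0]
    im = quad(lambda s: -gamma * np.exp(-gamma * s) * np.sin(wB * s + phi) * np.sin(w * s),
              0, 400, limit=2000)[0]
    return re + 1j * im

def closed(w):
    # closed form of int_0^inf ds gamma e^{-gamma s} sin(wB s + phi) e^{-i w s}
    return (gamma / 2j) * (np.exp(1j * phi) / (gamma - 1j * (wB - w))
                           - np.exp(-1j * phi) / (gamma + 1j * (wB + w)))

print(abs(response(0.95) - closed(0.95)))               # ~0: closed form checks out
print(abs(closed(wB)), abs(closed(wB + 10 * gamma)))    # sharp peak of width ~gamma
```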
The back-action of the carriers on the oscillator, given by the sum of carrier displacements $f(t)=\frac{\alpha}{m}\sum_i x_i(t)$ in Eq.\ref{eq:EoM_singleMode} averaged over the starting times $t'_i$ with the Poissonian survival probability $dp=dt \gamma e^{-\gamma(t-t'_i)}$, equals \begin{align} &\left\langle x_i(t)\right\rangle = \left\langle x_i(t')\right\rangle + \int\limits_{-\infty}^t dt' \gamma e^{-\gamma(t-t')}\int\limits_{t'}^t d\tau v_0 \sin \frac{a p_i(\tau)}{\hbar} \nonumber \\ &= \!\int\limits_{-\infty}^t \!\!dt' \gamma e^{-\gamma(t-t')}\int\limits_{t'}^t d\tau v_0 \sin\left( \phi(\tau)\right) ,\ \ v_0 =\frac{2aJ_l }{\hbar} , \end{align} where we denote $\phi(\tau)=\omega_{\rm B}(\tau-t')+\frac{\alpha a}{\hbar}\int_{t'}^\tau Q(\tau')d\tau'$. In what follows we drop the starting displacement term $\left\langle x_i(t')\right\rangle$, assuming that it vanishes under averaging as expected for a spatially uniform distribution. The single mode dynamics is now described by Eq.\eqref{eq:EoM_singleMode} with the right-hand side replaced with a back-action memory function $\frac{\alpha}{m}N \left\langle x_i(t)\right\rangle$, where $N$ is the number of Bloch electrons. We will consider the dynamics at lowest nonvanishing order in $Q(t)$, assuming the latter to be small. First, setting $Q(\tau')=0$ and integrating over $\tau$, we find $ \left\langle x_i^{(0)}(t)\right\rangle =\frac{v_0 \omega_{\rm B}}{\gamma^2+\omega_{\rm B}^2} $, a constant displacement that gives a time independent contribution to $f(t)$ in Eq.\eqref{eq:EoM_singleMode}, which can be compensated for by shifting the oscillator equilibrium. Next, at first order in $Q(t)$, we Taylor-expand the sine term to obtain \begin{widetext} \begin{equation}\label{eq:x1} \left\langle x_i^{(1)}(t)\right\rangle = \int_{-\infty}^t dt' \gamma e^{-\gamma(t-t')} \left( \int_{t'}^t d\tau v_0 \cos\left( \omega_{\rm B}(\tau-t')\right) \left[\frac{\alpha a}{\hbar}\int_{t'}^\tau Q(\tau')d\tau' \right]\right) . 
\end{equation} Plugging in a harmonic dependence $Q(t)=Q_0e^{-i\omega t}$, we evaluate the integrals over $\tau'$ and $\tau$ as \begin{align} &\int_{t'}^t d\tau v_0 \cos\left( \omega_{\rm B}(\tau-t')\right) \left[\frac{\alpha a}{\hbar}\int_{t'}^\tau Q(\tau')d\tau' \right]=\int_{t'}^t d\tau v_0 \cos\left( \omega_{\rm B}(\tau-t')\right) \left[\frac{i\alpha a}{\hbar \omega}Q_0\left( e^{-i\omega \tau}- e^{-i\omega t'}\right) \right] \nonumber \\ &= \frac{i\alpha a v_0 }{\hbar \omega}Q_0 \left( e^{-i\omega t} \frac{e^{i\omega_{\rm B}(t-t')}-e^{i\omega (t-t')}}{2i(\omega_{\rm B}-\omega)}+e^{-i\omega t}\frac{e^{-i\omega_{\rm B}(t-t')}-e^{i\omega (t-t')}}{-2i(\omega_{\rm B}+\omega)}-e^{-i\omega t'}\frac{\sin\omega_{\rm B}(t-t')}{\omega_{\rm B}}\right) . \end{align} Integration over $t'<t$ in Eq.\eqref{eq:x1} can now be carried out with the help of the identity \[ \int_{-\infty}^t dt' \gamma e^{-\gamma(t-t')} e^{-i\Omega (t-t')}=\frac{\gamma}{\gamma+i\Omega} , \] giving \begin{align} &\left\langle x_i^{(1)}(t)\right\rangle = \frac{i\alpha a v_0 }{\hbar \omega}Q_0 e^{-i\omega t} \left( \frac{ \frac{\gamma}{\gamma-i\omega_{\rm B}}-\frac{\gamma}{\gamma-i\omega}}{2i(\omega_{\rm B}-\omega)} +\frac{\frac{\gamma}{\gamma+i\omega_{\rm B}} -\frac{\gamma}{\gamma-i\omega}}{-2i(\omega_{\rm B}+\omega)} -\frac{\frac{\gamma}{\gamma-i(\omega+\omega_{\rm B})}-\frac{\gamma}{\gamma-i(\omega-\omega_{\rm B})}}{2i\omega_{\rm B}}\right) \nonumber \\ & = \frac{i\alpha a v_0 }{\hbar \omega}Q_0 e^{-i\omega t} \left( \frac{\gamma}{2(\gamma-i\omega_{\rm B})(\gamma-i\omega)} +\frac{\gamma}{2(\gamma+i\omega_{\rm B})(\gamma-i\omega)} -\frac{\gamma}{(\gamma-i(\omega+\omega_{\rm B}))(\gamma-i(\omega-\omega_{\rm B}))} \right) \\ \nonumber & = \frac{i\alpha a v_0 }{\hbar \omega}Q_0 e^{-i\omega t} \left( \frac{\gamma^2}{(\gamma^2+\omega_{\rm B}^2)(\gamma-i\omega)} +\frac{\gamma}{(\omega+i\gamma)^2-\omega_{\rm B}^2} \right) . 
\end{align} \end{widetext} Substituting this result in Eq.\eqref{eq:EoM_singleMode} gives a characteristic equation for $\omega$ of the form given in Eq.\eqref{eq:characteristic_eqn}. The instability criterion and the phase diagram for the oscillator damping equal to that of Bloch-oscillating carriers is discussed in the main text (see Fig.\ref{fig1_oscillator} and accompanying discussion). It is instructive to extend this analysis to the more general case of unequal damping rates for the oscillator and electrons, $\gamma_0\ne\gamma$. After some algebra we arrive at the instability criterion \begin{align}\nonumber \left( \eta+2(\gamma-\gamma_0)(\omega_{\rm B}-\omega_0)\right)^2 > &\left( \left(\omega_{\rm B}-\omega_0\right)^2+4\gamma\gamma_0\right) \\ & \times 4(\gamma+\gamma_0)^2 . \label{eq:instability_criterion_2} \end{align} A new interesting behavior found for $\gamma_0\ne\gamma$ is an asymmetry between $\omega_{\rm B}$ blue-shifted and red-shifted away from $\omega_0$, with the instability threshold lower for $\omega_{\rm B}>\omega_0$ and higher for $\omega_{\rm B}<\omega_0$ when $\gamma_0<\gamma$, and vice versa when $\gamma_0>\gamma$, as illustrated in Fig.~\ref{fig2_oscillator}. The asymmetry is particularly striking in the limit $\gamma_0/\gamma\to 0$: for $\omega_{\rm B}>\omega_0$ the instability occurs at the coupling values $\eta$ much smaller than those in Eq.\eqref{eq:instability_criterion_0}, whereas for $\omega_{\rm B}<\omega_0$ the instability threshold remains on the same order as in Eq.\eqref{eq:instability_criterion_0}. Furthermore, perhaps somewhat counterintuitively, for $\gamma_0/\gamma\to 0$ the lowest value of coupling at which the instability sets in occurs far away from the resonance $\omega_{\rm B}=\omega_0$. The origin of this asymmetry is closely related to the mechanism that enables the synchronized behavior. 
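The asymmetry can be made explicit by solving Eq.~\eqref{eq:instability_criterion_2} for the threshold coupling at a given detuning. A minimal numerical sketch (illustrative parameter values; $D$ denotes $\omega_{\rm B}-\omega_0$) confirms that for $\gamma_0<\gamma$ the blue-shifted side has the lower threshold, that the minimal threshold $\eta_{\rm th}=8\gamma\gamma_0$ occurs at the finite detuning $D=\gamma-\gamma_0$ rather than at resonance, and that the asymmetry reverses upon swapping $\gamma\leftrightarrow\gamma_0$:

```python
import numpy as np

# Threshold coupling from Eq. (instability_criterion_2): the positive root of
# (eta + 2(g - g0) D)^2 = 4 (g + g0)^2 (D^2 + 4 g g0), D = omega_B - omega_0.
def eta_threshold(D, g, g0):
    return 2.0*(g + g0)*np.sqrt(D**2 + 4.0*g*g0) - 2.0*(g - g0)*D

g, g0 = 1.0, 0.01                 # carriers damped much faster than oscillator
D = np.linspace(-5.0, 5.0, 2001)
eta = eta_threshold(D, g, g0)
D_min = D[np.argmin(eta)]         # minimum sits at D = g - g0 > 0 (blue side)
```

Minimizing the threshold expression analytically gives $D_{\rm min}=\gamma-\gamma_0$ and $\eta_{\rm min}=8\gamma\gamma_0$, consistent with the limit $\gamma_0/\gamma\to 0$ discussed in the text.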
When the oscillator is undamped, synchronization arises due to the electrons pumping energy into the oscillator mode; subsequently, when this energy is passed back to electrons, they become synchronized with the oscillator, and with each other. However, at a weak coupling $\eta$, the energy transfer from the Bloch-oscillating electrons into the oscillator is possible only if $\hbar\omega_{\rm B}>\hbar\omega_0$, indicating that the instability is easier to reach for $\omega_{\rm B}$ values blue-shifted from $\omega_0$. The above argument also suggests a reversal in the asymmetry when Bloch oscillations are weakly damped compared to the oscillator damping, $\gamma\ll\gamma_0$. Indeed, in this case it is the electron subsystem that serves as the main reservoir for energy storage, whereas the role of the oscillator mode is merely to lock the phases of different Bloch-oscillating carriers. Pumping energy into the collective mode now requires $\hbar \omega_{\rm B}<\hbar\omega_0$. We therefore expect that in this limit the instability will occur at lower $\eta$ values for $\omega_{\rm B}$ red-shifted from $\omega_0$. This is exactly what Eq.\eqref{eq:instability_criterion_2} predicts (see Fig.\ref{fig2_oscillator}). \begin{figure}[t] \includegraphics[width=0.99\columnwidth]{Fig4_synchronized_v1.png} \caption{The lasing and synchronization regimes. a) Lasing ($\gamma \gg \gamma_0$). In this case, the oscillator is weakly damped and serves as the main reservoir of the energy. Energy of the electrons is more easily pumped to the oscillator when $\omega_{\rm B} > \omega_0$. Shown is the phase diagram for $\gamma = 100 \gamma_0$. b) Synchronization ($\gamma \ll \gamma_0$). In this case, the oscillator is strongly damped and the electrons serve as the main reservoir of the energy. Energy of the oscillator is more easily pumped to the electrons when $\omega_{\rm B} < \omega_0$. Shown is the phase diagram for $100 \gamma = \gamma_0$. 
The instability criterion is a sign change of the imaginary part of a root of Eq.~\eqref{eq:characteristic_with_gamma0}, which is negative in the stable regime and becomes positive in the unstable regime. } \label{fig2_oscillator} \end{figure} \section{Electric current} Here we use the analysis of the previous section to calculate the current produced by the motion of the electrons coupled to an oscillator mode. As a result of the electron scatterings, the zero DC current in the case of ideal Bloch oscillations gives way to a more complicated behavior in which the oscillations are accompanied by a finite DC current. Additionally, the coupling to an oscillator mode synchronizes the individual electrons' oscillations and produces an AC current. We assume that electrons move back to the equilibrium Fermi sea upon scattering. The Fermi sea for a charge density $n = \frac{2 p_F}{\pi \hbar}$ spans momenta from $-p_F$ to $p_F$, where $p_F$ is the Fermi momentum. The velocity of the $i$-th electron, which last scattered at a time $t_i'<t$ into a momentum $-p_F < p_i' < p_F$, is obtained from the equations of motion in Eq.~\eqref{eq:El_vel_eq} as \begin{align} \label{eq:el_vel} &v_i(t) = v_0 \sin \left( \frac{a}{\hbar} p_i(t) \right) \\ \nonumber &= v_0 \sin \left( \frac{a p_i'}{\hbar} + \omega_{\rm B}(t-t_i') + \frac{\alpha a}{\hbar} \int_{t_i'}^t Q(\tau') d\tau' \right). \end{align} For a memoryless scattering process, the starting time $t_i'$ is a random variable with exponential probability density $\gamma e^{-\gamma(t-t_i')}$.
Averaging over $t_i'$ with the corresponding probability and also over $p_i'$ with uniform distribution between $-p_F$ and $p_F$ gives the average velocity of the $i$-th electron at time $t$ \begin{widetext} \begin{equation} \label{eq:avg_el_vel} \left\langle v_i(t) \right\rangle = \int_{-p_F}^{p_F} \frac{dp_i'}{2 p_F}\int_{-\infty}^t dt_i' \gamma e^{-\gamma(t-t_i')} v_0 \sin \left( \frac{a p_i'}{\hbar} + \omega_{\rm B}(t-t_i') + \frac{\alpha a}{\hbar} \int_{t_i'}^t Q(\tau') d\tau' \right). \end{equation} As for the case of $\left\langle x_i(t) \right\rangle$, we find the zeroth- and first-order in $Q(t)$ estimates of $\left\langle v_i(t) \right\rangle$. By setting $Q(t)=0$, we get the average drift velocity of the electron \begin{equation} \left\langle v_i^{(0)}(t) \right\rangle = \frac{\gamma v_0}{2 p_F} \int_{-p_F}^{p_F} dp_i'\int_{-\infty}^t dt_i' e^{-\gamma (t-t_i')} \sin \left( \frac{a p_i'}{\hbar} + \omega_{\rm B} (t-t_i') \right) = v_0 \frac{\sin \frac{a p_F}{\hbar}}{\frac{a p_F}{\hbar}} \frac{\gamma \omega_{\rm B}}{\gamma^2 + \omega_{\rm B}^2} \equiv \overline{v}, \end{equation} which is time-independent and corresponds to the DC electric current \begin{equation} \label{eq:DC_current} j_{\rm DC} = n e \overline{v} = n e v_0 \frac{\sin \frac{a p_F}{\hbar}}{\frac{a p_F}{\hbar}} \frac{\gamma \omega_{\rm B}}{\gamma^2 + \omega_{\rm B}^2}. \end{equation} Noting that $v_0 \sin \frac{a p_F}{\hbar} = v_F$ is the velocity of the electrons at the Fermi surface, and defining the effective electron mass as $m_e = \frac{p_F}{v_F}$, this result predicts, for small electric fields, a linear dependence on the field that agrees with the Drude model conductivity: \begin{equation} j_{\rm DC} = \frac{n e^2}{m_e \gamma} E \quad\quad {\rm for \quad} E \ll E_{\gamma} \equiv \frac{\gamma \hbar}{e a}. \end{equation} At high electric fields, $E \gg E_{\gamma}$, however, the DC current in Eq.~\eqref{eq:DC_current} falls off as $1/E$.
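Both limits are easily verified numerically; the sketch below (units $e=\hbar=a=1$, illustrative parameter values) checks Eq.~\eqref{eq:DC_current} against the Drude expression at weak field and against the $1/E$ falloff at strong field:

```python
import numpy as np

# Check of Eq. (DC_current) in units e = hbar = a = 1 (illustrative values).
gamma, v0, pF, n = 1.0, 1.0, 0.8, 1.0
x = pF                                # a*pF/hbar
vF = v0*np.sin(x)                     # Fermi velocity
m_e = pF/vF                           # effective mass p_F/v_F

def j_dc(E):
    wB = E                            # omega_B = e*a*E/hbar
    return n*v0*(np.sin(x)/x)*gamma*wB/(gamma**2 + wB**2)

E_small = 1e-4
j_drude = n*E_small/(m_e*gamma)       # n e^2 E / (m_e gamma)
```

At $E\ll E_\gamma$ the two expressions agree, while the combination $E\, j_{\rm DC}(E)$ saturates at strong fields, confirming the $1/E$ decay.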
Now, we want to calculate the electron velocity correlation functions to obtain the current power spectrum. Due to the independence of distinct particles' motions, we have $\left\langle v_i^{(0)}(t_1) v_j^{(0)}(t_2) \right\rangle = \left\langle v_i^{(0)}(t_1) \right\rangle \left\langle v_j^{(0)}(t_2) \right\rangle = \overline{v}^2$ for $i \ne j$. In the case of $i=j$, the velocity correlation can be calculated by considering two scenarios: no scattering events at times $t_1 < t < t_2$, or at least one scattering during this interval. In the former case, both velocities come from the last scattering that occurred at a time $t_i' < t_1$. In the latter case, the velocity at time $t_1$ is calculable from the last scattering at time $t_i'<t_1$, while the velocity at time $t_2$ depends on the last scattering before $t_2$ that also has to be after $t_1$, i.e., $t_1<t_i''<t_2$. Combining the two cases with corresponding probabilities, we obtain \begin{align} \left\langle v_i^{(0)}(t_1) v_i^{(0)}(t_2) \right\rangle = \mathcal P_0(t_2-t_1) \frac{\gamma v_0^2}{2 p_F} & \\ \times \int_{-p_F}^{p_F} dp_i' & \int_{-\infty}^{t_1} dt_i' e^{-\gamma(t_1-t_i')} \sin \left( \frac{a p_i'}{\hbar} + \omega_{\rm B} (t_1-t_i') \right) \sin \left( \frac{a p_i'}{\hbar} + \omega_{\rm B} (t_2-t_i') \right) \\ + (1-\mathcal P_0(t_2-t_1)) & \frac{\gamma^2 v_0^2}{(2 p_F)^2} \int_{-p_F}^{p_F} dp_i' \int_{-\infty}^{t_1} dt_i' e^{-\gamma(t_1-t_i')} \sin \left( \frac{a p_i'}{\hbar} + \omega_{\rm B} (t_1-t_i') \right) \\ & \times \frac{1}{1-e^{-\gamma(t_2-t_1)}} \int_{-p_F}^{p_F} dp_i'' \int_{t_1}^{t_2} dt_i'' e^{-\gamma(t_2-t_i'')} \sin \left( \frac{a p_i''}{\hbar} + \omega_{\rm B} (t_2-t_i'') \right). \end{align} Here, $\mathcal P_0(t_2-t_1) = e^{-\gamma (t_2-t_1)}$ is the probability of having no scattering events at times $t_1<t<t_2$, which follows from the Poisson distribution.
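The two-scenario decomposition lends itself to a direct Monte Carlo test: sample the backward recurrence time of the Poisson scattering process and a fresh uniform momentum at each scattering, then average the velocity product. The sketch below (units $a/\hbar=1$, illustrative parameters deep in the regime $\gamma\ll\omega_{\rm B}$) reproduces the damped-cosine correlator $\frac{v_0^2}{2}e^{-\gamma\tau}\cos\omega_{\rm B}\tau$ obtained below:

```python
import numpy as np

# Monte Carlo check of <v(t1) v(t2)> for Poissonian scattering (a/hbar = 1).
# Deep in the gamma << omega_B regime the correlator approaches
# (v0^2 / 2) * e^{-gamma tau} * cos(omega_B tau).
rng = np.random.default_rng(1)
gamma, wB, v0, pF = 0.05, 3.0, 1.0, 0.8
tau, N = 2.0, 400000

s1 = rng.exponential(1.0/gamma, N)      # time since last scattering at t1
p1 = rng.uniform(-pF, pF, N)            # momentum drawn at that scattering
v1 = v0*np.sin(p1 + wB*s1)

w = rng.exponential(1.0/gamma, N)       # waiting time to the next scattering
scat = w < tau                          # at least one event in (t1, t2)
# no event: the same scattering determines v2;
# otherwise: backward recurrence time from t2, capped at tau - w
b = np.minimum(rng.exponential(1.0/gamma, N), tau - np.minimum(w, tau))
p2 = rng.uniform(-pF, pF, N)
v2 = np.where(scat, v0*np.sin(p2 + wB*b), v0*np.sin(p1 + wB*(s1 + tau)))

corr = np.mean(v1*v2)
expected = 0.5*v0**2*np.exp(-gamma*tau)*np.cos(wB*tau)
```

The residual discrepancy is set by the subleading $\gamma/\omega_{\rm B}$ and $\overline{v}^2$ terms of the full expression, both small for these parameters.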
We can simplify the velocity correlation further as \begin{align} \left\langle v_i^{(0)}(t_1) v_i^{(0)}(t_2) \right\rangle = & e^{-\gamma (t_2-t_1)} \frac{\gamma v_0^2}{4 p_F} \\ & \times \int_{-p_F}^{p_F} dp_i' \int_{0}^{\infty} d\tau e^{-\gamma \tau} \left( \cos \omega_{\rm B} (t_2-t_1) - \cos \left( \frac{2 a p_i'}{\hbar} + \omega_{\rm B} (2\tau + t_2 - t_1) \right) \right) \\ & + \left( v_0^2 \frac{\sin \frac{a p_F}{\hbar}}{\frac{a p_F}{\hbar}} \frac{\gamma \omega_{\rm B}}{\gamma^2 + \omega_{\rm B}^2} \right)^2 \left[ 1 - e^{-\gamma (t_2-t_1)} \left( \frac{\gamma}{\omega_{\rm B}} \sin \omega_{\rm B} (t_2-t_1) + \cos \omega_{\rm B} (t_2-t_1) \right) \right] \\ = & e^{-\gamma (t_2-t_1)} \frac{\gamma v_0^2}{2} \left( \frac{\cos \omega_{\rm B} (t_2-t_1)}{\gamma} - \frac{\sin \frac{2 a p_F}{\hbar}}{\frac{2 a p_F}{\hbar}} \frac{\gamma \cos \omega_{\rm B} (t_2-t_1) - 2 \omega_{\rm B} \sin \omega_{\rm B} (t_2 - t_1)}{\gamma ^2 + 4 \omega_{\rm B}^2} \right) \\ & + \left( v_0^2 \frac{\sin \frac{a p_F}{\hbar}}{\frac{a p_F}{\hbar}} \frac{\gamma \omega_{\rm B}}{\gamma^2 + \omega_{\rm B}^2} \right)^2 \left[ 1 - e^{-\gamma (t_2-t_1)} \left( \frac{\gamma}{\omega_{\rm B}} \sin \omega_{\rm B} (t_2-t_1) + \cos \omega_{\rm B} (t_2-t_1) \right) \right]. \end{align} In the limit of interest, $\gamma \ll \omega_{\rm B}$, the inside of the first brackets is dominated by the first term and the second term is negligible. Therefore, the velocity correlation becomes \begin{equation} \label{eq:vel_cor_result} \left\langle v_i^{(0)}(t_1) v_i^{(0)}(t_2) \right\rangle = \frac{1}{2} e^{-\gamma \left| t_2-t_1 \right|} v_0^2 \cos \omega_{\rm B} (t_2-t_1). 
\end{equation} \end{widetext} Using the velocity correlation function, we can calculate the current power spectrum as \begin{equation} P(\omega) = \frac{1}{2} \int_{-\infty}^{\infty} \left\langle \delta j(t_1) \delta j(t_2) \right\rangle e^{-i \omega (t_1 - t_2)} dt_1 = \frac{e^2}{2 L^2} \int_{-\infty}^{\infty} \sum_{i, j} \left( \left\langle v_i(t_1) v_j(t_2) \right\rangle - \overline{v}^2 \right) e^{-i \omega (t_1-t_2)} dt_1. \end{equation} Noting that the summand vanishes for $i \ne j$ and inserting the result from Eq.~\eqref{eq:vel_cor_result} for $i=j$, we get \begin{equation} P(\omega) = \frac{e^2 v_0^2 N}{2L^2} \int_{0}^{\infty} e^{-\gamma \tau} \cos \omega_{\rm B} \tau \cos \omega \tau d\tau = \frac{\gamma e^2 v_0^2 N}{4 L^2} \left( \frac{1}{\gamma^2 + (\omega - \omega_{\rm B})^2} + \frac{1}{\gamma^2 + (\omega + \omega_{\rm B})^2} \right). \end{equation} \section{A kinetic equation treatment} \label{sec:steady-state} Here we consider the effects of electron scattering, which alter the textbook Bloch oscillation picture in a number of ways. First, scattering dephases the oscillations, producing finite-width resonances in the power spectrum. Second, the non-dissipative character of free-particle Bloch oscillations, manifested in a zero DC current, gives way to a more complicated behavior in which the oscillations are accompanied by a finite DC current \cite{Ktitorov1972,Esaki1970,Kroemer2000}. We will study this behavior using the Boltzmann kinetic equation for the carrier distribution, adopting a one-rate relaxation model: \begin{equation}\label{eq:kinetic_eqn} (\partial_t + \tilde {\vec E} \cdot \nabla_{\vec k})f(\vec k,t)+\gamma(f(\vec k,t)-f_0(\vec k))=0 , \end{equation} where $f_0(\vec k)$ is the equilibrium distribution, and we defined $\tilde {\vec E}=\frac{e}{\hbar}\vec E$. The one-rate model ignores the difference between energy and momentum relaxation, to be discussed elsewhere.
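In one dimension the steady state of Eq.~\eqref{eq:kinetic_eqn} can be written as an exponentially weighted average over the field-shifted equilibrium distribution, $f(k)=\gamma\int_0^\infty ds\, e^{-\gamma s}f_0(k-\tilde E s)$. The sketch below (units $\hbar=e=a=1$, with an arbitrary periodic $f_0$ chosen for illustration) verifies that this solves $\tilde E\,\partial_k f+\gamma(f-f_0)=0$:

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule (avoids NumPy version differences for trapz)
    return float(np.sum((y[1:] + y[:-1])*np.diff(x))/2.0)

# Steady state of the one-rate kinetic equation in 1D (hbar = e = a = 1):
# f(k) = gamma * int_0^inf ds e^{-gamma s} f0(k - Et*s)
# should satisfy Et * f'(k) + gamma * (f(k) - f0(k)) = 0.
gamma, Et = 0.5, 1.2                   # scattering rate and (e/hbar) E
k = np.linspace(-np.pi, np.pi, 513)
f0 = lambda q: np.exp(np.cos(q))       # an arbitrary periodic equilibrium f0

s = np.arange(0.0, 30.0/gamma, 2e-3)
kern = gamma*np.exp(-gamma*s)
f = np.array([trapezoid(kern*f0(kk - Et*s), s) for kk in k])

resid = Et*np.gradient(f, k) + gamma*(f - f0(k))
```

The residual is limited only by the finite-difference and quadrature discretization, confirming the steady-state form.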
To facilitate our analysis, we define a fundamental solution of this equation, by replacing the term $ \gamma f_0(\vec k)$ with a point source $\delta^2(\vec k-\vec k')\delta(t-t')$. Solving Eq.\eqref{eq:kinetic_eqn} yields \begin{equation}\label{eq:fundamental_soln} f(\vec k,t)=e^{-\gamma(t-t')}\theta(t-t') \delta^2(\vec k-\vec k'- \tilde {\vec E}(t-t')) . \end{equation} To evaluate the current, we use the tight-binding (Wannier) representation of the band dispersion and velocity \begin{equation} \epsilon(\vec k) = \sum_{\vec l}J_{\vec l}e^{-i\vec k \cdot \vec l} ,\quad \vec v(\vec k) = \sum_{\vec l}\vec v_{\vec l}e^{-i\vec k \cdot \vec l} ,\quad \vec v_{\vec l}=-\frac{i\vec l}{\hbar} J_{\vec l} , \end{equation} where the sums run over all Bravais lattice vectors $\vec l$. Plugging these relations into the expression for the current $\vec j=\sum_{\vec k} e\vec v(\vec k) f(\vec k,t)$ gives damped oscillations: \begin{equation}\label{eq:Greens_function} \vec j_{\vec k'}(\tau) = e^{-\gamma\tau} \theta(\tau)\sum_{\vec l}e\vec v_{\vec l}e^{-i \omega_{\vec l}\tau+i\vec l\cdot\vec k' } ,\quad \tau=t-t' , \end{equation} where $\vec k'$ and $t'$ label the point source, and we defined $\omega_{\vec l}= \tilde {\vec E}\cdot\vec l$. Discrete frequency values arise because electron trajectories sweep the (reduced) BZ of a two-dimensional crystal in the direction set by the $\vec E$ vector. Every time an electron reaches the zone boundary it umklapps to the opposite side and continues forward, winding around the BZ at different frequencies in different directions. This leads, for a general field orientation, to a quasiperiodic dynamics characterized by two fundamental frequencies which depend only on the field and lattice periodicity as described in Eq.\eqref{eq:frequency_circles}, in agreement with the geometric construction in Fig.\ref{fig1_asynchronous}. The frequencies $\omega_{\vec l}$ are independent of the initial values $\vec k'$, as expected.
One quantity that will be of interest to us is the ensemble-averaged DC current, which can be written in terms of the contributions of point sources, Eq.\eqref{eq:Greens_function}, as \begin{equation}\label{eq:j_DC_long} \vec j_{\rm DC}=\int\limits_{-\infty}^t dt' \sum_{\vec k'}\vec j_{\vec k'}(t-t')\gamma f_0(\vec k') =\sum_{\vec l}\frac{e\vec v_{\vec l} \gamma f_{0,\vec l}}{\gamma+i\omega_{\vec l}} . \end{equation} Here we introduced Fourier coefficients $f_0(\vec k) = \sum_{\vec l}f_{0,\vec l}e^{-i\vec k \cdot \vec l}$, $ f_{0,\vec l} = \Omega \int_{BZ} \frac{d^2 \vec k}{(2\pi)^2} f(\vec k)e^{i \vec k \cdot\vec l} $, where the integral is taken over the Brillouin zone and $\Omega=(\vec a_1\times\vec a_2)\cdot \hat{\vec z}$ is the real-space unit cell area. Since $f_{0,\vec l}$ is even in $\vec l$, whereas $v_{\vec l}$ is odd, Eq.\eqref{eq:j_DC_long} can be simplified to read \begin{equation}\label{eq:j_DC} \vec j_{\rm DC}=\frac{e}{\hbar}\sum_{\vec l}\vec l J_{\vec l} f_{0,\vec l} \frac{\gamma \omega_{\vec l}}{\gamma^2+\omega_{\vec l}^2} . \end{equation} Eq.\eqref{eq:j_DC} predicts a dependence on the driving field which is linear at small $E<E_\gamma=\gamma\hbar/ea$ and falls off as $1/E$ at large $E>E_\gamma$. Interestingly, the current depends on the dimensionless quantity $E/E_\gamma$ in a way that is independent of the specific value of $\gamma$. This behavior is illustrated in Fig.\ref{fig2_asynchronous}(d) for the drift velocity $v_d=j_{\rm DC}/ne$, computed for a tight-binding dispersion relation (see Eq.\eqref{eq:TBM_triangular} in the Appendix). The ohmic behavior at low $E$ agrees with the Drude model. The nonmonotonic $E$ dependence that peaks at $E\sim E_\gamma$, as well as the negative differential resistance arising at $E>E_\gamma$, provide clear signatures of the Bloch regime that are detectable by a DC transport measurement.
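For concreteness, Eq.~\eqref{eq:j_DC} can be evaluated for a nearest-neighbor tight-binding band; the sketch below uses a square lattice with a Boltzmann-weighted $f_0$ (illustrative parameters, units $e=\hbar=a=1$; the figures in the text use a triangular-lattice dispersion instead). It exhibits both the peak at $E=E_\gamma$ and the collapse of curves with different $\gamma$ when plotted versus $E/E_\gamma$:

```python
import numpy as np

# Eq. (j_DC) for a nearest-neighbor square-lattice band, units e = hbar = a = 1.
# (Illustrative sketch; the figures in the text use a triangular lattice.)
J, T, N = 1.0, 2.0, 128
k = 2.0*np.pi*np.arange(N)/N
kx, ky = np.meshgrid(k, k, indexing="ij")
f0 = np.exp(2.0*J*(np.cos(kx) + np.cos(ky))/T)   # Boltzmann weight for eps(k)
f0 /= f0.mean()                                  # normalization is arbitrary here

ells = [(1, 0), (-1, 0), (0, 1), (0, -1)]
J_l = {l: -J for l in ells}                      # eps(k) = sum_l J_l e^{-i k.l}
f0_l = {l: (f0*np.exp(1j*(kx*l[0] + ky*l[1]))).mean().real for l in ells}

def j_dc(E_vec, gamma):
    """Drift current of Eq. (j_DC), in units of e/hbar."""
    j = np.zeros(2)
    for l in ells:
        w_l = E_vec[0]*l[0] + E_vec[1]*l[1]      # omega_l = (e/hbar) E . l
        j += np.array(l)*J_l[l]*f0_l[l]*gamma*w_l/(gamma**2 + w_l**2)
    return j
```

For a field along $\hat x$ only $\vec l=\pm\hat x$ contribute; the factor $\gamma\omega_{\vec l}/(\gamma^2+\omega_{\vec l}^2)$ depends on $E$ and $\gamma$ only through $E/E_\gamma$, which makes the scaling collapse exact in this model.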
Two-dimensional Bloch oscillations that feature two fundamental frequencies manifest themselves very directly in the current power spectrum. The ensemble-averaged power spectrum arises as a sum of independent contributions of individual particles, undergoing damped oscillations as described by Eq.\eqref{eq:Greens_function}. We evaluate \begin{equation} P(\omega) = \frac12\int_{-\infty}^{\infty} \langle\delta\vec j(t_1)\cdot \delta\vec j(t_2)\rangle e^{-i\omega (t_1-t_2)} dt_1 , \end{equation} where $\langle...\rangle$ is the ensemble average and the current fluctuations $\delta\vec j$ are found from the fluctuating part of the distribution function, $\delta \vec j(t)=\sum_{\vec k} e\vec v(\vec k)\delta f(\vec k,t)$. The quantity $\delta f(\vec k,t)$ obeys the kinetic equation with a source term: \begin{equation} (\partial_t+ \tilde {\vec E}\cdot \nabla_{\vec k})\delta f(\vec k,t)+\gamma\delta f(\vec k,t)=\xi(\vec k,t) . \end{equation} The source pair correlation function is given in terms of the steady-state distribution as\cite{kogan_shulman,kogan_book} $\left\langle \xi(\vec k,t)\xi(\vec k',t')\right\rangle =2\gamma f(\vec k)(1-f(\vec k))\delta(t-t')\delta^2(\vec k-\vec k')$, which describes fluctuations of occupancies of individual microscopic states due to ingoing and outgoing scattering processes occurring at the rate $\gamma$. Using the fundamental solution, Eq.\eqref{eq:Greens_function}, we write \begin{equation} \delta \vec j(t)=\sum_{\vec k'} \int_{-\infty}^{t} dt' \vec j_{\vec k'}(t-t') \xi(\vec k',t') .
\end{equation} Substituting this expression in Eq.\eqref{eq:spectrum_def}, we carry out time integration and ensemble averaging over $\xi$ to obtain \begin{equation}\label{eq:P(w)_1} P(\omega)=\frac12\sum_{\vec k'}\sum_{\vec l_1,\vec l_2} \frac{e^2\vec v_{\vec l_1} \cdot \vec v_{\vec l_2}e^{i(\vec l_1+\vec l_2)\cdot\vec k'}g(\vec k')}{(\gamma+i(\omega_{\vec l_1}-\omega))(\gamma+i(\omega_{\vec l_2}+\omega))} , \end{equation} where we defined $g(\vec k')=2\gamma f_0(\vec k')(1-f_0(\vec k'))$. As before, using the Fourier representation $g(\vec k)=\sum_{\vec l}g_{\vec l}e^{-i\vec k\cdot\vec l}$, we can write the result as \begin{equation} P(\omega)=\frac12\sum_{\vec l_1,\vec l_2} \frac{e^2\vec v_{\vec l_1}\cdot \vec v_{\vec l_2}g_{\vec l_1+\vec l_2}}{(\gamma+i(\omega_{\vec l_1}-\omega))(\gamma+i(\omega_{\vec l_2}+\omega))} . \end{equation} Poles in the denominators produce resonances that are described most easily at high fields, $E\gg E_\gamma$. In this case, since the $\omega_{\vec l}$ values are large compared to $\gamma$, the resonance structure is dominated by the $\vec l_1=-\vec l_2$ contributions. Suppressing other terms, which are relatively small, we obtain a sum of Lorentzians peaked at $\omega=\omega_{\vec l}$: \begin{equation}\label{eq:P(w)_lorentzian} P(\omega)=\frac12\sum_{\vec l} \frac{e^2\vec v_{\vec l}\cdot \vec v_{-\vec l}g_{\vec 0}}{\gamma^2+(\omega- \omega_{\vec l})^2} . \end{equation} This dependence is shown in Fig.\ref{fig2_asynchronous}(c). The noise power is proportional to the carrier density, as expected from the picture of a sum of $N\gg 1$ incoherent oscillatory contributions giving a signal of amplitude $\sim N^{1/2}$ and intensity $\sim N$.
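The Lorentzian line shape traces back to the damped-cosine time dependence of Eq.~\eqref{eq:Greens_function}; a minimal check (dimensionless illustrative values) integrates the corresponding correlator numerically and compares it with the pair of Lorentzians centered at $\omega=\pm\omega_{\vec l}$:

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule (avoids NumPy version differences for trapz)
    return float(np.sum((y[1:] + y[:-1])*np.diff(x))/2.0)

# int_0^inf e^{-gamma tau} cos(wl tau) cos(w tau) dtau
#   = (gamma/2) * [1/(gamma^2+(w-wl)^2) + 1/(gamma^2+(w+wl)^2)]
gamma, wl = 0.1, 3.0          # illustrative values, gamma << wl

def damped_cos_integral(w, T=2000.0, dt=1e-3):
    tau = np.arange(0.0, T, dt)
    return trapezoid(np.exp(-gamma*tau)*np.cos(wl*tau)*np.cos(w*tau), tau)

def lorentzians(w):
    return 0.5*gamma*(1.0/(gamma**2 + (w - wl)**2)
                      + 1.0/(gamma**2 + (w + wl)**2))
```

Near the resonance $\omega=\omega_{\vec l}$ the first Lorentzian dominates, reproducing the sharp peaks of Eq.~\eqref{eq:P(w)_lorentzian}.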
As a sanity check, we consider the noise power at $\vec E=0$, which is given by \begin{align*} P(\omega)&=\sum_{\vec l_1,\vec l_2} \frac{e^2\vec v_{\vec l_1}\cdot \vec v_{\vec l_2}g_{\vec l_1+\vec l_2}}{2(\gamma^2+\omega^2)}=-Te^2\gamma \sum_{\vec k}\frac{\vec v(\vec k)\cdot \nabla_{\vec k} f_0(\vec k)}{\gamma^2+\omega^2} . \end{align*} The $\omega=0$ value matches the Johnson noise power $2T\sigma$, with the conductivity $\sigma$ evaluated from Eq.\eqref{eq:j_DC}: \begin{align*} &\vec j_{{\rm DC}}=\frac{e}{\hbar}\sum_{\vec l}\vec l J_{\vec l} f_{0,\vec l} \frac{\gamma \omega_{\vec l}}{\gamma^2+\omega_{\vec l}^2} =\frac{e^2}{\hbar^2\gamma}\sum_{\vec l}\vec l J_{\vec l} (\vec l\cdot\vec E) f_{0,\vec l} \\ &=-\frac{e^2}{\hbar\gamma}\sum_{\vec k} \vec v(\vec k)(\vec E\cdot \nabla_{\vec k})f_0(\vec k) . \end{align*} Thus our result for $P(\omega)$ is in agreement with the fluctuation-dissipation theorem. \section{Parametric instability of the coupled dynamics} We now derive the condition for instability in the system. For an undamped oscillator, the equation of motion (\ref{eq:EoM_singleMode}) becomes \[ \ddot{Q}\left(t\right)+\omega_{0}^{2}Q\left(t\right)=\frac{\alpha}{m}\sum_{i}\Delta\sin\left(\frac{a}{\hbar}p_{i}\left(t\right)-\alpha Q\left(t\right)\right). \] The electrons are scattered with a rate $\gamma$. We can average the right hand side of the equation over the scattering times.
The probability density for an electron to have propagated freely for a time span $t-t'$ since its last scattering is $\gamma e^{-\gamma\left(t-t'\right)}$; therefore the average over all time spans $t-t'$ reads \[ \sum_{i}\left\langle \frac{\alpha\Delta}{m}\sin\left(\frac{a}{\hbar}p_{i}\left(t\right)-\alpha Q\left(t\right)\right)\right\rangle =\frac{\alpha\Delta\gamma}{m}N\int_{-\infty}^{t}e^{-\gamma\left(t-t'\right)}\sin\left(\frac{a}{\hbar}eEt'-\alpha Q\left(t'\right)\right)dt' , \] where $N$ is the number of electrons, and we have assumed that the time average is the same for all electrons. The equation of motion for $Q\left(t\right)$ then becomes \[ \ddot{Q}\left(t\right)+\omega_{0}^{2}Q\left(t\right)=\frac{\alpha\Delta\gamma}{m}N\int_{-\infty}^{t}e^{-\gamma\left(t-t'\right)}\sin\left(\omega_{B}t'-\alpha Q\left(t'\right)\right)dt', \] where $\omega_{B}=\frac{a}{\hbar}eE$ is the Bloch oscillation frequency. This equation can be converted into two coupled differential equations. Let \[ \left\langle j\right\rangle \left(t\right)=\frac{\alpha\Delta\gamma}{m}N\int_{-\infty}^{t}e^{-\gamma\left(t-t'\right)}\sin\left(\omega_{B}t'-\alpha Q\left(t'\right)\right)dt'. \] Then we have \begin{eqnarray*} \ddot{Q}\left(t\right)+\omega_{0}^{2}Q\left(t\right) & = & \left\langle j\right\rangle \left(t\right)\\ \frac{\partial}{\partial t}\left\langle j\right\rangle \left(t\right)+\gamma\left\langle j\right\rangle \left(t\right) & = & \frac{\alpha\Delta\gamma}{m}N\sin\left(\omega_{B}t-\alpha Q\left(t\right)\right). \end{eqnarray*} To linear order in $\alpha Q\left(t\right)$ we have \begin{eqnarray*} \ddot{Q}\left(t\right)+\omega_{0}^{2}Q\left(t\right) & = & \left\langle j\right\rangle \left(t\right)\\ \frac{\partial}{\partial t}\left\langle j\right\rangle \left(t\right)+\gamma\left\langle j\right\rangle \left(t\right) & = & \alpha\beta\sin\left(\omega_{B}t\right)-\alpha^{2}\beta Q\left(t\right)\cos\left(\omega_{B}t\right), \end{eqnarray*} where $\beta=\frac{\Delta\gamma}{m}N$.
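The conversion of the memory integral into a first-order equation can be sanity-checked in closed form. Setting $Q=0$, the integral evaluates to a phase-shifted sinusoid which indeed satisfies the first-order equation; in the sketch below (illustrative values) $c$ stands for the prefactor $\frac{\alpha\Delta\gamma}{m}N$:

```python
import numpy as np

# With Q = 0, <j>(t) = c * int_{-inf}^t e^{-gamma(t-t')} sin(wB t') dt'
#             = c * (gamma*sin(wB t) - wB*cos(wB t)) / (gamma^2 + wB^2),
# which should satisfy d<j>/dt + gamma <j> = c sin(wB t).
gamma, wB, c = 0.5, 2.0, 1.3    # illustrative values; c = alpha*Delta*gamma*N/m

def j_avg(t):
    return c*(gamma*np.sin(wB*t) - wB*np.cos(wB*t))/(gamma**2 + wB**2)

t, h = 1.234, 1e-6
lhs = (j_avg(t + h) - j_avg(t - h))/(2.0*h) + gamma*j_avg(t)
rhs = c*np.sin(wB*t)
```

The same bookkeeping carries over when the $-\alpha Q(t')$ phase is retained, which is what produces the parametric coupling term in the linearized system.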
This system of equations is driven by oscillations with frequency $\omega_{B}$. Inserting the first into the second equation we get \begin{equation} \dddot{Q}\left(t\right)+\omega_{0}^{2}\dot{Q}\left(t\right)+\gamma\left(\ddot{Q}\left(t\right)+\omega_{0}^{2}Q\left(t\right)\right)=\alpha\beta\sin\left(\omega_{B}t\right)-\alpha^{2}\beta Q\left(t\right)\cos\left(\omega_{B}t\right)\label{eq:Q_eq} \end{equation} The general solution of the homogeneous part of this equation is \[ C_{1}e^{-\gamma t}+C_{2}\sin\left(\omega_{0}t\right)+C_{3}\cos\left(\omega_{0}t\right). \] We expect an instability to arise from the second right-hand-side term. It should be analogous to the well-known parametric instability of the Mathieu equation. An instability should appear for \[ \omega_{B}\approx2\omega_{0} . \] We introduce the frequency $\omega=\omega_{B}/2$ and find an approximate solution with the help of the Ansatz \begin{eqnarray*} Q\left(t\right) & = & A\left(t\right)\sin\left(\omega t\right)+B\left(t\right)\cos\left(\omega t\right). \end{eqnarray*} Inserting this Ansatz into Eq. (\ref{eq:Q_eq}), retaining only the first derivatives of $A$ and $B$, and neglecting terms oscillating at the frequency $3\omega_{B}/2$, we obtain \begin{eqnarray*} & & \sin\left(\omega_{B}t\right)\left[-\alpha\beta+\left(\omega_{0}^{2}-3\omega^{2}\right)A'(t)+\left(\frac{\alpha^{2}\beta}{2}+\gamma\left(\omega_{0}^{2}-\omega^{2}\right)\right)A(t)-2\gamma\omega B'(t)+\omega(\omega^{2}-\omega_{0}^{2})B(t)\right]\\ & & +\cos\left(\omega_{B}t\right)\left[2\gamma\omega A'(t)+\omega\left(\omega_{0}^{2}-\omega^{2}\right)A(t)+\left(\omega_{0}^{2}-3\omega^{2}\right)B'(t)+\left(-\frac{\alpha^{2}\beta}{2}+\gamma\left(\omega_{0}^{2}-\omega^{2}\right)\right)B(t)\right]\\ & & =0. \end{eqnarray*} The coefficients of the $\sin\left(\omega_{B}t\right)$ and $\cos\left(\omega_{B}t\right)$ terms have to vanish independently.
Setting $\omega=\omega_{0}$, neglecting the inhomogeneity term $-\alpha\beta$ in the first equation, which will only give a constant contribution to $A\left(t\right)$, we find that the functions $A\left(t\right)$, $B\left(t\right)$ are linear combinations of the exponentials $e^{st}$ and $e^{-st}$ with \[ s=\frac{\alpha^{2}\beta}{4\omega_{0}\sqrt{\gamma^{2}+\omega_{0}^{2}}}. \] Thus, at its onset, the unstable behavior is governed by the exponential growth \[ Q\left(t\right)\sim A\left(t\right)\sim B\left(t\right)\sim\exp\left\{ \frac{\alpha^{2}\beta}{4\omega_{0}\sqrt{\gamma^{2}+\omega_{0}^{2}}}t\right\} . \] \end{document}
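The growth rate can be checked by integrating Eq.~(\ref{eq:Q_eq}) numerically with the inhomogeneous drive dropped (it only produces a bounded response) and comparing the late-time envelope growth with $s$. A sketch with illustrative parameter values:

```python
import numpy as np

# Integrate Q''' + w0^2 Q' + gamma (Q'' + w0^2 Q) = -a2b * Q * cos(2 w0 t),
# i.e. Eq. (Q_eq) with the drive dropped and a2b = alpha^2 beta, and compare
# the envelope growth with s = a2b / (4 w0 sqrt(gamma^2 + w0^2)).
w0, gamma, a2b = 1.0, 0.2, 0.05       # illustrative values
wB = 2.0*w0
s_pred = a2b/(4.0*w0*np.sqrt(gamma**2 + w0**2))

def rhs(t, y):
    Q, Q1, Q2 = y
    return np.array([Q1, Q2,
                     -w0**2*Q1 - gamma*(Q2 + w0**2*Q) - a2b*Q*np.cos(wB*t)])

y, t, dt = np.array([1e-3, 0.0, 0.0]), 0.0, 0.02
amps = {}
for n in range(35000):                # RK4 integration up to t = 700
    k1 = rhs(t, y); k2 = rhs(t + dt/2, y + dt/2*k1)
    k3 = rhs(t + dt/2, y + dt/2*k2); k4 = rhs(t + dt, y + dt*k3)
    y = y + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    t += dt
    if n + 1 in (15000, 35000):       # snapshots at t = 300 and t = 700
        amps[n + 1] = np.hypot(y[0], y[1]/w0)   # oscillation envelope
s_meas = np.log(amps[35000]/amps[15000])/400.0
```

By $t\sim 300$ the decaying mode is negligible, so the measured envelope slope should approach $s$ up to rotating-wave corrections of order $s/\omega_0$.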
\section{Introduction} \label{sec:intro} Many phenomena in physics and engineering can be modeled by the Stokes flow~\cite{Galdi:2011}. Noteworthy applications are, for example, Stokes flows in porous media~\cite{Bang-Lukkassen:1999}, design and development of efficient fibrous filters~\cite{Linden-Cheng-Wiegmann:2018} and micro-fluid devices~\cite{Smith-Barbati-Santana-Gleghon-Kirby:2012}, dynamics of droplets~\cite{Kitahata-Yoshinaga-Nagai-Sumino:2013}, bio-suspensions and sedimentation~\cite{Hofer:2018}. A very successful approach for the numerical treatment of the Stokes equations in variational form is based on the finite element method (FEM)~\cite{Cai-Tong-Vassilevski-Wang:2010,Crouzeix-Raviart:1973,Girault-Raviart:1986}. The FEM normally uses triangular and quadrilateral meshes in the two-dimensional (2D) case and tetrahedral and hexahedral meshes in the three-dimensional (3D) case. Furthermore, in the last two decades a great effort has been devoted to the design of numerical methods for partial differential equations (PDEs) suitable for polygonal and polyhedral meshes~\cite{Wachspress:2015,Kuznetsov-Repin:2003,Sukumar-Tabarraei:2004,BeiraodaVeiga-Lipnikov-Manzini:2014}. To this end, it is worth mentioning the mimetic finite difference (MFD) method~\cite{Lipnikov-Manzini-Shashkov:2014,BeiraodaVeiga-Lipnikov-Manzini:2014} and its variational reformulation that led to the virtual element method (VEM)~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013}. The MFD method was designed to preserve several fundamental properties of PDEs, such as the maximum/minimum principle, the conservation of fundamental quantities in physics (mass, momentum, energy) and the solution symmetries.
The MFD method was successfully applied to the numerical approximation on unstructured polygonal and polyhedral meshes of diffusion problems~\cite{Brezzi-Lipnikov-Shashkov-Simoncini:2007,Brezzi-Lipnikov-Shashkov:2006}, convection–diffusion problems~\cite{Cangiani-Manzini-Russo:2009}, elasticity problems~\cite{Lipnikov-Morel-Shashkov:2004}, gas dynamic problems~\cite{Campbell-Shashkov:2001}, and electromagnetic problems~\cite{Hyman-Shashkov:2001}. On the other hand, the VEM is a finite element method that does not require the explicit knowledge of the basis functions and use of quadrature formulas to compute the bilinear forms of the Galerkin formulation. Indeed, the VEM can handle the construction of the bilinear forms on general polygonal and polyhedral elements through special polynomial projections of the basis functions and their derivatives (gradients, curl, divergence). Such projections are computable from the degrees of freedom of the virtual element functions and ensure the polynomial consistency of the bilinear forms. The connection between the VEM and the FEM on polygonal/polyhedral meshes is thoroughly investigated in~\cite{Manzini-Russo-Sukumar:2014,Cangiani-Manzini-Russo-Sukumar:2015,DiPietro-Droniou-Manzini:2018}, between VEM and discontinuous skeletal gradient discretizations in~\cite{DiPietro-Droniou-Manzini:2018}, and between the VEM and the BEM-based FEM method in~\cite{Cangiani-Gyrya-Manzini-Sutton:2017:GBC:chbook}. The VEM was originally formulated in~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} as a conforming FEM for the Poisson problem. Then, it was later extended to convection-reaction-diffusion problems with variable coefficients in~\cite{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013,BeiraodaVeiga-Brezzi-Marini-Russo:2016b}. Meanwhile, the nonconforming formulation for diffusion problems was proposed in~\cite{AyusodeDios-Lipnikov-Manzini:2016} as the finite element reformulation of~\cite{Lipnikov-Manzini:2014}. 
Mixed VEM for elliptic problems were introduced in~\cite{Brezzi-Falk-Marini:2014}, and later extended to meshes with curved edges in~\cite{Dassi-Fumagalli-Losapio-Scialo-Scotti-Vacca:2020a}. Implementation of mixed methods is discussed in~\cite{Dassi-Vacca:2019,Dassi-Scacchi:2020,Dassi-Fumagalli-Losapio-Scialo-Scotti-Vacca:2020b}. The connection with de~Rham diagrams and Nedelec elements and the application to electromagnetics has been explored in~\cite{BeiraodaVeiga-Brezzi-Marini-Russo:2016a}. A practical application of these concepts can be found in~\cite{BeiraodaVeiga-Dassi-Manzini-Mascotto:2021,NaranjoAlvarez-Bokil-Gyrya-Manzini:2020}. Other significant applications of the VEM on general meshes are found, for example, in~\cite{Antonietti-Manzini-Verani:2018,Antonietti-Manzini-Verani:2019:CAMWA:journal,Brezzi-Marini:2013,BeiraodaVeiga-Manzini:2014,BeiraodaVeiga-Manzini:2015,BeiraodaVeiga-Mora-Vacca:2019,BeiraodaVeiga-Manzini-Mascotto:2019,Benedetto-Berrone-Pieraccini-Scialo:2014,Benedetto-Berrone-Borio-Pieraccini-Scialo:2016b,Benedetto-Berrone-Scialo:2016,Benvenuti-Chiozzi-Manzini-Sukumar:2019:CMAME:journal,Berrone-Borio-Scialo:2016,Berrone-Pieraccini-Scialo:2016,Berrone-Borio-Manzini:2018:CMAME:journal,Cangiani-Gyrya-Manzini:2016,Cangiani-Georgoulis-Pryer-Sutton:2016,Cangiani-Manzini-Sutton:2017,Certik-Gardini-Manzini-Vacca:2018:ApplMath:journal,Certik-Gardini-Manzini-Mascotto-Vacca:2019:CAMWA:journal,Gardini-Manzini-Vacca:2019:M2AN:journal,Mora-Rivera-Rodriguez:2015,Natarajan-Bordas-Ooi:2015,Paulino-Gain:2015,Perugia-Pietra-Russo:2016,Wriggers-Rust-Reddy:2016,Zhao-Chen-Zhang:2016}. In this work, we consider two possible numerical formulations of the VEM for the discretization of the two-dimensional Stokes equations.
In both formulations, we approximate the two components of the velocity vector separately by using a variant of the conforming virtual element space originally proposed in~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} and already considered in~\cite{Manzini-Mazzia:2021}. In the first formulation, we assume that the edge trace of each component of the velocity is a polynomial of degree $k+1$, where $k$ is the maximum degree of the polynomials that are in the virtual element space. This definition of the scalar virtual element space is a special case of the \emph{generalized local virtual element space} that is proposed in~\cite[Section~3]{BeiraodaVeiga-Vacca:2020-arXiv}. In the second formulation, we assume that only the trace of the normal component of the velocity vector is a polynomial of degree $k+1$, while the trace of the tangential component is a polynomial of degree $k$. For both formulations, we also consider the modified (``enhanced'') definition of the virtual element space~\cite{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013}, which allows us to construct the $\LTWO$ orthogonal projection onto the polynomials of degree $k$. In both formulations, the scalar unknown, e.g., the pressure, is approximated by discontinuous polynomials on the mesh elements. These two virtual element formulations satisfy the inf-sup stability condition, which is crucial to prove the well-posedness of the method, and can be proved to have an optimal convergence rate for the approximation errors in the $\LTWO$ norm and in the $\HONE$-seminorm. A similar approach for the incompressible Stokes equations led to the low-order accurate MFD methods in~\cite{BeiraodaVeiga-Gyrya-Lipnikov-Manzini:2009,BeiraodaVeiga-Lipnikov:2010}, which are equivalent to the formulations proposed in our work for $k=1$.
All our numerical experiments confirm the expected optimal behavior of these two formulations, whose accuracy is comparable, although the second formulation requires fewer degrees of freedom than the first one. The zero divergence constraint is satisfied in a variational sense, i.e., the projection of the divergence on the subset of polynomials used in the scheme formulation is zero. It is worth mentioning that other virtual element approaches were recently proposed in the literature that approximate the Stokes velocity in such a way that its divergence is a polynomial that is set to zero in the scheme. This strategy provides an approximation of the Stokes velocity that satisfies the zero divergence constraint in a pointwise sense. We refer the interested reader to the works of~\cite{BeiraodaVeiga-Lovadina-Vacca:2017,BeiraodaVeiga-Lovadina-Vacca:2018,BeiraodaVeiga-Mora-Vacca:2019,BeiraodaVeiga-Dassi-Vacca:2020,Chernov-Marcati-Mascotto:2021}. However, the polynomial projection of the velocity divergence in our VEM is zero up to machine precision, so if we consider such a projection as the virtual element approximation of the velocity divergence, this approximation is identically zero almost everywhere in the computational domain. \subsection{Structure of the paper} The outline of the paper is as follows. In Section~\ref{sec:Stokes}, we introduce the Stokes problem. In Section~\ref{sec:VEM}, we discuss two different virtual element formulations for numerically solving this problem. In Section~\ref{sec:convergence}, we investigate the convergence of these formulations theoretically, and derive optimal convergence rates in the energy and $\LTWO$ norms for the velocity approximation and in the $\LTWO$ norm for the pressure approximation. In Section~\ref{sec:numerical}, we assess the accuracy of these virtual element approximations by investigating their behavior on a representative benchmark problem.
In Section~\ref{sec:conclusions}, we offer our final conclusions. \subsection{Notation and technicalities} \label{subsec:notation} We use the standard definition and notation of Sobolev spaces, norms and seminorms, cf.~\cite{Adams-Fournier:2003}. Let $k$ be a nonnegative integer. The Sobolev space $\HS{k}(\omega)$ consists of all square integrable functions defined on the open, bounded, connected subset $\omega$ of $\mathbbm{R}^{2}$ whose weak derivatives up to order $k$ are also square integrable. As usual, if $k=0$, we prefer the notation $\LTWO(\omega)$. We will also use the subspace of $\LTWO(\Omega)$ denoted by $L^2_0(\Omega)$ and defined on the computational domain $\Omega$ as \begin{align} L^2_0(\Omega):=\bigg\{\,q\in\LTWO(\Omega)\,:\,\int_{\Omega}q\,d\xv=0\,\bigg\}. \end{align} Norm and seminorm in $\HS{k}(\omega)$ are denoted by $\norm{\cdot}{k,\omega}$ and $\snorm{\cdot}{k,\omega}$, respectively. We use the integral notation to denote the $\LTWO$-inner product between vector-valued fields, although, for conciseness, we may prefer the notation ``$(\cdot,\cdot)$'' in a few situations. \subsection{Mesh definition and regularity assumptions} \label{subsec:mesh:regularity:assumptions} For exposition's sake, we consider an open, bounded, polygonal domain $\Omega$ and a family of mesh decompositions of $\Omega$ denoted by $\mathcal{T}=\{\Omega_{\hh}\}_{h}$. Each mesh $\Omega_{\hh}$ is a set of non-overlapping, bounded (closed) elements $\P$ such that $\overline{\Omega}=\cup_{\P\in\Omega_{\hh}}\P$, where $\overline{\Omega}$ is the closure of $\Omega$ in $\mathbbm{R}^2$. The subindex $h$, which labels each mesh $\Omega_{\hh}$, is the maximum of the element diameters $\hh_{\P}=\sup_{\mathbf{x},\mathbf{y}\in\P}\abs{\mathbf{x}-\mathbf{y}}$. Each element $\P$ has a non-intersecting polygonal boundary $\partial\P$ formed by $\NPE$ straight edges $\E$ connecting the $\NPV$ ($=\NPE$) polygonal vertices.
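The elemental quantities used throughout (the measure $\ABS{\P}$, the barycenter $\xvP$, and the diameter $\hh_{\P}$) are computable directly from the vertex list. The following Python sketch is our own illustration, not part of the paper; it uses the standard shoelace formulas for a simple polygon with counter-clockwise vertex ordering, and the function name is hypothetical:

```python
import math

def polygon_geometry(verts):
    """Area |P|, barycenter x_P, and diameter h_P of a simple polygon
    whose vertices are listed counter-clockwise (shoelace formulas)."""
    n = len(verts)
    A = cx = cy = 0.0
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        w = x0*y1 - x1*y0        # signed cross-product of consecutive vertices
        A += w
        cx += (x0 + x1)*w
        cy += (y0 + y1)*w
    A *= 0.5                      # signed area (positive for CCW ordering)
    cx /= 6.0*A
    cy /= 6.0*A
    # h_P = max pairwise vertex distance (attained at vertices for a polygon)
    hP = max(math.dist(p, q) for p in verts for q in verts)
    return A, (cx, cy), hP

# example: a 2-by-1 rectangle
A, xP, hP = polygon_geometry([(0, 0), (2, 0), (2, 1), (0, 1)])
print(A, xP, hP)  # 2.0 (1.0, 0.5) sqrt(5) ~ 2.236
```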
The sequence of vertices forming $\partial\P$ is oriented in the counter-clockwise direction and the vertex coordinates are denoted by $\X_{\V}=(x_{\V},y_{\V})$. We denote the measure of $\P$ by $\ABS{\P}$, its barycenter (center of gravity) by $\xvP:=(x_{\P},y_{\P})$, the unit normal vector to each edge $\E\in\partial\P$ and pointing out of $\P$ by $\mathbf{n}_{\P,\E}$, and the length of $\E$ by $\hh_{\E}$. Moreover, we assume that the orientation of the mesh edges in every mesh is fixed \emph{once and for all}, so that we can unambiguously introduce $\norE$, the unit normal vector to edge $\E$. The orientation of this vector is independent of the element $\P$ to which $\E$ belongs, and may differ from $\mathbf{n}_{\P,\E}$ only by the multiplicative factor $-1$. \PGRAPH{Mesh regularity assumptions} In the definition of the admissible meshes, we first assume that the elemental boundaries are ``polylines'', i.e., continuously connected portions of straight lines. Then, we need the following regularity assumptions on the family of mesh decompositions $\{\Omega_{\hh}\}_{h}$ in order to use the interpolation and projection error estimates from the theory of polynomial approximation of functions in Sobolev spaces~\cite{Brenner-Scott:1994}. \medskip \begin{assumption}[Mesh regularity]~\\ \vspace{-\baselineskip} \begin{itemize} \item There exists a positive constant $\varrho$ independent of $h$ such that for every polygonal element $\P$ it holds that \begin{description} \item[]\textbf{(M1)}~~$\P$ is star-shaped with respect to a disk with radius $\ge\varrho\hh_{\P}$; \item[]\textbf{(M2)}~~for every edge $\E\in\partial\P$ it holds that $\hh_{\E}\geq\varrho\hh_{\P}$. \end{description} \end{itemize} \end{assumption} \medskip \begin{remark} The star-shapedness property \textbf{(M1)} implies that all the mesh elements are \emph{simply connected} subsets of $\mathbbm{R}^{2}$. 
The scaling property \textbf{(M2)} implies that the number of edges in all the elemental boundaries is uniformly bounded from above over the whole mesh family $\{\Omega_{\hh}\}_{h}$. \end{remark} \medskip These mesh assumptions are quite general and, as observed from the very first publication on the VEM, see, for example, \cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013}, give the method great flexibility in the geometric shape of the mesh elements. For example, we can consider elements with hanging nodes as in the adaptive mesh refinement (AMR) technique and elements with a non-convex shape. In this work we avoid elements with intersecting boundaries, elements with ``holes'', and elements totally surrounding other elements. However, elements with such more challenging shapes have already been considered in the virtual element formulation to show the robustness of the method~\cite{Paulino-Gain:2015}. A recent review of the mesh regularity assumptions in the VEM literature and a thorough investigation of the VEM performance on mesh families with extreme characteristics can also be found in~\cite{Sorgente-Biasotti-Manzini-Spagnuolo:2021:arXiv,Sorgente-VEM-Book:2021}. \subsection{Polynomials} Hereafter, $\PS{\ell}(\P)$ denotes the linear space of polynomials of degree up to $\ell$ defined on $\P$, with the useful convention that $\PS{-1}(\P)=\{0\}$; $\big[\PS{\ell}(\P)\big]^2$ denotes the space of two-dimensional vector-valued fields of polynomials of degree up to $\ell$ on $\P$; $\big[\PS{\ell}(\P)\big]^{2\times2}$ denotes the space of $2\times2$-sized tensor-valued fields of polynomials of degree up to $\ell$ on $\P$. Similar definitions also hold for the space of univariate polynomials defined on all mesh edges $\E$.
Then, we define the linear space of discontinuous scalar, vector and tensor polynomial fields by collecting together the local definitions, so that \begin{align*} \PS{\ell}(\Omega_{\hh})&:=\Big\{ q\in\LTWO(\Omega)\,:\,\restrict{q}{\P}\in\PS{\ell}(\P)\quad\forall\P\in\Omega_{\hh} \Big\},\\[1em] \big[\PS{\ell}(\Omega_{\hh})\big]^2&:= \Big\{ \mathbf{q}\in\big[\LTWO(\Omega)\big]^2\,:\,\restrict{\mathbf{q}}{\P}\in\big[\PS{\ell}(\P)\big]^2 \quad\forall\P\in\Omega_{\hh} \Big\},\\[1em] \big[\PS{\ell}(\Omega_{\hh})\big]^{2\times2}&:= \Big\{ \bm{\kappa}\in\big[\LTWO(\Omega)\big]^{2\times2}\,:\, \restrict{{\bm\kappa}}{\P}\in\big[\PS{\ell}(\P)\big]^{2\times2} \quad\forall\P\in\Omega_{\hh} \Big\}. \end{align*} We will also use the norm and seminorm: \begin{align} \norm{\mathbf{v}}{1,h}^2=\norm{\mathbf{v}}{0,\Omega}^2+\snorm{\mathbf{v}}{1,h}^2 \quad\textrm{with}\quad \snorm{\mathbf{v}}{1,h}^2 = \sum_{\P\in\Omega_{\hh}}\snorm{\mathbf{v}}{1,\P}^2 \label{eq:broken:seminorm} \end{align} for every function $\mathbf{v}$ defined in the broken Sobolev space \begin{align*} \big[\HONE(\Omega_{\hh})\big]^2 = \Big\{\mathbf{v}\in\big[\LTWO(\Omega)\big]^2\,:\,\restrict{\mathbf{v}}{\P}\in\big[\HONE(\P)\big]^2\quad\forall\P\in\Omega_{\hh}\Big\}, \end{align*} which is the space of square integrable vector-valued functions whose restriction to every mesh element $\P$ is in $\big[\HONE(\P)\big]^2$. 
Space $\PS{\ell}(\P)$ is the span of the finite set of \emph{scaled monomials of degree up to $\ell$}, that are given by \begin{align*} \mathcal{M}_{\ell}(\P) = \bigg\{\, \left( \frac{\mathbf{x}-\xvP}{\hh_{\P}} \right)^{\alpha} \textrm{~with~}\abs{\alpha}\leq\ell \,\bigg\}, \end{align*} where \begin{itemize} \item $\xvP$ denotes the center of gravity of $\P$ and $\hh_{\P}$ its characteristic length, as, for instance, the edge length or the cell diameter; \item $\alpha=(\alpha_1,\alpha_2)$ is the two-dimensional multi-index of nonnegative integers $\alpha_i$ with degree $\abs{\alpha}=\alpha_1+\alpha_{2}\leq\ell$ and such that $\mathbf{x}^{\alpha}=x_1^{\alpha_1}x_{2}^{\alpha_{2}}$ for any $\mathbf{x}\in\mathbbm{R}^{2}$ and $\partial^{\abs{\alpha}}\slash\partial\mathbf{x}^{\alpha}=\partial^{\abs{\alpha}}\slash(\partial x_1^{\alpha_1}\partial x_2^{\alpha_2})$. \end{itemize} The dimension of $\PS{\ell}(\P)$ equals $N_{\ell}=(\ell+1)(\ell+2)/2$, the cardinality of the basis set $\mathcal{M}_{\ell}(\P)$. Let $v$ and $\mathbf{v}=(v_x,v_y)^T$ denote a (smooth enough) scalar and vector-valued field.
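Before turning to the projection operators, a short Python sketch of the scaled monomial basis $\mathcal{M}_{\ell}(\P)$ and of the dimension count $N_{\ell}$ may be useful; it is our own illustration (the function names are hypothetical, not from any VEM library):

```python
def multi_indices(ell):
    """Multi-indices alpha = (alpha_1, alpha_2) with |alpha| <= ell,
    one per scaled monomial in M_ell(P)."""
    return [(a1, a2) for a1 in range(ell + 1)
                     for a2 in range(ell + 1 - a1)]

def scaled_monomial(alpha, x, y, xP, yP, hP):
    """Evaluate ((x - x_P)/h_P)^alpha_1 * ((y - y_P)/h_P)^alpha_2."""
    return ((x - xP)/hP)**alpha[0] * ((y - yP)/hP)**alpha[1]

# the dimension of P_ell(P) is N_ell = (ell+1)(ell+2)/2
for ell in range(6):
    assert len(multi_indices(ell)) == (ell + 1)*(ell + 2)//2
```

Scaling by the barycenter and the characteristic length keeps the basis well conditioned independently of the element size, which is the reason scaled (rather than raw) monomials are used.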
Then, \begin{itemize} \item the elliptic projection $\PinP{\ell}v\in\PS{\ell}(\P)$ is the solution of the variational problem \begin{align} \int_{\P}\nabla\big(v-\PinP{\ell}v\big)\cdot\nabla q\,d\xv &= 0 \qquad\forall q\in\PS{\ell}(\P),\\[0.5em] \int_{\partial\P}\big(v-\PinP{\ell}v\big)\,ds&=0; \label{eq:proj:H1:P:def} \end{align} \item the orthogonal projection $\PizP{\ell}v\in\PS{\ell}(\P)$ is the solution of the variational problem \begin{align} \int_{\P}\big(v-\PizP{\ell}v\big)q\,d\xv = 0 \qquad\forall q\in\PS{\ell}(\P); \label{eq:scalar:proj:L2:P:def} \end{align} \item the orthogonal projection of a vector-valued field $\mathbf{v}=(v_x,v_y)^T$ is the solution of the variational problem \begin{align} \int_{\P}\big(\mathbf{v}-\PizP{\ell}\mathbf{v}\big)\cdot\mathbf{q}\,d\xv = 0 \qquad\forall\mathbf{q}\in\big[\PS{\ell}(\P)\big]^2, \label{eq:vector:proj:L2:P:def} \end{align} and can be computed componentwise, i.e., $\PizP{\ell}\mathbf{v}=(\PizP{\ell}v_x,\PizP{\ell}v_y)^T\in\big[\PS{\ell}(\P)\big]^2$, where $\PizP{\ell}v_x$ and $\PizP{\ell}v_y$ are the scalar orthogonal projections defined above; \item the gradient of vector $\mathbf{v}$ and its orthogonal projection $\PizP{\ell}\nabla\mathbf{v}\in\big[\PS{\ell}(\P)\big]^{2\times2}$ onto the linear space of $2\times2$-sized matrix-valued polynomials of degree $\ell$ are defined componentwise as follows: \begin{align} \nabla\mathbf{v} = \left( \begin{array}{cc} \frac{\partial v_x}{\partial x} & \quad\frac{\partial v_x}{\partial y}\\[1.0em] \frac{\partial v_y}{\partial x} & \quad\frac{\partial v_y}{\partial y}\\ \end{array} \right) \qquad\textrm{and}\qquad \PizP{\ell}\nabla\mathbf{v} = \left( \begin{array}{cc} \PizP{\ell}\frac{\partial v_x}{\partial x} & \quad\PizP{\ell}\frac{\partial v_x}{\partial y}\\[1.0em] \PizP{\ell}\frac{\partial v_y}{\partial x} & \quad\PizP{\ell}\frac{\partial v_y}{\partial y}\\ \end{array} \right), \end{align} and the latter is the solution of the variational problem: \begin{align}
\int_{\P}\big(\nabla\mathbf{v}-\PizP{\ell}\nabla\mathbf{v}\big):\bm\kappa\,d\xv = 0 \qquad\forall{\bm\kappa}\in\big[\PS{\ell}(\P)\big]^{2\times2}. \label{eq:tensor:proj:L2:P:def} \end{align} \end{itemize} \section{The Stokes problem and the virtual element discretization} \label{sec:Stokes} The incompressible Stokes problem for the vector-valued field $\mathbf{u}$ and the scalar field $p$ is governed by the system of equations: \begin{align} - \Delta\mathbf{u} +\nabla p &= \mathbf{f} \phantom{0} \quad\textrm{in~}\Omega,\label{eq:stokes:A}\\[0.2em] \DIV\mathbf{u} &= 0 \phantom{\mathbf{f}} \quad\textrm{in~}\Omega,\label{eq:stokes:B}\\[0.2em] \mathbf{u} &= 0 \phantom{\mathbf{f}} \quad\textrm{on~}\Gamma\label{eq:stokes:C} \end{align} on the computational domain $\Omega$ with boundary $\Gamma$. We refer to $\mathbf{u}$ and $p$ as the \emph{Stokes velocity} and the \emph{Stokes pressure}. To ease the exposition, we consider only the case of homogeneous Dirichlet boundary conditions, see~\eqref{eq:stokes:C}. However, the extension to nonhomogeneous Dirichlet boundary conditions is straightforward, and the general case is considered in the section on numerical experiments.
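As a quick symbolic sanity check of system \eqref{eq:stokes:A}-\eqref{eq:stokes:C}, one can take a manufactured divergence-free velocity with homogeneous boundary values on the unit square and derive the matching forcing term. The particular fields below are our own illustrative choice (a classical manufactured solution), not taken from this paper:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical manufactured solution on Omega = (0,1)^2 (our own choice):
# u vanishes on the boundary and is divergence-free by construction.
u1 = sp.sin(sp.pi*x)**2 * sp.sin(2*sp.pi*y)
u2 = -sp.sin(2*sp.pi*x) * sp.sin(sp.pi*y)**2
p  = sp.sin(2*sp.pi*x) * sp.cos(2*sp.pi*y)   # zero-mean pressure

# incompressibility: div u = 0
div_u = sp.simplify(sp.diff(u1, x) + sp.diff(u2, y))

# forcing term consistent with -Delta u + grad p = f
f1 = sp.simplify(-sp.diff(u1, x, 2) - sp.diff(u1, y, 2) + sp.diff(p, x))
f2 = sp.simplify(-sp.diff(u2, x, 2) - sp.diff(u2, y, 2) + sp.diff(p, y))

print(div_u)  # 0
```

Such manufactured pairs $(\mathbf{u},p)$ are the standard device for measuring convergence rates in the numerical experiments.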
\medskip \noindent The variational formulation of \eqref{eq:stokes:A}-\eqref{eq:stokes:C} reads as: \emph{Find $(\mathbf{u},p)\in\big[H^1_0(\Omega)\big]^2\times L^2_0(\Omega)$} such that \begin{align} a(\mathbf{u},\mathbf{v}) + b(\mathbf{v},p) &= (\mathbf{f},\mathbf{v}) \phantom{0} \qquad\forall\mathbf{v}\in\big[H^1_0(\Omega)\big]^2,\label{eq:stokes:var:A}\\[0.5em] b(\mathbf{u},q) &= 0 \phantom{(\mathbf{f},\mathbf{v})}\qquad\forall q\in L^2_0(\Omega), \label{eq:stokes:var:B} \end{align} where the bilinear forms $a(\cdot,\cdot):\big[\HONE(\Omega)\big]^2\times\big[\HONE(\Omega)\big]^2\to\mathbbm{R}$ and $b(\cdot,\cdot):\big[\HONE(\Omega)\big]^2\times\LTWO(\Omega)\to\mathbbm{R}$ are \begin{align} a(\mathbf{v},\mathbf{w})&:=\int_{\Omega}\nabla\mathbf{v}:\nabla\mathbf{w}\,d\xv \phantom{\int_{\Omega}\DIV\mathbf{v}\,q\,d\xv}\hspace{-1.25cm} \forall\mathbf{v},\mathbf{w}\in\big[\HONE(\Omega)\big]^2, \label{eq:as:def}\\[0.5em] b(\mathbf{v},q)&:=-\int_{\Omega}q\DIV\mathbf{v}\,d\xv \phantom{\int_{\Omega}\nabla\mathbf{v}:\nabla\mathbf{w}\,d\xv}\hspace{-1.25cm} \forall\mathbf{v}\in\big[\HONE(\Omega)\big]^2,\,q\in\LTWO(\Omega). \label{eq:bs:def} \end{align} In the following section, it will be convenient to split these bilinear forms on the mesh elements by rewriting them in the following way: \begin{align} a(\mathbf{v},\mathbf{w})&=\sum_{\P\in\Omega_{\hh}}\as^{\P}(\mathbf{v},\mathbf{w})\quad\textrm{with}\quad \as^{\P}(\mathbf{v},\mathbf{w})=\int_{\P}\nabla\mathbf{v}:\nabla\mathbf{w}\,d\xv, \label{eq:asP:def}\\[0.5em] b(\mathbf{v},q)&=\sum_{\P\in\Omega_{\hh}}\bs^{\P}(\mathbf{v},q)\quad\textrm{with}\quad \bs^{\P}(\mathbf{v},q)=-\int_{\P}q\DIV\mathbf{v}\,d\xv. \label{eq:bsP:def} \end{align} The bilinear form $a(\cdot,\cdot)$ is continuous and coercive.
The bilinear form $b(\cdot,\cdot)$ is continuous and satisfies the inf-sup condition: \begin{align} \inf_{q\in L^2_0(\Omega)\backslash\{0\}}\sup_{\mathbf{v}\in[H^1_0(\Omega)]^2\backslash\{\mathbf{0}\}}\frac{ b(\mathbf{v},q) }{ \norm{\mathbf{v}}{1,\Omega}\,\norm{q}{0,\Omega} }\geq\beta, \label{eq:exact:inf-sup} \end{align} for some real, strictly positive constant $\beta$. These properties imply the existence and uniqueness of the solution pair $(\mathbf{u},p)$, and, hence, the well-posedness of the variational formulation~\eqref{eq:stokes:var:A}-\eqref{eq:stokes:var:B}, and the stability inequality \begin{align*} \norm{\mathbf{u}}{1,\Omega} + \norm{p}{0,\Omega} \leq C\norm{\mathbf{f}}{-1,\Omega}, \end{align*} for a right-hand side forcing term $\mathbf{f}\in\big[\HS{-1}(\Omega)\big]^2$, and a constant $C$ that depends only on $\Omega$, cf.~\cite{Boffi-Brezzi-Fortin:2013,Girault-Raviart:1986,Girault-Raviart:1979}. \medskip Let $k\geq1$ be a given integer. Our virtual element discretizations have the general abstract form: \emph{Find $(\uvh,p_{\hh})\in\Vv^{h}_{k}\times\Qs^{h}_{k-1}$ such that} \begin{align} \as_{\hh}(\uvh,\vvh) + \bs_{\hh}(\vvh,p_{\hh}) &= \bil{\fvh}{\vvh} \phantom{0} \qquad\forall\vvh\in\Vv^{h}_{k}, \label{eq:stokes:vem:A}\\[0.5em] \bs_{\hh}(\uvh,\qs_{\hh}) &= 0 \phantom{\bil{\fvh}{\vvh}}\qquad\forall\qs_{\hh}\in\Qs^{h}_{k-1}.\label{eq:stokes:vem:B} \end{align} Here, $\Vv^{h}_{k}$ is a finite-dimensional conforming subspace of $\big[H^1_0(\Omega)\big]^2$ and $\Qs^{h}_{k-1}$ is a finite-dimensional discontinuous subspace of $L^2_0(\Omega)$. We use the integer $k$, which is a polynomial degree, to denote the accuracy of the method. The vector field $\uvh$ and the scalar field $p_{\hh}$ are the virtual element approximations of $\mathbf{u}$ and $p$, respectively.
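At the algebraic level, the discrete problem \eqref{eq:stokes:vem:A}-\eqref{eq:stokes:vem:B} amounts to a saddle-point linear system with a block $A$ (from $\as_{\hh}$) and a block $B$ (from $\bs_{\hh}$). The following NumPy toy, with randomly generated stand-in matrices rather than an actual VEM assembly, sketches the block structure and the fact that the computed velocity satisfies the discrete divergence constraint $B\mathbf{u}=0$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_u, n_p = 12, 4    # toy numbers of velocity / pressure unknowns

M = rng.standard_normal((n_u, n_u))
A = M @ M.T + n_u*np.eye(n_u)        # SPD stand-in for the form a_h
B = rng.standard_normal((n_p, n_u))  # full-rank stand-in for the form b_h
f = rng.standard_normal(n_u)

# saddle-point system: [A  B^T; B  0] [u; p] = [f; 0]
K = np.block([[A, B.T], [B, np.zeros((n_p, n_p))]])
rhs = np.concatenate([f, np.zeros(n_p)])
sol = np.linalg.solve(K, rhs)
u, p = sol[:n_u], sol[n_u:]

print(np.linalg.norm(B @ u))  # ~ 0: discrete divergence constraint holds
```

The solvability of the toy system mirrors the continuous setting: the system matrix is invertible when $A$ is SPD and $B$ has full row rank, which is the algebraic counterpart of coercivity plus the inf-sup condition.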
The bilinear forms $\as_{\hh}(\cdot,\cdot):\Vv^{h}_{k}\times\Vv^{h}_{k}\to\mathbbm{R}$ and $\bs_{\hh}(\cdot,\cdot):\Vv^{h}_{k}\times\Qs^{h}_{k-1}\to\mathbbm{R}$ are the virtual element approximations to the corresponding bilinear forms $a(\cdot,\cdot)$ and $b(\cdot,\cdot)$. The linear functional $\bil{\fvh}{\cdot}$ is the virtual element approximation of the right-hand side of~\eqref{eq:stokes:var:A}. The definition of all these mathematical objects is discussed in the next section, where we present, analyze and investigate numerically two new virtual element formulations that are suitable to polygonal meshes. \section{Virtual element approximations of the Stokes problem} \label{sec:VEM} We present two different virtual element approximations of the 2-D Stokes problem in variational form. For both formulations, the Stokes pressure is approximated by a piecewise polynomial function that belongs to the space \begin{align} \Qs^{h}_{k-1}&:=\Big\{\qs_{\hh}\in L^2_0(\Omega)\,:\,\restrict{\qs_{\hh}}{\P}\in\PS{k-1}(\P)\quad\forall\P\in\Omega_{\hh}\Big\}=\PS{k-1}(\Omega_{\hh})\cap L^2_0(\Omega),\label{eq:SV:scalar:space:def} \end{align} and the degrees of freedom are the polynomial moments in every element against the polynomials of degree $k-1$. The Stokes velocity field is approximated in the finite-dimensional subspace of $\big[H^1_0(\Omega)\big]^2$ given by \begin{align} \Vv^{h}_{k}:=\Big\{\vvh\in\big[H^1_0(\Omega)\big]^2\,:\,\restrict{\vvh}{\P}\in\Vv^{h}_{k}(\P)\quad\forall\P\in\Omega_{\hh}\Big\}. \label{eq:VEM:global:space} \end{align} This functional space is defined by ``gluing together'' in a conforming way the local virtual element spaces $\Vv^{h}_{k}(\P)$, defined on the mesh elements $\P\in\Omega_{\hh}$.
In particular, we denote the elemental space of the first formulation by $\Vv^{\FO,h}_{k}(\P)$ (\emph{formulation $\textit{F1}$}) and that of the second formulation by $\Vv^{\FT,h}_{k}(\P)$ (\emph{formulation $\textit{F2}$}), and we will use the generic symbols $\Vv^{h}_{k}(\P)$ (local space) and $\Vv^{h}_{k}$ (global space) when we discuss properties that hold regardless of the specific space definition. For both formulations, we also consider the modified definition of the elemental spaces according to the so-called \emph{enhancement strategy}~\cite{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013}. This strategy allows us to compute the $\LTWO$-orthogonal projection onto the local subspace of vector polynomials of degree $k$, i.e., the subspace $\big[\PS{k}(\P)\big]^2\subset\Vv^{h}_{k}(\P)$. This orthogonal projection is required in the formulation of the right-hand side of Eq.~\eqref{eq:stokes:vem:A}. \medskip In the rest of this section, we first review the general construction of the virtual element approximation. Then, for each formulation \begin{description} \item[-] $(i)$ we explicitly define the local virtual element space and its degrees of freedom and discuss their unisolvence; \item[-] $(ii)$ we prove that the following polynomial projections of $\nabla\vvh$, $\DIV\vvh$ and $\vvh$ are computable for every virtual element vector-valued field $\vvh$ using only the degrees of freedom of $\vvh$: $\PizP{k-1}\nabla\vvh\in\big[\PS{k-1}(\P)\big]^{2\times2}$; $\PizP{k-1}\DIV\vvh\in\PS{k-1}(\P)$; $\PinP{k}\vvh\in\big[\PS{k}(\P)\big]^2$; $\PizP{\bar{k}}\vvh\in\big[\PS{\bar{k}}(\P)\big]^2$, where $\bar{k}=\max(0,k-2)$ for the regular space definition or $\bar{k}=k$ for the enhanced space definition (we recall that the formal definitions of these operators are given in~\eqref{eq:proj:H1:P:def}-\eqref{eq:tensor:proj:L2:P:def}).
\end{description} \PGRAPH{Construction of the virtual element bilinear form $\as_{\hh}$} Using these projection operators, we define the virtual element bilinear form $\as_{\hh}(\cdot,\cdot)$ as the sum of local bilinear forms $\as^{\P}_{\hh}(\cdot,\cdot):\Vv^{h}_{k}(\P)\times\Vv^{h}_{k}(\P)\to\mathbbm{R}$ as follows: \begin{align} &\as_{\hh}(\vvh,\wvh) = \sum_{\P\in\Omega_{\hh}}\as^{\P}_{\hh}(\vvh,\wvh) \label{eq:ash:def} \intertext{where~} &\as^{\P}_{\hh}(\vvh,\wvh) = \int_{\P}\PizP{k-1}\nabla\vvh:\PizP{k-1}\nabla\wvh\,d\xv + \Ss^{\P}_{\hh}\big( (1-\Pi^{\P}_{k})\vvh, (1-\Pi^{\P}_{k})\wvh \big). \label{eq:asPh:def} \end{align} Here, $\Ss^{\P}_{\hh}(\cdot,\cdot):\Vv^{h}_{k}(\P)\times\Vv^{h}_{k}(\P)\to\mathbbm{R}$ is the local bilinear form providing the stabilization term, and $\Pi^{\P}_{k}$ denotes either the $\LTWO$-orthogonal projection $\PizP{k}$ (when computable) or the elliptic projection $\PinP{k}$. The term $\Ss^{\P}_{\hh}(\cdot,\cdot)$ can be any symmetric, positive definite bilinear form for which there exist two real, positive constants $\sigma_*$ and $\sigma^*$ independent of $h$ (and $\P$) such that \begin{align*} \sigma_*\as^{\P}(\vvh,\vvh)\leq\Ss^{\P}_{\hh}(\vvh,\vvh)\leq\sigma^*\as^{\P}(\vvh,\vvh) \qquad \forall\vvh\in\Vv^{h}_{k}(\P)\cap\textrm{ker}(\Pi^{\P}_{k}), \end{align*} where $\as^{\P}(\cdot,\cdot)$ is defined in~\eqref{eq:asP:def}. Several possible stabilizations have been proposed over the last few years and are available from the technical literature, cf.~\cite{Mascotto:2018}.
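The algebraic structure of \eqref{eq:asPh:def}, a projected consistency term plus a stabilization acting only on the kernel of the projector, can be mimicked with a small linear-algebra toy. Everything below (the matrix sizes, the random data, the identity ``dofi-dofi'' stabilization) is an illustrative assumption, not the paper's actual assembly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 9, 6   # toy counts: n local dofs, m = dim of the polynomial subspace

D = rng.standard_normal((n, m))   # dof values of a polynomial basis
# Projector onto the polynomial subspace, expressed in dof coordinates
Pi = D @ np.linalg.solve(D.T @ D, D.T)

G = rng.standard_normal((n, n))
G = G + G.T                        # symmetric stand-in for the consistency term
S = np.eye(n)                      # simplest ("dofi-dofi") stabilization

# discrete analog of eq. (asPh): consistency + stabilization on ker(Pi)
Ah = Pi.T @ G @ Pi + (np.eye(n) - Pi).T @ S @ (np.eye(n) - Pi)

# the stabilization vanishes on polynomial dof vectors (the columns of D) ...
print(np.linalg.norm((np.eye(n) - Pi) @ D))  # ~ 0
# ... and Ah inherits the symmetry of its two terms
print(np.linalg.norm(Ah - Ah.T))             # ~ 0
```

Because $(I-\Pi)$ annihilates polynomial dof vectors, the stabilization never pollutes the consistency term, which is the mechanism behind the polynomial consistency property discussed next.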
The local bilinear form $\as^{\P}_{\hh}(\cdot,\cdot)$ has two fundamental properties that are used in the analysis: \begin{itemize} \item \textbf{Polynomial consistency}: for every vector field $\vvh\in\Vv^{h}_{k}(\P)$ and vector polynomial field $\qvh\in\big[\PS{k}(\P)\big]^2$ it holds that: \begin{align} \as^{\P}_{\hh}(\vvh,\qvh) = \as^{\P}(\vvh,\qvh); \label{eq:consistency} \end{align} \medskip \item \textbf{Stability}: there exist two real, positive constants $\alpha_*$ and $\alpha^*$ independent of $h$ such that \begin{align} \alpha_*\as^{\P}(\vvh,\vvh)\leq\as^{\P}_{\hh}(\vvh,\vvh)\leq\alpha^*\as^{\P}(\vvh,\vvh) \qquad\forall\vvh\in\Vv^{h}_{k}(\P). \label{eq:ash:stability} \end{align} Both constants $\alpha_*$ and $\alpha^*$ may depend on the polynomial degree $k$ and the mesh regularity constant $\varrho$. \end{itemize} By adding all the elemental contributions, we find that $\as_{\hh}(\cdot,\cdot)$ is a coercive bilinear form on $\Vv^{h}_{k}\times\Vv^{h}_{k}$: \begin{align} \as_{\hh}(\vvh,\vvh) \geq \alpha_*\snorm{\vvh}{1,\Omega}^2. \label{eq:ash:coercivity} \end{align} A second straightforward consequence of~\eqref{eq:ash:stability} and the symmetry of $\as^{\P}_{\hh}(\cdot,\cdot)$ is that this bilinear form is an inner product on $\Vv^{h}_{k}(\P)\setminus\mathbbm{R}$. Using the Cauchy-Schwarz inequality, it holds that: \begin{align} \as^{\P}_{\hh}(\vvh,\wvh) \leq \big[\as^{\P}_{\hh}(\vvh,\vvh)\big]^{\frac{1}{2}}\,\big[\as^{\P}_{\hh}(\wvh,\wvh)\big]^{\frac{1}{2}} \leq \alpha^*\,\big[\as^{\P}(\vvh,\vvh)\big]^{\frac{1}{2}}\,\big[\as^{\P}(\wvh,\wvh)\big]^{\frac{1}{2}} = \alpha^*\,\snorm{\vvh}{1,\P}\,\snorm{\wvh}{1,\P}, \label{eq:ash:local:continuity} \end{align} which implies that the local bilinear form $\as^{\P}_{\hh}(\cdot,\cdot)$ is continuous on $\Vv^{h}_{k}(\P)\times\Vv^{h}_{k}(\P)$.
The global continuity of $\as_{\hh}(\cdot,\cdot)$ follows on summing all the local terms and using again the Cauchy-Schwarz inequality: \begin{align} \as_{\hh}(\vvh,\wvh) &= \sum_{\P\in\Omega_{\hh}}\as^{\P}_{\hh}(\vvh,\wvh) \leq \alpha^*\,\sum_{\P\in\Omega_{\hh}}\,\snorm{\vvh}{1,\P}\,\snorm{\wvh}{1,\P} \leq \alpha^*\,\Bigg(\sum_{\P\in\Omega_{\hh}}\,\snorm{\vvh}{1,\P}^2\Bigg)^{\frac12}\,\Bigg(\sum_{\P\in\Omega_{\hh}}\,\snorm{\wvh}{1,\P}^2\Bigg)^{\frac12} \nonumber\\[0.5em] &= \alpha^*\,\snorm{\vvh}{1,\Omega}\,\snorm{\wvh}{1,\Omega}. \label{eq:ash:global:continuity} \end{align} \PGRAPH{Construction of the virtual element bilinear form $\bs_{\hh}$} Similarly, we define the virtual element bilinear form $\bs_{\hh}(\cdot,\cdot)$ as the sum of local bilinear forms $\bs^{\P}_{\hh}(\cdot,\cdot):\Vv^{h}_{k}(\P)\times\PS{k-1}(\P)\to\mathbbm{R}$ as follows: \begin{align} \bs_{\hh}(\vvh,\qs_{\hh}) = \sum_{\P\in\Omega_{\hh}}\bs^{\P}_{\hh}(\vvh,\qs_{\hh}) \qquad\textrm{where}\qquad \bs^{\P}_{\hh}(\vvh,\qs_{\hh}) = -\int_{\P}\qs_{\hh}\PizP{k-1}\DIV\vvh\,d\xv. \label{eq:bsh:def} \end{align} From the definition of the orthogonal projection operator $\PizP{k-1}$, it immediately follows that \begin{align} \bs^{\P}_{\hh}(\vvh,\qs_{\hh})=\bs^{\P}(\vvh,\qs_{\hh}) \qquad\forall\vvh\in\Vv^{h}_{k}(\P),\,\qs_{\hh}\in\PS{k-1}(\P). \label{eq:bsPh=bsP} \end{align} If we add this relation over all the mesh elements, we find that \begin{align} \bs_{\hh}(\vvh,\qs_{\hh}) = b(\vvh,\qs_{\hh}) \qquad\forall\vvh\in\Vv^{h}_{k},\,\qs_{\hh}\in\PS{k-1}(\Omega_{\hh}), \label{eq:bsh=bs} \end{align} which will be used in the analysis of the next section. \medskip \begin{remark} Since $\PizP{k-1}(\DIV\uvh)$ is a polynomial of degree $k-1$ on every element $\P$, equation~\eqref{eq:stokes:vem:B} is equivalent to requiring that $\PizP{k-1}(\DIV\uvh)=0$ in every $\P$. This condition is the discrete analog in $\PS{k-1}(\P)$ of the incompressibility condition $\DIV\mathbf{u}=0$.
\end{remark} \PGRAPH{Construction of the virtual element right-hand side} In every polygonal element $\P$, we approximate the right-hand side vector $\mathbf{f}$ with its polynomial projection $\fvh:=\PizP{\bar{k}}\mathbf{f}$ onto the local polynomial space $\PS{\bar{k}}(\P)$. We consider two possible choices of $\bar{k}$ given the integer $k\geq1$: \begin{itemize} \item[$\bullet$] $\bar{k}=\max(k-2,0)$: this is the setting proposed in the original paper~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013}; \item[$\bullet$] $\bar{k}=k$: this is the setting proposed in Ref.~\cite{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013}, which requires the enhanced definition of the virtual element space. We discuss the enhanced definition of the virtual element space of both formulations in the next sections. \end{itemize} \medskip Finally, the right-hand side of equation~\eqref{eq:stokes:vem:A} is given by \begin{align} \bil{\fvh}{\vvh} = \sum_{\P\in\Omega_{\hh}} \int_{\P}\PizP{\bar{k}}\mathbf{f}\cdot\vvh\,d\xv = \sum_{\P\in\Omega_{\hh}} \int_{\P}\mathbf{f}\cdot\PizP{\bar{k}}\vvh\,d\xv, \label{eq:fvh:def} \end{align} where the second equality follows on applying the definition of the orthogonal projector $\PizP{\bar{k}}$. We recall the following results pertaining to these two possible approximations of the right-hand side, which follow on noting that $\big(1-\PizP{\bar{k}}\big)$ is orthogonal to $\PizP{0}$ in the $\LTWO$-inner product.
Assuming $\mathbf{f}\in\big[\HS{s}(\Omega)\big]^{\dims}$ with $1\leq s\leq\bar{k}$, we find that \begin{align} \ABS{\bil{\fvh}{\vvh}-\scal{\mathbf{f}}{\vvh}} &= \ABS{ \sum_{\P\in\Omega_{\hh}}\int_{\P}\big(\PizP{\bar{k}}\mathbf{f}-\mathbf{f}\big)\cdot\vvh\,d\xv } \leq \sum_{\P\in\Omega_{\hh}}\ABS{ \int_{\P}\big(\PizP{\bar{k}}\mathbf{f}-\mathbf{f}\big)\cdot\big(\vvh-\PizP{0}\vvh\big)\,d\xv } \nonumber\\[0.5em] &\leq \sum_{\P\in\Omega_{\hh}} \norm{\PizP{\bar{k}}\mathbf{f}-\mathbf{f}}{0,\P}\,\norm{\vvh-\PizP{0}\vvh}{0,\P} \leq Ch^{s+1}\norm{\mathbf{f}}{s,\Omega}\,\snorm{\vvh}{1,\Omega}. \label{eq:fv:bound:0} \end{align} For $\bar{k}=0$ and assuming $\mathbf{f}\in\big[\LTWO(\Omega)\big]^{\dims}$, we find that \begin{align} \ABS{\bil{\fvh}{\vvh}-\scal{\mathbf{f}}{\vvh}} &= \ABS{ \sum_{\P\in\Omega_{\hh}}\int_{\P}\big(\PizP{0}\mathbf{f}-\mathbf{f}\big)\cdot\vvh\,d\xv } \leq \sum_{\P\in\Omega_{\hh}}\ABS{ \int_{\P}\big(\PizP{0}\mathbf{f}-\mathbf{f}\big)\cdot\big(\vvh-\PizP{0}\vvh\big)\,d\xv } \nonumber\\[0.5em] &\leq \sum_{\P\in\Omega_{\hh}} \norm{\PizP{0}\mathbf{f}-\mathbf{f}}{0,\P}\,\norm{\vvh-\PizP{0}\vvh}{0,\P} \leq Ch\norm{\mathbf{f}}{0,\Omega}\,\snorm{\vvh}{1,\Omega}. \label{eq:fv:bound:1} \end{align} \subsection{Formulation $\textit{F1}$} \label{subsec:first:formulation} We set the virtual element space for the velocity vector-valued fields of the first formulation as \begin{align*} \Vvhkp(\P)=\Big[\Vshkp(\P)\Big]^2, \end{align*} where the corresponding scalar virtual element space is given by \begin{align} \Vshkp(\P):=\Big\{\vsh\in\HONE(\P)\,:\, \restrict{\vsh}{\partial\P}\in\CS{0}(\partial\P),\, \restrict{\vsh}{\E}\in\PS{k+1}(\E)\,\forall\E\in\partial\P,\, \Delta\vsh\in\PS{k-2}(\P) \Big\}.
\label{eq:FO:regular-space:def} \end{align} With a small abuse of notation, we denote the enhanced version of the local space with the same symbol $\Vshkp(\P)$, and we consider the following definition: \begin{align} \Vshkp(\P):=\Big\{\vsh\in\HONE(\P)\,:\, &\restrict{\vsh}{\partial\P}\in\CS{0}(\partial\P),\, \restrict{\vsh}{\E}\in\PS{k+1}(\E)\,\forall\E\in\partial\P,\, \Delta\vsh\in\PS{k}(\P),\nonumber\\[0.25em] &\int_{\P}\big(\vsh-\PinP{k}\vsh\big)\qs_{\hh}\,d\xv=0\quad\forall\qs_{\hh}\in\PS{k}(\P)\backslash{\PS{k-2}(\P)} \Big\}, \label{eq:FO:enhanced-space:def} \end{align} where $\PS{k}(\P)\backslash{\PS{k-2}(\P)}$ is the space of polynomials of degree exactly equal to $k$ or $k-1$. This definition uses the elliptic projection operator $\PinP{k}$, which is computable from the degrees of freedom defined below, cf. Lemma~\ref{lemma:3}. \medskip \begin{remark} The virtual element space~\eqref{eq:FO:regular-space:def} and its modified version~\eqref{eq:FO:enhanced-space:def} differ from the spaces respectively defined in References~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} and~\cite{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013} because all the edge traces of a virtual element function are polynomials of degree $k+1$ instead of $k$. This definition is a special case of the generalized local virtual element space that is considered in~\cite[Section~3]{BeiraodaVeiga-Vacca:2020-arXiv} for the discretization of the Poisson equation. In fact, the local scalar space~\eqref{eq:FO:regular-space:def} can be obtained by setting $k_{\partial}=k+1$ in~\cite[Eq.~(7)]{BeiraodaVeiga-Vacca:2020-arXiv} (with the same meaning for the parameter $k$). 
\end{remark} \medskip \begin{remark} Assuming that the trace on the edges of the elemental boundary is a polynomial of degree $k+1$ instead of $k$ does not change the convergence rate of the method and implies that an additional degree of freedom is needed for each velocity component on every edge, thus increasing the complexity and the computational costs. However, it makes the proof of the inf-sup condition almost straightforward, which is crucial to prove the well-posedness and convergence of the method. So, this formulation allows us to build a stable numerical approximation to the Stokes problem that holds on any kind of polygonal mesh, including triangular and square meshes, for all orders of accuracy $k\geq1$. \end{remark} \begin{figure}[!t] \centering \begin{tabular}{ccc} \includegraphics[width=0.28\textwidth]{fig00.pdf} &\qquad \includegraphics[width=0.28\textwidth]{fig01.pdf} &\qquad \includegraphics[width=0.28\textwidth]{fig02.pdf} \\[0.5em] \hspace{2mm}$\mathbf{k=1}$ & \hspace{9mm}$\mathbf{k=2}$ & \hspace{9mm}$\mathbf{k=3}$ \end{tabular} \caption{First virtual element formulation: degrees of freedom of each component of the virtual element vector-valued fields (left) and the scalar polynomial fields (right) of a hexagonal element for the accuracy degrees $k=1,2,3$.
Nodal values at the polygonal vertices and edge polynomial moments are marked by a circular bullet; cell polynomial moments are marked by a square bullet.} \medskip \label{fig:dofs:FO} \end{figure} \medskip \noindent The degrees of freedom of this formulation for the spaces defined in~\eqref{eq:FO:regular-space:def} and~\eqref{eq:FO:enhanced-space:def} are given by: \medskip \begin{description} \item[-]\DOFS{F1}{a} for $k\geq1$, the vertex values $\vsh(\X_{\V})$, $\V\in\partial\P$; \item[-]\DOFS{F1}{b} for $k\geq1$, the polynomial edge moments of $\vsh$ \begin{align} \frac{1}{\ABS{\E}}\int_{\E}\vsh(\ss)\qs_{\hh}(\ss)\,ds\quad\forall\qs_{\hh}\in\PS{k-1}(\E) \end{align} for every edge $\E\in\partial\P$; \item[-]\DOFS{F1}{c} for $k\geq2$, the polynomial cell moments of $\vsh$ \begin{align} \frac{1}{\ABS{\P}}\int_{\P}\vsh(\mathbf{x})\qs_{\hh}(\mathbf{x})\,d\xv\quad\forall\qs_{\hh}\in\PS{k-2}(\P). \end{align} \end{description} Figure~\ref{fig:dofs:FO} shows the degrees of freedom for each component of the velocity vector and the pressure for $k=1,2,3$ on a hexagonal element. \medskip \noindent \begin{lemma}[Unisolvence of the degrees of freedom] \label{lemma:F1:unisolvence} The degrees of freedom \DOFS{F1}{a}, \DOFS{F1}{b}, and \DOFS{F1}{c} are unisolvent in the space $\Vshkp(\P)$ for both the definitions given in~\eqref{eq:FO:regular-space:def} and~\eqref{eq:FO:enhanced-space:def}. \end{lemma} \begin{proof} The proof of the unisolvence of the degrees of freedom \DOFS{F1}{a}-\DOFS{F1}{c} for $\Vshkp(\P)$ follows by adapting the arguments used in \cite[Proposition~1]{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} for the space defined in~\eqref{eq:FO:regular-space:def} and~\cite[Proposition~2]{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013} for the space defined in~\eqref{eq:FO:enhanced-space:def}. We briefly sketch the proof of the unisolvence for the space defined in~\eqref{eq:FO:regular-space:def}.
For every virtual element function in $\Vshkp(\P)$, we consider the integration by parts: \begin{align} \int_{\P}\ABS{\nabla\vsh}^2\,d\xv = -\int_{\P}\vsh\,\Delta\vsh\,d\xv + \sum_{\E\in\partial\P}\int_{\E}\vsh\,\norE\cdot\nabla\vsh\,ds = \TERM{I}{} + \TERM{II}{}. \end{align} Now, assume that the degrees of freedom \DOFS{F1}{a}, \DOFS{F1}{b}, and \DOFS{F1}{c} are all zero. Then, \begin{description} \item[-] for $k=1$, it holds that $\Delta\vsh=0$; for $k\geq2$, it holds that $\Delta\vsh$ is a polynomial of degree $k-2$ and $\TERM{I}{}$ is a degree of freedom, hence it is zero by hypothesis; \item[-] the trace of $\vsh$ along each edge $\E\in\partial\P$ is a polynomial of degree $k+1$ that can be recovered by the interpolation of the degrees of freedom \DOFS{F1}{a} and \DOFS{F1}{b}. Since these degrees of freedom are zero by hypothesis, their trace interpolation is zero. \end{description} Consequently, $\nabla\vsh=0$, which implies that $\vsh$ is constant on $\P$, and this constant is zero since it coincides with the value of all its degrees of freedom, which we assume to be zero. The proof of the unisolvence for the space defined in~\eqref{eq:FO:regular-space:def} is completed by noting that the number of the degrees of freedom equals the dimension of the space $\Vshkp(\P)$. Similar modifications to the argument of~\cite[Proposition~2]{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013} make it possible to prove the unisolvence for the enhanced virtual element space defined in~\eqref{eq:FO:enhanced-space:def}. \end{proof} \medskip \noindent \begin{lemma} \label{lemma:2} Let $\P$ be an element of mesh $\Omega_{\hh}$. For every virtual element function $\vsh\in\Vshkp(\P)$, the polynomial projection $\PizP{k-1}\nabla\vsh$ is computable using the degrees of freedom \DOFS{F1}{a}, \DOFS{F1}{b}, and \DOFS{F1}{c} of $\vsh$.
\end{lemma} \begin{proof} To prove that $\PizP{k-1}\big(\nabla\vsh\big)$ is computable, we explicitly prove that $\PizP{k-1}\big(\partial\vsh\slash{\partial x}\big)$ is computable. Then, the same argument can be applied to prove that $\PizP{k-1}(\partial\vsh\slash{\partial y})$ is also computable. To this end, we start from the definition of the orthogonal projection and integrate by parts: \begin{align} \int_{\P}\qs_{\hh}\PizP{k-1}\frac{\partial\vsh}{\partial x}\,d\xv = \int_{\P}\qs_{\hh}\frac{\partial\vsh}{\partial x}\,d\xv = -\int_{\P}\vsh\frac{\partial\qs_{\hh}}{\partial x}\,d\xv + \sum_{\E\in\partial\P}n_x\int_{\E}\vsh\qs_{\hh}\,ds = \TERM{I}{} + \TERM{II}{}, \end{align} which holds for every $\qs_{\hh}\in\PS{k-1}(\P)$. Term $\TERM{I}{}$ is computable since $\partial\qs_{\hh}\slash{\partial x}\in\PS{k-2}(\P)$ and this integral is determined by the degrees of freedom of $\vsh$ in \DOFS{F1}{c}. Term $\TERM{II}{}$ is computable since the polynomial $\qs_{\hh}$ is known and $\restrict{\vsh}{\E}\in\PS{k+1}(\E)$ can be interpolated from the degrees of freedom of $\vsh$ given by \DOFS{F1}{a} and \DOFS{F1}{b} on every edge $\E\in\partial\P$. \end{proof} \medskip \noindent \begin{remark} \label{remark:2} For all scalar virtual element functions $\vsh\in\Vshkp(\P)$, the polynomial projections $\PizP{k-1}\big(\partial\vsh\slash{\partial x}\big)$ and $\PizP{k-1}\big(\partial\vsh\slash{\partial y}\big)$ forming $\PizP{k-1}\nabla\vsh$ are computable by using the degrees of freedom of $\vsh$. Consequently, the polynomial projections $\PizP{k-1}\nabla\vvh\in\big[\PS{k-1}(\P)\big]^{2\times2}$ and $\PizP{k-1}\DIV\vvh\in\PS{k-1}(\P)$ are computable for all virtual vector-valued fields $\vvh\in\big[\Vshkp(\P)\big]^2$. \end{remark} \medskip \noindent \begin{lemma} \label{lemma:3} Let $\P$ be an element of mesh $\Omega_{\hh}$. For all virtual element functions $\vsh\in\Vshkp(\P)$, the polynomial projection $\PinP{k}\vsh\in\PS{k}(\P)$ is computable from the degrees of freedom of $\vsh$.
\end{lemma} \begin{proof} The same argument of Lemma~\ref{lemma:2} is used here. We start from the definition of the elliptic projection and we integrate by parts: \begin{align} \int_{\P}\nabla\PinP{k}\vsh\cdot\nabla\qs_{\hh}\,d\xv = \int_{\P}\nabla\vsh\cdot\nabla\qs_{\hh}\,d\xv = -\int_{\P}\vsh\Delta\qs_{\hh}\,d\xv + \sum_{\E\in\partial\P}\int_{\E}\vsh\norE\cdot\nabla\qs_{\hh}\,ds = \TERM{I}{} + \TERM{II}{}. \label{eq:lemma3:aux:00} \end{align} Since in~\eqref{eq:lemma3:aux:00} we take $\qs_{\hh}\in\PS{k}(\P)$ and $\Delta\qs_{\hh}\in\PS{k-2}(\P)$, term $\TERM{I}{}$ is computable using the degrees of freedom \DOFS{F1}{c} of $\vsh$. Similarly, since $\restrict{\vsh}{\E}\in\PS{k+1}(\E)$ is computable from an interpolation of the degrees of freedom \DOFS{F1}{a} and \DOFS{F1}{b}, term $\TERM{II}{}$ is computable. \end{proof} \medskip \noindent \begin{remark} $\PinP{k}\vvh$ is computable componentwise for every vector-valued virtual element field $\vvh\in\Vvhkp(\P)$ and is used in the stabilization term of $\as^{\P}_{\hh}(\cdot,\cdot)$, cf.~\eqref{eq:asPh:def}. \end{remark} \subsection{Formulation $\textit{F2}$} \label{subsec:second:formulation} We denote the tangential and normal components of $\vvh$ along the edge $\E\in\partial\P$ by $\restrict{\vvh}{\E}\cdot\tngE$ and $\restrict{\vvh}{\E}\cdot\norE$, where $\tngE$ and $\norE$ are the unit tangent and normal vectors of $\E$. The virtual element space of the second formulation is defined as: \begin{align} \Vv^{\textit{F2},h}_{k}(\P):=\Big\{ \vvh\in\big[\HONE(\P)\big]^2:\, \restrict{\vvh}{\partial\P}\in\big[\CS{0}(\partial\P)\big]^2, \restrict{\vvh}{\E}\cdot\tngE\in\PS{k}(\E), \restrict{\vvh}{\E}\cdot\norE\in\PS{k+1}(\E), \Delta\vvh\in\big[\PS{k-2}(\P)\big]^2 \Big\}.
\label{eq:FT:regular-space:def} \end{align} With a small abuse of notation we denote the ``enhanced'' version of this space with the same symbol ``$\Vv^{\textit{F2},h}_{k}$'': \begin{align} \Vv^{\textit{F2},h}_{k}(\P):=\Big\{ \vvh\in\big[\HONE(\P)\big]^2:\, & \restrict{\vvh}{\partial\P}\in\big[\CS{0}(\partial\P)\big]^2, \restrict{\vvh}{\E}\cdot\tngE\in\PS{k}(\E), \restrict{\vvh}{\E}\cdot\norE\in\PS{k+1}(\E), \Delta\vvh\in\big[\PS{k}(\P)\big]^2,\, \nonumber\\[0.25em] & \int_{\P}\big(\vvh-\PinP{k}\vvh\big)\cdot\qvh\,d\xv=0\quad\forall\qvh\in\big[\PS{k}(\P)\backslash{\PS{k-2}(\P)}\big]^2 \Big\}, \label{eq:FT:enhanced-space:def} \end{align} where $\PS{k}(\P)\backslash{\PS{k-2}(\P)}$ is the space of polynomials of degree exactly equal to $k$ or $k-1$. This definition uses the elliptic projection operator $\PinP{k}$, which is computable from the degrees of freedom defined below, cf.~Lemma~\ref{lemma:6}. Note that the normal component of $\vvh$ is a polynomial of degree $k+1$ while the tangential component is a polynomial of degree $k$. These conditions are reflected by the following degrees of freedom, which are the same for the virtual element functions defined in both~\eqref{eq:FT:regular-space:def} and~\eqref{eq:FT:enhanced-space:def}: \begin{figure}[!t] \centering \begin{tabular}{ccc} \includegraphics[width=0.28\textwidth]{fig03.pdf} &\qquad \includegraphics[width=0.28\textwidth]{fig04.pdf} &\qquad \includegraphics[width=0.28\textwidth]{fig05.pdf} \\[0.5em] \hspace{2mm}$\mathbf{k=1}$ & \hspace{9mm}$\mathbf{k=2}$ & \hspace{9mm}$\mathbf{k=3}$ \end{tabular} \caption{Second virtual element formulation: degrees of freedom of the virtual element vector-valued fields (left) and the scalar polynomial fields (right) of a hexagonal element for the accuracy degrees $k=1,2,3$.
Nodal values are marked by a circular bullet at the vertices; the edge moments of the tangential and normal components of the vector-valued fields are respectively marked by circular bullets and arrows in the interior of the edges. Cell polynomial moments for both the vector and scalar fields are marked by a square bullet.} \medskip \label{fig:dofs:FT} \end{figure} \medskip \begin{description} \item[-]\DOFS{F2}{a} for $k\geq1$, the vertex values $\vvh(\X_{\V})$; \item[-]\DOFS{F2}{b} for $k\geq1$, the polynomial edge moments of $\vvh\cdot\norE$: \begin{align} \frac{1}{\ABS{\E}}\int_{\E}\vvh\cdot\norE\qs_{\hh}\,ds \qquad \forall \qs_{\hh}\in\PS{k-1}(\E) \end{align} for every edge $\E\in\partial\P$; \item[-]\DOFS{F2}{c} for $k\geq2$, the polynomial edge moments of $\vvh\cdot\tngE$: \begin{align} \frac{1}{\ABS{\E}}\int_{\E}\vvh\cdot\tngE\qs_{\hh}\,ds \qquad \forall \qs_{\hh}\in\PS{k-2}(\E) \end{align} for every edge $\E\in\partial\P$; \item[-]\DOFS{F2}{d} for $k\geq2$, the polynomial cell moments of $\vvh$: \begin{align} \frac{1}{\ABS{\P}}\int_{\P}\vvh\cdot\qvh\,d\xv \qquad \forall\qvh\in\big[\PS{k-2}(\P)\big]^2. \end{align} \end{description} Figure~\ref{fig:dofs:FT} shows the degrees of freedom of the velocity vector and the pressure for $k=1,2,3$ on a hexagonal element. \medskip \noindent \begin{remark} \label{remark:4} In this virtual element space, the normal component of $\vvh$ has an increased polynomial degree. For example, for $k=1$ the vector field $\vvh\in\Vvhpp{1}(\P)$ is such that $\vvh\cdot\norE\in\PS{2}(\E)$ and $\vvh\cdot\tngE\in\PS{1}(\E)$ for every edge $\E\in\partial\P$. These degrees of freedom are the same as those used in the low-order MFD method of Reference~\cite{BeiraodaVeiga-Gyrya-Lipnikov-Manzini:2009}, and our VEM is actually a reformulation of this mimetic scheme in the variational setting and a generalization to orders of accuracy that are higher than one.
The analysis of the mimetic method and its extension to the three-dimensional case is presented in~\cite{BeiraodaVeiga-Lipnikov-Manzini:2010} and considers the additional edge degrees of freedom as associated with edge bubble functions. \end{remark} \medskip \noindent \begin{remark} \label{remark:5} Using the degrees of freedom \DOFS{F2}{a} and \DOFS{F2}{b} the edge traces $\vvh\cdot\norE\in\PS{k+1}(\E)$ and $\vvh\cdot\tngE\in\PS{k}(\E)$ are computable by solving a suitable interpolation problem. Consider the edge $\E=(\xVp,\xV^{\prime\prime})$ defined by the vertices $\xVp$ and $\xV^{\prime\prime}$. Then, \begin{description} \item[-] to interpolate $\vvh\cdot\norE\in\PS{k+1}(\E)$ we need $k+2$ independent pieces of information, which are provided by $\vvh(\xVp)\cdot\norE$, $\vvh(\xV^{\prime\prime})\cdot\norE$ from the degrees of freedom \DOFS{F2}{a} and by the $k$ moments of $\vvh\cdot\norE$ from the degrees of freedom \DOFS{F2}{b}; \item[-] to interpolate $\vvh\cdot\tngE\in\PS{k}(\E)$ we need $k+1$ independent pieces of information, which are provided by $\vvh(\xVp)\cdot\tngE$, $\vvh(\xV^{\prime\prime})\cdot\tngE$ from the degrees of freedom \DOFS{F2}{a} and by the $k-1$ moments of $\vvh\cdot\tngE$ from the degrees of freedom \DOFS{F2}{c}. \end{description} \end{remark} \medskip \noindent \begin{lemma}[Unisolvence of the degrees of freedom] \label{lemma:BF:unisolvence} The degrees of freedom \DOFS{F2}{a}-\DOFS{F2}{d} are unisolvent for both the regular and enhanced definition of $\Vv^{\textit{F2},h}_{k}(\P)$, respectively given in~\eqref{eq:FT:regular-space:def} and~\eqref{eq:FT:enhanced-space:def}. \end{lemma} \begin{proof} The argument that we use to prove the assertion of the lemma is similar to the one used to prove the unisolvency of the degrees of freedom of the first formulation. First, consider a vector field in the virtual element space defined in~\eqref{eq:FT:regular-space:def}. 
An integration by parts yields: \begin{align} \int_{\P}\abs{\nabla\vvh}^2\,d\xv = -\int_{\P}\vvh\cdot\Delta\vvh\,d\xv + \sum_{\E\in\partial\P}\int_{\E}\vvh\cdot\nabla\vvh\cdot\norE\,ds = \TERM{I}{} + \TERM{II}{}. \end{align} Next, we assume that all the degrees of freedom in \DOFS{F2}{a}, \DOFS{F2}{b}, \DOFS{F2}{c}, and \DOFS{F2}{d} are zero. Then, \begin{description} \item[-] $\TERM{I}{}$ is zero because $\Delta\vvh\in\big[\PS{k-2}(\P)\big]^2$, and, hence, it is a degree of freedom of type \DOFS{F2}{d} for $k\geq2$ or zero for $k=1$; \item[-] to see that $\TERM{II}{}$ is also zero, we use the orthogonal decomposition $\vvh=(\vvh\cdot\norE)\norE+(\vvh\cdot\tngE)\tngE$ and note that $\vvh\cdot\norE=0$ and $\vvh\cdot\tngE=0$ since these traces are computed by the interpolation of the degrees of freedom \DOFS{F2}{a}, \DOFS{F2}{b}, and \DOFS{F2}{c}, and these data are zero by hypothesis. Therefore, $\restrict{\vvh}{\E}=0$ on every edge $\E\in\partial\P$ and all the edge integrals of $\TERM{II}{}$ must be zero. \end{description} It follows that $\nabla\vvh=0$, i.e., all the spatial derivatives of the components of $\vvh$ are zero. Therefore, the vector-valued field $\vvh$ is constant on $\P$ and since all its degrees of freedom are zero the constant must be zero. The assertion of the lemma is finally proved by noting that the number of the degrees of freedom is equal to the dimension of $\Vv^{\textit{F2},h}_{k}(\P)$. The unisolvence of the degrees of freedom \DOFS{F2}{a}-\DOFS{F2}{d} for the enhanced space defined in~\eqref{eq:FT:enhanced-space:def} follows by similarly adjusting the argument that is used in the proof of~\cite[Proposition~2]{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013}. \end{proof} \medskip \noindent \begin{lemma} \label{lemma:5} Let $\P$ be an element of mesh $\Omega_{\hh}$. 
For every virtual element function $\vvh\in\Vv^{\textit{F2},h}_{k}(\P)$, the polynomial projection $\PizP{k-1}\nabla\vvh$ is computable from the degrees of freedom \DOFS{F2}{a}-\DOFS{F2}{d} of $\vvh$. \end{lemma} \begin{proof} We start from the definition of the orthogonal projection: \begin{align} \int_{\P}\PizP{k-1}\nabla\vvh:\bm\tau_{\hh}\,d\xv = \int_{\P}\nabla\vvh:\bm\tau_{\hh}\,d\xv \qquad\forall\bm\tau_{\hh}\in[\PS{k-1}(\P)]^{2\times2}. \end{align} To prove that the right-hand side is computable from the degrees of freedom of $\vvh$, we integrate by parts: \begin{align} \int_{\P}\nabla\vvh:\bm\tau_{\hh}\,d\xv = -\int_{\P}\vvh\cdot\DIV\bm\tau_{\hh}\,d\xv +\sum_{\E\in\partial\P}\int_{\E}\vvh\cdot\bm\tau_{\hh}\cdot\norE\,ds = \TERM{I}{} + \TERM{II}{}. \end{align} Since $\DIV\bm\tau_{\hh}\in\big[\PS{k-2}(\P)\big]^2$, term $\TERM{I}{}$ is computable using the values \DOFS{F2}{d} of $\vvh$. Then, we observe that the traces $\restrict{\vvh}{\E}\cdot\norE\in\PS{k+1}(\E)$ and $\restrict{\vvh}{\E}\cdot\tngE\in\PS{k}(\E)$ are computable from the degrees of freedom \DOFS{F2}{a}-\DOFS{F2}{c}. On using the decomposition $\vvh=(\vvh\cdot\norE)\norE+(\vvh\cdot\tngE)\tngE$, we conclude that the trace $\restrict{\vvh}{\E}$ is computable. Therefore, all edge integrals and ultimately $\TERM{II}{}$ are computable. \end{proof} \medskip \noindent \begin{lemma} \label{lemma:6} Let $\P$ be an element of mesh $\Omega_{\hh}$. For every virtual element function $\vvh\in\Vv^{\textit{F2},h}_{k}(\P)$, the polynomial projection $\PinP{k}\vvh\in\big[\PS{k}(\P)\big]^{2}$ is computable from the degrees of freedom of $\vvh$. \end{lemma} \begin{proof} Consider the definition of the elliptic projection operator: \begin{align} \int_{\P}\nabla\PinP{k}\vvh:\nabla\qvh\,d\xv = \int_{\P}\nabla\vvh:\nabla\qvh\,d\xv \qquad\forall\qvh\in\big[\PS{k}(\P)\big]^2.
\end{align} We integrate the right-hand side by parts: \begin{align} \int_{\P}\nabla\vvh:\nabla\qvh\,d\xv = - \int_{\P}\vvh\cdot\Delta\qvh\,d\xv + \sum_{\E\in\partial\P}\int_{\E}\vvh\cdot\nabla\qvh\cdot\norE\,ds = \TERM{I}{} + \TERM{II}{}. \end{align} Since we take $\qvh\in\big[\PS{k}(\P)\big]^2$ and $\Delta\qvh\in\big[\PS{k-2}(\P)\big]^2$, term $\TERM{I}{}$ is the moment of $\vvh$ against a vector polynomial of degree $k-2$ and is, thus, computable using the degrees of freedom of $\vvh$ provided by \DOFS{F2}{d}. Then, we observe that the traces $\restrict{\vvh}{\E}\cdot\norE\in\PS{k+1}(\E)$ and $\restrict{\vvh}{\E}\cdot\tngE\in\PS{k}(\E)$ are computable from the degrees of freedom \DOFS{F2}{a}-\DOFS{F2}{c}. On using the decomposition $\vvh=(\vvh\cdot\norE)\norE+(\vvh\cdot\tngE)\tngE$, the trace $\restrict{\vvh}{\E}$ is also computable, cf. Remark~\ref{remark:5}. Therefore, all edge integrals and ultimately $\TERM{II}{}$ are computable. \end{proof} \section{Well-posedness and convergence analysis} \label{sec:convergence} In this section, we first prove the well-posedness of the two virtual element formulations of Section~\ref{sec:VEM}. Then, we prove that these two formulations are convergent and we derive error estimates in the energy norm and the $\LTWO$ norm for the velocity field and the $\LTWO$ norm for the pressure field. The analysis is the same for both formulations $\textit{F1}$ and $\textit{F2}$, regardless of using the non-enhanced or the enhanced definition of the virtual element space. For this reason, we use the generic symbol $\Vv^{h}_{k}(\P)$ to refer to the two virtual element spaces introduced in Section~\ref{sec:VEM}, i.e., $\Vvhkp(\P)$ and $\Vv^{\textit{F2},h}_{k}(\P)$.
Hereafter, we use the capital letter ``$C$'' to denote a generic constant that is independent of $h$ but may depend on the other parameters of the discretization, e.g., the polynomial degree $k$, the mesh regularity constant $\rho$, the stability constants $\alpha_*$ and $\alpha^*$, etc. The constant $C$ may take a different value at each occurrence. In some mathematical proofs, we may find it convenient to write ``$A\STACKON{=}{(X)}B$'' to mean that ``$A=B$ follows from equation (X)'', i.e., to stack the equation reference number on the symbols ``$=$'', ``$\leq$'', ``$\geq$'', etc. \subsection{Well-posedness of the virtual element approximation} \label{subsec:well-posedness} To prove the well-posedness of our formulations, we must verify that the virtual element space $\Vv^{h}_{k}$ and the discontinuous polynomial space $\Qs^{h}_{k-1}$ are such that: $(i)$ the bilinear form $\as_{\hh}(\cdot,\cdot)$ is bounded and coercive; $(ii)$ the bilinear form $\bs_{\hh}(\cdot,\cdot)$ is bounded and satisfies the inf-sup condition. Properties $(i)$ are an immediate consequence of the stability property~\eqref{eq:ash:stability} and the Cauchy-Schwarz inequality, which imply~\eqref{eq:ash:coercivity} and~\eqref{eq:ash:global:continuity}. We rewrite these two inequalities here for the reader's convenience: \begin{align} \ABS{\as_{\hh}(\vvh,\wvh)} &\leq \alpha^*\snorm{\vvh}{1,\Omega}\,\snorm{\wvh}{1,\Omega} \phantom{\as_{\hh}(\vvh,\vvh)}\hspace{-0.5cm} \forall\vvh,\,\wvh\in\Vv^{h}_{k}, \label{eq:continuity} \\[0.5em] \alpha_*\snorm{\vvh}{1,\Omega}^2 &\leq \as_{\hh}(\vvh,\vvh) \phantom{C\snorm{\vvh}{1,\Omega}\,\snorm{\wvh}{1,\Omega}}\hspace{-0.5cm} \forall\vvh\in\Vv^{h}_{k}.
\label{eq:coercivity} \end{align} Similarly, we can readily prove the boundedness of the bilinear form $\bs_{\hh}(\cdot,\cdot)$ by using the Cauchy-Schwarz inequality, so that \begin{align*} \ABS{\bs_{\hh}(\vvh,\qs_{\hh})}\leq\sqrt{2}\snorm{\vvh}{1,\Omega}\,\norm{\qs_{\hh}}{0,\Omega} \qquad\forall\vvh\in\Vv^{h}_{k},\,\qs_{\hh}\in\PS{k-1}(\Omega_{\hh}). \end{align*} Instead, the discrete inf-sup condition is proved in the following lemma, which relies on the construction of a suitable Fortin operator, see~\cite{Boffi-Brezzi-Fortin:2013}. The construction of this operator is the same for both the regular and the enhanced versions of formulations $\textit{F1}$ and $\textit{F2}$. \medskip \begin{lemma}[Inf-sup condition] \label{lemma:inf-sup:condition} The bilinear form $\bs_{\hh}(\cdot,\cdot)$ is \emph{inf-sup stable} on $\Vv^{h}_{k}\times\Qs^{h}_{k-1}$ for the formulations \textit{F1}{} and \textit{F2}{} and for any given polynomial degree $k\geq1$. \end{lemma} \begin{proof} The proof is essentially based on the construction of a Fortin operator $\PiF:\big[\HONE(\Omega)\big]^2\to\Vv^{h}_{k}$ such that \begin{align} b(\mathbf{v},\qs_{\hh}) &= \bs_{\hh}(\PiF\mathbf{v},\qs_{\hh})\qquad\forall\qs_{\hh}\in\PS{k-1}(\Omega_{\hh}),\label{eq:Fortin:bs=bsh}\\[0.5em] \norm{\PiF\mathbf{v}}{1,\Omega} &\leq C\norm{\mathbf{v}}{1,\Omega},\label{eq:Fortin:boundedness} \end{align} for all $\mathbf{v}\in\big[\HONE(\Omega)\big]^2$, cf., e.g.,~\cite{Boffi-Brezzi-Fortin:2013}. As the proof is based on rather standard arguments, see e.g., \cite[Proposition~3.1]{BeiraodaVeiga-Lovadina-Vacca:2017}, we only briefly mention its three main steps.
\medskip In the first step, reasoning as in~\cite[Proposition~4.2]{Mora-Rivera-Rodriguez:2015} for the non-enhanced virtual element space and~\cite[Theorem~5 (case $d=2$)]{Cangiani-Georgoulis-Pryer-Sutton:2016} for the enhanced virtual element space, we can prove the existence of a quasi-interpolation operator $\pi_{1}^{\P}:\big[\HS{s+1}(\P)\big]^2\to\Vv^{h}_{k}(\P)$, $0\leq\ss\leq k$, for all elements $\P\in\Omega_{\hh}$ such that \begin{align*} \norm{\mathbf{v}-\pi_{1}^{\P}\mathbf{v}}{0,\P} + \hh_{\P}\snorm{\mathbf{v}-\pi_{1}^{\P}\mathbf{v}}{1,\P} \leq C\hh_{\P}^{s+1}\snorm{\mathbf{v}}{s+1,\P}. \end{align*} Adding all elemental contributions, it is easy to see that \begin{align*} \norm{\mathbf{v}-\pi_{1}\mathbf{v}}{1,\Omega} \leq C \norm{\mathbf{v}}{1,\Omega}, \end{align*} where $\pi_{1}:\big[\HS{s+1}(\Omega)\big]^2\to\Vv^{h}_{k}$ is the global quasi-interpolation operator such that $\restrict{\big(\pi_{1}\mathbf{v}\big)}{\P}=\pi_{1}^{\P}(\restrict{\mathbf{v}}{\P})$ for all $\P\in\Omega_{\hh}$. \medskip In the second step, for any $\mathbf{v}\in\big[\HONE(\Omega)\big]^2$ we consider a vector-valued virtual element function $\vvh$ such that \medskip \begin{description} \item[$(i)$] for $k\geq1$, for all mesh edges $\E$, it holds that \begin{align} \int_{\E}\qs_{\hh}\vvh\cdot\norE\,ds = \int_{\E}\qs_{\hh}\mathbf{v}\cdot\norE\,ds \qquad \forall\qs_{\hh}\in\PS{k-1}(\E), \end{align} where we recall that $\norE$ is the unit normal vector to the edge $\E$, whose orientation is fixed once and for all; \medskip \item[$(ii)$] for $k\geq2$ and for all $\P\in\Omega_{\hh}$, it holds that \begin{align} \int_{\P}\vvh\cdot\qvh\,d\xv = \int_{\P}\mathbf{v}\cdot\qvh\,d\xv \qquad \forall\qvh\in\big[\PS{k-2}(\P)\big]^2. \end{align} \end{description} The vector-valued field $\vvh$ is easily determined in $\Vv^{h}_{k}$ by properly setting the degrees of freedom of the formulations $\textit{F1}$ and $\textit{F2}$.
In particular, if $\vvh\cdot\norE=\vshx\norEx+\vshy\norEy$ for $\norE=(\norEx,\norEy)^T$ and $\vvh=(\vshx,\vshy)^T$, then it holds that \begin{itemize} \item condition $(i)$ is verified by setting the degrees of freedom \DOFS{F1}{b} of formulation $\textit{F1}$ and \DOFS{F2}{b} of formulation $\textit{F2}$ accordingly; \item condition $(ii)$ is verified by setting the degrees of freedom \DOFS{F1}{c} of formulation $\textit{F1}$ and \DOFS{F2}{d} of formulation $\textit{F2}$ accordingly. \end{itemize} All the remaining degrees of freedom are set to zero. The unisolvency property ensures that such $\vvh$ exists and is unique in $\Vv^{h}_{k}$. We denote the correspondence between $\mathbf{v}$ and $\vvh$ by introducing the elemental operator $\pi_{2}^{\P}:\big[\HONE(\P)\big]^2\to\Vv^{h}_{k}(\P)$, which is such that $\pi_{2}^{\P}\mathbf{v}=\vvh$, and the global operator $\restrict{\big(\pi_{2}\mathbf{v}\big)}{\P}=\pi_{2}^{\P}(\restrict{\mathbf{v}}{\P})$ for all $\P\in\Omega_{\hh}$. \medskip In the third and last step, we define the Fortin operator as $\PiF\mathbf{v} = \pi_{1}\mathbf{v} + \pi_{2}(1-\pi_{1})\mathbf{v}$. This operator satisfies~\eqref{eq:Fortin:bs=bsh} and~\eqref{eq:Fortin:boundedness}. The discrete inf-sup condition then follows immediately from the Fortin argument by using these relations and the continuous inf-sup condition~\eqref{eq:exact:inf-sup}. \end{proof} The properties of coercivity and boundedness of $\as_{\hh}(\cdot,\cdot)$ and inf-sup stability (cf. Lemma~\ref{lemma:inf-sup:condition}) and boundedness of $\bs_{\hh}(\cdot,\cdot)$ imply the well-posedness of the two virtual element formulations considered in this work. We formally state this result in the next theorem.
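Before stating the theorem, the algebraic mechanism behind it can be illustrated on a generic finite-dimensional saddle-point system: coercivity of the velocity block and a full-rank constraint matrix (the matrix counterparts of~\eqref{eq:coercivity} and of the inf-sup condition of Lemma~\ref{lemma:inf-sup:condition}) make the linear system uniquely solvable. The following Python sketch uses random matrices of illustrative size and is not derived from the VEM spaces:

```python
import numpy as np

# Generic saddle-point system K = [[A, B^T], [B, 0]]: an SPD (coercive) block A
# and a full-row-rank B (discrete inf-sup) imply unique solvability.
rng = np.random.default_rng(0)
n, m = 12, 5                        # illustrative velocity / pressure dof counts
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)         # symmetric positive definite "viscous" block
B = rng.standard_normal((m, n))     # generically full row rank

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
f = rng.standard_normal(n)

sol = np.linalg.solve(K, np.concatenate([f, np.zeros(m)]))
u, p = sol[:n], sol[n:]

assert np.allclose(A @ u + B.T @ p, f)   # momentum equation
assert np.allclose(B @ u, np.zeros(m))   # discrete divergence-free constraint
```

If either hypothesis fails (a singular $A$ on the kernel of $B$, or a rank-deficient $B$), the matrix $K$ becomes singular, which is exactly what the coercivity and inf-sup conditions rule out.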
\medskip \begin{theorem}[Well-posedness] The virtual element formulations \textit{F1}{} and \textit{F2}{} for any given polynomial degree $k\geq1$ have one and only one solution pair $(\uvh,p_{\hh})\in\Vv^{h}_{k}\times\Qs^{h}_{k-1}$, which is such that \begin{align} \norm{\uvh}{1,\Omega} + \norm{p_{\hh}}{0,\Omega} \leq C\norm{\mathbf{f}}{0,\Omega}. \end{align} \end{theorem} \medskip The proof is omitted as this is a standard result in the numerical approximation of saddle-point problems, cf.~\cite{Boffi-Brezzi-Fortin:2013}. \subsection{Preliminary results} \label{subsec:preliminary} To derive the error estimates in the energy norm and the $\LTWO$ norm, we need three technical lemmas that are preliminarily reported here. The first two lemmas are reported without proof as they are well-known results from approximation theory, see \cite{Brenner-Scott:1994,Dupont-Scott:1980}. In particular, the first lemma provides an estimate of the projection error and is the vector version of the analogous result reported in~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} for the scalar case. \medskip \begin{lemma}[Projection error] \label{lemma:projection:error} Under Assumptions~\textbf{(M1)}-\textbf{(M2)}, for every vector-valued field $\mathbf{v}\in\big[\HS{s+1}(\P)\big]^2$ with $1\leq\ss\leq\ell$ for some given integer number $\ell$, there exists a vector polynomial $\mathbf{v}_{\pi}\in\big[\PS{\ell}(\P)\big]^2$ such that \begin{align} &\norm{\mathbf{v}-\mathbf{v}_{\pi}}{0,\P} + \hh_{\P}\snorm{\mathbf{v}-\mathbf{v}_{\pi}}{1,\P}\leq C\hh_{\P}^{s+1}\snorm{\mathbf{v}}{s+1,\P}, \end{align} where $C$ is some positive constant that is independent of $\hh_{\P}$ but may depend on the polynomial degree $\ell$ and the mesh regularity constant $\varrho$. \end{lemma} \medskip \noindent The second lemma reports an estimate of the approximation errors for the interpolants $\vvI$ and $\qsI$.
According to~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013}, we define the local interpolation $\vvI\in\Vv^{h}_{k}(\P)$ of a (smooth enough) field $\mathbf{v}$ as the virtual element field that has the same degrees of freedom as $\mathbf{v}$. Similarly, we define the local interpolation $\qsI\in\Qs^{h}_{k-1}$ of a (smooth enough) scalar function $q$ as the polynomial function that has the same degrees of freedom as $q$. Therefore, $\restrict{(\qsI)}{\P}\in\PS{k-1}(\P)$ for all elements $\P\in\Omega_{\hh}$, and \begin{align} \int_{\Omega}\qsI(\mathbf{x})\,d\xv = 0, \label{eq:interp:zero-average} \end{align} since according to~\eqref{eq:SV:scalar:space:def} it also holds that $\qsI\in L^2_0(\Omega)$. \medskip \begin{lemma}[Interpolation error] \label{lemma:interpolation:error} Under Assumptions~\textbf{(M1)}-\textbf{(M2)}, for every vector-valued field $\mathbf{v}\in\big[\HS{s+1}(\P)\big]^2$ and scalar function $q\in\HS{s}(\P)$ with $1\leq\ss\leq\ell$, for some given integer number $\ell$, there exist a vector-valued field $\vvI\in\Vvh{\ell}(\P)$ and a scalar field $\qsI\in\PS{\ell-1}(\P)$ such that \begin{align} \norm{\mathbf{v}-\vvI}{0,\P} + \hh_{\P}\snorm{\mathbf{v}-\vvI}{1,\P}\leq C\hh_{\P}^{s+1}\snorm{\mathbf{v}}{s+1,\P},\\[0.5em] \norm{q-\qsI}{0,\P} + \hh_{\P}\snorm{q-\qsI}{1,\P}\leq C\hh_{\P}^{s} \snorm{q}{s,\P}, \end{align} for some positive constant $C$ that is independent of $\hh_{\P}$ but may depend on the polynomial degree $\ell$ and the mesh regularity constant $\varrho$. \end{lemma} \medskip \noindent In the last lemma of this section we prove a relation between $\uvh$, $\uvI$, $p_{\hh}$, and $p_{\INTP}$ that will be used in the convergence analysis of the next sections.
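The rate $\hh_{\P}^{s+1}$ predicted by Lemma~\ref{lemma:interpolation:error} can be observed in a simple numerical experiment. The Python sketch below measures the $\LTWO$ error of the elementwise $\LTWO$ projection onto $\PS{k}$ on a uniform one-dimensional mesh (a simplified stand-in for the interpolation operators above, not the VEM interpolant itself) and checks that halving $h$ reduces the error by roughly $2^{k+1}$:

```python
import numpy as np

# Elementwise L2 projection onto P_k on a uniform mesh of [0, 1]: for a smooth
# function the L2 error should decay like h^{k+1} (the case s = k of the lemma).
xq, wq = np.polynomial.legendre.leggauss(10)

def l2_error(n, k, fun):
    """L2 error of the cellwise L2 projection of fun onto P_k, n uniform cells."""
    h, err2 = 1.0 / n, 0.0
    for i in range(n):
        x = i * h + 0.5 * h * (xq + 1.0)       # quadrature nodes on cell i
        w = 0.5 * h * wq
        V = np.vstack([x**j for j in range(k + 1)])
        coef = np.linalg.solve((V * w) @ V.T, (V * w) @ fun(x))
        err2 += np.sum(w * (fun(x) - coef @ V) ** 2)
    return np.sqrt(err2)

f = lambda x: np.sin(2.0 * np.pi * x)
k = 2
e_coarse, e_fine = l2_error(8, k, f), l2_error(16, k, f)
rate = np.log2(e_coarse / e_fine)   # observed convergence rate, expected near k + 1
assert abs(rate - (k + 1)) < 0.3
```

The observed rate approaches $k+1$ as the mesh is refined, in agreement with the estimates of the two lemmas above.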
\medskip \begin{lemma} Let $(\mathbf{u},p)\in\big[\HS{s+1}(\Omega)\big]^2\times L^2_0(\Omega)$, $s\geq1$, be the exact solution of the variational formulation of the Stokes problem given in~\eqref{eq:stokes:var:A}-\eqref{eq:stokes:var:B} and $(\uvI,p_{\INTP})\in\Vv^{h}_{k}\times\Qs^{h}_{k-1}$ the corresponding virtual element interpolation. Let $(\uvh,p_{\hh})\in\Vv^{h}_{k}\times\Qs^{h}_{k-1}$ be the virtual element approximation to $(\mathbf{u},p)$ solving~\eqref{eq:stokes:vem:A}-\eqref{eq:stokes:vem:B}. Then, it holds that \begin{align} b(\uvh-\uvI,p_{\hh}-p_{\INTP}) = 0. \label{eq:aux:20} \end{align} \end{lemma} \begin{proof} Let $\P$ be an element of the mesh $\Omega_{\hh}$ and $k\geq1$ an integer number. Consider the function $\mathbf{v}\in\big[\HS{s+1}(\P)\big]^2$, $s\geq1$, and its virtual element interpolant $\vvI\in\Vv^{h}_{k}(\P)$. Integrating by parts twice and using the definition of the interpolant $\vvI$, we find that: \begin{align} -\bs^{\P}(\mathbf{v},\qs_{\hh}) = \int_{\P}\qs_{\hh}\DIV\mathbf{v}\,d\xv &= -\int_{\P}\nabla\qs_{\hh}\cdot\mathbf{v}\,d\xv +\sum_{\E\in\partial\P}\int_{\E}\qs_{\hh}\norE\cdot\mathbf{v}\,ds \nonumber\\[0.5em] &= -\int_{\P}\nabla\qs_{\hh}\cdot\vvI\,d\xv +\sum_{\E\in\partial\P}\int_{\E}\qs_{\hh}\norE\cdot\vvI\,ds = \int_{\P}\qs_{\hh}\DIV\vvI\,d\xv = -\bs^{\P}(\vvI,\qs_{\hh}), \label{eq:bsP=bsP} \end{align} which holds for all $\qs_{\hh}\in\PS{k-1}(\P)$. The identity chain~\eqref{eq:bsP=bsP} implies that $\bs^{\P}(\mathbf{v},\qs_{\hh})=\bs^{\P}(\vvI,\qs_{\hh})$, and adding this relation over all elements $\P$ yields $b(\mathbf{v},\qs_{\hh})=b(\vvI,\qs_{\hh})$. By taking $\mathbf{v}=\mathbf{u}$, equation \eqref{eq:stokes:var:B} implies that $b(\uvI,\qs_{\hh})=b(\mathbf{u},\qs_{\hh})=0$. Likewise, by taking $\vvh=\uvh$, equations~\eqref{eq:bsh=bs} and~\eqref{eq:stokes:vem:B} imply that $b(\uvh,\qs_{\hh})=\bs_{\hh}(\uvh,\qs_{\hh})=0$.
Subtracting the two previous identities yields $b(\uvh-\uvI,\qs_{\hh})=0$, which holds for all $\qs_{\hh}\in\Qs^{h}_{k-1}$. The assertion of the lemma readily follows by taking $\qs_{\hh}=p_{\hh}-p_{\INTP}$. \end{proof} \subsection{Error estimate in the energy norm} \label{subsec:error:estimate:H1} \begin{theorem} \label{theorem:H1:estimate} Let $\mathbf{u}\in\big[\HS{s+1}(\Omega)\cap H^1_0(\Omega)\big]^2$ and $p\in\HS{s}(\Omega)\cap L^2_0(\Omega)$, $1\leq\ss\leq k$, be the solution of the variational formulation of the Stokes problem given in~\eqref{eq:stokes:var:A}-\eqref{eq:stokes:var:B}. Let $(\uvh,p_{\hh})\in\Vv^{h}_{k}\times\Qs^{h}_{k-1}$ be the solution of the virtual element variational formulation \eqref{eq:stokes:vem:A}-\eqref{eq:stokes:vem:B} under the mesh regularity assumptions $\textbf{(M1)}-\textbf{(M2)}$ and for any polynomial degree $k\geq1$. Then, there exists a real, strictly positive constant $C$ independent of $h$ such that the following abstract estimate holds: \begin{align} \snorm{\mathbf{u}-\uvh}{1,\Omega} + \norm{p-p_{\hh}}{0,\Omega} \leq C\Bigg( \snorm{\mathbf{u}-\uvI}{1,\Omega} + \snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,h} + \norm{p-p_{\INTP}}{0,\Omega} + \sup_{\vvh\in\Vv^{h}_{k}\setminus\{\mathbf{0}\}}\frac{\ABS{\bil{\fvh}{\vvh}-\scal{\mathbf{f}}{\vvh}}}{\snorm{\vvh}{1,\Omega}} \Bigg) \label{eq:theo:abstract} \end{align} where $\uvI\in\Vv^{h}_{k}$ and $p_{\INTP}\in\Qs^{h}_{k-1}$ are the interpolants of $\mathbf{u}$ and $p$ from Lemma~\ref{lemma:interpolation:error}, and $\mathbf{u}_{\pi}\in\big[\PS{k}(\Omega_{\hh})\big]^2$ is any polynomial approximation of $\mathbf{u}$ that is defined in accordance with Lemma~\ref{lemma:projection:error}.
Moreover, if $\mathbf{f}\in\big[\HS{t}(\Omega)\big]^2$, $t\geq0$, it holds that \begin{align} \snorm{\mathbf{u}-\uvh}{1,\Omega} + \norm{p-p_{\hh}}{0,\Omega} \leq C\Big( h^{s}\big(\norm{\mathbf{u}}{s+1,\Omega} + \norm{p}{s,\Omega}\big) + h^{\min(t,\bar{k})+1}\norm{\mathbf{f}}{t,\Omega} \Big), \label{eq:the:H1:estimates} \end{align} where $\bar{k}$ is defined as in~\eqref{eq:fvh:def}. \end{theorem} \begin{proof} We add and subtract $\uvI$ and $p_{\INTP}$ in the two terms of the left-hand side of~\eqref{eq:theo:abstract} and use the triangle inequality: \begin{align} \snorm{\mathbf{u}-\uvh}{1,\Omega} \leq \snorm{\mathbf{u}-\uvI}{1,\Omega} + \snorm{\uvI-\uvh}{1,\Omega}, \label{eq:H1:proof:00} \\[0.2em] \norm {p-p_{\hh}}{0,\Omega} \leq \norm {p-p_{\INTP}}{0,\Omega} + \norm{p_{\INTP}-p_{\hh}}{0,\Omega}. \label{eq:H1:proof:10} \end{align} The first terms on the right, $\snorm{\mathbf{u}-\uvI}{1,\Omega}$ and $\norm{p-p_{\INTP}}{0,\Omega}$, already appear in the right-hand side of~\eqref{eq:theo:abstract} and can be estimated by applying Lemma~\ref{lemma:interpolation:error}, which eventually yields~\eqref{eq:the:H1:estimates}. To estimate the second terms in the right-hand sides of~\eqref{eq:H1:proof:00} and~\eqref{eq:H1:proof:10}, we proceed as follows. Let ${\bm\delta}_{\hh}=\uvh-\uvI\in\Vv^{h}_{k}$.
Starting from the coercivity inequality~\eqref{eq:coercivity}, we find that: \begin{align} \begin{array}{lll} &\alpha_*\snorm{{\bm\delta}_{\hh}}{1,\Omega}^2 \leq \as_{\hh}({\bm\delta}_{\hh},{\bm\delta}_{\hh}) &\hspace{-3cm}\mbox{\big[split ${\bm\delta}_{\hh}=\uvh-\uvI$\big]}\nonumber\\[0.4em] &\qquad= \as_{\hh}(\uvh,{\bm\delta}_{\hh}) - \as_{\hh}(\uvI,{\bm\delta}_{\hh}) &\hspace{-3cm}\mbox{\big[use~\eqref{eq:stokes:vem:A} and add $\pm\mathbf{u}_{\pi}$\big]}\nonumber\\[0.2em] &\qquad= \bil{\fvh}{{\bm\delta}_{\hh}} - \bs_{\hh}({\bm\delta}_{\hh},p_{\hh}) - \sum_{\P\in\Omega_{\hh}}\Big(\as^{\P}_{\hh}(\uvI-\mathbf{u}_{\pi},{\bm\delta}_{\hh}) + \as^{\P}_{\hh}(\mathbf{u}_{\pi},{\bm\delta}_{\hh}) \Big) &\hspace{-3cm}\mbox{\big[use~\eqref{eq:bsh=bs} and~\eqref{eq:consistency}\big]}\nonumber\\[0.2em] &\qquad= \bil{\fvh}{{\bm\delta}_{\hh}} - b({\bm\delta}_{\hh},p_{\hh}) - \sum_{\P\in\Omega_{\hh}}\Big(\as^{\P}_{\hh}(\uvI-\mathbf{u}_{\pi},{\bm\delta}_{\hh}) + \as^{\P} (\mathbf{u}_{\pi},{\bm\delta}_{\hh}) \Big) &\hspace{-3cm}\mbox{\big[use~\eqref{eq:aux:20} and add $\pm\mathbf{u}$\big]}\nonumber\\[0.2em] &\qquad= \bil{\fvh}{{\bm\delta}_{\hh}} - b({\bm\delta}_{\hh},p_{\INTP}) - \sum_{\P\in\Omega_{\hh}}\as^{\P}_{\hh}(\uvI-\mathbf{u}_{\pi},{\bm\delta}_{\hh}) - \sum_{\P\in\Omega_{\hh}}\Big( \as^{\P} (\mathbf{u}_{\pi}-\mathbf{u},{\bm\delta}_{\hh}) + \as^{\P}(\mathbf{u},{\bm\delta}_{\hh}) \Big) &\hspace{-1.5cm}\mbox{\big[use~\eqref{eq:asP:def}\big]}\nonumber\\[0.2em] &\qquad= \bil{\fvh}{{\bm\delta}_{\hh}} - b({\bm\delta}_{\hh},p_{\INTP}) - \sum_{\P\in\Omega_{\hh}}\as^{\P}_{\hh}(\uvI-\mathbf{u}_{\pi},{\bm\delta}_{\hh}) - \sum_{\P\in\Omega_{\hh}}\as^{\P} (\mathbf{u}_{\pi}-\mathbf{u},{\bm\delta}_{\hh}) - a(\mathbf{u},{\bm\delta}_{\hh}) &\hspace{-1.5cm}\mbox{\big[use~\eqref{eq:stokes:var:A}\big]}\nonumber\\[0.2em] &\qquad= \bil{\fvh}{{\bm\delta}_{\hh}} - b({\bm\delta}_{\hh},p_{\INTP}) - \sum_{\P\in\Omega_{\hh}}\as^{\P}_{\hh}(\uvI-\mathbf{u}_{\pi},{\bm\delta}_{\hh}) - 
\sum_{\P\in\Omega_{\hh}}\as^{\P} (\mathbf{u}_{\pi}-\mathbf{u},{\bm\delta}_{\hh}) - \Big( \scal{\mathbf{f}}{{\bm\delta}_{\hh}} - b({\bm\delta}_{\hh},p) \Big) \nonumber\\[0.5em] &\qquad = \Big[ \bil{\fvh}{{\bm\delta}_{\hh}} - \scal{\mathbf{f}}{{\bm\delta}_{\hh}} \Big] + \Big[ b({\bm\delta}_{\hh},p) - b({\bm\delta}_{\hh},p_{\INTP}) \Big] + \bigg[ - \sum_{\P\in\Omega_{\hh}}\as^{\P}_{\hh}(\uvI-\mathbf{u}_{\pi},{\bm\delta}_{\hh}) - \sum_{\P\in\Omega_{\hh}}\as^{\P} (\mathbf{u}_{\pi}-\mathbf{u},{\bm\delta}_{\hh}) \bigg] \nonumber\\[0.2em] &\qquad = \big[\TERM{R}{1}\big] + \big[\TERM{R}{2}\big] + \big[\TERM{R}{3}\big]. \end{array} \end{align} We derive an upper bound of term $\TERM{R}{1}$ as follows: \begin{align*} \ABS{\TERM{R}{1}} = \ABS{\bil{\fvh}{{\bm\delta}_{\hh}} - \scal{\mathbf{f}}{{\bm\delta}_{\hh}}} \leq \left[\sup_{\vvh\in\Vv^{h}_{k}\setminus\{\mathbf{0}\}}\frac{ \ABS{\bil{\fvh}{\vvh} - \scal{\mathbf{f}}{\vvh}} }{ \snorm{\vvh}{1,\Omega} }\right]\,\snorm{{\bm\delta}_{\hh}}{1,\Omega}. \end{align*} We derive an upper bound of term $\TERM{R}{2}$ by using the Cauchy-Schwarz inequality: \begin{align*} \ABS{\TERM{R}{2}} = \ABS{b({\bm\delta}_{\hh},p-p_{\INTP})} \leq \norm{\DIV{\bm\delta}_{\hh}}{0,\Omega}\,\norm{p-p_{\INTP}}{0,\Omega} \leq C\snorm{{\bm\delta}_{\hh}}{1,\Omega}\,\norm{p-p_{\INTP}}{0,\Omega}. 
\end{align*} To derive an upper bound of term $\TERM{R}{3}$, we use the continuity of $\as_{\hh}(\cdot,\cdot)$, cf.~\eqref{eq:continuity}, and $a(\cdot,\cdot)$, we add and subtract $\mathbf{u}$ in the first summation argument, and, in the last step, we use definition~\eqref{eq:broken:seminorm} of the broken seminorm $\snorm{\cdot}{1,h}$ to find that \begin{align*} &\ABS{\TERM{R}{3}} = \ABS{ \sum_{\P\in\Omega_{\hh}}\Big( \as^{\P}_{\hh}(\uvI-\mathbf{u}_{\pi},{\bm\delta}_{\hh}) + \as^{\P} (\mathbf{u}_{\pi}-\mathbf{u},{\bm\delta}_{\hh}) \Big) } \leq \sum_{\P\in\Omega_{\hh}}\Big( \alpha^*\snorm{\uvI-\mathbf{u}_{\pi}}{1,\P} + \snorm{\mathbf{u}_{\pi}-\mathbf{u}}{1,\P} \Big)\,\snorm{{\bm\delta}_{\hh}}{1,\P} \nonumber\\[0.5em] &\quad\leq \sum_{\P\in\Omega_{\hh}}\Big( \alpha^*\snorm{\uvI-\mathbf{u}}{1,\P} + (1+\alpha^*)\snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,\P} \Big)\,\snorm{{\bm\delta}_{\hh}}{1,\P} \leq\Big(\, \alpha^*\snorm{\mathbf{u}-\mathbf{u}_{I}}{1,\Omega} + (1+\alpha^*)\snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,h} \,\Big) \,\snorm{{\bm\delta}_{\hh}}{1,\Omega}. \end{align*} Let $\sigma_{\hh}=p_{\hh}-p_{\INTP}\in\Qs^{h}_{k-1}$. In view of the discrete inf-sup condition, cf. 
Lemma~\ref{lemma:inf-sup:condition}, there exists a real, strictly positive constant $\tilde{\beta}$ and a virtual element vector-valued field $\vvh$ such that \begin{align} \begin{array}{lll} &\tilde{\beta}\norm{\sigma_{\hh}}{0,\Omega}\snorm{\vvh}{1,\Omega} \leq \bs_{\hh}(\vvh,\sigma_{\hh}) &\hspace{-1.75cm}\mbox{\big[split $\sigma_{\hh}=p_{\hh}-p_{\INTP}$\big]}\nonumber\\[0.5em] &\qquad= \bs_{\hh}(\vvh,p_{\hh}) - \bs_{\hh}(\vvh,p_{\INTP}) &\hspace{-1.75cm}\mbox{\big[use~\eqref{eq:stokes:vem:A}\big]}\nonumber\\[0.5em] &\qquad= -\as_{\hh}(\uvh,\vvh) + \bil{\fvh}{\vvh} - \bs_{\hh}(\vvh,p_{\INTP}) &\hspace{-1.75cm}\mbox{\big[add~\eqref{eq:stokes:var:A}\big]}\nonumber\\[0.5em] &\qquad= -\as_{\hh}(\uvh,\vvh) + \big[ a(\mathbf{u},\vvh) + b(\vvh,p) - \scal{\mathbf{f}}{\vvh} \big] + \bil{\fvh}{\vvh} - \bs_{\hh}(\vvh,p_{\INTP}) &\hspace{-1.75cm}\mbox{\big[use~\eqref{eq:asP:def} and~\eqref{eq:ash:def}\big]}\nonumber\\[0.5em] &\qquad= \bil{\fvh}{\vvh} - \scal{\mathbf{f}}{\vvh} + b(\vvh,p) - \bs_{\hh}(\vvh,p_{\INTP}) + \sum_{\P\in\Omega_{\hh}}\Big( \as^{\P}(\mathbf{u},\vvh) - \as^{\P}_{\hh}(\uvh,\vvh) \Big) &\hspace{-1.75cm}\mbox{\big[use~\eqref{eq:consistency} with $\qvh=\mathbf{u}_{\pi}$\big]}\nonumber\\[1.em] &\qquad= \Big[ \bil{\fvh}{\vvh} - \scal{\mathbf{f}}{\vvh} \Big] + \Big[ b(\vvh,p) - \bs_{\hh}(\vvh,p_{\INTP}) \Big] + \sum_{\P\in\Omega_{\hh}}\Big( \as^{\P}(\mathbf{u}-\mathbf{u}_{\pi},\vvh) - \as^{\P}_{\hh}(\uvh-\mathbf{u}_{\pi},\vvh) \Big) \nonumber\\%[2.em] & \qquad= \big[\TERM{R}{4}\big] + \big[\TERM{R}{5}\big] + \big[\TERM{R}{6}\big]. \end{array} \end{align} We derive an upper bound of term $\TERM{R}{4}$ using the same steps as for the bound of term $\TERM{R}{1}$ with $\vvh$ instead of ${\bm\delta}_{\hh}$: \begin{align*} \ABS{\TERM{R}{4}} = \ABS{\bil{\fvh}{\vvh} - \scal{\mathbf{f}}{\vvh}} \leq \left[\sup_{\vvh\in\Vv^{h}_{k}\setminus\{\mathbf{0}\}}\frac{ \ABS{\bil{\fvh}{\vvh} - \scal{\mathbf{f}}{\vvh}} }{ \snorm{\vvh}{1,\Omega} }\right]\,\snorm{\vvh}{1,\Omega}. 
\end{align*} We derive an upper bound of term $\TERM{R}{5}$ using the same steps as for the bound of term $\TERM{R}{2}$ with $\vvh$ instead of ${\bm\delta}_{\hh}$: \begin{align*} \ABS{\TERM{R}{5}} \leq \snorm{\vvh}{1,\Omega}\,\norm{p_{\INTP}-p}{0,\Omega}. \end{align*} We derive an upper bound of term $\TERM{R}{6}$ using the same steps as for the bound of term $\TERM{R}{3}$ with $\vvh$ instead of ${\bm\delta}_{\hh}$ and $\uvh$ instead of $\uvI$: \begin{align*} \ABS{\TERM{R}{6}} &\leq \Big( \alpha^*\snorm{\mathbf{u}-\uvh}{1,\Omega} + (1+\alpha^*)\snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,h} \Big) \,\snorm{\vvh}{1,\Omega}. \end{align*} \medskip Finally, we use the bound of terms $\TERM{R}{1}-\TERM{R}{3}$ to control $\snorm{\uvI-\uvh}{1,\Omega}$ in~\eqref{eq:H1:proof:00}. Then, we use the bound of terms $\TERM{R}{4}-\TERM{R}{6}$ and $\snorm{\mathbf{u}-\uvh}{1,\Omega}$ to control $\norm{p_{\INTP}-p_{\hh}}{0,\Omega}$ in \eqref{eq:H1:proof:10}. The first assertion of the theorem follows on using the resulting inequalities to control the left-hand side of~\eqref{eq:theo:abstract}. The estimate~\eqref{eq:the:H1:estimates} follows from a straightforward application of Lemmas~\ref{lemma:projection:error} and~\ref{lemma:interpolation:error}, and estimates~\eqref{eq:fv:bound:0}-\eqref{eq:fv:bound:1} in the right-hand side of~\eqref{eq:theo:abstract}. \end{proof} \subsection{Error estimate in the $\LTWO$ norm for the velocity field} \label{subsec:error:estimate:L2} \begin{theorem} \label{theorem:L2:estimate} Let $\mathbf{u}\in\big[\HS{s+1}(\Omega)\cap H^1_0(\Omega)\big]^2$ and $p\in\HS{s}(\Omega)\cap L^2_0(\Omega)$, $1\leq\ss\leq k$, be the exact solution of the variational formulation of the Stokes problem given in~\eqref{eq:stokes:var:A}-\eqref{eq:stokes:var:B} with $\mathbf{f}\in\big[\HS{t}(\Omega)\big]^2$, $t\geq0$.
Let $(\uvh,p_{\hh})\in\Vv^{h}_{k}\times\Qs^{h}_{k-1}$ be the solution of the virtual element variational formulation \eqref{eq:stokes:vem:A}-\eqref{eq:stokes:vem:B} under the mesh regularity assumptions $\textbf{(M1)}-\textbf{(M2)}$. Then, it holds: \begin{align} \label{eq:theo:L2:estimates} \norm{\mathbf{u}-\uvh}{0,\Omega} \leq C\bigg( h^{s+1}\Big( \norm{\mathbf{u}}{s+1,\Omega} + \norm{p}{s,\Omega} \Big) + h^{\min(t,\bar{k})+1}\norm{\mathbf{f}}{t,\Omega} \bigg) \end{align} for some real, strictly positive constant $C$ independent of $h$ and where $\bar{k}$ is defined as in~\eqref{eq:fvh:def}. \end{theorem} \begin{proof} In the derivation of the $\LTWO$ error for the virtual element approximation of the velocity vector $\mathbf{u}$, we make use of the solution $({\bm\Psi},\varphi)\in\big[\HTWO(\Omega)\cap H^1_0(\Omega)\big]^2\times\big[\HONE(\Omega)\cap L^2_0(\Omega)\big]$ of the dual problem: \begin{align} -\Delta{\bm\Psi} - \nabla\varphi = \mathbf{u} -\uvh & \qquad\textrm{in~}\Omega,\label{eq:dual:A}\\[0.2em] \DIV{\bm\Psi} = 0 & \qquad\textrm{in~}\Omega.\label{eq:dual:B} \end{align} Since ${\bm\Psi}\in\big[\HTWO(\Omega)\big]^2$ and $\varphi\in\HONE(\Omega)$, the application of Lemmas~\ref{lemma:projection:error} and~\ref{lemma:interpolation:error} yields \begin{align} \snorm{{\bm\Psi}-{\bm\Psi}_{\INTP}}{1,\Omega} + \snorm{{\bm\Psi}-{\bm\Psi}_{\pi}}{1,h} &\leq Ch\snorm{{\bm\Psi}}{2,\Omega}, \label{eq:bound:psiv}\\[0.5em] \norm{\varphi-\varphi_{\INTP}}{0,\Omega} &\leq Ch\snorm{\varphi}{1,\Omega}, \label{eq:bound:phis} \end{align} where ${\bm\Psi}_{\INTP}$ and $\varphi_{\INTP}$ are the virtual element interpolants of ${\bm\Psi}$ and $\varphi$ in $\Vv^{h}_{k}$ and $\Qs^{h}_{k-1}$, respectively, ${\bm\Psi}_{\pi}$ is the polynomial approximation of ${\bm\Psi}$ according to Lemma~\ref{lemma:projection:error}, and $\snorm{\,\cdot\,}{1,h}$ in~\eqref{eq:bound:psiv} is the ``broken'' seminorm defined in Eq.~\eqref{eq:broken:seminorm}.
Under the assumption that the domain $\Omega$ is convex, the solution pair $({\bm\Psi},\varphi)$ has the following regularity property: \begin{align} \norm{{\bm\Psi}}{2,\Omega} + \norm{\varphi}{1,\Omega} \leq C\norm{\mathbf{u}-\uvh}{0,\Omega}. \label{eq:regularity:bound:psiv} \end{align} Then, we use the definition of the $\LTWO$ norm, and note that the boundary integral on $\partial\Omega$ of $\mathbf{n}\cdot(\mathbf{u}-\uvh)$, which originates from the integration by parts, is zero since $\mathbf{u}=\uvh=0$ on $\partial\Omega$, and we find that \begin{align} \begin{array}{lll} &\norm{\mathbf{u}-\uvh}{0,\Omega}^2 = \int_{\Omega}(\mathbf{u}-\uvh)\cdot(\mathbf{u}-\uvh)\,d\xv &\hspace{-5cm}\mbox{\big[use~\eqref{eq:dual:A}\big]}\nonumber\\[0.5em] &\qquad= \int_{\Omega}\big(-\Delta{\bm\Psi}-\nabla\varphi\big)\cdot(\mathbf{u}-\uvh)\,d\xv &\hspace{-5cm}\mbox{\big[integrate by parts both terms\big]}\nonumber\\[1.em] &\qquad= \int_{\Omega}\nabla{\bm\Psi}\cdot\nabla(\mathbf{u}-\uvh)\,d\xv + \int_{\Omega}\varphi\,\DIV(\mathbf{u}-\uvh)\,d\xv &\hspace{-5cm}\mbox{\big[use~\eqref{eq:as:def}-\eqref{eq:bs:def}\big]}\nonumber\\[1.25em] &\qquad= a({\bm\Psi},\mathbf{u}-\uvh) - b(\mathbf{u}-\uvh,\varphi) &\hspace{-5cm}\mbox{\big[add $\pm{\bm\Psi}_{\INTP}$ and $\pm\varphi_{\INTP}$ \big]}\nonumber\\[1.em] &\qquad= \big[ a({\bm\Psi}-{\bm\Psi}_{\INTP},\mathbf{u}-\uvh) \big] + \big[ a({\bm\Psi}_{\INTP},\mathbf{u}-\uvh) \big] + \big[ -b(\mathbf{u}-\uvh,\varphi-\varphi_{\INTP}) \big] + \big[ -b(\mathbf{u}-\uvh,\varphi_{\INTP}) \big] &\nonumber\\[1.em] &\qquad= \big[\TERM{R}{1}\big] + \big[\TERM{R}{2}\big] + \big[\TERM{R}{3}\big] + \big[\TERM{R}{4}\big]. \label{eq:L2:error-estimate} \end{array} \end{align} We estimate separately each term $\TERM{R}{i}$, $i=1,\ldots,4$.
\medskip We derive an upper bound for term $\TERM{R}{1}$ by using the continuity of the bilinear form $a(\cdot,\cdot)$ and inequalities~\eqref{eq:bound:psiv} and~\eqref{eq:regularity:bound:psiv}: \begin{align} \ABS{\TERM{R}{1}} & = \ABS{a({\bm\Psi}-{\bm\Psi}_{\INTP},\mathbf{u}-\uvh)} \leq \snorm{{\bm\Psi}-{\bm\Psi}_{\INTP}}{1,\Omega}\,\snorm{\mathbf{u}-\uvh}{1,\Omega} \STACKON{\leq}{\eqref{eq:bound:psiv}} Ch\snorm{{\bm\Psi}}{2,\Omega}\,\snorm{\mathbf{u}-\uvh}{1,\Omega} \nonumber\\[0.5em] & \STACKON{\leq}{\eqref{eq:regularity:bound:psiv}} Ch\norm{\mathbf{u}-\uvh}{0,\Omega}\,\snorm{\mathbf{u}-\uvh}{1,\Omega}. \end{align} \medskip We split term $\TERM{R}{2}$ into three subterms by using~\eqref{eq:stokes:var:A}, adding~\eqref{eq:stokes:vem:A} and rearranging the terms: \begin{align} \TERM{R}{2} & = a({\bm\Psi}_{\INTP},\mathbf{u}-\uvh) = a(\mathbf{u},{\bm\Psi}_{\INTP}) - a(\uvh,{\bm\Psi}_{\INTP}) \nonumber\\[0.5em] &= \scal{\mathbf{f}}{{\bm\Psi}_{\INTP}} - b({\bm\Psi}_{\INTP},p) - a(\uvh,{\bm\Psi}_{\INTP}) + \Big( \as_{\hh}(\uvh,{\bm\Psi}_{\INTP}) + \bs_{\hh}({\bm\Psi}_{\INTP},p_{\hh}) - \bil{\fvh}{{\bm\Psi}_{\INTP}} \Big) \nonumber\\[0.5em] & = \big[ \scal{\mathbf{f}}{{\bm\Psi}_{\INTP}} - \bil{\fvh}{{\bm\Psi}_{\INTP}} \big] + \big[ \bs_{\hh}({\bm\Psi}_{\INTP},p_{\hh}) - b({\bm\Psi}_{\INTP},p) \big] + \big[ \as_{\hh}(\uvh,{\bm\Psi}_{\INTP}) - a(\uvh,{\bm\Psi}_{\INTP}) \big] \nonumber\\[0.5em] &= \TERM{R}{21} + \TERM{R}{22} + \TERM{R}{23}. \end{align} To bound term $\TERM{R}{21}$, we use inequalities~\eqref{eq:fv:bound:0} and~\eqref{eq:fv:bound:1}, the boundedness of the interpolation operator, and inequality~\eqref{eq:regularity:bound:psiv}, and we find that \begin{align} \ABS{\TERM{R}{21}} \leq Ch^{\min(t,\bar{k})+1}\norm{\mathbf{f}}{t,\Omega} \snorm{{\bm\Psi}_{\INTP}}{1,\Omega} \leq Ch^{\min(t,\bar{k})+1}\norm{\mathbf{f}}{t,\Omega} \norm {\mathbf{u}-\uvh}{0,\Omega}.
\end{align} To derive an upper bound for term $\TERM{R}{22}$, we first note that $\bs_{\hh}({\bm\Psi}_{\INTP},p_{\hh})=b({\bm\Psi}_{\INTP},p_{\hh})$ from~\eqref{eq:bsh=bs} and that we can subtract $b({\bm\Psi},p_{\hh}-p)=0$, which is zero since $\DIV{\bm\Psi}=0$, cf.~\eqref{eq:dual:B}. Then, we use the Cauchy-Schwarz inequality, inequalities~\eqref{eq:bound:psiv} and~\eqref{eq:regularity:bound:psiv}, and we find that \noindent \begin{align} \ABS{\TERM{R}{22}} & = \ABS{b({\bm\Psi}_{\INTP},p_{\hh}-p)} = \ABS{b({\bm\Psi}_{\INTP}-{\bm\Psi},p_{\hh}-p)} \leq \norm{\DIV({\bm\Psi}_{\INTP}-{\bm\Psi})}{0,\Omega}\,\norm{p_{\hh}-p}{0,\Omega} \nonumber\\[0.5em] &\leq C\snorm{{\bm\Psi}_{\INTP}-{\bm\Psi}}{1,\Omega} \,\norm{p_{\hh}-p}{0,\Omega} \STACKON{\leq}{\eqref{eq:bound:psiv}} Ch\snorm{{\bm\Psi}}{2,\Omega} \,\norm{p_{\hh}-p}{0,\Omega} \STACKON{\leq}{\eqref{eq:regularity:bound:psiv}} Ch\norm{\mathbf{u}-\uvh}{0,\Omega} \,\norm{p_{\hh}-p}{0,\Omega}. \end{align} \noindent To estimate $\TERM{R}{23}$, we first note that the local consistency property of the bilinear form $\as_{\hh}(\cdot,\cdot)$ implies that \begin{align} \as^{\P}_{\hh}(\uvh,{\bm\Psi}_{\INTP}) - \as^{\P}(\uvh,{\bm\Psi}_{\INTP}) &= \as^{\P}_{\hh}(\uvh-\mathbf{u}_{\pi},{\bm\Psi}_{\INTP}-{\bm\Psi}_{\pi}) - \as^{\P}(\uvh-\mathbf{u}_{\pi},{\bm\Psi}_{\INTP}-{\bm\Psi}_{\pi}), \label{eq:R23:aux} \end{align} where $\mathbf{u}_{\pi}$ and ${\bm\Psi}_{\pi}$ are suitable polynomial approximations of $\mathbf{u}$ and ${\bm\Psi}$ satisfying the assumptions of Lemma~\ref{lemma:projection:error}. 
Then, we use this identity, Lemmas~\ref{lemma:projection:error} and~\ref{lemma:interpolation:error} and inequality~\eqref{eq:regularity:bound:psiv} to obtain the bound on $\TERM{R}{23}$ as follows: \begin{align} \ABS{\TERM{R}{23}} &= \ABS{\as_{\hh}(\uvh,{\bm\Psi}_{\INTP}) - a(\uvh,{\bm\Psi}_{\INTP})} = \ABS{\sum_{\P\in\Omega_{\hh}}\Big( \as^{\P}_{\hh}(\uvh-\mathbf{u}_{\pi},{\bm\Psi}_{\INTP}-{\bm\Psi}_{\pi}) - \as^{\P}(\uvh-\mathbf{u}_{\pi},{\bm\Psi}_{\INTP}-{\bm\Psi}_{\pi}) \Big)} \nonumber\\[0.125em] & \leq (1+\alpha^*)\sum_{\P\in\Omega_{\hh}} \snorm{\uvh-\mathbf{u}_{\pi}}{1,\P}\,\snorm{{\bm\Psi}_{\INTP}-{\bm\Psi}_{\pi}}{1,\P} \leq (1+\alpha^*) \left(\sum_{\P\in\Omega_{\hh}} \snorm{\uvh-\mathbf{u}_{\pi}}{1,\P}^2 \right)^{\frac12} \left(\sum_{\P\in\Omega_{\hh}} \snorm{{\bm\Psi}_{\INTP}-{\bm\Psi}_{\pi}}{1,\P}^2 \right)^{\frac12}. \label{eq:L2:proof:R23:00} \end{align} We add and subtract $\mathbf{u}$ and ${\bm\Psi}$, and use the triangle inequality to find that \begin{align} \snorm{\uvh-\mathbf{u}_{\pi}}{1,\P}^2 &\leq \left( \snorm{\uvh-\mathbf{u}}{1,\P} + \snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,\P} \right)^2 \leq 2\snorm{\uvh-\mathbf{u}}{1,\P}^2 + 2\snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,\P}^2, \label{eq:R23:aux:1}\\[0.5em] \snorm{{\bm\Psi}_{\INTP}-{\bm\Psi}_{\pi}}{1,\P}^2 &\leq \left( \snorm{{\bm\Psi}_{\INTP}-{\bm\Psi}}{1,\P} + \snorm{{\bm\Psi}-{\bm\Psi}_{\pi}}{1,\P} \right)^2 \leq 2\snorm{{\bm\Psi}_{\INTP}-{\bm\Psi}}{1,\P}^2 + 2\snorm{{\bm\Psi}-{\bm\Psi}_{\pi}}{1,\P}^2.
\label{eq:R23:aux:2} \end{align} Using inequalities~\eqref{eq:R23:aux:1},~\eqref{eq:R23:aux:2},~\eqref{eq:bound:psiv}, and~\eqref{eq:regularity:bound:psiv}, we find that \begin{align} \ABS{\TERM{R}{23}} & \STACKON{\leq}{\eqref{eq:R23:aux:1},\eqref{eq:R23:aux:2}} C \Big(\snorm{\uvh-\mathbf{u}}{1,\Omega}+\snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,h}\Big) \Big(\snorm{{\bm\Psi}_{\INTP}-{\bm\Psi}}{1,\Omega}+\snorm{{\bm\Psi}-{\bm\Psi}_{\pi}}{1,h}\Big) \nonumber\\[0.25em] &\STACKON{\leq}{\eqref{eq:bound:psiv}} C\Big(\snorm{\uvh-\mathbf{u}}{1,\Omega}+\snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,h}\Big)\,\, h\snorm{{\bm\Psi}}{2,\Omega} \STACKON{\leq}{\eqref{eq:regularity:bound:psiv}} Ch\Big(\snorm{\uvh-\mathbf{u}}{1,\Omega} + \snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,h}\Big)\,\norm{\mathbf{u}-\uvh}{0,\Omega}. \end{align} \medskip We derive an upper bound for term $\TERM{R}{3}$ by using the Cauchy-Schwarz inequality, and the inequalities~\eqref{eq:bound:phis} and~\eqref{eq:regularity:bound:psiv}: \begin{align} \ABS{\TERM{R}{3}} &= \ABS{b(\uvh-\mathbf{u},\varphi-\varphi_{\INTP})} \leq C\norm{\DIV(\uvh-\mathbf{u})}{0,\Omega}\,\norm{\varphi-\varphi_{\INTP}}{0,\Omega} \leq C\snorm{\uvh-\mathbf{u}}{1,\Omega}\,\norm{\varphi-\varphi_{\INTP}}{0,\Omega} \nonumber\\[0.5em] & \STACKON{\leq}{\eqref{eq:bound:phis}} C\snorm{\uvh-\mathbf{u}}{1,\Omega}\,h\snorm{\varphi}{1,\Omega} \STACKON{\leq}{\eqref{eq:regularity:bound:psiv}} Ch\snorm{\uvh-\mathbf{u}}{1,\Omega}\,\norm{\uvh-\mathbf{u}}{0,\Omega}. \end{align} \medskip Finally, we note that term $\TERM{R}{4}$ is zero by using~\eqref{eq:stokes:var:B} and~\eqref{eq:stokes:vem:B} (set $q=\qs_{\hh}=\varphi_{\INTP}$): \begin{align} \TERM{R}{4} = b(\mathbf{u}-\uvh,\varphi_{\INTP}) = b(\mathbf{u},\varphi_{\INTP}) - \bs_{\hh}(\uvh,\varphi_{\INTP}) = 0. 
\end{align} The assertion of the theorem follows by using the bounds of terms $\TERM{R}{i}$, for $i=1,2,3$ and $\TERM{R}{4}=0$ to estimate the left-hand side of~\eqref{eq:theo:L2:estimates}, Theorem~\ref{theorem:H1:estimate} to bound the resulting term $\snorm{\uvh-\mathbf{u}}{1,\Omega}+\norm{p-p_{\hh}}{0,\Omega}$ and Lemma~\ref{lemma:projection:error} to bound $\snorm{\mathbf{u}-\mathbf{u}_{\pi}}{1,h}$. \end{proof} \section{Numerical experiments} \label{sec:numerical} \begin{figure} \centering \begin{tabular}{ccc} \hspace{-0.42cm}\includegraphics[scale=0.35]{fig06.pdf} & \hspace{-0.42cm}\includegraphics[scale=0.35]{fig07.pdf} & \hspace{-0.42cm}\includegraphics[scale=0.35]{fig08.pdf} \\[1em] \hspace{-0.42cm}\includegraphics[scale=0.35]{fig09.pdf} & \hspace{-0.42cm}\includegraphics[scale=0.35]{fig10.pdf} & \hspace{-0.42cm}\includegraphics[scale=0.35]{fig11.pdf} \\[0.5em] \hspace{-2mm} $\MESH{1}$ & \hspace{-2mm}$\MESH{2}$ & \hspace{-2mm}$\MESH{3}$ \end{tabular} \caption{Base meshes (top row) and first refinement meshes (bottom row) of the three mesh families used in this section: $(\MESH{1})$ random quadrilateral meshes; $(\MESH{2})$ general polygonal meshes; $(\MESH{3})$ concave element meshes.} \label{fig:Meshes} \end{figure} We assess the convergence properties of the two virtual element formulations considered in this paper by numerically solving problem \eqref{eq:stokes:var:A}-\eqref{eq:stokes:var:B} on the computational domain $\Omega=[0,1]\times[0,1]$. The Dirichlet boundary conditions and the source term are set according to the manufactured solution $\mathbf{u}=(u_x,u_y)^T$ and $p$ given by \begin{align*} u_x(x,y) &= \cos{(2\pi x)}\sin{(2\pi y)},\\ u_y(x,y) &= -\sin{(2\pi x)}\cos{(2\pi y)},\\ p (x,y) &= e^{x+y}-(e-1)^2. \end{align*} Our implementation of the virtual element method uses a basis of orthogonal polynomials in all mesh elements, which is known to mitigate the ill-conditioning of the final linear system effectively.
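As a quick sanity check on this manufactured solution (a side remark, not part of the virtual element solver), the velocity field is divergence-free and the pressure has zero mean over $\Omega$, as required by the Stokes problem. The sketch below verifies both properties numerically, using central finite differences for the divergence and a composite midpoint rule for the pressure mean:

```python
import math

def velocity(x, y):
    # manufactured velocity field of the test case
    ux = math.cos(2 * math.pi * x) * math.sin(2 * math.pi * y)
    uy = -math.sin(2 * math.pi * x) * math.cos(2 * math.pi * y)
    return ux, uy

def divergence(x, y, eps=1e-6):
    # central finite differences for div u = d(ux)/dx + d(uy)/dy
    dux_dx = (velocity(x + eps, y)[0] - velocity(x - eps, y)[0]) / (2 * eps)
    duy_dy = (velocity(x, y + eps)[1] - velocity(x, y - eps)[1]) / (2 * eps)
    return dux_dx + duy_dy

def pressure(x, y):
    # manufactured pressure, shifted so that its mean over [0,1]^2 is zero
    return math.exp(x + y) - (math.e - 1.0) ** 2

# div u vanishes pointwise (up to finite-difference error) ...
div_val = divergence(0.3, 0.7)

# ... and p has zero mean over [0,1]x[0,1] (composite midpoint rule)
n = 200
h = 1.0 / n
p_mean = sum(pressure((i + 0.5) * h, (j + 0.5) * h)
             for i in range(n) for j in range(n)) * h * h
```

The shift $(e-1)^2$ in the pressure is exactly $\int_0^1\!\!\int_0^1 e^{x+y}\,dx\,dy$, which is what places $p$ in $L^2_0(\Omega)$.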
We run our virtual element solver on three mesh families respectively composed of random quadrilateral meshes ($\MESH{1}$), general polygonal meshes ($\MESH{2}$), and concave element meshes ($\MESH{3}$). The construction of these mesh families is rather standard in the literature of the VEM and its description can easily be found, for example, in~\cite{Berrone-Borio-Manzini:2018:CMAME:journal}. For every mesh family, we consider five refinements. The base mesh and the first refined mesh of each family are shown in Figure~\ref{fig:Meshes}; mesh data are reported in Tables~\ref{tab:mesh-diameter} and~\ref{tab:mesh-elem-vrtx}. \newcommand{\TABROW}[4]{ #1 & #2 & #3 & #4 \\} \begin{table}[t!] \begin{center} \begin{tabular}{c|c|c|c} \TABROW{Level }{ $\MESH{1}$ }{ $\MESH{2}$ }{ $\MESH{3}$ } \hline \TABROW{ 1 }{ $3.72 \cdot 10^{-1}$ }{ $4.26 \cdot 10^{-1}$ }{ $3.81 \cdot 10^{-1}$ } \TABROW{ 2 }{ $1.99 \cdot 10^{-1}$ }{ $2.50 \cdot 10^{-1}$ }{ $1.91 \cdot 10^{-1}$ } \TABROW{ 3 }{ $1.01 \cdot 10^{-1}$ }{ $1.25 \cdot 10^{-1}$ }{ $9.54 \cdot 10^{-2}$ } \TABROW{ 4 }{ $5.17 \cdot 10^{-2}$ }{ $6.21 \cdot 10^{-2}$ }{ $4.77 \cdot 10^{-2}$ } \TABROW{ 5 }{ $2.61 \cdot 10^{-2}$ }{ $3.41 \cdot 10^{-2}$ }{ $2.38 \cdot 10^{-2}$ } \end{tabular} \caption{Diameter $h$ of meshes $\MESH{1}$, $\MESH{2}$, and $\MESH{3}$. } \label{tab:mesh-diameter} \end{center} \end{table} \renewcommand{\TABROW}[7]{ #1 & #2 & #3 & #4 & #5 & #6 & #7 \\} \begin{table}[t!]
\begin{center} \begin{tabular}{c|cc|cc|cc} Level & \multicolumn{2}{c|}{ $\MESH{1}$ } & \multicolumn{2}{c|}{ $\MESH{2}$ } & \multicolumn{2}{c}{ $\MESH{3}$ }\\ \hline \TABROW{ }{ $N_{el}$ }{ $N$ }{ $N_{el}$ }{ $N$ }{ $N_{el}$ }{ $N$ } \TABROW{ 1 }{ 16 }{ 25 }{ 22 }{ 46 }{ 16 }{ 73 } \TABROW{ 2 }{ 64 }{ 81 }{ 84 }{ 171 }{ 64 }{ 305 } \TABROW{ 3 }{ 256 }{ 289 }{ 312 }{ 628 }{ 256 }{ 1249 } \TABROW{ 4 }{ 1024 }{ 1089 }{ 1202 }{ 2406 }{ 1024 }{ 5057 } \TABROW{ 5 }{ 4096 }{ 4225 }{ 4772 }{ 9547 }{ 4096 }{ 20353 } \end{tabular} \caption{Number of elements $N_{el}$ and vertices $N$ of meshes $\MESH{1}$, $\MESH{2}$, and $\MESH{3}$. } \label{tab:mesh-elem-vrtx} \end{center} \end{table} On any set of refined meshes, we measure the $\HONE$ relative error for the velocity vector field by applying the formula \begin{align} \text{error}_{\HONE(\Omega)}(\mathbf{u}) = \frac{\snorm{\mathbf{u}-\Piz{k}\uvh}{1,h}}{\snorm{\mathbf{u}}{1,\Omega}} \approx \dfrac{\snorm{\mathbf{u}-\uvh}{1,\Omega}}{\snorm{\mathbf{u}}{1,\Omega}}, \label{eq:error:H1:velocity} \end{align} and the $\LTWO$ relative error by applying the formula \begin{align} \text{error}_{\LTWO(\Omega)}(\mathbf{u}) = \dfrac{\norm{ \mathbf{u}-\Piz{k}\uvh}{0,\Omega}}{\norm{\mathbf{u}}{0,\Omega}} \approx \dfrac{ \norm{\mathbf{u}-\uvh}{0,\Omega} }{\norm{\mathbf{u}}{0,\Omega}}. \label{eq:error:L2:velocity} \end{align} For the pressure scalar field we measure the $\LTWO(\Omega)$ relative error by applying the formula \begin{align} \text{error}_{\LTWO(\Omega)}(p) = \dfrac{\norm{p-p_{\hh}}{0,\Omega}}{\norm{p}{0,\Omega}}. \label{eq:error:L2:pressure} \end{align} In our implementations, the use of the enhancement spaces only changes the calculation of the right-hand side of Eq.~\eqref{eq:stokes:vem:A}. 
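For reference, the convergence rates quoted alongside the error curves in the figures below are the slopes of the error curves between consecutive refinements in a log-log plot. A minimal sketch of this standard computation (the error values used here are hypothetical, chosen only to illustrate a second-order decay):

```python
import math

def observed_rate(e_coarse, e_fine, h_coarse, h_fine):
    # slope of the error curve between two consecutive refinements
    # in a log-log plot: log(e_coarse/e_fine) / log(h_coarse/h_fine)
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# hypothetical error pair consistent with O(h^2) convergence
rate = observed_rate(4.0e-3, 1.0e-3, 2.0e-1, 1.0e-1)  # -> 2.0
```

The same formula with $N^{dof}$ in place of $h$ (and the sign adjusted) yields the rates with respect to the number of degrees of freedom.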
Specifically, in the implementations of $\textit{F1}$ and $\textit{F2}$ using the non-enhanced space definitions, we approximate the right-hand side through the projection operator $\Piz{\bar{k}}$ with $\bar{k}=\max(0,k-2)$, while in the ones using the enhanced space definitions, we approximate the right-hand side through the projection operator $\Piz{k}$. However, since the non-enhanced and the enhanced versions have the same degrees of freedom, we can always compute the projection operator $\Piz{k}$, and use it to evaluate the approximation error as in~\eqref{eq:error:H1:velocity} and~\eqref{eq:error:L2:velocity} above. In the non-enhanced case, this is equivalent to a post-processing of $\uvh$, which is known only through its degrees of freedom, to derive a polynomial approximation of $\mathbf{u}$ that is defined on the whole computational domain. \begin{figure} \includegraphics[width=\textwidth,clip=]{fig12.pdf} \caption{ Error curves versus $h$ for the velocity approximation using the energy norm~\eqref{eq:error:H1:velocity} (top panels) and the $\LTWO$-norm~\eqref{eq:error:L2:velocity} (mid panels), and for the pressure approximation using the $\LTWO$-norm~\eqref{eq:error:L2:pressure} (bottom panels). Solid (red) lines with square markers show the errors for the first formulation using space~\eqref{eq:FO:regular-space:def}; solid (black) lines with triangular markers show the errors for the second formulation using space~\eqref{eq:FT:regular-space:def}. The right-hand side is approximated by using the projection operator $\Piz{\bar{k}}$ with $\bar{k}=\max(0,k-2)$.
The mesh families used in each calculation are shown in the left corner of each panel and the expected convergence rates are reflected by the slopes of the triangles and corresponding numeric labels.} \label{fig:h_errorPi0km2} \end{figure} \begin{figure} \includegraphics[width=\textwidth,clip=]{fig13.pdf} \caption{ Error curves versus $h$ for the velocity approximation using the energy norm~\eqref{eq:error:H1:velocity} (top panels) and the $\LTWO$-norm~\eqref{eq:error:L2:velocity} (mid panels), and for the pressure approximation using the $\LTWO$-norm~\eqref{eq:error:L2:pressure} (bottom panels). Solid (red) lines with square markers show the errors for the first formulation using space~\eqref{eq:FO:regular-space:def}; solid (black) lines with triangular markers show the errors for the second formulation using space~\eqref{eq:FT:regular-space:def}. The right-hand side is approximated by using the projection operator $\Piz{k}$. The mesh families used in each calculation are shown in the left corner of each panel and the expected convergence rates are reflected by the slopes of the triangles and corresponding numeric labels.} \label{fig:h_errorPi0k} \end{figure} \ifARXIV \begin{figure} \includegraphics[width=\textwidth,clip=]{fig14.pdf} \caption{ Error curves versus $N^{dof}$ for the velocity approximation using the energy norm~\eqref{eq:error:H1:velocity} (top panels) and the $\LTWO$-norm~\eqref{eq:error:L2:velocity} (mid panels), and for the pressure approximation using the $\LTWO$-norm~\eqref{eq:error:L2:pressure} (bottom panels). Solid (red) lines with square markers show the errors for the first formulation using space~\eqref{eq:FO:regular-space:def}; solid (black) lines with triangular markers show the errors for the second formulation using space~\eqref{eq:FT:regular-space:def}. The right-hand side is approximated by using the projection operator $\Piz{\bar{k}}$ with $\bar{k}=\max(0,k-2)$.
The mesh families used in each calculation are shown in the left corner of each panel and the expected convergence rates are reflected by the slopes of the triangles and corresponding numeric labels.} \label{fig:node_errorPi0km2} \end{figure} \begin{figure} \includegraphics[width=\textwidth,clip=]{fig15.pdf} \caption{ Error curves versus $N^{dof}$ for the velocity approximation using the energy norm~\eqref{eq:error:H1:velocity} (top panels) and the $\LTWO$-norm~\eqref{eq:error:L2:velocity} (middle panels), and for the pressure approximation using the $\LTWO$-norm~\eqref{eq:error:L2:pressure} (bottom panels). Solid (red) lines with square markers show the errors for the first formulation using space~\eqref{eq:FO:regular-space:def}; solid (black) lines with triangular markers show the errors for the second formulation using space~\eqref{eq:FT:regular-space:def}. The right-hand side is approximated by using the projection operator $\Piz{k}$. The mesh families used in each calculation are shown in the left corner of each panel and the expected convergence rates are reflected by the slopes of the triangles and corresponding numeric labels.} \label{fig:node_errorPi0k} \end{figure} \fi \begin{figure} \includegraphics[width=\textwidth,clip=]{fig16.pdf} \caption{ $\LTWO$-norm of the divergence of the velocity field using the non-enhanced virtual element space~\eqref{eq:FO:regular-space:def} (top panels) and the enhanced virtual element space~\eqref{eq:FT:regular-space:def} (bottom panels). The right-hand side~\eqref{eq:fvh:def} is approximated by using the projection operator $\Piz{k}$. Solid (red and black) lines with square markers refer to $\PizP{k}(\DIV\uvh)$; dotted (blue) lines with circle markers refer to $\PizP{k+1}(\DIV\uvh)$.
The mesh families used in each calculation are shown in the left corner of each panel and the expected convergence rates are reflected by the slopes of the triangles and corresponding numeric labels.} \label{fig:divergence} \end{figure} \PGRAPH{Convergence results} \ifARXIV In Figures~\ref{fig:h_errorPi0km2}, \ref{fig:h_errorPi0k}, \ref{fig:node_errorPi0km2}, and~\ref{fig:node_errorPi0k}, \else In Figures~\ref{fig:h_errorPi0km2} and~\ref{fig:h_errorPi0k}, \fi we compare the approximation errors \eqref{eq:error:H1:velocity}, \eqref{eq:error:L2:velocity}, and \eqref{eq:error:L2:pressure} that are obtained when using the \emph{non-enhanced} and the \emph{enhanced} definitions of the virtual element space for the velocity approximation.
In particular, we recall that formulation~$\textit{F1}$ uses the space definitions~\eqref{eq:FO:regular-space:def} (non-enhanced) and~\eqref{eq:FO:enhanced-space:def} (enhanced); formulation~$\textit{F2}$ uses the space definitions~\eqref{eq:FT:regular-space:def} (non-enhanced) and~\eqref{eq:FT:enhanced-space:def} (enhanced). All error curves in Figures~\ref{fig:h_errorPi0km2} and~\ref{fig:h_errorPi0k}, for $k=1,\ldots,6$, are shown in a log-log plot versus the mesh size parameter $h$. \ifARXIV All error curves in Figures~\ref{fig:node_errorPi0km2} and~\ref{fig:node_errorPi0k}, for $k=1,\ldots,6$, are shown in a log-log plot versus the total number of degrees of freedom $N^{\footnotesize{\mbox{dof}}}$. \fi Solid (red) lines with square markers show the errors for formulation $\textit{F1}$; solid (black) lines with triangular markers show the errors for formulation $\textit{F2}$. The mesh family is shown in the bottom-left corner and the slopes of the error curves reflect the numerical order of convergence of each scheme. When the error on the velocity approximation is measured in the energy norm, both formulations $\textit{F1}$ and $\textit{F2}$ provide the optimal convergence rate, which scales as $\mathcal{O}(h^{k})$ as expected from Theorem~\ref{theorem:H1:estimate}, regardless of whether the non-enhanced or the enhanced version of the method is used. An optimal convergence rate, this time scaling like $\mathcal{O}(h^{k+1})$, is also visible for all the error curves of both formulations in the $\LTWO$-norm, as expected from Theorem~\ref{theorem:L2:estimate}, when using the enhanced definition of the virtual element spaces and the projection operator $\Piz{k}$ in the right-hand side of the VEM. Optimal convergence rates are also visible for both formulations $\textit{F1}$ and $\textit{F2}$ if $k\neq2$ when using the non-enhanced versions of the virtual element spaces and the projection operator $\Piz{\bar{k}}$ with $\bar{k}=\max(0,k-2)$.
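For reference, the numerical order of convergence read off from the slopes of these curves can be estimated from the errors on two successive refinements, assuming the model $e(h)\approx C h^{p}$. The following sketch uses illustrative error values, not the actual data of the experiments:

```python
import math

def observed_rate(h_coarse, e_coarse, h_fine, e_fine):
    """Empirical convergence order p, assuming e(h) ~ C * h**p."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Illustrative errors generated exactly by e = h^2, mimicking an
# O(h^k) scheme with k = 2 (these are NOT the measured VEM errors).
h = [1 / 4, 1 / 8, 1 / 16]
e = [step**2 for step in h]

rates = [observed_rate(h[i], e[i], h[i + 1], e[i + 1]) for i in range(len(h) - 1)]
# each entry of rates is close to 2
```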
Instead, when $k=2$ the non-enhanced formulations $\textit{F1}$ and $\textit{F2}$ lose one order of convergence. This fact is in agreement with the behavior previously noted in~\cite{BeiraodaVeiga-Brezzi-Marini:2013}, where the optimal convergence rate for $k=2$ was obtained by changing (in some sense, ``enhancing'') the construction of the right-hand side. \ifARXIV We also note that there is no significant difference when we compare the accuracy of the two formulations with respect to the number of degrees of freedom, although we expect that formulation $\textit{F2}$ can be more convenient than formulation $\textit{F1}$ as it has a smaller number of degrees of freedom. \fi \PGRAPH{Divergence-free condition} Regarding the approximation of the zero-divergence constraint, the polynomial projection $\PizP{k-1}(\DIV\uvh)$ is close to machine precision in all elements $\P\in\Omega_{\hh}$ for all the formulations and meshes considered here. Although we do not have direct control on the divergence of the virtual element approximation, a straightforward calculation using the divergence-free condition for the exact solution, i.e., $\DIV\mathbf{u}=0$, and an application of Theorem~\ref{theorem:H1:estimate} yield \begin{align*} \norm{\DIV\uvh}{0,\Omega} = \norm{\DIV(\uvh-\mathbf{u})}{0,\Omega} \leq C\snorm{\uvh-\mathbf{u}}{1,\Omega} \approx \mathcal{O}(h^{k}). \end{align*} So, we expect that the ``true'' divergence of the numerical approximation $\uvh$ scales like $\mathcal{O}(h^{k})$ for $h\to 0$. Furthermore, we note that for both formulations $\textit{F1}$ and $\textit{F2}$ the projections $\Piz{\ell}(\DIV\uvh)$, $\ell=k,k+1$, are computable from the degrees of freedom of $\uvh$ when using the enhanced version of the two spaces.
This fact allows us to post-process $\DIV\uvh$ and obtain the polynomial projections $\PizP{k}(\DIV\uvh)$ and $\PizP{k+1}(\DIV\uvh)$ in every element $\P\in\Omega_{\hh}$, which, in principle, could be better approximations than $\PizP{k-1}(\DIV\uvh)$. However, it is worth noting that $\PizP{k-1}(\DIV\uvh)$ is expected to be zero (up to rounding effects and the ill-conditioning of the discretization) and a straightforward calculation using the boundedness of $\Piz{\ell}$ and again the result of Theorem~\ref{theorem:H1:estimate} shows that \begin{align} \norm{\Piz{\ell}\DIV\uvh}{0,\Omega} = \norm{\Piz{\ell}\DIV(\uvh-\mathbf{u})}{0,\Omega} \leq C\norm{\DIV(\uvh-\mathbf{u})}{0,\Omega} \leq C\snorm{\uvh-\mathbf{u}}{1,\Omega} \approx \mathcal{O}(h^{k}), \label{eq:divg:k:rate} \end{align} where $C\approx\norm{\Piz{\ell}}{}$. So, we cannot expect a real gain by pursuing this route, although this estimate concerns the worst-case scenario and a convergence rate to zero faster than $\mathcal{O}(h^{k})$ is still possible. This effect is illustrated by the different error curves that are obtained using the three mesh families $\MESH{1}$, $\MESH{2}$, and $\MESH{3}$, shown in the log-log plots of Figure~\ref{fig:divergence}. In this figure, the three top panels refer to formulation $\textit{F1}$; the solid (red) curves show the behavior of the $\LTWO$-norm of $\Piz{k}(\DIV\uvh)$; the dotted (blue) curves show the behavior of the $\LTWO$-norm of $\Piz{k+1}(\DIV\uvh)$. Here, the deviation from zero appears to decrease like $\mathcal{O}(h^k)$, in agreement with~\eqref{eq:divg:k:rate}. The three bottom panels refer to formulation $\textit{F2}$; the solid (black) curves show the behavior of the $\LTWO$-norm of $\Piz{k}(\DIV\uvh)$; the dotted (blue) curves show the behavior of the $\LTWO$-norm of $\Piz{k+1}(\DIV\uvh)$.
Here, the deviation from zero appears to decrease at a rate that is closer to $\mathcal{O}(h^{k+1})$ for $k\neq2$, especially on mesh families $\MESH{1}$ and $\MESH{3}$, and intermediate between $h^2$ and $h^3$ for $k=2$ when using mesh family $\MESH{2}$. \section{Conclusions} \label{sec:conclusions} We studied two conforming virtual element formulations for the numerical approximation of the Stokes problem on unstructured meshes that work at any order of accuracy. The components of the vector-valued unknown are approximated by using variants of the conforming regular or enhanced virtual element spaces that were originally introduced for the discretization of the Poisson equation. The scalar unknown is approximated by using discontinuous polynomials. The stiffness bilinear form is approximated by using the orthogonal polynomial projection of the gradients onto vector polynomials of degree $k-1$ and adding a suitable stabilization term. The zero-divergence constraint is taken into account by projecting the divergence equation onto the space of polynomials of degree $k-1$. Our convergence analysis proves that the method is well-posed and convergent, and optimal convergence rates are obtained through error estimates in the energy norm and in the $\LTWO$-norm. Such optimal convergence rates are confirmed by numerical results on a set of three different representative families of meshes. These methods also work well in the lowest-order case (e.g., for the polynomial order $k=1$) on triangular and square meshes, for which such discretizations are well known to be potentially unstable. Moreover, our numerical experiments show that the divergence constraint is satisfied at the machine-precision level by the orthogonal polynomial projection of the divergence of the approximate velocity vector.
\section*{Acknowledgments} GM was partially supported by the ERC Project CHANGE, which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694515).
\section{Introduction} \label{intro} One of the most mysterious issues in quantum mechanics is the collapse, or reduction, of the wave function. It has been studied in various contexts, such as the collapse hypothesis of standard or conventional quantum mechanics, the many-worlds interpretation, the decoherence approach, and the gravitational reduction of the wave function \cite{Refsak,RefWh,RefG,RefV,RefBe,RefZ,RefMa,RefDe,RefDe2}. According to conventional or standard quantum mechanics, the evolution of a quantum system is described by the Schr\"{o}dinger equation. This evolution is deterministic, unitary, and probability-preserving. The word ``deterministic'' arises from the fact that if we have the wave function at the initial time $t$, then it is determined at later times by using the Schr\"{o}dinger equation: \begin{equation}\label{schro} i\hbar \frac{\partial \psi(\mathbf{x},t)}{\partial t}=\left(-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{x},t) \right)\psi(\mathbf{x},t) \end{equation} where $V(\mathbf{x},t)$ represents an external potential.\par The linearity of the Schr\"{o}dinger equation allows us to consider a superposition of solutions as a new solution. But this leads to a strange feature of quantum systems: such a superposed state is supposed to describe the dynamics of a particle or body, while in the classical world we never observe a particle in a superposed state. In quantum mechanics, the usual justification for the superposition principle is that the particle is simultaneously in all the states before any observation. For example, before an observation detecting the direction of the spin of an electron, the electron is in a superposition of the states up $\vert \uparrow \rangle$ and down $\vert \downarrow \rangle$ simultaneously. Another example is the famous Schr\"{o}dinger's cat, which is simultaneously alive and dead before an observation \cite{RefSch}.
Only after observation can we talk about the death or life of the cat with certainty. A familiar example of the superposition of states is a wave packet representing a free particle. In this case we assume that the particle is most probably in the spatial interval $\Delta x = \sigma$, where $\sigma$ is the width of the wave packet. It is clear that this contradicts our daily experience. In our daily experience of the classical world, we do not observe objects in a superposition of different states simultaneously. Here, we can consider at least two proposals. The first is that the macroscopic world does not obey the quantum rules; then quantum mechanics is not universal and does not include the classical world. The second is to assume that quantum mechanics is universal and tends to the classical world continuously. In the latter case, the quantum and classical worlds should be seen as an undivided whole. The spirit of Bohmian quantum physics and the concept of the quantum potential belong to this deep view \cite{RefBohm,RefHolland,RefImplicate,Refuniverse}. It should be noted from the outset that in Bohmian quantum mechanics the criterion for the classical limit is the vanishing of the quantum potential or quantum force. We shall see how these quantities are involved in an objective model.\par In general, in both Bohmian quantum mechanics and standard quantum mechanics, the process of wave function reduction accompanies a measurement operation. But in Bohmian mechanics the wave function reduction is an objective process in the sense that it does not need an observer to measure the specified quantity. It also resolves the non-unitary collapse of the wave function by using the concept of empty waves and the possibility of defining trajectories for the system and apparatus.
The concept of empty waves will become clear later.\par The objective gravitational reduction of the wave function is based on Penrose's gravitational considerations \cite{RefP1,RefP2,RefP3,RefP4}. In objective gravitational reduction, the existence of an observer is not necessary. In fact, Penrose has argued that the collapse hypothesis of standard quantum mechanics can be understood through the intervention of gravitational effects. In a measurement process, an apparatus is entangled with the quantum system. The apparatus is usually a macroscopic object. On the other hand, if we accept that quantum mechanics is universal, it should include the classical world and macroscopic objects. In fact, the apparatus (a macroscopic object) is in a superposition of different states, but its self-gravity reduces its quantum state vector to a specific state, and since the apparatus is entangled with the quantum system, the quantum system must also reduce to a specific state. This is the meaning of reduction in an objective gravitational reduction. In other words, for the same reason that we do not see Schr\"{o}dinger's cat or a macroscopic object in a superposed state, we do not see an electron in a superposition of different states simultaneously \cite{RefP1,RefP2,RefP3}. We also expect that a macroscopic body behaves classically. By using this idea and the considerations of objective gravitational reduction, a criterion is obtained for the mass of the particle or body which is necessary for the transition from the quantum domain to the classical domain. Before Penrose, Diosi derived a relation for the minimal width of a wave packet in terms of the mass needed for the particle or body to behave classically \cite{RefD1}. It is based on the Schr\"{o}dinger-Newton equation. In that work, the problem was to obtain an objective condition for the classical limit of a body.
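For orientation, the order of magnitude of such a Diosi-type critical width can be evaluated numerically. The sketch below assumes the commonly quoted order-of-magnitude form $a_c \sim \hbar^2/(G m^3)$, with numerical prefactors of order one dropped; the masses used are purely illustrative:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2

def critical_width(mass_kg):
    """Order-of-magnitude critical wave-packet width a_c ~ hbar^2/(G m^3).
    Packets much wider than a_c remain effectively quantum, while
    self-gravity drives narrower (or heavier) ones toward classical
    behavior."""
    return HBAR**2 / (G * mass_kg**3)

# The crossover is extremely steep in the mass: a factor of 10 in m
# changes a_c by a factor of 1000.
a_light = critical_width(1e-18)  # illustrative light grain
a_heavy = critical_width(1e-17)  # ten times heavier
```

Because of the $m^{-3}$ dependence, the transition between quantum and classical behavior is sharp: modest changes in mass move the critical width by many orders of magnitude.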
On the other hand, in Penrose's approach to resolving the collapse hypothesis, the classical limit of a system is obtained objectively and reproduces Diosi's results with more accuracy. Thus, the problem of wave function collapse and the classical limit of a quantum system have been resolved objectively using gravitational effects. This motivates us to investigate the classical limit of a free particle in Bohmian quantum mechanics and to see how it is possible to relate Bohmian mechanical concepts, like the quantum potential and the quantum force, to the classical limit of a system objectively.\par Bohmian quantum physics is a deterministic and causal quantum theory which gives the same experimental results as conventional quantum mechanics. The word ``deterministic'' in Bohmian quantum mechanics has a wider scope than in conventional quantum mechanics, because in Bohmian quantum mechanics a quantum system is composed of a material system with physical properties attributed to it as in classical mechanics, with the difference that the dynamical quantities are obtained using the wave function of the system. In this theory, the wave function originates from a real agent which is not yet clear to us. The wave function is represented in configuration space and guides particles on trajectories with definable positions and momenta, even before any experiment or observation. Bohm's work is not a recovery of classical mechanics, because in Bohm's causal theory an important quantity known as the quantum potential is responsible for the quantum motion of matter with its non-classical features. The quantum potential, and consequently the quantum force, act on a system in such a way that the system reaches regions of configuration space that are not accessible in classical mechanics \cite{RefHolland}.
The wave function in Bohmian quantum mechanics is not only a probabilistic tool; rather, its main task is to guide the quantum system causally \cite{RefHolland}. In other words, in Bohmian quantum mechanics the probabilistic nature of the quantum world is not intrinsic. Therefore, a relation like $\rho=\psi^{*}\psi$ is due to our ignorance of the hidden variables that give quantum mechanical features to a system \cite{RefHolland}. \par In non-relativistic Bohmian quantum mechanics, the quantum motion of a particle is described by Schr\"{o}dinger's equation and the associated Hamilton-Jacobi equation. By writing the wave function in the polar form $\psi(\mathbf{x},t)=R(\mathbf{x},t)\exp(i\frac{S(\mathbf{x},t)}{\hbar})$ and substituting it into Schr\"{o}dinger's equation, we obtain the quantum Hamilton-Jacobi and continuity equations: \begin{equation}\label{hamilton} \frac{\partial S(\mathbf{x},t)}{\partial t}+\frac{(\nabla S)^2}{2m}+V(\mathbf{x})+Q(\mathbf{x})=0 \end{equation} and \begin{equation}\label{con} \frac{\partial R^2}{\partial t}+\frac{1}{m}\nabla \cdot(R^2 \nabla S)=0 \end{equation} The position of the particle is obtained from the following equation: \begin{equation}\label{guidance} \frac{d\mathbf{x}(t)}{dt}=\left(\frac{\nabla S(\mathbf{x},t)}{m}\right)_{\mathbf{X}=\mathbf{x}(t)} \end{equation} where $\nabla S(\mathbf{x},t)$ is the momentum of the particle and $\rho=\psi^{*}\psi =R^2$. By knowing the initial position $\mathbf{x}_0$ and wave function $\psi(\mathbf{x}_0,t_0)$, the future of the system is obtained. The expression $\mathbf{X}=\mathbf{x}(t)$ means that among all possible trajectories, in an ensemble of particles, one of them is chosen. The quantity $Q$ in (\ref{hamilton}) is called the quantum potential, and it is given by: \begin{equation}\label{potential} Q=-\frac{\hbar^2 \nabla^2 R(\mathbf{x},t)}{2mR(\mathbf{x},t)} \end{equation} It may be said that the method by which we obtain the quantum Hamilton-Jacobi equation in Bohmian mechanics is somewhat ad hoc.
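As a concrete illustration of the quantum potential (\ref{potential}), consider a one-dimensional Gaussian amplitude $R(x)\propto\exp(-x^2/4\sigma^2)$, for which a direct calculation gives $Q(x)=\frac{\hbar^2}{4m\sigma^2}\left(1-\frac{x^2}{2\sigma^2}\right)$. The sketch below (in units $\hbar=m=1$) checks this closed form against a finite-difference evaluation of (\ref{potential}):

```python
import numpy as np

hbar, m, sigma = 1.0, 1.0, 1.0
x = np.linspace(-3.0, 3.0, 2001)
dx = x[1] - x[0]

# Gaussian amplitude (normalization is irrelevant: Q depends on R''/R).
R = np.exp(-x**2 / (4.0 * sigma**2))

# Q = -hbar^2 R'' / (2 m R), with R'' from repeated central differences.
R_xx = np.gradient(np.gradient(R, dx), dx)
Q_num = -hbar**2 * R_xx / (2.0 * m * R)

# Closed form for this amplitude.
Q_exact = hbar**2 / (4.0 * m * sigma**2) * (1.0 - x**2 / (2.0 * sigma**2))

# Maximum discrepancy away from the boundary points.
err = np.max(np.abs(Q_num[5:-5] - Q_exact[5:-5]))
```

At the center of the packet the quantum potential is positive, $Q(0)=\hbar^2/4m\sigma^2$, and it grows without bound as $\sigma\to 0$; this is the Bohmian counterpart of the strong spreading tendency of a narrow packet.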
But here we note that the substitution of the polar form of the wave function into the Schr\"{o}dinger equation is not the only approach towards Bohmian mechanics. The Hamilton-Jacobi equation and the quantum potential are also derivable from another approach, without using the wave function and Schr\"{o}dinger's equation. Furthermore, for studying a quantum system the use of the Schr\"{o}dinger equation is not necessary, and equations (\ref{hamilton}) and (\ref{con}) are adequate \cite{RefAtigh,RefAtigh2}.\par In the following, we briefly review the wave function reduction in conventional and Bohmian quantum mechanics. Then we give a short review of Penrose's ideas about the wave function reduction. After that, we study the classical limit of a free particle in the context of Bohmian quantum mechanics through the concepts of the quantum potential and the quantum force. Then, we argue that the existence of a gravitational self-interaction is necessary for having an objective classical limit in Bohmian mechanics. The result that we obtain for the minimal width of a stationary wave packet needed for the classical limit is the same as that of Diosi, which was obtained through the Schr\"{o}dinger-Newton equation \cite{RefD1}; here, however, we derive it through the concepts of Bohmian quantum mechanics. Finally, we obtain a nonlinear differential equation for the mass distribution in the classical limit. \section{Constructing an objective classical limit in the Bohmian context} \label{sec:1} In conventional or standard quantum mechanics, all the information that we need to describe a quantum system is contained in the wave function of that system. The wave function does not point to any reality. It is only a probabilistic instrument which is interpreted as the knowledge of the observer or experimenter about the system \cite{RefP4}.
In fact, our knowledge about a physical system is summarized in the prediction of the probability of measuring a specific eigenvalue of a specific physical quantity, like energy, momentum, spin direction, etc. It is clear that this is not an ontological view. This is known as the Copenhagen interpretation of quantum mechanics. By the evolution of a quantum system, we mean that the probabilistic wave $\psi$ of the system has a unitary evolution governed by the Schr\"{o}dinger equation. In this context, the evolution of the physical system is deterministic, because, given the wave function at an initial time, the Schr\"{o}dinger equation gives its evolution at later times. \par According to the postulates of standard quantum mechanics, the measurement operation collapses the state vector $\vert \psi \rangle$ of a system to one of its eigenvectors instantaneously. This is not a unitary evolution, because it takes place instantaneously and because the process of measurement is a random jump from a continuous evolution to one member of a mixture of states. In other words: \begin{displaymath} \psi=\left(\sum_i a_i \psi_i \right)\otimes \phi_0 \xrightarrow[\text{measurement}]{\text{random jump}} \psi_i \otimes \phi_i, \quad \text{with the detection probability}\quad \vert a_i \vert ^2 \end{displaymath} where $\psi$ is the total wave function of the system plus apparatus. The initial state of the apparatus is $\phi_0$ and its state after the measurement is $\phi_i$. This is an ideal measurement, in which the system state $\sum_i a_i \psi_i $ does not alter during the measurement process \cite{RefHolland}. In a specific measurement we cannot predict which of the eigenvalues will be detected. The above statistical jump does not conserve unitarity; see ref \cite{RefHolland}. In conventional quantum mechanics, the measurement apparatus is a classical object.
This means that there is a sharp distinction between the classical and quantum worlds in conventional quantum mechanics.\par In Bohmian quantum mechanics, the situation is somewhat better. There is no need for an observer to register an eigenvalue of the quantum system. There, the measurement apparatus and the quantum system interact, and there is an interaction Hamiltonian which participates in the total dynamics of the system and apparatus, as in standard quantum mechanics. But in Bohmian quantum mechanics, a quantum system consists of a pilot wave and the particle(s) or body with definable trajectories. Also, the apparatus obeys the quantum rules. Thus, in a position measurement the pointer of the apparatus, which has a definite trajectory, is determined by the total state of the system automatically, without the need for an observer. Thus, the need for an observer is removed in this interpretation. The collapse hypothesis is also removed, but another novel concept, namely ``empty waves'', comes in. In Bohmian quantum mechanics, a particle chooses one trajectory among the possible trajectories of the system. In this situation, the associated wave function is not empty, but the other possible states are empty. Also, the apparatus lies in a specified state with its associated trajectory, whether an observer is present or not. The other states remain empty and go away after the measurement. So the collapse hypothesis is not necessary. \par It is noteworthy that the empty waves affect the dynamics of the system through the superposition of all possible states. For example, in the two-slit experiment, when a wave $\psi$ splits into the two packets $\psi_1$ and $ \psi_2$, the particle is in one of the traversed routes, not in both of them simultaneously, because the single-valuedness of the wave function does not allow the trajectories to cross each other \cite{RefHolland}.
Hence, when the particle is in one of the routes, the other wave, for example $\psi_2$, is an empty wave, and vice versa. But the important point is that the empty waves affect the dynamics of the system through the interference of waves. The usefulness of the empty packets is that they avoid the collapse hypothesis. More clearly, when a measurement takes place, the total wave function somehow leads to the state in which the particle is present, and the other empty waves go their own way. This demonstrates that in Bohmian quantum mechanics there is no sharp distinction between the classical and quantum worlds. For details see \cite{RefHolland}. This type of interaction does not contradict the unitary evolution of the Schr\"{o}dinger equation in the presence of measurements, because the process goes on continuously, without loss of information, by the aid of the empty waves. \par Here there are two problems. The first is the existence of empty waves and the problem of detecting them. The second is that this is not an objective reduction: we cannot determine the reduction time or the mass needed for the reduction of a particle or body. We know that the state vector of an electron is reduced through a measurement process. But how is our universe, as a macroscopic body, reduced, and through which measurement or apparatus? Thus, we look for an internal agent that reduces the system, whether the system is a subatomic particle or the whole universe. It seems that by increasing the mass of a particle or system, its dynamics tends to classical deterministic dynamics. In other words, a classical object is in a localized state, not in a superposed one. But how does this occur systematically? Penrose has addressed this fact through the self-gravitational effects of the particle or body. Before him, authors like Karolyhazy and Diosi had studied the relation between the self-gravity of a body and the classical limit of the system \cite{RefD1,RefK,RefD2,RefD3}.
As we mentioned before, resolving the collapse problem through gravitational considerations also gives the classical limit of the system. In other words, due to the effects of gravity, a mechanism is set up that relates the classical limit of a quantum system and its wave function reduction objectively. Therefore, the classical limit is determined through the properties of the particle. This motivates us to use the concept of gravitational reduction in a Bohmian context to construct a clear objective criterion for the classical limit, related to the quantum potential or quantum force.\par Penrose has two viewpoints on the reduction of the wave function in refs \cite{RefP1} and \cite{RefP3}, based on the equivalence principle of general relativity and the principle of general covariance. In the first approach, the principle of equivalence leads to a phase difference in the wave function of a falling body with respect to the wave function of a body that experiences the gravitational force. This difference generates a new term in the energy of the body that measures the uncertainty in the gravitational self energy of the particle and gives an objective criterion for the transition from the quantum phase to the classical phase. In the latter case, the principle of general covariance leads us to conclude that considering the quantum states of the self gravity of a body leads to different vacua with different Killing vectors, as happens in the Unruh effect. But here, the problem is studied non-relativistically. In what follows, we look briefly at the general themes of Penrose's ideas.\par Consider a non-relativistic falling body in a constant gravitational field $\mathbf{g}$ with the coordinate system $(\mathbf{x},t)$.
The Schr\"{o}dinger equation for this case becomes: \begin{equation}\label{ns} i\hbar \frac{\partial\psi}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2 \psi -m\mathbf{x}\cdot \mathbf{g}\psi \end{equation} According to the equivalence principle, this also describes a freely falling particle. If we choose the coordinates $(\mathbf{X},T=t)$ for the freely falling motion, the Schr\"{o}dinger equation becomes: \begin{equation}\label{rs} i\hbar \frac{\partial\Psi}{\partial T}=-\frac{\hbar^2}{2m}\nabla^2 \Psi \end{equation} where $\Psi$ is the freely falling wave function. The two coordinate systems are related as $\mathbf{x}=\mathbf{X}+\frac{1}{2}\mathbf{g}t^2$. Also, we have $\nabla^2_{\mathbf{X}}=\nabla_{\mathbf{x}}^2=\nabla^2$. For the equivalence principle to hold, the two wave functions $\psi$ and $\Psi$ should be related as: \begin{equation}\label{wr} \Psi=\exp\left(\frac{im}{\hbar}\left(\frac{g^2 t^3}{6}-\mathbf{x}\cdot \mathbf{g} t\right)\right)\psi \end{equation} or \begin{equation}\label{wr2} \psi=\exp\left(\frac{im}{\hbar}\left(\frac{g^2 T^3}{3}+\mathbf{X}\cdot \mathbf{g} T\right)\right)\Psi \end{equation} In ref \cite{RefP3} the important role of the term $\frac{im g^2 t^3}{6\hbar} $ has been explained. This term refers to a new vacuum and to the issue of the Unruh effect, in which a pure quantum state in an accelerated frame or in a gravitational field is seen as a mixture of different states. On the other hand, we know that the measurement process or reduction of the wave function leads to some statistical mixture of information about a quantum system. This means that both of them may have an identical origin \cite{RefP1,RefP2,RefP3}. \par The other consideration of Penrose is based on the concept of Killing vectors. If we consider the quantum states of the self-gravity of the particle when it is in two different locations, the superposed state does not include a unique time-like Killing vector.
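The consistency of the phase relations (\ref{wr}) and (\ref{wr2}) under the coordinate change $\mathbf{x}=\mathbf{X}+\frac{1}{2}\mathbf{g}t^2$ can be verified with a short numerical sketch (one spatial dimension, with assumed illustrative units and arbitrary sample values):

```python
HBAR = M = 1.0           # illustrative units (assumed)
g, T, X = 9.8, 1.3, 0.4  # sample field strength, time and position (arbitrary)

x = X + 0.5 * g * T**2   # coordinate change between the two frames, with t = T

theta_fall = (M / HBAR) * (g**2 * T**3 / 6.0 - x * g * T)  # phase in (wr)
theta_rest = (M / HBAR) * (g**2 * T**3 / 3.0 + X * g * T)  # phase in (wr2)

# (wr2) inverts (wr), so the two phases must be opposite.
print(abs(theta_fall + theta_rest) < 1e-9)
```

Algebraically, $\theta_{\text{fall}}+\theta_{\text{rest}} = \frac{m}{\hbar}\big(\frac{g^2T^3}{2}-(x-X)gT\big)$, which vanishes identically once $x-X=\frac{1}{2}gT^2$ is inserted.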
We know that for a stationary spacetime the existence of a time-like Killing vector is necessary. In this case, such a unique Killing vector is not definable. So the superposed state decays to a single state for which the definition of a time-like Killing vector is possible. Through these arguments, conditions are obtained on the mass of the particle and on the width of its associated wave packet for the transition from the quantum domain to the classical domain. This is an objective gravitational reduction description, because it is determined by the properties of the particle or body, like its mass.\par Some related results have also been obtained by Diosi from another point of view, based on the Schr\"{o}dinger-Newton equation. The Schr\"{o}dinger-Newton equation describes the quantum dynamics of a system that is affected by its own gravitational field. For a single body this equation is: \begin{equation}\label{sn} i\hbar \frac{\partial\psi(\mathbf{x},t)}{\partial t}=\left(-\frac{\hbar^2}{2M}\nabla^2 -GM^2 \int \frac{\vert \psi(\mathbf{x}^\prime,t)\vert^2}{\vert \mathbf{x}^\prime -\mathbf{x} \vert} d^3 x^\prime\right) \psi(\mathbf{x},t) \end{equation} By using a stationary state $\psi=\psi_0 e^\frac{iEt}{\hbar}$, which satisfies the above equation, a relation is obtained between the mass of the particle and the width of its associated stationary wave packet, which provides a criterion for the transition from the quantum world to the classical world \cite{RefD1}. In the following, we first argue that the usual Bohmian condition for the transition from quantum mechanics to classical mechanics for a wave packet is not suitable for an objective reduction. Then we investigate this problem by using the concepts of the Bohmian quantum potential and the Bohmian quantum force, in such a way that leads us to an objective Bohmian reduction (classical limit).\par In summary, gravitation would localize a bulk of matter.
In the language of Bohmian quantum mechanics, it would localize the ensemble of different locations of a particle or body. In fact, the concept of gravitational localization should be generalized. A free particle is described by a spreading wave packet, according to the Schr\"{o}dinger equation. This dispersion is a quantum mechanical effect which, in conventional quantum mechanics, is explained by means of the Heisenberg uncertainty principle. In Bohmian quantum mechanics, it is described with the concept of the quantum force \cite{RefHolland}.\par In Bohmian quantum mechanics, the quantum potential has important properties. In Bohm's own view, it is responsible for the quantum motion of matter. In some other views, like that of DGZ, the quantum potential is not necessary to describe the dynamics of the system, and the guidance equation (\ref{guidance}) is sufficient for determining the dynamics of the particle \cite{RefGold}. But in both of them the vanishing of the quantum potential or of the quantum force ($\mathbf{f}=-\nabla Q$) is a main criterion for the transition from the quantum domain to the classical domain. In conventional quantum mechanics, the condition for the transition from the quantum to the classical domain is the vanishing of the Planck constant ($\hbar \longrightarrow 0$), which is not an acceptable condition. Its conflict with the Bohmian conditions for the transition from the quantum to the classical world, i.e., the vanishing of the quantum potential or quantum force, is studied in ref \cite{RefHolland}. \par Now, we want to look at the issue through the study of the dynamics of wave packets in Bohmian quantum mechanics.
A free wave packet, satisfying the Schr\"{o}dinger equation, is: \begin{equation}\label{packet} \psi(\mathbf{x},t)=(2\pi s_t^2)^{-\frac{3}{4}} e^{\left(i\mathbf{k}\cdot ( \mathbf{x}-\frac{\mathbf{u}t}{2}) -\frac{(\mathbf{x}-\mathbf{u}t)^2}{4s_t \sigma_0}\right) } \end{equation} where $\mathbf{u}=\frac{\hbar \mathbf{k}}{m}$ is the initial group velocity of the center of the packet \cite{RefHolland}, $\sigma_0$ is the root-mean-square width of the packet, and $s_t$ is defined as $s_t = \sigma_0(1+\frac{i\hbar t}{2m\sigma_0^2})$. The root-mean-square width of the packet at time $t$ is \begin{equation}\label{w} \sigma = \vert s_t \vert =\sigma_0 \left(1+ (\frac{\hbar t}{2m\sigma_0^2})^2\right)^{\frac{1}{2}} \end{equation} which represents the spreading of the wave packet. The amplitude and the phase of the packet are: \begin{equation}\label{rf} R= (2\pi \sigma^2)^{-\frac{3}{4}} e^{-\frac{(\mathbf{x}-\mathbf{u}t)^2}{4\sigma^2}} \end{equation} and \begin{equation}\label{sf} S = -(\frac{3\hbar}{2})\arctan(\frac{\hbar t}{2m\sigma_0^2})+m\mathbf{u}\cdot (\mathbf{x}-\frac{1}{2}\mathbf{u}t)+\frac{\hbar^2 t(\mathbf{x}-\mathbf{u}t)^2}{8m\sigma_0^2 \sigma^2} \end{equation} The quantum potential and the quantum force for this system are obtained through the relations: \begin{equation}\label{qf} Q= \frac{\hbar^2}{4m\sigma^2}\left(3-\frac{(\mathbf{x}-\mathbf{u}t)^2}{2\sigma^2} \right) \end{equation} and \begin{equation}\label{ff} \mathbf{f}=-\nabla Q = \frac{\hbar^2}{4m\sigma^4}(\mathbf{x}-\mathbf{u}t) \end{equation} Now we want to argue that the usual Bohmian condition for reaching the classical limit is not suitable from an objective point of view. As we mentioned before, in conventional quantum mechanics the explanation for the spreading of the wave packet is based on the Heisenberg uncertainty relation. In Bohmian quantum mechanics, the spreading of the wave packet is due to the quantum force \cite{RefHolland}.
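As a numerical sanity check of (\ref{qf}) and (\ref{ff}), one radial component can be tested directly: differentiating the quantum potential at fixed time reproduces the linear force law $\hbar^2 r/4m\sigma^4$. The sketch below uses assumed natural units $\hbar=m=1$ and arbitrary sample values for the initial width and time:

```python
import math

HBAR = M = 1.0        # illustrative natural units (assumed)
SIGMA0, T = 1.0, 2.0  # initial width and observation time (arbitrary choices)

def width(t):
    """Spreading law (w): sigma(t) = sigma0 * sqrt(1 + (hbar t / 2 m sigma0^2)^2)."""
    return SIGMA0 * math.sqrt(1.0 + (HBAR * t / (2.0 * M * SIGMA0**2))**2)

def Q(r, t):
    """Quantum potential (qf) as a function of r = |x - u t|, at fixed time t."""
    s = width(t)
    return HBAR**2 / (4.0 * M * s**2) * (3.0 - r**2 / (2.0 * s**2))

def force_numeric(r, t, h=1e-5):
    """f = -dQ/dr by central differences (only the spatial dependence enters)."""
    return -(Q(r + h, t) - Q(r - h, t)) / (2.0 * h)

def force_formula(r, t):
    """f = hbar^2 r / (4 m sigma^4)."""
    return HBAR**2 * r / (4.0 * M * width(t)**4)

r = 0.7
print(force_numeric(r, T), force_formula(r, T))
```

Since $Q$ is quadratic in $r$, the central difference is exact up to floating-point rounding.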
The condition for the classical limit is the vanishing of the quantum force or the quantum potential. When the quantum potential vanishes, the quantum Hamilton-Jacobi equation reduces to the classical Hamilton-Jacobi equation. In Bohmian quantum mechanics, there is no objective explanation for the vanishing of the quantum potential or quantum force. The formalism only states that if the quantum potential or force vanishes, the classical circumstances will be retrieved. \par Here, we investigate the classical limit of a free particle with its associated wave packet (\ref{packet}). The vanishing of the quantum potential or the quantum force of the wave packet is based on the argument that in the classical limit the wave packet does not spread, i.e., we have $\sigma \longrightarrow \sigma_0$. Thus, we should impose the condition \begin{equation}\label{c1} \frac{\hbar t}{2m\sigma_0^2} \longrightarrow 0 \end{equation} on the relation (\ref{w}). With this condition, the amplitude and the phase of the wave packet become: \begin{equation} R \longrightarrow (2\pi \sigma_0^2)^{-\frac{3}{4}} e^{-\frac{(\mathbf{x}-\mathbf{u}t)^2}{4\sigma_0^2}}\\ \end{equation} and \begin{equation} S \longrightarrow m\mathbf{u}\cdot \mathbf{x}- Et \end{equation} with the classical constant energy $E=\frac{1}{2}m \mathbf{u}^2$. The condition (\ref{c1}) states that if the initial width of the wave packet or the mass of the particle is very large, then at early times the fraction $\frac{\hbar t}{2m\sigma_0^2}$ is very small and the dispersion of the wave packet is negligible. From the experimental point of view this is a flawless condition, because we can perform a measurement before the fraction in (\ref{c1}) grows over time. But from an objective point of view, it is not a convincing condition. Suppose we let the time in the numerator of relation (\ref{w}) grow to infinity. Naturally, the above condition fails, because the numerator and denominator become comparable.
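To get a feeling for condition (\ref{c1}), one can evaluate the dimensionless ratio $\hbar t/2m\sigma_0^2$ for sample parameters (the masses, widths and time below are illustrative assumptions, not values taken from the text): a light particle confined to atomic scales spreads enormously within a second, while a macroscopic grain does not.

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J s

def spreading_parameter(mass_kg, sigma0_m, t_s):
    """Dimensionless ratio hbar t / (2 m sigma0^2) appearing in condition (c1)."""
    return HBAR * t_s / (2.0 * mass_kg * sigma0_m**2)

# Assumed sample cases: an electron confined to ~1 Angstrom, and a
# ~nanogram dust grain confined to ~1 micron, both watched for 1 second.
electron = spreading_parameter(9.109e-31, 1e-10, 1.0)
grain = spreading_parameter(1e-9, 1e-6, 1.0)
print(f"electron:   {electron:.2e}  (packet spreads enormously)")
print(f"dust grain: {grain:.2e}  (spreading is negligible)")
```

This illustrates why the condition works in practice while still depending on the elapsed time $t$, which is exactly the objection raised in the text.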
But we are interested in a condition that expresses the amount of mass sufficient to reach the classical limit, independent of time. Another defect is that it is possible to have a large mass in the relation (\ref{w}) while the width of the wave packet is very small. Since the width of the wave packet refers to the wave function or to uncontrollable hidden variables, the relation (\ref{w}) does not give an objective criterion. We should replace $\sigma$ with the properties of the object or with universal constants, which are measurable. We have seen experimentally that by increasing the mass of a particle its dynamics tends to classical dynamics. In classical dynamics the particle has a precise position, while in quantum mechanics there is an uncertainty in its position. We note that in standard quantum mechanics the position of a particle is measured through the action of its associated operator on the wave function, while in Bohmian quantum mechanics the particle has a specific position independent of the operator formalism and measurement theory. \par As we mentioned earlier, the quantum force is responsible for the spreading of the wave packet. Dynamically, we need a force that prevents the quantum force from spreading the wave function. Since we do not invoke external agents, we have to find this force in the particle's own properties. The best candidate among the forces is gravity, which is always attractive and would localize the mass distribution. Note that it is the self gravity of a system which is important, not the gravitation due to other objects, because the gravitation due to other objects does not localize the different locations of a body in the ensemble. Here, localization is a more general concept than localization in classical mechanics. It should be applicable even for a point-like particle. In fact, we should have a quantum mechanical view of the effects of gravity.
In other words, we know from Bohmian quantum mechanics that for a particle at an initial time $t_0$ with the initial wave function $\psi_0$, there is an ensemble of positions, and consequently of trajectories, distributed in space. So, we can consider the gravitational interaction between the different locations of the particle in the ensemble. In figure \ref{fig:1}, we illustrate the gravitational force between different particle locations in a wave packet. In Bohmian mechanics, a particle can be at different points of a wave packet. In fact, since the hidden variable(s) are not known to us, we think of the particle as being in all possible locations of the ensemble simultaneously. In standard quantum mechanics this behavior is justified by the uncertainty principle, but in Bohmian quantum mechanics such behavior is due to our ignorance of the nonlocal hidden variables \cite{RefHolland}.\par At the classical limit two statements are possible. First, the quantum force and the self gravity of the particle are equal. In this case, we have a stationary non-spreading wave packet with the constant width $\sigma_0$: \begin{equation} \psi \longrightarrow R_0(\mathbf{x}) \exp(\frac{iEt}{\hbar}) \end{equation} Second, the self gravity of the particle overcomes the quantum force completely and the wave packet tends to a Dirac delta function, i.e. \begin{equation} \psi \longrightarrow \delta^3(\mathbf{x}-\mathbf{x}^\prime) \end{equation} But the final state cannot be a Dirac delta function, because the width of the Dirac delta function tends to zero and this causes an infinite quantum force. It seems that there should be a balance between the quantum force and the self gravitation of the particle at the classical limit. \begin{figure}[ht] \centerline{\includegraphics[width=6cm]{force2}} \caption{The quantum force of the wave packet spreads the packet and increases the uncertainty in the particle location.
On the other hand, the gravitational attraction between the mass distributions at the different positions of the particle in the ensemble would localize the distribution and decrease the uncertainty in the position of the particle. The quantum force is depicted schematically.\label{fig:1}} \end{figure} In fact, according to figure \ref{fig:1}, the wave packet begins to disperse due to the quantum force. The relation (\ref{ff}) demonstrates that if the initial width is narrower, the force is stronger and the wave packet spreads more rapidly. At the same time, the self gravitation of the particle (gravitation in the ensemble) prevents further dispersion. If the gravity is so strong that it overcomes the spreading of the wave function, then after a specific time the quantum dispersion vanishes. At the equilibrium time we should then have: \begin{equation}\label{fe} \mathbf{f}_q = \mathbf{f}_{\mathfrak{g}} \end{equation} The relation (\ref{fe}) is locally equivalent to: \begin{equation}\label{fe2} \nabla Q(\mathbf{x}) =m \nabla \varphi(\mathbf{x}) \end{equation} Through this argument, we conclude that we should add a gravitational self interaction to the quantum Hamilton-Jacobi equation of the particle, because the dynamics of the particle is determined through all the potentials of the system, i.e., $m\mathbf{a}= -\nabla(Q+\sum_i U_i)$. For the two specified elements of figure \ref{fig:1}, the gravitational potential is \begin{equation} d \varphi (\mathbf{x}, \mathbf{x}^\prime)=-\frac{G m \rho(\mathbf{x}^\prime)d^3 \mathbf{x}^\prime}{\vert \mathbf{x}-\mathbf{x}^\prime \vert} \end{equation} The gravitational energy for all possible particle locations is: \begin{equation} U_{\mathfrak{g}}(\mathbf{x})= -m^2 \int \frac{G \rho(\mathbf{x}^\prime)d^3 x^\prime}{\vert \mathbf{x}-\mathbf{x}^\prime \vert} \end{equation} Note that in figure \ref{fig:1} the quantum force is depicted schematically.
By the force indicated in figure \ref{fig:1}, we do not mean that the quantum force is a repulsive force between the elements of the ensemble, like the forces of classical mechanics; nevertheless, this force increases the uncertainty in the location of the particle. Thus, the new quantum Hamilton-Jacobi equation becomes: \begin{equation}\label{hg} \frac{\partial S}{\partial t}+\frac{(\nabla S)^2}{2m}+ Q(\mathbf{x})- m^2G \int \frac{ \vert \psi (\mathbf{x}^\prime)\vert^2 }{\vert \mathbf{x}-\mathbf{x}^\prime \vert}d^3 \mathbf{x}^\prime=0 \end{equation} where we have used $\rho (\mathbf{x}^\prime)=\vert \psi (\mathbf{x}^\prime)\vert^2 $. It is not difficult to check that substituting the polar form of the wave function into the Schr\"{o}dinger-Newton equation leads to the above Hamilton-Jacobi equation; here, however, we have arrived at it through the physical arguments above. The average of the above Hamilton-Jacobi equation is: \begin{equation} \int \rho(\mathbf{x})\left( \frac{\partial S}{\partial t}+\frac{(\nabla S)^2}{2m}+ Q(\mathbf{x})- m^2G \int \frac{ \vert \psi (\mathbf{x}^\prime)\vert^2 d^3 x^\prime}{\vert \mathbf{x}-\mathbf{x}^\prime \vert} \right)d^3\mathbf{x}=0 \end{equation} or equivalently, \begin{equation} \int \left( -E+\frac{\mathbf{p}^2}{2m}+ Q(\mathbf{x})- m^2G \int \frac{ \vert \psi (\mathbf{x}^\prime)\vert^2 d^3 x^\prime}{\vert \mathbf{x}-\mathbf{x}^\prime \vert} \right) \vert \psi(\mathbf{x})\vert^2 d^3\mathbf{x}=0 \end{equation} In abbreviated form, we have: \begin{equation}\label{abb} \left\langle E \right\rangle = \left\langle \frac{\mathbf{p}^2}{2m} \right\rangle + \left\langle Q(\mathbf{x}) \right\rangle + \left\langle U_\mathfrak{g} \right\rangle \end{equation} For simplicity, we do the calculations for a one-dimensional wave packet with width $\sigma_0$ and zero initial group velocity.
If we calculate the quantum potential for a stationary one-dimensional wave packet $\psi(x,t)= (2\pi \sigma_0^2)^{-\frac{1}{4}}e^{-\frac{x^2}{4\sigma_0^2}} e^{\frac{iEt}{\hbar}}$, in which $R_0=(2\pi \sigma_0^2)^{-\frac{1}{4}}e^{-\frac{x^2}{4\sigma_0^2}} $, we get: \begin{equation}\label{aq} \langle Q\rangle _{0}=\int_{-\infty}^{+\infty} R_0^2 Q dx \sim \frac{\hbar^2}{2m\sigma_0^2} \end{equation} which is the average quantum potential of the particle when it is described by a stationary wave packet with width $\sigma_0$. The average gravitational self energy of a point-like particle, with probability radius $\sigma_0$, is \begin{equation} \langle U _{\mathfrak{g}}\rangle = \int_{-\infty}^{+\infty} R_0^2 U _{\mathfrak{g}} dx \sim \frac{Gm^2}{\sigma_0} \end{equation} A macroscopic body with radius $R$ has been studied in refs \cite{RefP3} and \cite{RefD2}. For simplicity, we consider a point-like particle, because our aim is only to study gravitational reduction in the Bohmian context. Since, for a stationary state, the phase of the wave packet is independent of position, we conclude that $p=\nabla S=0$. Thus the kinetic energy in the Hamilton-Jacobi equation is zero. Note that this statement is possible in Bohmian quantum mechanics but not in standard quantum mechanics. Thus, the relation (\ref{abb}) becomes: \begin{equation}\label{eg} \langle E \rangle =\frac{\hbar^2}{2m\sigma_0^2}-\frac{Gm^2}{\sigma_0} \end{equation} For the balance of the forces on average, we impose $\frac{d \langle Q\rangle _{0}}{d\sigma_0}=\frac{d \langle U _{\mathfrak{g}}\rangle}{d \sigma_0}$, which yields the condition \begin{equation}\label{min} (\sigma_0)_{\text{min}}\sim \frac{\hbar^2}{Gm^3} \end{equation} This is the famous result of ref. \cite{RefD1}, which we have derived here through Bohmian considerations.
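The scale set by (\ref{min}) is easy to evaluate numerically. The sample masses below are illustrative assumptions (an electron, and a grain of about $10^{-17}$ kg), not values taken from the text; they show how drastically the critical width falls as the mass grows:

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J s
G = 6.674e-11           # Newton constant, m^3 kg^-1 s^-2

def sigma_min(m):
    """Order-of-magnitude minimal packet width (min): sigma_0 ~ hbar^2 / (G m^3)."""
    return HBAR**2 / (G * m**3)

# Illustrative masses: an electron stays quantum (absurdly large sigma_min),
# while for m ~ 1e-17 kg the critical width drops to sub-micron scales.
for m in (9.109e-31, 1e-17):
    print(f"m = {m:.3e} kg  ->  sigma_min ~ {sigma_min(m):.3e} m")
```

For the electron the critical width exceeds $10^{30}$ m, so self-gravity never competes with the quantum force, while for the heavier grain it is below a micron, consistent with the qualitative picture of the text.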
In this relation, the width of the wave packet, a quantity tied to a subjective concept (the hidden variables), is expressed through an objective quantity, the mass of the particle. By using the relation (\ref{min}), the average stationary quantum potential is represented in terms of measurable quantities as \begin{equation} \langle Q\rangle _{(\text{stationary})_{\text{min}}}\sim \frac{G^2 m^5}{\hbar^2} \end{equation} This is a criterion for the average quantum potential to give the classical limit for a free particle with mass $m$ objectively, independent of whether the particle is an electron or a macroscopic body.\par If we take the divergence of both sides of the relation (\ref{fe2}), we get: \begin{equation}\label{con2} \nabla^2 Q = 4\pi G m \rho \end{equation} where we have used the Poisson equation $\nabla^2 \varphi =4\pi G \rho $, with $\rho$ here understood as the mass density of the ensemble. This is an interesting result, because it represents a nonlinear differential equation for the objective Bohmian classical limit. The relation (\ref{con2}) has an interesting interpretation. It states that at the classical limit, for which the quantum and gravitational forces are equal, the quantum information, which is in general non-local, reduces to local gravitational information. The relation (\ref{con2}) can be represented as \begin{equation}\label{ee} \frac{\hbar^2}{2m^2}\nabla^2(\frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}}) =-4\pi G \rho \end{equation} which represents a stationary quantum-gravitational bulk in space. It is possible to solve this equation analytically or numerically and investigate the solutions for $\rho$. \par Figure \ref{fig:4} illustrates different solutions of the equation (\ref{ee}) for different values of the mass. It demonstrates how the mass distribution becomes concentrated as the mass of the particle increases. \begin{figure}[ht] \centerline{\includegraphics[width=9cm]{ro2}} \caption{Different solutions of (\ref{ee}) for various values of $m$.
The width of the distribution of $\rho$ is proportional to $\sigma_0 \propto \frac{1}{\sqrt{m}}$.\label{fig:4}} \end{figure} These arguments demonstrate how an important quantity such as the Bohmian quantum potential, which in Bohm's own view is responsible for the quantum behavior of matter, is related to the gravitational potential and gives an objective criterion for the classical limit of a free particle in Bohmian quantum mechanics. \section{Conclusion} \label{sec:3} In this work, we demonstrated how the famous result of the gravitational wave function reduction, i.e., the relation (\ref{min}), is obtained through the considerations of Bohmian quantum mechanics. We derived an objective condition for the transition from the quantum domain to the classical domain, using the concepts of Bohmian quantum mechanics. The practical criterion is the balance between the average quantum force and the average self gravitational force of a body. Also, we obtained a relation for the average of the quantum potential, relation (\ref{aq}), in terms of measurable quantities like the mass of the particle. The study of wave function reduction in the context of Bohmian quantum mechanics leads to a quantum-gravitational matter bulk whose associated equation can be solved analytically or numerically for further understanding. In fact, we have demonstrated that quantum information reduces to gravitational information at the reduction time. This represents a deep relation between quantum mechanics and gravity which should be studied further. Also, in this approach, the non-objective classical limit in Bohmian quantum mechanics, i.e., the vanishing of the quantum potential or quantum force, is modified and can be expressed in terms of objective parameters like the mass of the particle. Another achievement is that in this approach the particle participates in its quantum state reduction through its mass and its self gravity.
In the usual Bohmian classical limit, by contrast, all that happens is the one-way effect of the wave function, and the mass and gravity of the particle have no direct role in obtaining the classical limit of the system. Therefore, contrary to the usual Bohmian criterion, and to the authors of ref \cite{RefB}, the particle has an active role in its quantum state reduction for obtaining the objective Bohmian classical limit.
\section{Introduction}\label{s:intro} \emph{Automaton (semi)groups} --- short for semigroups generated by Mealy automata or groups generated by invertible Mealy automata --- were formally introduced a half century ago (for details, see~\cite{clas32} and references therein). Two decades later, important results started to reveal their full potential. In particular, contributing to the Burnside problem, \cite{aleshin,grigorchuk1} construct Mealy automata generating particularly simple infinite torsion groups, and, answering the Milnor problem, \cite{brs,grigorchukMilnor} produce Mealy automata generating the first examples of (semi)groups with intermediate growth. Since these pioneering works, a substantial theory continues to develop using various methods, ranging from finite automata theory to geometric group theory, and various viewpoints from self-similarity to natural actions on regular rooted trees (see~\cite{bgn,bgs,bs,clas32,gns,gsu,nek} for groups and~\cite{brs,cain,gns,mal,min,sst} for semigroups), and it never ceases to show that automaton (semi)groups possess multiple interesting and sometimes unusual features. The classical decision problems have been investigated for automaton groups and semigroups: the word problem is solvable \cite{cain,gns}, while the conjugacy problem has recently been proved to be unsolvable \cite{conjugacy}. Here we address the \emph{finiteness problem}, that is, the question of the existence of an algorithm that takes as input a Mealy automaton and decides if the generated (semi)group is finite (see~\cite[Problem~7.2.1(b)]{gns}). Since the word problem is solvable, a semidecision procedure for the finiteness problem simply consists in enumerating all the elements. Three results related to the finiteness problem have to be mentioned here.
First, the finiteness problem is solved for the special class of semigroups generated by (dual) Cayley machines~(see~\cite{cain,mal,min,sst}) using semigroup theory, and especially the machinery of Green's relations. Second, the class of those automata which always generate finite (semi)groups independently of their output function has been completely characterized (see~\cite{anto,antoberk,russ}). Third, the class of so-called ``bounded'' (invertible) automata, where all the states have growth degree at most~0, has been thoroughly studied, and the solution to the order problem (see~\cite{sidkiconjugacy,sidki}) yields an infiniteness criterion. Observe that these three classes correspond to very special structures for the Mealy automata concerned. Two $\mathbf{GAP}$ packages are dedicated to automaton (semi)groups: $\mathbf{FR}$ by~Bartholdi and~$\mathbf{automgrp}$ by Muntyan and~Savchuk \cite{FR,GAP4,sav}. Both include specific (in)finiteness tests. Besides the three results mentioned above, all that was known up to now about the finiteness question for automaton groups is essentially summarized in the documentation for~$\mathbf{FR}$: \begin{quote}\small The order of [an automaton] group is computed as follows: if all [the states have growth degree at most~0], then enumeration will succeed in computing the order. If the action of the group is primitive, and it comes from a bireversible automaton, then the Thompson-Wielandt theorem is tested against [$\ldots$] see~\cite[Prop.~2.1.1]{BM}. Then, $\mathbf{FR}$ attempts to find whether the group is level-transitive (in which case it would be infinite). Finally, it attempts to enumerate the group's elements, testing at the same time whether these elements have infinite order. \nobreak\noindent\medskip Needless to say, none except the first few steps are guaranteed to succeed. 
\end{quote} \smallskip\noindent In this paper, we give several new criteria for testing (in)finiteness that could easily be added to the $\mathbf{FR}$ and $\mathbf{automgrp}$ packages. The original ingredients in these packages mainly come from geometric group theory. Our new notions and tools --- like \emph{helix graphs} and \emph{minimization-dualization} --- are automata-theoretic in nature and most often work in the general setting of semigroups. The common idea is to put a special emphasis on the \emph{dual automaton}, obtained by exchanging the roles of stateset and alphabet. The stepping stone is Proposition~\ref{pr:duale-finitude}, stating that a Mealy automaton generates a finite semigroup if and only if its dual does. The general strategies vary by analyzing a Mealy automaton and its dual either alternately --- see the minimization-dualization reduction in Section~\ref{s:reduction} --- or both together as a whole --- see the helix graph construction in~Section~\ref{s:helix}. In Section~\ref{s:reduction}, we give an effective sufficient but not necessary condition for finiteness using minimization-dualization. Focusing on those invertible automata with invertible dual, and using helix graphs, Section~\ref{s:helix} provides an effective necessary but not sufficient condition for finiteness, and also a non-effective necessary and sufficient condition. The decidability of the finiteness problem remains open. \smallskip Gathering the new criteria with the previously known ones makes it possible to decide the (in)finiteness of the generated (semi)group for substantially more Mealy automata. In Table~\ref{tbl-intro}, we report on the results of the experimentation carried out on: $(i)$ all 3-letter 2-state Mealy automata; $(ii)$ all 3-letter 3-state invertible or reversible Mealy automata. The first three columns give the numbers of automata treated successfully by, respectively, the previously known criteria, our new criteria, and the union of both. 
The last column is the total number of Mealy automata. The automata are counted up to isomorphism. \begin{table}[ht] \centering \caption{Some results of the experimentations to decide (in)finiteness with old and new criteria.\label{tbl-intro}} {\begin{tabular}{lE>{\centering }m{24mm}|>{\centering }m{18mm}|>{\centering }m{20mm}E>{\centering }m{12mm}E} \cline{2-5} & {\bf\scriptsize previous criteria} &{\bf\scriptsize new criteria}&{\bf\scriptsize previous+new}&{\bf\scriptsize~~~total~~~}\tabularnewline \hline \multicolumn{1}{ElE}{\,general (3,2)} & 398 & \numprint{1130} & \numprint{1214} & \numprint{4003} \tabularnewline\hline \multicolumn{1}{ElE}{\,inv. or rev. (3,3)\,} & \numprint{78721} & \numprint{100924} & \numprint{172737} & \numprint{236558} \tabularnewline \hline \end{tabular}} \end{table} \noindent More detailed experimental results are given in Section~\ref{sec-experimentations} and a gallery of meaningful examples is given in Table~\ref{tbl-examples}. \section{Preliminaries} \label{s:prelim} Let $S$ be a finite and non-empty set. We denote by $\mathfrak{T}_S$ the set of functions from~$S$ to~$S$, and we denote by $\perm_S$ the set of bijections from~$S$ to~$S$. \subsection{Mealy automaton} If one forgets about initial and final states, a {\em (finite, deterministic and complete) automaton} $\aut{A}$ is a triple \( \bigl( A,\Sigma,\delta = (\delta_i: A\rightarrow A )_{i\in \Sigma} \bigr) \), where the \emph{set of states}~$A$ and the \emph{alphabet}~$\Sigma$ are non-empty finite sets, and where the $\delta_i$'s are functions. In a condensed way, the automaton is identified with $\delta$, that is an element of~$\mathfrak{T}_A^{\Sigma}$. \smallskip A \emph{Mealy automaton} is a quadruple \[ \bigl( A, \Sigma, \delta = (\delta_i: A\rightarrow A )_{i\in \Sigma}, \rho = (\rho_x: \Sigma\rightarrow \Sigma )_{x\in A} \bigr) \:, \] such that both $(A,\Sigma,\delta)$ and $(\Sigma,A,\rho)$ are automata. 
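Concretely, such a quadruple is just a pair of finite function tables. The following sketch (in Python; the encoding and the helper name \texttt{transitions} are ours, purely illustrative, and not taken from the $\mathbf{FR}$ or $\mathbf{automgrp}$ packages) stores the left automaton of Fig.~\ref{fi-Maut} and reads off its transition set.

```python
# A minimal encoding (our own, purely illustrative) of the quadruple
# (A, Sigma, delta, rho) as plain dictionaries.  The data encodes the
# left automaton of Fig. 1: states {a, b}, alphabet {0, 1}.
A, Sigma = {"a", "b"}, {0, 1}

# delta[i][x] = delta_i(x): one function A -> A per letter i.
delta = {0: {"a": "b", "b": "b"},
         1: {"a": "a", "b": "a"}}

# rho[x][i] = rho_x(i): one function Sigma -> Sigma per state x.
rho = {"a": {0: 0, 1: 0},
       "b": {0: 1, 1: 1}}

def transitions(A, Sigma, delta, rho):
    """Transition set: tuples (x, i, j, y) encoding x --i|j--> y."""
    return {(x, i, rho[x][i], delta[i][x]) for x in A for i in Sigma}

# The identification with an element of T_A^Sigma x T_Sigma^A is just
# the pair (delta, rho).
```

For instance, \texttt{transitions(A, Sigma, delta, rho)} contains the tuple \texttt{("a", 0, 0, "b")}, encoding the transition $a \stackrel{0|0}{\longrightarrow} b$.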
Another standard terminology for Mealy automaton would be: letter-to-letter transducer with the same input and output alphabets. A Mealy automaton is identified with an element of $\mathfrak{T}_A^{\Sigma}\times \mathfrak{T}_{\Sigma}^A$. Graphically, a Mealy automaton is represented by a labelled directed graph with: \[ \mathrm{nodes}: \ A, \qquad \text{arcs (transitions)}: \ x \stackrel{i | j}{\longrightarrow} y \ \mathrm{if} \ \delta_i(x)=y\text{ and }\rho_x(i)=j \:. \] The notation $x\stackrel{\mot{u}| \mot{v}}{\longrightarrow} y$ with $\mot{u}=u_1\cdots u_n$, $\mot{v}=v_1\cdots v_n$ is a shorthand for the existence of a path $x \stackrel{u_1|v_1}{\longrightarrow} x_1 \stackrel{u_2|v_2}{\longrightarrow} x_2 \cdots x_{n-1} \stackrel{u_{n}|v_n}{\longrightarrow} y$ in $\aut{A}$. \smallskip Two examples of Mealy automata are given in Fig.~\ref{fi-Maut}. \begin{figure}[ht] \begin{center} \TinyPicture \VCDraw{% \begin{VCPicture}{(-6,-1)(10,2)} \LargeState \State[a]{(-5,0)}{A} \State[b]{(0,0)}{B} \State[a]{(4,0)}{C} \State[b]{(9,0)}{D} \LArcL[0.5]{A}{B}{\IOL{0}{0}} \LArcL[0.5]{B}{A}{\IOL{1}{1}} \LoopN[0.5]{A}{\IOL{1}{0}} \LoopN[0.5]{B}{\IOL{0}{1}} \LArcL[0.5]{C}{D}{\IOL{0}{1}} \LArcL[0.5]{D}{C}{\IOL{0}{0}} \LoopN[0.5]{C}{\IOL{1}{0}} \LoopN[0.5]{D}{\IOL{1}{1}} \end{VCPicture} } \end{center} \caption{Two Mealy automata.} \label{fi-Maut} \end{figure} In a Mealy automaton $(A,\Sigma, \delta, \rho)$, the sets $A$ and $\Sigma$ play dual roles. So we may consider the \emph{dual (Mealy) automaton} defined by \( \dz(\aut{A}) = (\Sigma,A, \rho, \delta) \). Alternatively, we can define the dual Mealy automaton via the set of its transitions: \begin{equation} x \stackrel{i\mid j}{\longrightarrow} y \ \in \aut{A} \quad \iff \quad i \stackrel{x\mid y}{\longrightarrow} j \ \in \dz(\aut{A}) \:. \label{eq-dual} \end{equation} In what follows, it is often pertinent to consider a Mealy automaton and its dual together, that is to work with the pair $\{\aut{A}, \dz(\aut{A})\}$. 
A pair of dual Mealy automata is represented in Fig.~\ref{fi-Mautdual}. \begin{figure}[ht] \begin{center} \TinyPicture \VCDraw{% \begin{VCPicture}{(-6,-.5)(10,2)} \LargeState \State[a]{(-5,0)}{A} \State[b]{(0,0)}{B} \State[0]{(4,0)}{C} \State[1]{(9,0)}{D} \LArcL[0.5]{A}{B}{\IOL{0}{1}} \LArcL[0.5]{B}{A}{\IOL{0}{0}} \LoopN[0.5]{A}{\IOL{1}{0}} \LoopN[0.5]{B}{\IOL{1}{1}} \LArcL[0.5]{C}{D}{\IOL{a}{b}} \LArcL[0.5]{D}{C}{\IOL{a}{a}} \LoopN[0.5]{C}{\IOL{b}{a}} \LoopN[0.5]{D}{\IOL{b}{b}} \end{VCPicture} } \end{center} \caption{A pair of dual Mealy automata.} \label{fi-Mautdual} \end{figure} Consider a Mealy automaton $\aut{A}\in \mathfrak{T}_A^{\Sigma}\times \perm_{\Sigma}^A$. Let $A^{-1}=\{x^{-1}, x \in A\}$ be a disjoint copy of $A$. The \emph{inverse (Mealy) automaton} $\inverse{\aut{A}}\in \mathfrak{T}_{A^{-1}}^{\Sigma}\times \perm_{\Sigma}^{A^{-1}}$ is defined by the set of its transitions: \begin{equation} x \stackrel{i\mid j}{\longrightarrow} y \ \in \aut{A} \quad \iff \quad x^{-1} \stackrel{j\mid i}{\longrightarrow} y^{-1} \ \in \aut{A}^{-1} \:. \label{eq-inv} \end{equation} \smallskip Let us call respectively \emph{dualization} (denoted $\dz$) and {\em inversion} (denoted $\mathfrak{i}$) the two transformations on transitions defined in \eref{eq-dual} and \eref{eq-inv}. Starting with a transition and alternating the dualization and inversion transformations, we obtain eight transitions. (In the process, we also define $\Sigma^{-1}=\{x^{-1}, x \in \Sigma\}$, a disjoint copy of $\Sigma$; and we set $(A^{-1})^{-1}=A$ and $(\Sigma^{-1})^{-1}=\Sigma$.) \smallskip Now consider a Mealy automaton $\aut{A}$ identified with its set of transitions, and apply the same transformations to $\aut{A}$. 
We obtain eight sets of transitions that we denote by: \[ \aut{A}, \ \dz(\aut{A}), \ \mathfrak{i}(\aut{A}), \ \dz\mathfrak{i}(\aut{A}), \ \mathfrak{i}\dz(\aut{A}), \ \dz\mathfrak{i}\dz(\aut{A}), \ \mathfrak{i}\dz\mathfrak{i}(\aut{A}), \ \dz\mathfrak{i}\dz\mathfrak{i}(\aut{A}) = \mathfrak{i}\dz\mathfrak{i}\dz(\aut{A}) \:. \] If $\aut{A}\in \mathfrak{T}_A^{\Sigma}\times \perm_{\Sigma}^A$, then $\mathfrak{i}(\aut{A})=\inverse{\aut{A}}$. Apart from $\dz(\aut{A})$ which is always a Mealy automaton, the other six sets may or may not define a Mealy automaton depending on $\aut{A}$. \smallskip By tracking the content of the sets of transitions, we observe the following: \begin{align*} \bigl[ \dz\mathfrak{i}\dz\mathfrak{i}(\aut{A}) \ \in \bigr. & \left. \mathfrak{T}_A^{\Sigma}\times \mathfrak{T}_{\Sigma}^A \right] \implies \\ & \left[ \aut{A}, \dz(\aut{A}), \mathfrak{i}(\aut{A}), \dz\mathfrak{i}(\aut{A}), \mathfrak{i}\dz(\aut{A}), \dz\mathfrak{i}\dz(\aut{A}), \mathfrak{i}\dz\mathfrak{i}(\aut{A}), \dz\mathfrak{i}\dz\mathfrak{i}(\aut{A}) \ \in \perm_A^{\Sigma}\times \perm_{\Sigma}^A \right] \:. \end{align*} Let us introduce some additional terminology. \begin{definition} A Mealy automaton is {\em invertible} if it belongs to $\mathfrak{T}_A^{\Sigma}\times \perm_{\Sigma}^A$; and \emph{reversible} if it belongs to $\perm_A^{\Sigma}\times \mathfrak{T}_{\Sigma}^A$. A Mealy automaton is an {\em IR-automaton} if it is both invertible and reversible, that is, if it belongs to $\perm_A^{\Sigma}\times \perm_{\Sigma}^A$. If $\dz\mathfrak{i}\dz\mathfrak{i}(\aut{A})$ is a Mealy automaton, we say that $\aut{A}$ (\hbox{\textit{resp.}} $\dz(\aut{A}), \dots , \dz\mathfrak{i}\dz\mathfrak{i}(\aut{A})$) is {\em bireversible}. \end{definition} The terms ``invertible, reversible, and bireversible'' are standard since~\cite{mns}. The acronym IR-automaton is introduced for convenience. IR-automata are of particular interest and the core of the paper is devoted to them. 
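To make these transformations concrete, here is a small Python sketch (the transition-set representation and all helper names are ours, not part of any package mentioned above) acting on transition sets: dualization exchanges the roles of states and letters as in \eref{eq-dual}, inversion exchanges inputs and outputs as in \eref{eq-inv}, and the two tests check membership in $\mathfrak{T}_A^{\Sigma}\times \perm_{\Sigma}^A$ and $\perm_A^{\Sigma}\times \mathfrak{T}_{\Sigma}^A$. The data encodes the right automaton of Fig.~\ref{fi-Maut}.

```python
# Transitions stored as tuples (x, i, j, y) for x --i|j--> y; the
# representation and helper names are ours.  Data: the right automaton
# of Fig. 1.
T = {("a", 0, 1, "b"), ("b", 0, 0, "a"),   # a --0|1--> b, b --0|0--> a
     ("a", 1, 0, "a"), ("b", 1, 1, "b")}   # the two loops

def dual(T):
    """Dualization: x --i|j--> y becomes i --x|y--> j."""
    return {(i, x, y, j) for (x, i, j, y) in T}

def inverse(T):
    """Inversion: x --i|j--> y becomes x^-1 --j|i--> y^-1."""
    return {((x, "-1"), j, i, (y, "-1")) for (x, i, j, y) in T}

def invertible(T):
    """Every production function rho_x is a permutation of the alphabet."""
    states = {x for (x, _, _, _) in T}
    return all(len({j for (x, _, j, _) in T if x == s}) ==
               len({i for (x, i, _, _) in T if x == s})
               for s in states)

def reversible(T):
    """Every transition function delta_i is a permutation of the stateset."""
    return invertible(dual(T))
```

On this automaton, both \texttt{invertible(T)} and \texttt{reversible(T)} return \texttt{True}: it is an IR-automaton.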
In Fig.~\ref{fi-Maut}, the right Mealy automaton is an IR-automaton, but not the left one. \paragraph{Mealy automaton of order $(n,k)$.} Consider a Mealy automaton $\aut{A} = ( A,\Sigma,\delta,\rho)$ in $\mathfrak{T}_A^{\Sigma}\times \mathfrak{T}_{\Sigma}^A$ and $n,k>0$. The quadruple \[ \aut{A}_{n,k} = \bigl( \ A^n,\Sigma^k, (\delta_{\mot{x}} : A^n \rightarrow A^n)_{\mot{x}\in \Sigma^k}, (\rho_{\mot{u}} : \Sigma^k \rightarrow \Sigma^k )_{\mot{u}\in A^n} \ \bigr) \] is a Mealy automaton in $\mathfrak{T}_{A^n}^{\Sigma^k}\times\mathfrak{T}_{\Sigma^k}^{A^n}$ that we call the \emph{Mealy automaton of order $(n,k)$ associated with $\aut{A}$}. Observe that $\aut{A}_{1,1}=\aut{A}$. In Fig.~\ref{fi-A21}, we show the Mealy automaton of order $(2,1)$ associated with the Mealy automaton of Fig.~\ref{fi-Mautdual}. \begin{figure}[ht] \begin{center} \TinyPicture \VCDraw{% \begin{VCPicture}{(3,-.5)(11,5.5)} \LargeState \State[aa]{(4,5)}{C} \State[ab]{(4,0)}{D} \State[ba]{(9,5)}{E} \State[bb]{(9,0)}{F} \EdgeL[0.5]{C}{E}{\IOL{0}{0}} \EdgeR[0.5]{D}{F}{\IOL{0}{1}} \LoopE[0.5]{E}{\IOL{1}{0}} \LoopE[0.5]{F}{\IOL{1}{1}} \ArcL[0.5]{C}{D}{\IOL{1}{1}} \ArcL[0.5]{D}{C}{\IOL{1}{0}} \EdgeL[0.2]{E}{D}{\IOL{0}{1}} \EdgeBorder \EdgeR[0.2]{F}{C}{\IOL{0}{0}} \EdgeBorderOff \end{VCPicture} } \end{center} \caption{Mealy automaton of order $(2,1)$.} \label{fi-A21} \end{figure} \subsection{Helix graph} We have already seen two equivalent ways of presenting a Mealy automaton: $(i)$ as a quadruple $(A,\Sigma,\delta,\rho)$, $(ii)$ as a labelled directed graph (see Fig.~\ref{fi-Mautdual}). We propose here a third and original one which turns out to be very convenient. \smallskip The \emph{helix graph} ${\mathcal H}$ of a Mealy automaton $\aut{A}=(A,\Sigma,\delta,\rho)$ is the directed graph with nodes \(A\times \Sigma\) and arcs \((x,i) \longrightarrow \bigl(\delta_i(x), \rho_x(i)\bigr)\) for all \((x,i)\). 
The \emph{helix graph ${\mathcal H}_{n,k}$ of order $(n,k)$ associated with $\aut{A}$} is the helix graph of $\aut{A}_{n,k}$. In Fig.~\ref{fi-helix}, we have represented the helix graph of the Mealy automaton of Fig.~\ref{fi-Mautdual}. \begin{figure}[ht] \begin{center} \TinyPicture \VCDraw{% \begin{VCPicture}{(-1,4)(9,4.5)} \LargeState \ChgStateLabelScale{0.8} \State[a,1]{(0,5)}{a1} \State[b,0]{(0,3)}{b0} \State[a,0]{(4,4)}{a0} \State[b,1]{(8,4)}{b1} \Edge{a1}{a0} \Edge{b0}{a0} \Edge{a0}{b1} \LoopE{b1}{} \end{VCPicture} } \end{center} \caption{Helix graph.} \label{fi-helix} \end{figure} Bireversible automata have a nice characterization using the helix graph. \begin{lemma}\label{le-bi} Consider an IR-automaton $\aut{A}$ with helix graph ${\mathcal H}$. We have: \[ \left[ \ \aut{A} \text{ bireversible } \right] \iff \left[ \ {\mathcal H} \text{ union of cycles } \right] \:. \] \end{lemma} \begin{proof} Define the directed graph $\widetilde{{\mathcal H}}$ as follows: \begin{itemize} \item nodes: \(A^{-1}\times \Sigma^{-1}\), \item arcs: \((x^{-1},i^{-1})\longrightarrow (y^{-1},j^{-1})\) if \((y,j) \longrightarrow (x,i)\) is an arc of \({\mathcal H}\). \end{itemize} By construction, the arcs of $\widetilde{{\mathcal H}}$ correspond to the transitions of the set $\dz\mathfrak{i}\dz\mathfrak{i}(\aut{A})$; in particular, if $\dz\mathfrak{i}\dz\mathfrak{i}(\aut{A})$ is a Mealy automaton, then $\widetilde{{\mathcal H}}$ is its helix graph. Assume that $\aut{A}$ is bireversible. Then every node of the helix graph $\widetilde{{\mathcal H}}$ has out-degree exactly~1, hence every node of ${\mathcal H}$ has in-degree exactly~1. Since every node of ${\mathcal H}$ also has out-degree exactly~1 and ${\mathcal H}$ is finite, ${\mathcal H}$ is a union of cycles. Conversely, assume that $\aut{A}$ is an IR-automaton and that ${\mathcal H}$ is a union of cycles. Then every node $(y,j)$ of ${\mathcal H}$ has a unique predecessor, so every node of $\widetilde{{\mathcal H}}$ has exactly one outgoing arc: the set of transitions $\dz\mathfrak{i}\dz\mathfrak{i}(\aut{A})$ is deterministic and complete, hence a Mealy automaton, and $\aut{A}$ is bireversible. \end{proof} \subsection{Automaton (semi)group}\label{sse-automgroup} Let $\aut{A} = (A,\Sigma, \delta,\rho)$ be a Mealy automaton. We view $\aut{A}$ as an automaton with an input and an output tape, thus defining mappings from input words over $\Sigma$ to output words over~$\Sigma$. 
Formally, for $x\in A$, the map $\rho_x : \Sigma^* \rightarrow \Sigma^*$, extending $\rho_x : \Sigma \rightarrow \Sigma$, is defined by: \[ \rho_x (\mot{u}) = \mot{v} \quad \textrm{if} \quad \exists y, \ x\stackrel{\mot{u}|\mot{v}}{\longrightarrow} y \:.\] By convention, the image of the empty word is the empty word. The mapping $\rho_x$ is length-preserving and prefix-preserving (the prefix of the image is the image of the prefix). It satisfies \begin{equation}\label{eq-property} \forall u \in \Sigma, \ \forall \mot{v} \in \Sigma^*, \qquad \rho_x(u\mot{v}) = \rho_x(u)\rho_{\delta_u(x)}(\mot{v}) \:. \end{equation} We can also use \eref{eq-property} to define $\rho_x:\Sigma^* \rightarrow \Sigma^*$ inductively starting from $\rho_x:\Sigma \rightarrow \Sigma$. We say that $\rho_x$ is the \emph{production function} associated with $(\aut{A},x)$. For $\mot{u}=u_1\cdots u_n \in A^n$, $n>0$, set \(\rho_\mot{u}: \Sigma^* \rightarrow \Sigma^*, \rho_\mot{u} = \rho_{u_n} \circ \cdots \circ \rho_{u_1} \:\). \begin{definition} Consider $\aut{A} \in \mathfrak{T}_A^{\Sigma}\times \mathfrak{T}_{\Sigma}^A$. The semigroup of mappings from $\Sigma^*$ to $\Sigma^*$ generated by $\rho_x, x\in A$, is called the \emph{semigroup of $\aut{A}$} and is denoted by $\presm{\aut{A}}$. Assume that $\aut{A} \in \mathfrak{T}_A^{\Sigma}\times \perm_{\Sigma}^A$. The group of mappings from $\Sigma^*$ to $\Sigma^*$ generated as a group by $\rho_x, x\in A$, is called the \emph{group of $\aut{A}$} and is denoted by $\pres{\aut{A}}$. \end{definition} The above definition makes sense. Indeed, if $\aut{A} \in \mathfrak{T}_A^{\Sigma}\times \perm_{\Sigma}^A$, then the production mapping $\rho_x$ associated with $(\aut{A},x)$ is a bijection from $\Sigma^*$ to $\Sigma^*$. The inverse bijection $\rho_x^{-1}:\Sigma^*\rightarrow \Sigma^*$ is the production mapping $\rho_{x^{-1}}$ associated with $(\inverse{\aut{A}},x^{-1})$, where $\inverse{\aut{A}}$ is the inverse Mealy automaton defined in \eref{eq-inv}. 
Therefore, we have \[ \presm{\aut{A}}= \{ \rho_\mot{u}, \mot{u} \in A^* \}, \qquad \pres{\aut{A}} = \{ \rho_\mot{u}, \mot{u} \in (A\sqcup A^{-1})^* \} \:. \] \begin{lemma}\label{lm-g-sg-finis} Let \(\aut{A}\) be an IR-automaton. Then we have $\pres{\aut{A}}=\pres{\inverse{\aut{A}}} = \pres{ \aut{A} \sqcup \inverse{\aut{A}}} = \presm{ \aut{A} \sqcup \inverse{\aut{A}}}$, where $\aut{A} \sqcup \inverse{\aut{A}}$ is the Mealy automaton whose set of transitions is the union of the ones of $\aut{A}$ and $\inverse{\aut{A}}$. Furthermore, if either \(\pres{\aut{A}}\) or \(\presm{\aut{A}}\) is finite, then we have~\(\pres{\aut{A}} = \presm{\aut{A}}\). \end{lemma} \begin{proof} The first statement follows directly from the definitions. Suppose that \(\presm{\aut{A}}\) is finite and let \(x\) be one of its elements. Since the semigroup \(\presm{\aut{A}}\) is finite, there exist~$k$ and~$n$ such that $x^{n+k}=x^k$. So we have \(x^n=1\) in the group \(\pres{\aut{A}}\). Hence the inverse of \(x\) is \(x^{n-1}\) which belongs to the semigroup \(\presm{\aut{A}}\). So we have \(\pres{\aut{A}} = \presm{\aut{A}}\). Assume now that \(\pres{\aut{A}}\) is finite. Since the semigroup~\(\presm{\aut{A}}\) naturally embeds into the group~\(\pres{\aut{A}}\), it is also finite. \end{proof} \begin{definition}\label{de-gag} A semigroup $M$ is called an \emph{automaton semigroup} if there exists a Mealy automaton $\aut{A}$ such that $M = \presm{\aut{A}}$. A group $G$ is called an \emph{automaton group} if there exists an invertible Mealy automaton $\aut{A}$ such that $G = \pres{\aut{A}}$. In both cases, we say that $\aut{A}$ \emph{generates} the (semi)group. \end{definition} Denote dually by $\delta_i:A^*\rightarrow A^*, i\in \Sigma$, the production mappings associated with the dual Mealy automaton $\dz(\aut{A})$. For $\mot{v}=v_1\cdots v_n \in \Sigma^n$, $n>0$, set $\delta_\mot{v}: A^* \rightarrow A^*, \ \delta_\mot{v} = \delta_{v_n}\circ \cdots \circ \delta_{v_1}$. 
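The production functions can be evaluated letter by letter via \eref{eq-property}, without ever materializing them on the whole of $\Sigma^*$. The following Python sketch (all helper names are ours, purely illustrative) does so for the left automaton of Fig.~\ref{fi-Maut}, and composes $\rho_{\mot{u}}$ for a word of states $\mot{u}$; since the maps are length-preserving, enumerating the distinct $\rho_{\mot{u}}$ restricted to $\Sigma^n$ for growing $n$ is exactly the semidecision procedure for finiteness mentioned in the introduction.

```python
# Our own illustrative helpers for the production functions of the left
# automaton of Fig. 1 (states {a, b}, alphabet {0, 1}).
delta = {0: {"a": "b", "b": "b"}, 1: {"a": "a", "b": "a"}}
rho   = {"a": {0: 0, 1: 0},       "b": {0: 1, 1: 1}}

def produce(x, word):
    """rho_x on a word, via rho_x(u v) = rho_x(u) rho_{delta_u(x)}(v)."""
    out = []
    for u in word:
        out.append(rho[x][u])   # output one letter ...
        x = delta[u][x]         # ... and move to the next state
    return tuple(out)

def produce_word(states, word):
    """rho_u = rho_{u_n} o ... o rho_{u_1} for u = u_1 ... u_n."""
    for x in states:            # u_1 acts first
        word = produce(x, word)
    return word

produce("a", (1, 1))        # -> (0, 0)
produce_word("ab", (0, 1))  # -> (1, 1)
```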
\smallskip A pair of Mealy automata $\{\aut{A},\dz(\aut{A})\}$ generates a pair of (semi)groups. \medskip Examples of automata (semi)groups are given in Table~\ref{tbl-examples}. \bigskip The two following propositions complement each other. Proposition~\ref{pr:duale-finitude} is proved by Nekrashevych for a pair of dual bireversible Mealy automata~\cite[Lem.1.10.6]{nek}. For the sake of completeness, we provide a similar proof in the general case. \begin{proposition}\label{pr-fifi} Let $G$ and $H$ be two finite semigroups. There exists a Mealy automa\-ton $\aut{A}$ such that $\pres{\aut{A}}_+ =G$ and $\pres{\dz(\aut{A})}_+ =H$. Let $G$ and $H$ be two finite groups. There exists an IR-automa\-ton $\aut{A}$ such that $\pres{\aut{A}} =G$ and $\pres{\dz(\aut{A})} =H$. \end{proposition} \begin{proof} We carry out the proof for groups. The argument is similar for semigroups. Any finite group is a subgroup of a permutation group. Let $\Sigma_1$ and $A_2$ be two finite sets such that $G$ is a subgroup of $\perm_{\Sigma_1}$ and $H$ is a subgroup of $\perm_{A_2}$. Let $A_1 \subset \perm_{\Sigma_1}$ be a set of generators of $G$, let $\Sigma_2 \subset \perm_{A_2}$ be a set of generators of $H$. Set $A = A_1\times A_2$ and $\Sigma=\Sigma_1\times \Sigma_2$. Consider the Mealy automaton $\aut{A}$ with states $A$, alphabet $\Sigma$, and transitions \[ (a,b) \xrightarrow{(i,j) \mid (a(i),j)} (a,j(b)) \:. \] Denote the corresponding mappings by $\delta$ and $\rho$. Clearly, for $(a,b)\in A_1\times A_2$ and $(a,b')\in A_1\times A_2$, we have $\rho_{(a,b)}=\rho_{(a,b')}$ and we denote this mapping by $\rho_a:\Sigma^*\rightarrow \Sigma^*$. We have, $\forall a\in A_1, \forall (i_1,j_1)\cdots (i_n,j_n) \in \Sigma^*,$ \[ \rho_a \bigl( (i_1,j_1)\cdots (i_n,j_n) \bigr) = (a(i_1),j_1) \ (a(i_2),j_2)\ \cdots \ (a(i_n),j_n) \:. \] So the group generated by $(\rho_a:\Sigma^*\rightarrow \Sigma^*)_{a\in A_1}$ is isomorphic to the group generated by $(a:\Sigma_1\rightarrow \Sigma_1)_{a\in A_1}$. 
That is $\pres{\aut{A}} = G$. Similarly, $\pres{\dz(\aut{A})} = H$. \end{proof} \begin{sidewaystable} \centering \caption{Examples of automata (semi)groups.\label{tbl-examples}} { \begin{tabular}{|m{98pt}|m{75pt}|m{70pt}|c|c|c|m{70pt}|m{70pt}|m{65pt}|} \hline \multicolumn{3}{|c|}{infinite world} &\!\!\multirow{2}{*}{\rotatebox{270}{\footnotesize \!\!\!invertible}}\!\! &\!\!\multirow{2}{*}{\rotatebox{270}{\footnotesize \!\!\!reversible}}\!\! &\!\!\multirow{2}{*}{\rotatebox{270}{\footnotesize \!\!\!bireversible}}\!\! &\multicolumn{3}{c|}{finite world}\\ \cline{1-3}\cline{7-9} \multicolumn{1}{|c|}{\small generated (semi)group} &\multicolumn{1}{c|}{diagram} &\multicolumn{1}{c|}{helix graph} &&&&\multicolumn{1}{c|}{helix graph} &\multicolumn{1}{c|}{diagram} &\multicolumn{1}{c|}{\small gen. (semi)group}\\ \cline{1-3}\cline{7-9} \footnotesize the semigroup~$\mathbf{S_{I_2}}$ (the very smallest Mealy automaton with intermediate growth, see~\cite{brs}) &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-.7)(4,2.3)} \State[a]{(0,0)}{A} \State[b]{(4,0)}{B} \EdgeL{B}{A}{\IOL{1}{1}} \LoopN[.7]{A}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \LoopN[.4]{B}{\IOL{0}{1}} \end{VCPicture}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-.7)(2,2.7)} \State[a0]{(0,0)}{A0} \State[a1]{(2,0)}{A1} \State[b1]{(2,2)}{B1} \State[b0]{(0,2)}{B0} \ArcR{A0}{A1}{} \ArcR{A1}{A0}{} \EdgeL{B1}{A1}{} \EdgeL{B0}{B1}{} \end{VCPicture}} &&& &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(-2,-0.3)(2,2.3)} \State[a0]{(-2,2)}{A0} \State[a2]{(0,2)}{A2} \State[b2]{(2,2)}{B2} \State[b0]{(-2,0)}{B0} \State[b1]{(0,0)}{B1} \State[a1]{(2,0)}{A1} \EdgeL{A0}{B0}{} \EdgeL{A2}{B0}{} \EdgeL{B2}{A2}{} \EdgeL{A1}{B2}{} \ArcR{B0}{B1}{} \ArcR{B1}{B0}{} \end{VCPicture}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-1.3)(4,2.3)} \State[a]{(0,0)}{A} \State[b]{(4,0)}{B} \CLoopN[.3]{B}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \ArcL[.2]{B}{A}{\IOL{2}{2}} 
\ArcL[.2]{A}{B}{\StackThreeLabels{\IOL{0}{0}}{\IOL{1}{2}}{\IOL{2}{0}}} \end{VCPicture}} & an order~13597 semigroup \\ \hline \footnotesize the {\bf Grigorchuk group} see~\cite{gns} &\centering \TinyPicture\VCDraw{% \begin{VCPicture}{(0,-4.8)(4,0.8)} \State[a]{(0,-4)}{A} \State[b]{(0,0)}{B} \State[c]{(2,-2)}{C} \State[d]{(4,0)}{D} \State[e]{(4,-4)}{E} \EdgeL[.8]{A}{E}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \EdgeR[.3]{B}{A}{\IOL{0}{0}} \EdgeR[.7]{B}{C}{\IOL{1}{1}} \EdgeL[.3]{C}{A}{\IOL{0}{0}} \EdgeR[.3]{C}{D}{\IOL{1}{1}} \EdgeL[.3]{D}{E}{\IOL{0}{0}} \EdgeL{D}{B}{\IOL{1}{1}} \CLoopE[.2]{E}{\StackTwoLabels{\IOL{0}{0}}{\IOL{1}{1}}} \end{VCPicture}} &\centering \TinyPicture\VCDraw{% \begin{VCPicture}{(1,-2.3)(5,2.3)} \State[a0]{(1,0)}{a0} \State[a1]{(5,0)}{a1} \State[b0]{(0.4,2)}{b0} \State[b1]{(5,2)}{b1} \State[c0]{(1.6,2)}{c0} \State[c1]{(3,2)}{c1} \State[d0]{(3,-2)}{d0} \State[d1]{(3,0)}{d1} \State[e0]{(5,-2)}{e0} \State[e1]{(1,-2)}{e1} \EdgeL{b0}{a0}{} \EdgeL{c0}{a0}{} \EdgeL{a0}{e1}{} \CLoopNE{e1}{} \EdgeL{d0}{e0}{} \EdgeL{a1}{e0}{} \CLoopNW{e0}{} \EdgeL{b1}{c1}{} \EdgeL{c1}{d1}{} \EdgeR{d1}{b1}{} \end{VCPicture}} &\cellcolor{gray}&& &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(-2.5,-0.3)(2.5,2.3)} \State[c0]{(-2,2)}{c0} \State[b0]{(0,2)}{b0} \State[c1]{(2,2)}{c1} \State[a0]{(-2,0)}{a0}\State[a1]{(0,0)}{a1} \State[b1]{(2,0)}{b1} \EdgeL{c1}{b1}{} \EdgeL{b0}{a0}{} \EdgeL{c0}{a0}{} \EdgeL{b1}{a1}{} \ArcR{a0}{a1}{} \ArcR{a1}{a0}{} \end{VCPicture}} &\centering\SmallPicture\VCDraw{% \begin{VCPicture}{(0,-.3)(4,2.3)} \State[a]{(2,0)}{A} \State[b]{(0,2)}{B} \State[c]{(4,2)}{C} \CLoopE[.55]{A}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \EdgeR{B}{A}{\StackTwoLabels{\IOL{0}{0}}{\IOL{1}{1}}} \EdgeL[.3]{C}{A}{{\IOL{0}{0}}} \EdgeR{C}{B}{{\IOL{1}{1}}} \end{VCPicture}} & the group~$\mathbb{Z}_2\times D_4$\\ \hline \footnotesize the {\bf Basilica group} see~\cite{gns} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-2.8)(4,1.2)} \State[a]{(0,0)}{A} 
\State[b]{(4,0)}{B} \State[c]{(2,-2)}{C} \ArcL[.2]{A}{B}{\IOL{0}{1}} \EdgeR[.25]{A}{C}{\IOL{1}{0}} \ArcL[.5]{B}{A}{\IOL{0}{0}} \EdgeL[.25]{B}{C}{\IOL{1}{1}} \CLoopE[.51]{C}{\StackTwoLabels{\IOL{0}{0}}{\IOL{1}{1}}} \end{VCPicture}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(-2.5,-0.3)(2.5,2.3)} \State[c0]{(-2,2)}{c0} \State[a1]{(0,2)}{a1} \State[c1]{(2,2)}{c1} \State[b0]{(-2,0)}{b0} \State[a0]{(0,0)}{a0} \State[b1]{(2,0)}{b1} \EdgeL{b0}{a0}{} \EdgeL{a0}{b1}{} \EdgeL{a1}{c0}{} \EdgeL{b1}{c1}{} \CLoopS{c0}{} \CLoopW{c1}{} \end{VCPicture}} &\cellcolor{gray}&& &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,0)(2,2)} \State[b0]{(0,2)}{b0} \State[a0]{(2,2)}{a0} \State[b1]{(0,0)}{b1}\State[a1]{(2,0)}{a1} \EdgeL{b0}{a0}{} \EdgeL{b1}{a1}{} \ArcR{a0}{a1}{} \ArcR{a1}{a0}{} \end{VCPicture}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-1)(4,1)} \State[a]{(0,0)}{A} \State[b]{(4,0)}{B} \LoopN[.8]{A}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \EdgeL{B}{A}{\StackTwoLabels{\IOL{0}{0}}{\IOL{1}{1}}} \end{VCPicture}} &the Klein $4$-group $V=\mathbb{Z}_2\times\mathbb{Z}_2$ \\ \hline \footnotesize the {\bf lamplighter group} $L=\mathbb{Z}\wr\mathbb{Z}_2$ see~\cite{gns} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-1.3)(4,2)} \State[a]{(0,0)}{A} \State[b]{(4,0)}{B} \ArcL{A}{B}{\IOL{0}{1}} \ArcL{B}{A}{\IOL{0}{0}} \CLoopN[.4]{A}{\IOL{1}{0}} \CLoopN[.6]{B}{\IOL{1}{1}} \end{VCPicture}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(-.5,-1.1)(4.5,1.1)} \State[a0]{(2,0)}{a0} \State[a1]{(0,.8)}{a1} \State[b0]{(0,-.8)}{b0} \State[b1]{(4,0)}{b1} \EdgeL{a1}{a0}{} \EdgeL{b0}{a0}{} \EdgeL{a0}{b1}{} \CLoopN{b1}{} \end{VCPicture}} &\cellcolor{gray}&\cellcolor{gray} &\\ \hline {\footnotesize the rank~3 free group ({\bf Ale\v{s}in} automaton) see~\cite{aleshin,svv}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-2.5)(4,1.2)} \State[a]{(0,0)}{A} \State[b]{(2,-2)}{B} \State[c]{(4,0)}{C} \ArcR[.5]{A}{C}{\IOL{0}{1}} \EdgeR[.2]{A}{B}{\IOL{1}{0}} 
\CLoopW[.5]{B}{\IOL{0}{1}} \EdgeR[.2]{B}{C}{\IOL{1}{0}} \ArcR[.1]{C}{A}{\StackTwoLabels{\IOL{0}{0}}{\IOL{1}{1}}} \end{VCPicture}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(-.5,-1.3)(4.5,1.3)} \State[b1]{(0,1)}{b1} \State[c0]{(2,1)}{c0} \State[a0]{(4,1)}{a0} \State[b0]{(0,-1)}{b0} \State[a1]{(2,-1)}{a1} \State[c1]{(4,-1)}{c1} \EdgeL{b1}{c0}{} \EdgeL{c0}{a0}{} \EdgeL{a0}{c1}{} \EdgeL{c1}{a1}{} \EdgeL{a1}{b0}{} \EdgeL{b0}{b1}{} \end{VCPicture}} &\cellcolor{gray}&\cellcolor{gray}&\cellcolor{gray} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(-.5,-1.3)(4.5,1.3)} \State[a0]{(0,1)}{a0} \State[a1]{(2,1)}{a1} \State[b0]{(4,1)}{b0} \State[b1]{(0,-1)}{b1} \State[b2]{(2,-1)}{b2} \State[a2]{(4,-1)}{a2} \EdgeL{b0}{a1}{} \EdgeL{a1}{b2}{} \EdgeL{b2}{a2}{} \EdgeR{a2}{b0}{} \ArcR{a0}{b1}{} \ArcR{b1}{a0}{} \end{VCPicture}} &\centering\SmallPicture\VCDraw{% \begin{VCPicture}{(0,-2.3)(4,2.3)} \State[a]{(0,0)}{A} \State[b]{(4,0)}{B} \ArcL[.1]{A}{B}{\StackThreeLabels{\IOL{0}{1}}{\IOL{1}{2}}{\IOL{2}{0}}} \ArcL[.1]{B}{A}{\StackThreeLabels{\IOL{0}{1}}{\IOL{1}{0}}{\IOL{2}{2}}} \end{VCPicture}} &an order~36 group\\ \hline {\footnotesize the free product $\mathbb{Z}_2^{*3}=\mathbb{Z}_2*\mathbb{Z}_2*\mathbb{Z}_2$ ({\bf BabyAle\v{s}in} automaton) see~\cite{svv}} &\centering \SmallPicture\VCDraw{% \begin{VCPicture}{(0,-2.5)(4,1.2)} \State[a]{(0,0)}{A} \State[b]{(2,-2)}{B} \State[c]{(4,0)}{C} \ArcL[.1]{A}{C}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \EdgeL[.2]{B}{A}{\IOL{0}{0}} \CLoopE[.5]{B}{\IOL{1}{1}} \EdgeL[.2]{C}{B}{\IOL{0}{0}} \ArcL[.5]{C}{A}{\IOL{1}{1}} \end{VCPicture}} &\centering\SmallPicture\VCDraw{% \begin{VCPicture}{(-.5,-1.3)(4.5,1.3)} \State[c0]{(0,1)}{c0} \State[a1]{(2,1)}{a1} \State[b1]{(4,1)}{b1} \State[b0]{(0,-1)}{b0} \State[a0]{(2,-1)}{a0} \State[c1]{(4,-1)}{c1} \EdgeL{c0}{b0}{} \EdgeL{b0}{a0}{} \EdgeL{a0}{c1}{} \EdgeL{c1}{a1}{} \EdgeL{a1}{c0}{} \CLoopS{b1}{} \end{VCPicture}} & \cellcolor{gray} & \cellcolor{gray} & \cellcolor{gray} & \centering 
\SmallPicture\VCDraw{% \begin{VCPicture}{(0.5,-1.7)(5.5,1.7)} \State[a3]{(1,1.5)}{a3} \State[b0]{(3,1.5)}{b0} \State[b1]{(5,1.5)}{b1} \State[a2]{(1,0)}{a2} \State[a0]{(5,0)}{a0} \State[b3]{(1,-1.5)}{b3} \State[b2]{(3,-1.5)}{b2} \State[a1]{(5,-1.5)}{a1} \EdgeR{a2}{b3}{} \EdgeR{b3}{b2}{} \EdgeR{b2}{a1}{} \EdgeR{a1}{a0}{} \EdgeR{a0}{b1}{} \EdgeR{b1}{b0}{} \EdgeR{b0}{a3}{} \EdgeR{a3}{a2}{} \end{VCPicture}} & \centering\SmallPicture\VCDraw{% \begin{VCPicture}{(0,-1.9)(4,2.5)} \State[a]{(0,0)}{A} \State[b]{(4,0)}{B} \ArcL{A}{B}{\StackTwoLabels{\IOL{0}{1}}{\IOL{2}{3}}} \ArcL{B}{A}{\StackTwoLabels{\IOL{0}{3}}{\IOL{2}{1}}} \CLoopN[.5]{A}{\StackTwoLabels{\IOL{1}{0}}{\IOL{3}{2}}} \CLoopN[.5]{B}{\StackTwoLabels{\IOL{1}{0}}{\IOL{3}{2}}} \end{VCPicture}} & the group~$G_{16}^{(9)}$ \\ \hline \end{tabular}} \end{sidewaystable} \begin{proposition}\label{pr:duale-finitude} Let $\aut{A}$ be a Mealy automaton. The semigroup ~$\pres {\aut{A}}_+$ is finite if and only if the semigroup~$\pres{\dz(\aut{A})}_+$ is finite. \end{proposition} Proposition \ref{pr:duale-finitude} extends to groups using Lemma~\ref{lm-g-sg-finis}. \begin{proof} Set $\aut{A}=(A,\Sigma,\delta,\rho)$ and assume that $\pres{\dz(\aut{A})}_+= \{\delta_{\mot{u}}: A^* \rightarrow A^*, \ \mot{u}\in \Sigma^*\}$ is finite. Consider the Cayley graph~$\cal G$ of~$\pres{\dz(\aut{A})}_+$ with respect to the set of generators~$\Sigma$, see the left of the figure just below. Now fix $\mot{w}\in A^*$ and recall that \[ \rho_{\mot{w}}(u_1u_2\cdots u_n) := \rho_{\mot{w}}(u_1)\rho_{\delta_{u_1}(\mot{w})}(u_2)\rho_{\delta_{u_1u_2}(\mot{w})}(u_3)\cdots \rho_{\delta_{u_1u_2\cdots u_{n-1}}(\mot{w})}(u_n)\:, \] for all $u_1u_2\cdots u_n \in \Sigma^*$. This shows that $\rho_{\mot{w}}$ can also be described as the output map of a letter-to-letter transducer built upon~$\cal G$, see the right of the figure. 
\begin{center} \TinyPicture \VCDraw{% \begin{VCPicture}{(-6,-0.2)(10,0)} \LargeState \State[\delta_{\mot{u}}]{(-5,0)}{A} \State[\delta_{\mot{u}i}]{(0,0)}{B} \State[\delta_{\mot{u}}]{(4,0)}{C} \State[\delta_{\mot{u}i}]{(9,0)}{D} \EdgeL[.5]{C}{D}{\IOL{i}{\rho_{\delta_{\mot{u}}(\mot{w})}(i)}} \ChgEdgeLabelSep{2} \EdgeL[.5]{A}{B}{i} \end{VCPicture} } \end{center} \noindent Now observe that there is only a finite number of possible different transducers built on $\cal G$, which is equal to the number of different mappings from $\pres{\dz( \aut{A})}_+$ to~$\mathfrak{T}_{\Sigma}$. We conclude that $\# \pres{\aut{A}}_+ \leq \bigl(\# \Sigma \bigr)^{(\# \Sigma)\ ( \# \pres{\dz( \aut{A})}_+)}$. \end{proof} The growth of a Mealy automaton is defined as the growth of the number of different elements $\rho_{\mot{u}}, \ \mot{u} \in A^n$, as a function of $n$, see~\cite{brs,growth}. Automata generating finite (semi)groups are exactly those of finite growth. Looking at the 2-letter 2-state automata, finite growth appears to be the only one of the known growth classes (finite, polynomial, intermediate, and exponential) that is stable under dualization. \medskip Let \(\aut{A}\) be an IR-automaton. Recall that~$\pres{\aut{A}} = \pres{\aut{A}\sqcup \aut{A}^{-1}}$. In words, considering the states and their inverses does not modify the generated group. We can also consider the letters and their inverses. Set \(\widetilde{\aut{A}}= \aut{A}' \sqcup (\aut{A}')^{-1}\) where \(\aut{A}'=\dz(\dz(\aut{A}) \sqcup \inverse{\dz(\aut{A})})\). The Mealy automaton \(\widetilde{\aut{A}}\) is the extension of \(\aut{A}\) with stateset \(\alphA\sqcup\inverse{\alphA}\) and alphabet \(\alphS\sqcup\inverse{\alphS}\). The next result is a corollary of Proposition~\ref{pr:duale-finitude} and Lemma~\ref{lm-g-sg-finis}. \begin{corollary}\label{cor-gen} Let \(\aut{A}\) be an IR-automaton. The groups \(\pres{\aut{A}}\) and \(\pres{\widetilde{\aut{A}}}\) are either both finite or both infinite. 
\end{corollary} The above groups are not necessarily equal. Consider for instance the automaton \(\aut{A}\) generating \(G_{16}^{(9)}\) in Table~\ref{tbl-examples}: we have \(|\pres{\aut{A}}|=16\) and \(|\pres{\widetilde{\aut{A}}}|=64\). \section{Reduction of Mealy automata and finiteness} \label{s:reduction} Here we define the \emph{$\mz\dz$-reduction} of Mealy automata, which provides a sufficient condition for finiteness. The condition is not necessary, and two counterexamples are provided. \subsection{Minimization of a Mealy automaton}\label{quo-auto} \begin{definition} Let $\mathcal{A}=(A,\Sigma,\delta,\rho)$ be a Mealy automaton. An equivalence $\equiv$ on $A$ is a \emph{congruence} for $\mathcal{A}$ if \[ \left[\forall x,y\in A,\ x\equiv y\right] \Longrightarrow \left[\forall i\in\Sigma,\ \rho_x(i)=\rho_y(i)\text{ and } \delta_i(x)\equiv\delta_i(y)\right]. \] The \emph{Nerode equivalence} on $A$ is the coarsest congruence for $\mathcal{A}$. \end{definition} The Nerode equivalence is the limit of the sequence $(\equiv_k)$ of increasingly finer equivalences defined recursively by: \begin{align*} \forall x,y\in A,\qquad\qquad x\equiv_0 y & \Longleftrightarrow \forall i\in\Sigma,\ \rho_x(i)=\rho_y(i),\\ \forall k\geqslant 0, x\equiv_{k+1} y & \Longleftrightarrow x\equiv_k y \text{ and }\forall i\in\Sigma,\ \delta_i(x)\equiv_k\delta_i(y). \end{align*} Since the set $A$ is finite, this sequence is ultimately constant; moreover, if two consecutive equivalences are equal, the sequence remains constant from that point on. The limit is therefore computable. For every $x$ in $A$, we denote by $[x]$ the class of $x$ w.r.t. the Nerode equivalence. \begin{definition} Let $\mathcal{A}=(A,\Sigma,\delta,\rho)$ be a Mealy automaton and let $\equiv$ be the Nerode equivalence on $\mathcal{A}$.
The \emph{minimization} of $\mathcal{A}$ is the Mealy automaton \(\mathcal{A}/\negthickspace\equiv\,=(A/\negthickspace\equiv,\Sigma,\tilde{\delta},\tilde{\rho})\), where for every $(x,i)$ in $A\times \Sigma$, $\tilde{\delta}_i([x])=[\delta_i(x)]$ and $\tilde{\rho}_{[x]}(i)=\rho_x(i)$. \end{definition} This definition is consistent with the minimization of ``deterministic finite automata'', where instead of considering the production functions $(\rho_x)_x$, the computation of the congruence is initiated by the separation between terminal and non-terminal states. \begin{lemma}\label{lem-min} Let $\mathcal{A}=(A,\Sigma,\delta,\rho)$ be a Mealy automaton, and let $\mathcal{A}/\negthickspace\equiv$ be its minimization. The function on $\Sigma^*$ generated by~$x$ in~$\mathcal{A}$ is equal to the function generated by~$[x]$ in~$\mathcal{A}/\equiv$. Therefore, the Mealy automata $\mathcal{A}$ and $\mathcal{A}/\equiv$ generate the same semigroup. \end{lemma} \begin{proof} Let $(\mot{i}_n)_{n\in\mathbb{N}}$ be a sequence of words of $\Sigma^*$ such that for all integer \(n\), the length of \(\mot{i}_n\) is \(n\) and \(\mot{i}_n\) is a prefix of \(\mot{i}_{n+1}\): \(\mot{i}_{n+1} = \mot{i}_ni_{n+1}\), where \(i_{n+1}\in\Sigma\). We prove by induction on $n$ that for every $x$ of $A$, we have $\rho_x = \tilde{\rho}_{[x]}$ on $\Sigma^n$. It is obviously true for $n=0$. If $n>0$: \begin{align*} \rho_x(\mot{i}_n)=&\rho_x(\mot{i}_{n-1})\rho_{\delta_{\mot{i}_{n-1}}(x)}(i_n)\\ =&\tilde{\rho}_{[x]}(\mot{i}_{n-1})\tilde{\rho}_{[\delta_{\mot{i}_{n-1}}(x)]}(i_n)\\ =&\tilde{\rho}_{[x]}(\mot{i}_{n-1})\tilde{\rho}_{\tilde{\delta}_{\mot{i}_{n-1}}([x])}(i_n) =\tilde{\rho}_{[x]}(\mot{i}_n). \end{align*} \end{proof} \subsection{The $\mz\dz$-reduction of Mealy automata} Observe that the minimization of a Mealy automaton with a minimal dual can make the dual automaton non-minimal. \begin{definition} A pair of dual Mealy automata is \emph{reduced} if both Mealy automata are minimal. 
Let~$\mz$ be the operation of minimization; recall that $\dz$ is the operation of dualization. The \emph{$\mz\dz$-reduction} of a Mealy automaton consists in minimizing the automaton or its dual until the resulting pair of dual Mealy automata is reduced. \end{definition} If both a Mealy automaton and its dual automaton are non-minimal, the procedure of $\mz\dz$-reduction seems to be dependent on the first automaton chosen for the minimization. The reduction is actually confluent: \begin{proposition}\label{prop:paire} If $(\mathcal{A},\mathcal{B})$ is a pair of dual Mealy automata, the reduced pair obtained by minimizing $\mathcal{A}$ first is the same as the one obtained by minimizing $\mathcal{B}$ first. \end{proposition} \begin{proof} If $(\mathcal{A},\mathcal{B})$ is reduced, both Mealy automata are minimal, and the proposition trivially holds. Otherwise, the proof is by induction on the total number of states in $\mathcal{A}$ and~$\mathcal{B}$. Let $(\mathcal{A}_1,\mathcal{B}_1)$ be the pair obtained by minimizing $\mathcal{A}$ and let $(\mathcal{A}_2,\mathcal{B}_2)$ be the pair obtained by minimizing $\mathcal{B}$. Let us set $\mathcal{A}=(A,\Sigma,\delta,\rho)$, $\mathcal{A}_1=(A_1,\Sigma,\delta^{(1)},\rho^{(1)})$, and $\mathcal{A}_2=(A,\Sigma_2,\delta^{(2)},\rho^{(2)})$. Let $\equiv_1$ and $\equiv_2$ be the congruences on $\mathcal{A}$ and $\mathcal{B}$ such that $A_1=A/\negthickspace\equiv_1$ and $\Sigma_2=\Sigma/\negthickspace\equiv_2$. We show that $\equiv_1$ is a congruence on $\mathcal{A}_2$. Let $x$ and $y$ be in $A$ such that $x\equiv_1 y$. Then, for every $i$ in $\Sigma$, $\rho_x(i)=\rho_y(i)$ and therefore, $\rho^{(2)}_x([i])=[\rho_x(i)]=[\rho_y(i)]=\rho^{(2)}_y([i])$; besides, $\delta^{(2)}_{[i]}(x)=\delta_i(x)\equiv_1\delta_i(y)=\delta^{(2)}_{[i]}(y)$. Hence, $\equiv_1$ is a congruence on $\mathcal{A}_2$ and, likewise, $\equiv_2$ is a congruence on $\mathcal{B}_1$. 
We consider now the Mealy automaton $\mathcal{A}'=(A_1,\Sigma_2,\delta',\rho')$ which is the quotient of $\mathcal{A}_2$ with respect to $\equiv_1$, and $\mathcal{B}'=(\Sigma_2,A_1,\rho",\delta")$ which is the quotient of $\mathcal{B}_1$ w.r.t. $\equiv_2$. For every $x$ in $A$ and every $i$ in $\Sigma$, it holds: \[ \delta"_{[i]}([x])=\delta^{(1)}_i([x])= [\delta_i(x)]=[\delta^{(2)}_{[i]}(x)]=\delta'_{[i]}([x])\:. \] Thus, $\delta"=\delta'$ and likewise $\rho"=\rho'$. \begin{center} \VCDraw{\begin{VCPicture}{(-3.5,-3.4)(3.5,3.3)}% \ChgStateLineStyle{none} \StateVar[\mathcal{A},\mathcal{B}]{(0,3)}{A} \StateVar[\mathcal{A}_1,\mathcal{B}_1]{(-2,1)}{A1} \StateVar[\mathcal{A}_2,\mathcal{B}_2]{(2,1)}{A2} \StateVar[\mathcal{A}',\mathcal{B}']{(0,-1)}{AP} \StateVar[\mathcal{A}_3,\mathcal{B}_3]{(-3,-3)}{A3} \StateVar[\mathcal{A}_4,\mathcal{B}_4]{(3,-3)}{A4} \EdgeR{A}{A1}{\equiv_1:\mz(\mathcal{A})} \EdgeL{A}{A2}{\equiv_2:\mz(\mathcal{B})} \EdgeR{A1}{AP}{\equiv_2} \EdgeL{A2}{AP}{\equiv_1} \ArcL{AP}{A3}{}\LabelR{\mz(\mathcal{B}')} \ArcR{AP}{A4}{}\LabelL{\mz(\mathcal{A}')} \LArcR{A1}{A3}{\mz(\mathcal{B}_1)} \LArcL{A2}{A4}{\mz(\mathcal{A}_2)} \end{VCPicture}} \end{center} \noindent Consider now $\mathcal{A}_2=(A,\Sigma_2, \delta^{(2)}, \rho^{(2)})$ and $\mathcal{A}'=(A_1,\Sigma_2, \delta', \rho')$. Clearly, applying the coarsest congruences respectively on $A$ in \(\mathcal{A}_2\) and $A_1$ in \(\mathcal{A}'\) will result in the same minimized Mealy automaton $\mathcal{A}_4$. The minimized Mealy automaton $\mathcal{B}_3$ is defined similarly starting from either $\mathcal{B}_1$ or $\mathcal{B}'$. Let $\mathcal{B}_4$ be the dual of $\mathcal{A}_4$, and let $\mathcal{A}_3$ be the dual of $\mathcal{B}_3$. By construction, the pair $(\mathcal{A}_3,\mathcal{B}_3)$ (resp. $(\mathcal{A}_4,\mathcal{B}_4)$) is the one obtained from $(\mathcal{A},\mathcal{B})$ by minimizing first $\mathcal{A}$ (resp. $\mathcal{B}$) then $\mathcal{B}$ (resp. $\mathcal{A}$). 
But the pair $(\mathcal{A}_3,\mathcal{B}_3)$ (resp. $(\mathcal{A}_4,\mathcal{B}_4)$) is also the one obtained by applying one minimization step starting from $(\mathcal{A}',\mathcal{B}')$. Observe that the pair $(\mathcal{A}',\mathcal{B}')$ has strictly fewer states than $(\mathcal{A},\mathcal{B})$. By the induction hypothesis, starting from $(\mathcal{A}',\mathcal{B}')$, the $\mz\dz$-reduction does not depend on the first minimization step, which proves the result. \end{proof} \subsection{A sufficient condition for finiteness} A trivial Mealy automaton is a Mealy automaton with one state over a one-letter alphabet. It clearly generates the trivial group. \begin{theorem}\label{prop-red-finite} If the $\mz\dz$-reduction of a Mealy automaton (\hbox{\textit{resp.}} an invertible Mealy automaton) leads to a trivial Mealy automaton, then the automaton generates a finite semigroup (\hbox{\textit{resp.}} a finite group). \end{theorem} \begin{proof} Let $(\mathcal{A},\mathcal{B})$ be a pair of dual Mealy automata and assume that there exists a sequence of dual Mealy automata $((\mathcal{A}_k,\mathcal{B}_k))_{k\in[0,m]}$ such that $(\mathcal{A}_0,\mathcal{B}_0)=(\mathcal{A},\mathcal{B})$, $(\mathcal{A}_m,\mathcal{B}_m)$ is trivial and, for every $k\in[1,m]$, either $\mathcal{A}_k$ is the minimization of $\mathcal{A}_{k-1}$ or $\mathcal{B}_k$ is the minimization of $\mathcal{B}_{k-1}$. By Proposition~\ref{pr:duale-finitude}, for every $k$, if $\mathcal{A}_k$ or $\mathcal{B}_k$ generates a finite semigroup, then both automata do. Obviously, $\mathcal{A}_m$ and $\mathcal{B}_m$ both generate the trivial group. We prove that if $\mathcal{A}_k$ generates a finite semigroup, so does $\mathcal{A}_{k-1}$. If $\mathcal{A}_k$ is the minimization of $\mathcal{A}_{k-1}$, by Lemma~\ref{lem-min}, they both generate the same semigroup. Otherwise, $\mathcal{B}_k$ is the minimization of $\mathcal{B}_{k-1}$.
Then $\mathcal{B}_k$ generates a finite semigroup (Prop.~\ref{pr:duale-finitude}), so does $\mathcal{B}_{k-1}$ (Lem.~\ref{lem-min}), and thus $\mathcal{A}_{k-1}$ (Prop.~\ref{pr:duale-finitude}). Therefore $\mathcal{A}$ generates a finite semigroup. \end{proof} Let \(\aut{A}\) be the following automaton: \begin{center} \MediumPicture\VCDraw{% \begin{VCPicture}{(-1,-2.5)(6,2.5)} \State[a]{(0,0)}{A} \State[b]{(5,0)}{B} \ArcL{A}{B}{\StackTwoLabels{\IOL{0}{1}}{\IOL{2}{3}}} \ArcL{B}{A}{\StackTwoLabels{\IOL{0}{3}}{\IOL{2}{1}}} \LoopN[.2]{A}{\StackTwoLabels{\IOL{1}{0}}{\IOL{3}{2}}} \LoopN[.8]{B}{\StackTwoLabels{\IOL{1}{0}}{\IOL{3}{2}}} \end{VCPicture}} \end{center} Let us compute the $\mz\dz$-reduced automaton of~\(\aut{A}\). \begin{center} \SmallPicture \FixVCGridScale{.8} \VCDraw{% \begin{VCPicture}{(-3,-32)(26,-2)} \State[a]{(0,-8)}{AA} \State[b]{(6,-8)}{BB} \ArcL{AA}{BB}{\StackTwoLabels{\IOL{0}{1}}{\IOL{2}{3}}} \ArcL{BB}{AA}{\StackTwoLabels{\IOL{0}{3}}{\IOL{2}{1}}} \LoopN[.2]{AA}{\StackTwoLabels{\IOL{1}{0}}{\IOL{3}{2}}} \LoopN[.8]{BB}{\StackTwoLabels{\IOL{1}{0}}{\IOL{3}{2}}} \Point{(10.5,-8)}{E2} \Point{(12.5,-8)}{F2} \EdgeL{E2}{F2}{{\mathfrak d}} \State[0]{(17,-3)}{A0} \State[1]{(23,-3)}{A1} \State[3]{(17,-9)}{A3} \State[2]{(23,-9)}{A2} \ArcL{A1}{A0}{\StackTwoLabels{\IOL{a}{a}}{\IOL{b}{b}}} \ArcL{A0}{A1}{\IOL{a}{b}} \ArcL{A3}{A2}{\StackTwoLabels{\IOL{a}{a}}{\IOL{b}{b}}} \ArcL{A2}{A3}{\IOL{a}{b}} \EdgeR{A0}{A3}{\IOL{b}{a}} \EdgeR{A2}{A1}{\IOL{b}{a}} \Point{(20,-11)}{E3} \Point{(20,-13)}{F3} \EdgeL{E3}{F3}{{\mathfrak m}} \StateVar[13]{(17,-16)}{A13} \StateVar[02]{(23,-16)}{A02} \ArcL{A13}{A02}{\StackTwoLabels{\IOL{a}{a}}{\IOL{b}{b}}} \ArcL{A02}{A13}{\StackTwoLabels{\IOL{a}{b}}{\IOL{b}{a}}} \Point{(12.5,-16)}{E4} \Point{(10.5,-16)}{F4} \EdgeL{E4}{F4}{{\mathfrak d}} \State[a]{(0,-16)}{AAA} \State[b]{(6,-16)}{BBB} \ArcL{AAA}{BBB}{\IOL{02}{13}} \ArcL{BBB}{AAA}{\IOL{02}{13}} \LoopN[.2]{AAA}{\IOL{13}{02}} \LoopN[.8]{BBB}{\IOL{13}{02}} \Point{(3,-18.5)}{E5} 
\Point{(3,-20.5)}{F5} \EdgeL{E5}{F5}{{\mathfrak m}} \StateVar[ab]{(3,-23.5)}{X} \LoopN[.2]{X}{\StackTwoLabels{\IOL{13}{02}}{\IOL{02}{13}}} \Point{(10.5,-23.5)}{E6} \Point{(12.5,-23.5)}{F6} \EdgeL{E6}{F6}{{\mathfrak d}} \StateVar[13]{(17,-23.5)}{AAAA} \StateVar[02]{(23,-23.5)}{BBBB} \ArcL{AAAA}{BBBB}{\IOL{ab}{ab}} \ArcL{BBBB}{AAAA}{\IOL{ab}{ab}} \Point{(20,-26)}{E7} \Point{(20,-28)}{F7} \EdgeL{E7}{F7}{{\mathfrak m}} \StateVar[0123]{(20,-31)}{Y} \LoopN[.2]{Y}{\IOL{ab}{ab}} \Point{(12.5,-31)}{E8} \Point{(10.5,-31)}{F8} \EdgeL{E8}{F8}{{\mathfrak d}} \StateVar[ab]{(3,-31)}{Z} \LoopN[.2]{Z}{\IOL{0123}{0123}} \end{VCPicture}} \end{center} The group generated by~$\aut{A}$ is finite and can be shown to be isomorphic to~$G_{16}^{(9)}$, that is, the group of order~16 with presentation\[\langle~a,b:a^4=b^4=abab=1,ab^3=ba^3~\rangle.\] \medskip Now consider the family~$(\aut{M}^{\sharp}_{p,q})$ of bireversible $p$-letter $q$-state Mealy automata: \begin{center} \SmallPicture \FixVCGridScale{3} \VCDraw{\begin{VCPicture}{(-1.5,-1.35)(1.5,1.3)} \State[a_1]{(-.62,.78)}{A1} \State[a_q]{(.22,.97)}{AQ} \State[a_2]{(-1,0)}{A2} \State{(.9,-.43)}{A5}\State{(.9,.43)}{A6} \State[a_3]{(-.62,-.78)}{A3} \State[a_4]{(.22,-.97)}{A4} \EdgeR[.7]{A1}{A2}{\IOL{i}{i+1},\IOL{p}{1}} \EdgeR[.3]{A2}{A3}{\StackThreeLabels{\IOL{1}{1}}{\IOL{i}{i+1},\IOL{p}{2}}{(i\neq 1)\ \ \ }} \EdgeR{A3}{A4}{\IOL{i}{i}} \EdgeR{A4}{A5}{\IOL{i}{i}} \EdgeR{A6}{AQ}{\IOL{i}{i}} \EdgeR{AQ}{A1}{\IOL{i}{i}} \SetEdgeLineStyle{dotted} \EdgeR{A5}{A6}{\IOL{i}{i}} \RstEdgeLineStyle \end{VCPicture}} \end{center} \noindent One can check that $(\dz\mz\dz\mz)(\aut{M}^{\sharp}_{p,q})$ is trivial for any~$p$ and~$q$. Hence by Theorem~\ref{prop-red-finite}, the groups $\pres{\aut{M}^{\sharp}_{p,q}}$ are all finite. In fact and independently, the group \(\pres{\aut{M}^{\sharp}_{p,q}}\) can be identified with \(\perm_q^p\). 
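The alternating minimize/dualize procedure carried out above is straightforward to implement. The following is an illustrative Python sketch of our own (not the code of any existing package), under the assumed encoding of a Mealy automaton by two nested dictionaries, `delta[i][x]` for the transition functions and `rho[x][i]` for the production functions:

```python
def minimize(delta, rho):
    """Nerode refinement: start from the partition induced by the
    production functions rho_x, refine by the transition functions
    until stable, then build the quotient automaton."""
    states, letters = list(rho), list(delta)
    cls = {x: tuple(rho[x][i] for i in letters) for x in states}
    while True:
        new = {x: (cls[x],) + tuple(cls[delta[i][x]] for i in letters)
               for x in states}
        if len(set(new.values())) == len(set(cls.values())):
            break                      # one more step splits nothing: stable
        cls = new
    names = {c: n for n, c in enumerate(sorted(set(cls.values()), key=repr))}
    q = {x: names[cls[x]] for x in states}
    qdelta = {i: {q[x]: q[delta[i][x]] for x in states} for i in letters}
    qrho = {q[x]: dict(rho[x]) for x in states}
    return qdelta, qrho

def dualize(delta, rho):
    """Exchange the roles of states and letters: the dual's transition
    table is the production table of the original, and vice versa."""
    return ({x: dict(rho[x]) for x in rho},
            {i: dict(delta[i]) for i in delta})

def md_reduce(delta, rho):
    """Alternately minimize the automaton and its dual until both members
    of the dual pair are minimal (the reduction is confluent)."""
    while True:
        mdelta, mrho = minimize(delta, rho)
        primal_shrunk = len(mrho) < len(rho)
        delta, rho = mdelta, mrho
        ddelta, drho = dualize(delta, rho)
        mddelta, mdrho = minimize(ddelta, drho)
        if not primal_shrunk and len(mdrho) == len(drho):
            return delta, rho          # reduced pair reached
        delta, rho = dualize(mddelta, mdrho)
```

Applied to the 4-letter 2-state automaton whose reduction chain is displayed above, `md_reduce` reaches a one-state, one-letter automaton, in accordance with that chain.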
For comparison, the packages~$\mathbf{FR}$ and~$\mathbf{automgrp}$ both fail to decide finiteness of~$\pres{\aut{M}^{\sharp}_{p,q}}$ (except for very small values of~$p,q$). \subsection{This sufficient condition is not necessary} \noindent The following Mealy automaton is $\mz\dz$-reduced, but it generates a finite semigroup of order~6: it thus provides a counterexample to the converse of Theorem~\ref{prop-red-finite}. \begin{center} \SmallPicture\VCDraw{% \begin{VCPicture}{(-1.5,-0.5)(6,2)} \State[a]{(0,0)}{A} \State[b]{(5,0)}{B} \EdgeL{B}{A}{\IOL{1}{0}} \LoopN[.2]{A}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{1}}} \LoopN[.8]{B}{\IOL{0}{1}} \end{VCPicture}} \end{center} \medskip\noindent There also exist counterexamples among bireversible Mealy automata. Consider the order~8 dihedral group, viewed as generated by a reflection~$\sigma$ and by the product~$\mu=\rho\sigma$ of a rotation~$\rho$ with this reflection: \[D_4= \langle~\sigma,\mu:\sigma^2=\mu^2=(\sigma\mu)^4=1~\rangle .\] It is generated by the bireversible Mealy automaton of Fig.~\ref{fig-dihedral-automaton}. This ad-hoc automaton is its own dual and is $\mz\dz$-reduced. \begin{figure}[h!]
\SmallPicture\VCDraw{% \begin{VCPicture}{(2,-2.5)(26,11)} \StateVar[1]{(8,.5)}{ID} \StateVar[\mu\sigma\mu]{(8,6)}{MSM} \StateVar[\sigma\mu\sigma\mu]{(5,0)}{SMSM} \StateVar[\sigma]{(5,8)}{S} \StateVar[\mu]{(11,8)}{M} \StateVar[\sigma\mu]{(11,0)}{SM} \StateVar[\mu\sigma]{(24,8)}{MS} \StateVar[\sigma\mu\sigma]{(24,0)}{SMS} \ChgEdgeLabelScale{.75} \ChgEdgeLabelSep{.2} \ForthBackOffset% \LoopVarN[.3]{ID}{\forall x, \ \IOL{x}{x}} \LoopVarN[.5]{MSM}{\StackEightLabels {\IOL{1}{1}}{\IOL{\sigma}{\sigma}}{\IOL{\mu}{\sigma\mu\sigma}}{\IOL{\sigma\mu}{\mu\sigma}} {\IOL{\mu\sigma}{\sigma\mu}}{\IOL{\sigma\mu\sigma}{\mu}}{\IOL{\mu\sigma\mu}{\mu\sigma\mu}}{\IOL{\sigma\mu\sigma\mu}{\sigma\mu\sigma\mu}}} \EdgeL{S}{SMSM}{\StackFourLabels{\IOL{\mu}{\mu\sigma}}{\IOL{\sigma\mu}{\sigma\mu\sigma}}{\IOL{\mu\sigma}{\mu}}{\IOL{\sigma\mu\sigma}{\sigma\mu}}} \EdgeL{SMSM}{S}{\StackFourLabels{\IOL{\mu}{\sigma\mu}}{\IOL{\sigma\mu}{\mu}}{\IOL{\mu\sigma}{\sigma\mu\sigma}}{\IOL{\sigma\mu\sigma}{\mu\sigma}}} \EdgeL[.6]{M}{SM}{\StackTwoLabels {\IOL{\sigma\mu\sigma}{\mu\sigma}} {\IOL{\sigma\mu\sigma\mu}{\sigma}}} \EdgeL[.6]{M}{MS}{\StackTwoLabels {\IOL{\sigma}{\sigma\mu\sigma\mu}} {\IOL{\mu\sigma}{\sigma\mu\sigma}}} \EdgeL[.3]{SM}{M}{\StackTwoLabels {\IOL{\sigma\mu\sigma}{\sigma\mu}} {\IOL{\sigma\mu\sigma\mu}{\sigma}}} \EdgeL[.6]{SM}{SMS}{\StackTwoLabels {\IOL{\sigma}{\sigma\mu\sigma\mu}} {\IOL{\mu\sigma}{\mu}}} \EdgeL[.6]{MS}{M}{\StackTwoLabels {\IOL{\sigma}{\sigma\mu\sigma\mu}} {\IOL{\sigma\mu}{\sigma\mu\sigma}}} \EdgeL[.3]{MS}{SMS}{\StackTwoLabels {\IOL{\mu}{\mu\sigma}} {\IOL{\sigma\mu\sigma\mu}{\sigma}}} \EdgeL[.6]{SMS}{SM}{\StackTwoLabels {\IOL{\sigma}{\sigma\mu\sigma\mu}} {\IOL{\sigma\mu}{\mu}}} \EdgeL[.6]{SMS}{MS}{\StackTwoLabels {\IOL{\mu}{\sigma\mu}} {\IOL{\sigma\mu\sigma\mu}{\sigma}}} \LoopVarN[.3]{M}{\StackTwoLabels {\IOL{1}{1}} {\IOL{\mu}{\mu}}} \LoopVarS[.3]{SM}{\StackTwoLabels {\IOL{1}{1}} {\IOL{\mu}{\sigma\mu\sigma}}} \LoopVarN[.7]{MS}{\StackTwoLabels {\IOL{1}{1}} 
{\IOL{\sigma\mu\sigma}{\mu}}} \LoopVarS[.7]{SMS}{\StackTwoLabels {\IOL{1}{1}} {\IOL{\sigma\mu\sigma}{\sigma\mu\sigma}}} \ChgEdgeLabelSep{-2} \EdgeL[.7]{M}{SMS}{\StackTwoLabels {\IOL{\mu\sigma\mu}{\mu\sigma\mu}} {\IOL{\sigma\mu}{\sigma\mu}}} \EdgeL[.75]{SMS}{M}{\StackTwoLabels {\IOL{\mu\sigma}{\mu\sigma}} {\IOL{\mu\sigma\mu}{\mu\sigma\mu}}} \EdgeL[.7]{SM}{MS}{\StackTwoLabels {\IOL{\mu\sigma\mu}{\mu\sigma\mu}} {\IOL{\sigma\mu}{\mu\sigma}}} \EdgeL[.7]{MS}{SM}{\StackTwoLabels {\IOL{\mu\sigma}{\sigma\mu}} {\IOL{\mu\sigma\mu}{\mu\sigma\mu}}} \ChgEdgeLabelSep{-3} \LoopVarN[.2]{S}{\StackFourLabels{\IOL{\sigma\mu\sigma\mu}{\sigma\mu\sigma\mu}}{\IOL{\mu\sigma\mu}{\mu\sigma\mu}}{\IOL{\sigma}{\sigma}}{\IOL{1}{1}}} \LoopVarS[.8]{SMSM}{\StackFourLabels{\IOL{1}{1}}{\IOL{\sigma}{\sigma}}{\IOL{\mu\sigma\mu}{\mu\sigma\mu}}{\IOL{\sigma\mu\sigma\mu}{\sigma\mu\sigma\mu}}} \end{VCPicture}} \caption{An $\mz\dz$-reduced non-trivial IR-automaton whose group is finite.}\label{fig-dihedral-automaton} \end{figure} \section{Helix graphs and finiteness} \label{s:helix} In this section, we concentrate on IR-automata and show the pertinence of helix graphs for the finiteness problem. \subsection{A necessary condition for finiteness} To prove the results in this section, it is convenient to use a graphical representation in which $A$ and $\Sigma$ play symmetrical roles. Consider $(x,i)\in A \times \Sigma$ with $\delta_i(x)=y$ and $\rho_x(i)=j$. The corresponding transition $x\stackrel{i|j}{\longrightarrow} y$ is represented by the \emph{cross-transition}: \[ \croix{x}{y}{i}{j}\:. \] The automaton $\aut{A}$ is identified with the set of its cross-transitions (of cardinality $|A| \times |\Sigma|$). A path in \(\aut{A}\) (\hbox{\textit{resp.}} in \(\dz(\aut{A})\)) is represented by an horizontal (\hbox{\textit{resp.}} vertical) \emph{cross-diagram} obtained by concatenating the crosses. 
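Concretely, concatenating crosses amounts to propagating states eastwards and letters southwards through a grid of cross-transitions. A short illustrative sketch (assuming, as an encoding of our own choosing, dictionaries `delta[i][x]` and `rho[x][i]` for the transition and production functions):

```python
def cross_diagram(delta, rho, xs, us):
    """Fill an m-by-n grid of cross-transitions: the states of the word
    xs enter the rows from the west, the letters of the word us enter
    the columns from the north.  Returns the letter word leaving the
    south border and the state word leaving the east border."""
    col = list(us)              # letters currently entering the row
    ys = []
    for x in xs:                # one row per state of the word xs
        below = []
        for i in col:
            below.append(rho[x][i])   # letter leaving the cross southwards
            x = delta[i][x]           # state leaving the cross eastwards
        col = below
        ys.append(x)
    return col, ys
```

Reading off the borders, `cross_diagram` thus computes simultaneously the images of a letter word under a state word and of a state word under a letter word.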
We may also consider rectangular cross-diagrams of dimension $m\times n$, on which one can read the production functions of $\aut{A}_{m,n}$ and $\dz(\aut{A}_{m,n})$. For instance the cross-diagram: \begin{minipage}{.4\linewidth} \[\begin{array}{ccccc} & i_1 & & i_{n} \\ x_1 & \lacroix & \dots & \lacroix & y_1\\ & \vdots & & \vdots & \\ x_{m} & \lacroix & \dots & \lacroix & y_{m}\\ & j_1 & & j_{n} \end{array}\] \end{minipage}\qquad \begin{minipage}{.5\linewidth} corresponds in $\aut{A}_{m,n}$ to \[ \rho_{x_1\cdots x_m} (i_1\cdots i_n) = j_1\cdots j_n,\] \[\delta_{i_1\cdots i_n}(x_1\cdots x_m) = y_1\cdots y_m \:. \] \end{minipage} \noindent Replacing every cross by a square, we get the ``square-diagrams'' of \cite{square}. \begin{proposition}\label{prop:helices} Let \(\aut{A}\) be an IR-automaton. If the helix graph of order $(1,1)$ of \(\aut{A}\) is a union of cycles, so are all the helix graphs (of any order) of \(\aut{A}\). \end{proposition} \begin{proof} Observe that a helix graph is a union of cycles if and only if any node has a predecessor. By assumption, $\cal H$ is a union of cycles, therefore, any \((x, u)\in\alphA\times\alphS\) has a predecessor. Now consider \((\mot{x}, \mot{u})\in\alphA^m\times \alphS^n\) with \(\mot{x}=x_1\cdots x_m\) and \(\mot{u}=u_1\cdots u_n\). Let $(\tilde{x}_m,\tilde{u}_n)$ be the predecessor of~$(x_m,u_n)$ in~$\cal H$. Start with the cross of $(\tilde{x}_m,\tilde{u}_n)$ and $(x_m,u_n)$ (left of~(\ref{eq-cross})), and expand it step-by-step using the existence of predecessors in~$\cal H$ (right of~(\ref{eq-cross}) for the first few steps). \begin{equation}\label{eq-cross} \croix{\tilde{x}_{m}}{x_{m}}{\tilde{u}_{n}}{u_{n}}, \qquad \qquad \qquad \begin{array}{ccccc} & * & & * \\ * & \lacroix & * & \lacroix & x_{m-1}\\ & * & & \tilde{u}_{n} & \\ * & \lacroix & \tilde{x}_{m} & \lacroix & x_{m}\\ & u_{n-1} & & u_{n} \end{array} \end{equation} In the end we get a cross-diagram of dimension $m\times n$. 
The words on the west and north of the cross-diagram form a predecessor for~\((\mot{x}, \mot{u})\). \end{proof} \begin{theorem}\label{thm:fini_cycles} Let \(\aut{A}\) be an IR-automaton. If \(\grEng{{\cal A}}\) is finite, then the helix graphs of \(\aut{A}\) are unions of cycles. \end{theorem} \begin{proof} By Proposition~\ref{prop:helices}, it is sufficient to prove the result for order \((1,1)\). Consider \(x\in \alphA\) and \(i\in\alphS\). According to Proposition~\ref{pr:duale-finitude}, \(\grEng{\dual{{\cal A}}}\) is finite. Therefore, there exist $m,n>0$ such that \(\rho_x^m=\rho_{x^m}=\id[\grEng{{\cal A}}]\) and \(\delta_i^n=\delta_{i^n}=\id[\grEng{\dual{{\cal A}}}]\). This implies that $x^m\stackrel{i^n|i^n}{\longrightarrow} x^m$ is a transition in the Mealy automaton of order $(m,n)$. The corresponding cross-diagram is represented below: \[\begin{array}{ccccc} & i & & i \\ x & \lacroix & \dots & \lacroix & x\\ & \vdots & & \vdots & \\ x & \lacroix & \dots & \lacroix & x\\ & i & & i \end{array}\:.\] The south-east cross of the diagram provides a predecessor for~$(x,i)$. \end{proof} \noindent There exist IR-automata generating infinite groups whose helix graphs are unions of cycles. The smallest examples are Ale\v{s}in automata (see Table~\ref{tbl-examples}). The next result follows directly from Lemma~\ref{le-bi} and Theorem~\ref{thm:fini_cycles}. \begin{corollary}\label{cor:jir} Consider an IR-automaton which is not bireversible. Then the group generated by the automaton is infinite. \end{corollary} \subsection{A necessary and sufficient condition for finiteness} The condition in the next theorem is not effective; hence it does not directly lead to a decision procedure for finiteness. Recall the construction and notation defined at the end of Section~\ref{sse-automgroup}: for an IR-automaton $\aut{A}$ with stateset $A$ and alphabet $\Sigma$, we denote by $\widetilde{A}$ the extension with stateset $A\sqcup A^{-1}$ and alphabet $\Sigma\sqcup \Sigma^{-1}$.
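The necessary condition above is easy to test at order $(1,1)$: over a finite node set, every node of the helix graph has a predecessor exactly when the successor map $(x,i)\mapsto(\delta_i(x),\rho_x(i))$ is injective. A small sketch of our own (assuming automata encoded as dictionaries `delta[i][x]` and `rho[x][i]`, a convention chosen only for illustration):

```python
def helix_successor(delta, rho):
    """Order-(1,1) helix graph: node (x, i) points to (delta_i(x), rho_x(i))."""
    return {(x, i): (delta[i][x], rho[x][i]) for x in rho for i in delta}

def helix_is_union_of_cycles(delta, rho):
    """On a finite node set, every node has a predecessor (i.e. the graph
    is a union of cycles) iff the successor map is injective."""
    succ = helix_successor(delta, rho)
    return len(set(succ.values())) == len(succ)
```

For an invertible-reversible automaton, a negative answer certifies, by the contrapositive of the theorem above, that the generated group is infinite.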
\begin{theorem}\label{th:cycles_bornes} Consider an IR-automaton $\aut{A}$. The group $\grEng{\aut{A}}$ is finite if and only if there exists $K$ such that, for all $k,l$, the helix graphs \({\mathcal H}(k,l)\) of \(\widetilde{\aut{A}}\) are unions of cycles of lengths bounded by \(K\). \end{theorem} \begin{proof} Assume first that $\grEng{{\cal A}}$ is finite: so is \(\pres{\widetilde{\aut{A}}}\) by Corollary~\ref{cor-gen}. Theorem~\ref{thm:fini_cycles} shows that helix graphs of any order are unions of cycles. It remains to prove that the lengths of these cycles are uniformly bounded. By Proposition~\ref{pr:duale-finitude}, the group $\grEng{\dual{\widetilde{\aut{A}}}}$ is finite as well. Let \({\cal C}\) be a cycle in a helix graph of \(\widetilde{\aut{A}}\) and let \((\mot{u},\mot{v})\in(\alphA\sqcup\inverse{\alphA})^* \times(\alphS\sqcup\inverse{\alphS})^*\) be a node of this cycle. Each node of \({\cal C}\) is of the form \((h(\mot{u}), g(\mot{v}))\), where \(g\) (\hbox{\textit{resp.}} \(h\)) is an element of \(\pres{\widetilde{\aut{A}}}\) (\hbox{\textit{resp.}} \(\pres{\dz(\widetilde{\aut{A}})}\)). Since the nodes are pairwise distinct, the length of the cycle \({\cal C}\) is at most \(\#\pres{\widetilde{\aut{A}}} \times \#\pres{\dz(\widetilde{\aut{A}})}\). \medskip Let us prove the converse and assume that the group \(\grEng{{\cal A}}\) is infinite: so is \(\pres{\widetilde{\aut{A}}}\) by Corollary~\ref{cor-gen}. First we argue that the orders of the elements of $\pres{\widetilde{\aut{A}}}$ are unbounded. Indeed, automata groups are residually finite by construction since they act faithfully on rooted locally finite trees. Moreover it follows from Zelmanov's solution of the restricted Burnside problem \cite{Ze1,Ze2,Vl} that any residually finite group with bounded torsion is finite. Since $\pres{\widetilde{\aut{A}}}$ is infinite, the orders of its elements are unbounded. 
There exists either $\mot{x}\in (A\sqcup \inverse{A})^*$ such that the order of $\rho_{\mot{x}}$ is infinite, or a sequence \((\mot{x}_n)_{n\in\mathbb{N}}\subseteq(\alphA\sqcup\inverse{\alphA})^*\) such that the sequence $(k_n)_n$ of orders of the elements $\rho_{\mot{x}_n}$ converges to infinity. We carry out the proof in the second case, the first one can be treated similarly. Let us concentrate on $\rho_{\mot{x}_n}$, element of order $k_n$ of $\pres{\widetilde{\aut{A}}}$. For all $1\leq k<k_n$, there exists a word $\mot{u}_k \in (\Sigma\sqcup\inverse{\Sigma})^*$ such that $\rho_{\mot{x}_n}^k(\mot{u}_k)=\widetilde{\mot{u}}_k \neq \mot{u}_k$. Say that a word $\mot{v}\in (\Sigma\sqcup\inverse{\Sigma})^*$ is {\em unitary} if $\delta_{\mot{v}}$ is the identity of $\grEng{\dual{\widetilde{\aut{A}}}}$. Since $\grEng{\dual{\widetilde{\aut{A}}}}$ is a group, the word $\mot{u}_k$ can be extended into a unitary word $\mot{u}_k\mot{v}_k$. Set $\mot{w}_n= \mot{u}_1\mot{v}_1\cdots \mot{u}_{k_n-1}\mot{v}_{k_n-1}$. By construction, we have: $\rho_{\mot{x}_n}(\mot{w}_n)= \widetilde{\mot{u}}_1\cdots \neq \mot{w}_n$. Since $\mot{u}_1\mot{v}_1$ is unitary, we also have: \begin{eqnarray*} \rho_{\mot{x}_n}^2(\mot{w}_n) & = & \rho_{\mot{x}_n}^2 (\mot{u}_1\mot{v}_1) \rho_{\mot{x}_n}^2 (\mot{u}_2\mot{v}_2\cdots \mot{u}_{k_n-1}\mot{v}_{k_n-1}) \\ & = & \rho_{\mot{x}_n}^2 (\mot{u}_1\mot{v}_1) \widetilde{\mot{u}}_2 \cdots \ \neq \ \mot{w}_n\:. \end{eqnarray*} In the same way, we prove that for all $k<k_n$, we have $\rho_{\mot{x}_n}^k(\mot{w}_n) \neq \mot{w}_n$. In the helix graph of \(\widetilde{\aut{A}}\) of order $(|\mot{x}_n|,|\mot{w}_n|)$, consider the cycle containing the node \((\mot{x}_n, \mot{w}_n)\). Since \( \mot{w}_n \) is unitary, the successors of \((\mot{x}_n, \mot{w}_n)\) on the cycle are: \((\mot{x}_n, \rho_{\mot{x}_n}(\mot{w}_n))\), \((\mot{x}_n, \rho_{\mot{x}_n}^2(\mot{w}_n))\), \dots Therefore the cycle is of length~$k_n$. 
Since $k_n$ converges to infinity, the lengths of the cycles of the helix graphs of~\(\widetilde{\aut{A}}\) are not uniformly bounded. \end{proof} \section{Experiments}\label{sec-experimentations} Here, we show how combining the new criteria with previously known ones makes it possible to decide (semi)group finiteness for substantially more Mealy automata (at least for those with small alphabet and stateset --- of size up to~3). \begin{table}[ht] \centering \caption{Results of experiments on 2-letter 2-state Mealy automata.}\label{gag2x2} {\begin{tabular}{|c|lE>{\centering }m{6mm}|>{\centering }m{6mm}|>{\centering }m{6mm}|>{\centering } m{6mm}|>{\centering }m{6mm}|>{\centering }m{6mm}|>{\centering }m{6mm}E>{\centering }m{6mm}|} \cline{3-6} \multicolumn{2}{c|}{}&\multicolumn{4}{c|}{invertible}\tabularnewline \hline \multicolumn{2}{|cE}{$2$-letter $2$-state} &$\mathbf{\iz J\neg I\neg R}$ &$\mathbf{J\neg I}$ &$\mathbf{J\neg I\neg R}$ &$\mathbf{B\neg I\neg R}$ &$\mathbf{\dz\iz J\neg I\neg R}$ &$\mathbf{\dz J\neg I}$ &$\mathbf{N}$ &$\mathbf{W}$\tabularnewline \multicolumn{2}{|cE}{Mealy automata} &1 &14 &1 &8 &1 &14 &37 &76\tabularnewline \hline \multicolumn{4}{c|}{}&\multicolumn{4}{c|}{reversible}\tabularnewline \cline{5-8} \multicolumn{10}{c}{\vspace*{-8pt}}\tabularnewline \hline \multirow{7}{*}{\rotatebox{90}{\bf\scriptsize \!\!previous criteria}} &Finitary &-- &5 &-- &3 &-- &-- &1 &9\tabularnewline &Thompson-Wielandt\, &-- &-- &-- &5 &-- &-- &-- &5\tabularnewline &Level-transitive &1 &4 &1 &-- &-- &-- &-- &6\tabularnewline \cline{2-10} &Sidki &-- &1 &-- &-- &-- &-- &-- &1\tabularnewline \cline{2-10} &Limitary cycles &-- &4 &-- &6 &-- &8 &6 &6\tabularnewline \cline{2-10} &Cayley${}^\pm$ &1 &1 &1 &-- &-- &1 &2 &6\tabularnewline &Dual Cayley${}^\pm$ &1 &-- &1 &1 &-- &-- &3 &6\tabularnewline \cline{2-10} &\quad union &1 &11 &1 &8 &-- &8 &8 &37\tabularnewline \hline \hline \multirow{5}{*}{\rotatebox{90}{\bf\scriptsize
\!\!new criteria}} &$\mz\dz$-trivial &-- &10 &-- &8 &-- &10 &11 &39\tabularnewline &Cycles &1 &-- &1 &-- &-- &-- &-- &2\tabularnewline &+Sum &-- &-- &-- &3 &-- &4 &-- &7\tabularnewline &+Dual &-- &8 &1 &8 &1 &11 &8 &37\tabularnewline \cline{2-10} &\quad union &1 &10 &1 &8 &1 &14 &13 &48\tabularnewline \hline \multicolumn{10}{c}{\vspace*{-8pt}}\tabularnewline \cline{2-10} \multicolumn{1}{c|}{} &\quad total union &1 &14 &1 &8 &1 &14 &13 &~\,52\footnotemark[1]\tabularnewline \cline{2-10} \end{tabular}} \end{table} \vspace*{10pt} \footnotetext[1]{The table shows that 52 out of 76 (isomorphism classes of) 2-letter 2-state Mealy automata can be treated directly using either the old or the new criteria. But actually, the finiteness problem is solved for the 76 cases. Indeed, a series of papers dealing specifically with 2-letter 2-state Mealy automata (see~\cite{brs} and references therein) has contributed to the actual state of knowledge: 48 automata generate finite semigroups, 10 generate semigroups of linear growth, 17 generate semigroups of exponential growth and 1 generates the semigroup~$\mathbf{S_{I_2}}$ of intermediate growth (see Table~\ref{tbl-examples}).} \subsection{Partition} \noindent For convenience of exposition, we introduce the decomposition of the whole class of Mealy automata~$\mathbf{W}$ (up to isomorphism) into a disjoint union of seven subclasses. 
By denoting~$\mathbf{I}$ the class of invertible Mealy automata and~$\mathbf{I\neg R}$ the class of invertible-reversible Mealy automata, the seven classes are defined as follows: \begin{enumerate} \item[] $\mathbf{B\neg I\neg R}$ is the class of bireversible Mealy automata, \item[] $\mathbf{J\neg I\neg R}$ (standing for $\mathbf J$ust $\mathbf{I\neg R}$) is the complement in~$\mathbf{I\neg R}$ of~$\mathbf{B\neg I\neg R}$, \item[] $\mathbf{\iz J\neg I\neg R}$ consists of the inverses of automata from~$\mathbf{J\neg I\neg R}$, \item[] $\mathbf{\dz\iz J\neg I\neg R}$ consists of the duals of automata from~$\mathbf{\iz J\neg I\neg R}$, \item[] $\mathbf{J\neg I}$ (standing for $\mathbf J$ust $\mathbf{I}$) is the complement in~$\mathbf{I}$ of the union~$\mathbf{I\neg R}\cup\mathbf{\iz J\neg I\neg R}$, \item[] $\mathbf{\dz J\neg I}$ consists of the duals of automata from~$\mathbf{J\neg I}$, \item[] $\mathbf{N}$ is the complement (in~$\mathbf{W}$) of the (disjoint) union of the previous six. \end{enumerate} \subsection{Previous criteria} \subsubsection*{Previously implemented criteria} The $\mathbf{GAP}$ packages~$\mathbf{FR}$ and~$\mathbf{automgrp}$ (see~\cite{FR,GAP4,sav}) both overload the functions~\texttt{Order} and~\texttt{IsFinite} by using several criteria mainly coming from geometric group theory. More precisely, we have tested all the corresponding functions: \texttt{IsFinitaryFRMachine}, \texttt{IsLevelTransitive} and \texttt{ISFINITE\char`\_ THOMPSONWIELANDT\char`\_ FR} from $\mathbf{FR}$ and~\texttt{IsFractal} and \texttt{IsSphericallyTransitive} from $\mathbf{automgrp}$. While the first two work perfectly, the last three may not stop. From a practical point of view, \texttt{IsSphericallyTransitive} discriminates too few automata. Now, \texttt{IsLevelTransitive} happens to be much slower than \texttt{IsFractal}, so the latter can advantageously be used as a preliminary test for the former.
The first half of the~{\bf\small previous criteria} part of the following tables details the performance of these three criteria coming from geometric group theory. For~$2$-letter $3$-state and $3$-letter $2$-state (\hbox{\textit{resp.}} $3$-letter $3$-state) automata, the execution time of~\texttt{IsFractal} and~\texttt{IsLevelTransitive} was limited to 100\,000~ms (\hbox{\textit{resp.}} 200\,000~ms). The resulting data have to be considered with this arbitrary limitation in mind, together with the observation that both functions happen to be significantly sensitive to the chosen representative inside an isomorphism~class. \subsubsection*{Sidki's criterion} Sidki's fundamental work solving the order problem~\cite{sidkiconjugacy, sidki} for the class of so-called bounded automorphisms --- that is, those with growth degree at most~0 --- provides an infiniteness criterion: in any invertible automaton~$(A, \Sigma, \delta,\rho)$, a bounded state~$x\in A$ has infinite order whenever there exists a label~$i|j$ with~$j\not =i\in\Sigma$ on an edge between~$x$ and some state belonging to the same strongly connected component. This criterion appears as the second field of the~{\bf\small previous criteria} part of the tables. \subsubsection*{Antonenko's criterion} An interesting point of view is to investigate those automata~$(A, \Sigma, \delta)$ compelling all the Mealy automata~$(A, \Sigma, \delta,\rho)$ to generate a finite semigroup. A complete characterization of the latter in terms of \emph{limitary cycles}, given in~\cite{anto} (see also~\cite{russ}), provides a simple effective criterion for finiteness. An automaton is \emph{with limitary cycle} whenever every state~$x\in A$ accessible from some \emph{cyclic} one~$y \in A$ (that is, there exists a nontrivial word~$w\in\Sigma^*$ satisfying~$\delta_w(y)=y$) is \emph{without branch} (that is, $\delta_i(x) = \delta_j(x)$ holds for any~$(i,j)\in\Sigma^2$).
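The limitary-cycle condition involves only the transition part~$\delta$ of the automaton, so it is cheap to check. A naive sketch of our own (assuming $\delta$ is given as a dictionary `delta[i][x]`, a convention chosen only for illustration; the per-state reachability search is quadratic, which is adequate for small automata):

```python
def has_limitary_cycles(delta):
    """Check the limitary-cycle condition on the transition structure:
    every state reachable from a cyclic state must be branchless,
    i.e. all letters must act identically on it."""
    letters = list(delta)
    states = set()
    for d in delta.values():
        states |= set(d)
    # successors of a state under all letters
    succ = {x: {delta[i][x] for i in letters} for x in states}

    def reachable(x):
        """States reachable from x by a nonempty path."""
        seen, todo = set(), list(succ[x])
        while todo:
            y = todo.pop()
            if y not in seen:
                seen.add(y)
                todo.extend(succ[y])
        return seen

    branchless = {x for x in states if len(succ[x]) == 1}
    cyclic = {x for x in states if x in reachable(x)}
    return all(reachable(y) <= branchless for y in cyclic)
```

When this returns `True`, every choice of production functions $\rho$ placed on this transition structure yields a finite semigroup.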
First considered in~\cite{antoberk}, the branchless condition alone is covered by~Proposition~\ref{pr:duale-finitude} and \emph{a fortiori} by Theorem~\ref{prop-red-finite}. This criterion appears as the third field of the~{\bf\small previous criteria} part of the tables. \subsubsection*{Maltcev's criterion} Let $S$ be a finite semigroup. Define the Cayley machine~$C(S)$ (\hbox{\textit{resp.}} the dual\footnotemark[2] Cayley machine~$C^*(S)$) to be the Mealy automaton with stateset $S$, alphabet $S$, and the following transitions: $\forall x,y \in S$, \[ C(S)~: \quad x \stackrel{y | xy}{\longrightarrow} xy, \qquad C^*(S) ~: \quad x \stackrel{y | yx}{\longrightarrow} xy \:. \] \footnotetext[2]{It should be emphasized that the current term~\emph{dual} for a Cayley machine is not consistent with the widely used term~\emph{dual} for a Mealy automaton.} According to~\cite{mal} (see also~\cite{min,cain}), for every finite semigroup~$S$, the semigroup generated by~$C(S)$ (\hbox{\textit{resp.}} by~$C^*(S)$) is finite if and only if $S$ is $\mathcal{H}$-trivial (\hbox{\textit{resp.}} $S$ is $\mathcal{H}$-trivial and does not contain non-trivial right zero subsemigroups). This can be viewed as an effective finiteness criterion for those Mealy automata whose isomorphism class intersects the special class of Cayley machines (\hbox{\textit{resp.}} dual Cayley machines) and their possible inverses (which justifies the symbol~${\tiny\pm}$ in the tables). These two criteria coming from semigroup theory compose the last quarter of the~{\bf\small previous criteria} part of the tables. \subsection{New criteria} \noindent The first criterion of the~{\bf\small new criteria} part is the $\mz\dz$-triviality from Theorem~\ref{prop-red-finite}. Next, the criterion Cycles corresponds to Corollary~\ref{cor:jir}, which ensures that every automaton from~$\mathbf{\iz J\neg I\neg R}$ and~$\mathbf{J\neg I\neg R}$ generates an infinite group.
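The Cayley machine construction above is easy to mechanize. A small Python sketch (our own encoding; the semigroup is given by its multiplication map) builds the transition and output tables of~$C(S)$ and~$C^*(S)$:

```python
def cayley_machine(elems, op, dual=False):
    """Build the Cayley machine C(S) (or the dual Cayley machine C*(S))
    of a finite semigroup S = (elems, op).  Returns the transition map
    delta[(x, y)] and the output map rho[(x, y)], following
    x --y|xy--> xy  for C(S)   and   x --y|yx--> xy  for C*(S)."""
    delta, rho = {}, {}
    for x in elems:
        for y in elems:
            delta[(x, y)] = op(x, y)                  # next state: xy
            rho[(x, y)] = op(y, x) if dual else op(x, y)
    return delta, rho
```

For instance, on the two-element left-zero semigroup ($xy = x$), the two machines share their transitions and differ only in their outputs.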
The last two criteria are ``relative criteria'' --- which vindicates the symbol~+ --- allowing in good cases to reduce or transpose the finiteness question to smaller and/or simpler automata. The criterion~+Sum follows from the easy observation: provided that a Mealy automaton decomposes into a sum of (smaller) Mealy automata, it generates an infinite semigroup whenever one sum component does so. Finally, the criterion~+Dual follows from~Proposition~\ref{pr:duale-finitude}. \vbox{ \medbreak\noindent As a simple illustration, let us consider the Mealy automaton~$\aut{C}$ below on the left. None of the previously known criteria is suitable to detect the infiniteness of~$\grEng{\aut{C}}$. Now, the dual~$\dual{\aut{C}}$ happens to be a sum whose $2$-state component is (isomorphic to) the dual~$\dual{\aut{B}}$ of the baby~Ale\v{s}in automaton~$\aut{B}$ (see Table~\ref{tbl-examples}), which turns out to be level-transitive. \begin{center} \SmallPicture\VCDraw{% \begin{VCPicture}{(-2,-4.9)(22,1.2)} \State[a]{(0,0)}{A} \State[b]{(5,0)}{B} \State[c]{(2.5,-4.33)}{C} \ArcR[.7]{A}{C}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \EdgeR{B}{A}{\IOL{0}{0}} \LoopE[.2]{B}{\IOL{1}{1}} \LoopW[.2]{A}{\IOL{2}{2}} \ArcR[.3]{C}{B}{\StackTwoLabels{\IOL{0}{0}}{\IOL{2}{2}}} \ArcR[.3]{B}{C}{\IOL{2}{2}} \ArcR[.7]{C}{A}{\IOL{1}{1}} \HideState \State[]{(6,-2)}{X} \State[]{(8,-2)}{Y} \EdgeL[.5]{X}{Y}{\dz} \EdgeL{Y}{X}{} \ShowState \State[0]{(9.5,-3)}{A0} \State[1]{(12.5,-3)}{A1} \State[2]{(11,-1)}{A2} \LoopN[.15]{A2}{\StackThreeLabels{\IOL{a}{a}}{\IOL{b}{c}}{\IOL{c}{b}}} \LoopW[.8]{A0}{\StackTwoLabels{\IOL{b}{a}}{\IOL{c}{b}}} \LoopE[.8]{A1}{\StackTwoLabels{\IOL{b}{b}}{\IOL{c}{a}}} \ArcR{A0}{A1}{\IOL{a}{c}} \ArcR{A1}{A0}{\IOL{a}{c}} \HideState \State[]{(14.5,-3)}{X} \State[]{(16.5,-3)}{Y} \EdgeL[.5]{X}{Y}{\dz} \EdgeL{Y}{X}{} \ShowState \State[a]{(17.5,-2)}{AA} \State[b]{(20.5,-2)}{BB} \State[c]{(19,-4.33)}{CC} \ArcR{AA}{CC}{\StackTwoLabels{\IOL{0}{1}}{\IOL{1}{0}}} \EdgeR{BB}{AA}{\IOL{0}{0}} 
\LoopE[.8]{BB}{\IOL{1}{1}} \EdgeR{CC}{BB}{\IOL{0}{0}} \ArcR{CC}{AA}{\IOL{1}{1}} \put(-0.5,-2){\scalebox{2}{\makebox(0,0){$\aut{C}$}}} \put(20.5,-4.5){\scalebox{2}{\makebox(0,0){$\aut{B}$}}} \end{VCPicture}} \end{center} \noindent In this way, the isomorphism class of~$\aut{B}$ contributes for~one in the Level-transitive row only, those of~$\dual{\aut{B}}$ and~$\dual{\aut{C}}$ both contribute for~one in the respective +Dual rows only and finally that of~$\aut{C}$ contributes for~one in the +Sum row only. } \begin{table}[ht]\label{gag2x3} \centering \caption{Results of experimentations on 2-letter 3-state Mealy automata.} {\begin{tabular}{|c|lE>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering } m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}E>{\centering }m{6mm}%{9mm}|} \cline{3-6} \multicolumn{2}{c|}{}&\multicolumn{4}{c|}{invertible}\tabularnewline \hline \multicolumn{2}{|cE}{$2$-letter $3$-state} &$\mathbf{\iz J\neg I\neg R}$ &$\mathbf{J\neg I}$ &$\mathbf{J\neg I\neg R}$ &$\mathbf{B\neg I\neg R}$ &$\mathbf{\dz\iz J\neg I\neg R}$ &$\mathbf{\dz J\neg I}$ &$\mathbf{N}$ &$\mathbf{W}$\tabularnewline \multicolumn{2}{|cE}{Mealy automata} &14 &488 &14 &28 &14 &175 &3270 &4003\tabularnewline \hline \multicolumn{4}{c|}{}&\multicolumn{4}{c|}{reversible}\tabularnewline \cline{5-8} \multicolumn{10}{c}{\vspace*{-8pt}}\tabularnewline \hline \multirow{4}{*}{\rotatebox{90}{\bf\scriptsize \!\!\!\!\!\!prev. 
crit.}} &Finitary &-- &91 &-- &8 &-- &-- &50 &149\tabularnewline &Thompson-Wielandt\, &-- &-- &-- &18 &-- &-- &-- &18\tabularnewline &Level-transitive &14 &263 &14 &2 &-- &-- &-- &293\tabularnewline \cline{2-10} &Sidki &-- &35 &-- &-- &-- &-- &-- &35\tabularnewline \cline{2-10} &Limitary cycles &-- &50 &-- &14 &-- &37 &218 &319\tabularnewline \cline{2-10} &\quad union &14 &385 &14 &28 &-- &37 &242 &720\tabularnewline \cline{2-10} \hline \hline \multirow{5}{*}{\rotatebox{90}{\bf\scriptsize \!\!new criteria}} &$\mz\dz$-trivial &-- &194 &-- &26 &-- &55 &386 &661\tabularnewline &Cycles &14 &-- &14 &-- &-- &-- &-- &28\tabularnewline &+Sum &2 &28 &2 &14 &2 &59 &99 &206\tabularnewline &+Dual &-- &132 &14 &21 &14 & 104 &118 &403\tabularnewline \cline{2-10} &\quad union &14 &202 &14 &27 &14 &159 &427 &857\tabularnewline \hline \multicolumn{10}{c}{\vspace*{-8pt}}\tabularnewline \cline{2-10} \multicolumn{1}{c|}{} &\quad total union &14 &466 &14 &28 &14 &159 &519 &1214\tabularnewline \cline{2-10} \end{tabular}} \end{table} \vspace*{15pt} \begin{table}[ht]\label{gag3x2} \centering \caption{Results of experimentations on 3-letter 2-state Mealy automata.} {\begin{tabular}{|c|lE>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering } m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}|>{\centering }m{6mm}%{9mm}E>{\centering }m{6mm}%{9mm}|} \cline{3-6} \multicolumn{2}{c|}{}&\multicolumn{4}{c|}{invertible}\tabularnewline \hline \multicolumn{2}{|cE}{$3$-letter $2$-state} &$\mathbf{\iz J\neg I\neg R}$ &$\mathbf{J\neg I}$ &$\mathbf{J\neg I\neg R}$ &$\mathbf{B\neg I\neg R}$ &$\mathbf{\dz\iz J\neg I\neg R}$ &$\mathbf{\dz J\neg I}$ &$\mathbf{N}$ &$\mathbf{W}$\tabularnewline \multicolumn{2}{|cE}{Mealy automata} &14 &175 &14 &28 &14 &488 &3270 &4003\tabularnewline \hline \multicolumn{4}{c|}{}&\multicolumn{4}{c|}{reversible}\tabularnewline \cline{5-8} \multicolumn{10}{c}{\vspace*{-8pt}}\tabularnewline \hline 
\multirow{4}{*}{\rotatebox{90}{\bf\scriptsize \!\!\!\!\!\!prev. crit.}} &Finitary &-- &11 &-- &4 &-- &-- &4 &19\tabularnewline &Thompson-Wielandt\, &-- &-- &-- &13 &-- &-- &-- &13\tabularnewline &Level-transitive &11 &84 &12 &-- &-- &-- &-- &107\tabularnewline \cline{2-10} &Sidki &-- &2 &-- &-- &-- &-- &-- &2\tabularnewline \cline{2-10} &Limitary cycles &-- &11 &-- &16 &-- &132 &118 &277\tabularnewline \cline{2-10} &\quad partial union &11 &104 &12 &21 &-- &132 &118 &398\tabularnewline \hline \hline \multirow{5}{*}{\rotatebox{90}{\bf\scriptsize \!\!new criteria}} &$\mz\dz$-trivial &-- &55 &-- &26 &-- &194 &386 &661\tabularnewline &Cycles &14 &-- &14 &-- &-- &-- &-- &28\tabularnewline &+Sum &-- &-- &-- &8 &-- &66 &-- &74\tabularnewline &+Dual &2 &69 &14 &28 &14 &395 &313 &835\tabularnewline \cline{2-10} &\quad partial union &14 &75 &14 &28 &14 &466 &519 &1130\tabularnewline \hline \multicolumn{10}{c}{\vspace*{-8pt}}\tabularnewline \cline{2-10} \multicolumn{1}{c|}{} &\quad total union &14 &159 &14 &28 &14 &466 &519 &1214\tabularnewline \cline{2-10} \end{tabular}} \end{table} \vspace*{15pt} \begin{table}[ht]\label{gag3x3} \centering \caption{Results of experimentations on 3-letter 3-state invertible or reversible automata.} {\begin{tabular}{|c|lE>{\centering }m{8mm}|>{\centering }m{8mm}|>{\centering }m{8mm}|>{\centering } m{8mm}|>{\centering }m{8mm}|>{\centering }m{8mm}E>{\centering }m{10mm}|} \cline{3-6} \multicolumn{2}{c|}{}&\multicolumn{4}{c|}{invertible}\tabularnewline \hline \multicolumn{2}{|cE}{$3$-letter $3$-state} &$\mathbf{\iz J\neg I\neg R}$ &$\mathbf{J\neg I}$ &$\mathbf{J\neg I\neg R}$ &$\mathbf{B\neg I\neg R}$ &$\mathbf{\dz\iz J\neg I\neg R}$ &$\mathbf{\dz J\neg I}$ &$\mathbf{W}\setminus\mathbf{N}$\tabularnewline \multicolumn{2}{|cE}{Mealy automata} &1073 &116502 &1073 &335 &1073 &116502 &236558\tabularnewline \hline \multicolumn{4}{c|}{}&\multicolumn{4}{c|}{reversible}\tabularnewline \cline{5-8} \multicolumn{9}{c}{\vspace*{-8pt}}\tabularnewline \hline 
\multirow{7}{*}{\rotatebox{90}{\bf\scriptsize \!\!previous criteria}} &Finitary &-- &898 &-- &17 &-- &-- &915\tabularnewline &Thompson-Wielandt\, &-- &-- &-- &164 &-- &-- &164\tabularnewline &Level-transitive &996 &71748 &612 &12 &-- &-- &73368\tabularnewline \cline{2-9} &Sidki &-- &614 &-- &-- &-- &-- &614\tabularnewline \cline{2-9} &Limitary cycles &-- &627 &-- &68 &-- &3415 &4110\tabularnewline \cline{2-9} &Cayley${}^\pm$ &1 &1 &1 &-- &-- &1 &4\tabularnewline &Dual Cayley${}^\pm$ &1 &-- &1 &1 &-- &-- &3\tabularnewline \cline{2-9} &\quad union &996 &73494 &612 &204 &-- &3415 &78721\tabularnewline \hline \hline \multirow{5}{*}{\rotatebox{90}{\bf\scriptsize \!\!new criteria}} &$\mz\dz$-trivial &-- &5928 &-- &187 &-- &5928 &12043\tabularnewline &Cycles &1073 &-- &1073 &-- &-- &-- &2146\tabularnewline &+Sum &76 &736 &76 &109 &76 &9985 &11058\tabularnewline &+Dual &76 &11077 &1073 &228 &1073 &73725 &87252\tabularnewline \cline{2-9} &\quad union &1073 &12811 &1073 &293 &1073 &84601 &100924\tabularnewline \hline \multicolumn{9}{c}{\vspace*{-8pt}}\tabularnewline \cline{2-9} \multicolumn{1}{c|}{} &\quad total union &1073 &84601 &1073 &316 &1073 &84601 &172737\tabularnewline \cline{2-9} \end{tabular}} \end{table} \vspace*{10pt} \section{Conclusion} In this paper, we have emphasized the usefulness of the duality of Mealy automata for the finiteness problem. Our new approaches enable us to treat a much larger number of Mealy automata than before; see Section~\ref{sec-experimentations}. We also completely settle the case of non-bireversible IR-automata: they generate infinite groups. On the downside, the decidability of the finiteness problem remains open. However, we believe that the characterization in Theorem~\ref{th:cycles_bornes} could lead to a decision procedure for bireversible automata. Indeed, experiments show that the cycle lengths stay almost constant for known finite groups and increase extremely fast for known infinite groups. \newpage \bibliographystyle{plain}
\subsubsection{Recursive Cayley trajectory} \label{sec:whatpath} Recursive Cayley sampling walks in every direction of Cartesian space at every step. If it hits a boundary, then it does not proceed forward at that point. Since, in our assembly settings, feasible regions in Cartesian space are connected, RecursiveSampling will find a path to cover the region.\\ This way, in the case of a nested infeasible region inside a feasible region, such as a steric boundary, just the boundary of the infeasible region is sampled (the inside of the steric region is not inefficiently sampled and discarded).\\ In order to keep track of whether specific points in the Cartesian grid have been visited, a boolean map $M$ of appropriate size is used, indexed by Cartesian grid coordinates. See Algorithm~\ref{alg:RecursiveSampling}. \\ \begin{algorithm*}[h!tbp] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} {\bf RecursiveSampling}\\ \Input{$S$, $C$, $M$, cartpoint, config, threshold \tcp*[f]{$M$: the matrix that keeps track of whether a point has been visited; cartpoint: the vector for the current Cartesian point in $M$} } \Output{$R$ \tcp*[f]{set of configs} } \BlankLine $R$ := $R$ + config; \\ $new\_S$ := computeCayleyDirectionIteratively($S$, $C$, config); \tcp*[f]{uses $S$ from the previous point to converge faster}
\\ \For{each Cartesian dimension $i$ and reverse direction of $i$} { new\_cartpoint := cartpoint; \\ new\_cartpoint($i$) := new\_cartpoint($i$) $\pm$ 1; \tcp*[f]{$-1$ for the reverse direction} \\ \If { $M$(new\_cartpoint) has not been visited before } { new\_config := adaptiveMagnitudeAndDirection($new\_S$, $C$, config, $i$, threshold); \\ \If{ new\_config is failed \tcp*[f]{due to a Cartesian boundary, or the Jacobian was not a good enough approximation to walk one step within the threshold} } { new\_config := jumpToDisconnectedRegion($S(i)$, $M$, cartpoint, config ); \tcp*[f]{uses $S(i)$, i.e., the previous Cayley step} \\ update new\_cartpoint of new\_config; } \If{ new\_config is successfully computed} { mark $M$(new\_cartpoint) as visited \\ \If { new\_config does not hit any boundary (e.g., sterics) } { RecursiveSampling($new\_S$, $C$, $M$, new\_cartpoint, new\_config, threshold); \\ } } } } \caption{RecursiveSampling} \label{alg:RecursiveSampling} \end{algorithm*} \textbf{Note:} Consecutive small deviations that are within a tolerance at each step may result in a change of the direction of the path. This is corrected as follows.\\ \begin{comment} \begin{figure}[H] \centering \subfigure[distorted path]{ \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/curve.eps, width=.44\textwidth}} \caption{\small Consecutive distortions cause the path to deviate from the original path. } \label{fig:curve} \end{figure} \end{comment} Usually, the expected step size is set to $C(i,i)$ for the $i$th Cartesian direction in Algorithm~\ref{alg:adaptiveMagnitudeAndDirection}. However, if the previous point deviated by an amount $\mu$ from the original path along an arbitrary Cartesian direction, then the next step size should be set to $C(i,i) - \mu$. \\ \medskip\noindent \underbar{Narrow Cartesian Gates} As pointed out earlier, connected Cartesian regions permit comprehensive sampling, in principle.
However, since the sampling is discrete and the Jacobian can be ill-conditioned, the issue of narrow gates at unknown locations in Cartesian regions needs to be dealt with. Here we leverage the fact that Cayley space is convex. The idea is to use the previous Cayley step that stayed in the feasible Cartesian region as a new step. We can guarantee that this will not reverse direction or repeat a sample in Cartesian space. In short, for every point close to the boundary in Cartesian space, we check whether it is possible to walk in Cayley space. \begin{algorithm*}[h!tbp] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} {\bf jumpToDisconnectedRegion}\\ \Input{$s_i$, $M$, cartpoint, config \tcp*[f]{$M$: the matrix that keeps track of whether a Cartesian point has been visited; cartpoint: the vector for the current Cartesian point in $M$} } \Output{new\_config } \BlankLine compute new\_config by walking from $config$ by step := $s_i$ \tcp*[f]{this is a jump in Cartesian space}\\ \If{ new\_config stays within the Cayley boundary} { compute new\_cartpoint of new\_config; \\ \If { $M$( new\_cartpoint ) has not been visited before } { \If { new\_cartpoint moves at least $1$ Cartesian step from cartpoint} { return new\_config; } } } return failed; \\ \caption{jumpToDisconnectedRegion} \label{alg:jumpToDisconnectedRegion} \end{algorithm*} \section*{CONCLUSIONS} \begin{comment} \section{Discussion} A key goal is to find hybrid methods that combine the complementary strengths of EASAL with prevailing methods. A useful development would be a gradual tuning parameter, or flexible choice to allow a smooth transition from uniform sampling on Cartesian space to uniform sampling on Cayley space. Such a tuning parameter would improve EASAL's flexibility to go from the basic-EASAL to mimicking multigrid and MC, while still maintaining the advantages of EASAL. This would additionally make it easier to develop hybrids between EASAL and prevailing methods leveraging the complementary advantages.
Extensive comparison of EASAL's and MC's performances has been reported in \cite{Ozkan2014MC}. Algorithm~\ref{alg:jumpToDisconnectedRegion} can be used with some modifications as an independent component to improve ergodicity of regular MC sampling in order to help jump to a region separated by a narrow channel, or to pass a high energy barrier. Some aspects of the recursive and adaptive Jacobian computation and sampling method presented here require a \emph{seed} matrix or direction or value starting from which they iterate. These include Algorithms \ref{alg:computeCayleyDirectionIteratively} and \ref{alg:adaptiveMagnitudeAndDirection}. In most cases, a good choice of seed is crucial for rapid convergence. \end{comment} \section{Results} \label{experiments} Recall that our goal is to combine the advantages of Cayley sampling with that of uniform sampling in Cartesian space. The former permits topological roadmapping, as well as guaranteed isolation and coverage of effectively low dimensional, low potential energy regions far more efficiently and with far fewer samples than Monte Carlo or plain Cartesian grid sampling, with the additional efficiency of not leaving the feasible regions and not discarding samples \cite{Ozkan2011, Ozkan2014MainEasal}. \emph{Since the methods of this paper have preserved the above advantages, the emphasis of our comparison here is only the uniformity of sampling in Cartesian space}. For this purpose only, we compare the original EASAL \cite{Ozkan2011, Ozkan2014MainEasal}, the modified EASAL-Jacobian (this paper) and uniform Cartesian grid sampling of assembly configuration spaces of $2$ rigid molecules with about $20$ atoms. We used the last 20 residues of HiaPP (human islet amyloid polypeptide, PDB 2KJ7), which contain the 6 residues where it differs from RiaPP (rat islet amyloid polypeptide, PDB 2KB8). See fig.~\ref{fig:pdb}.
We created the 5D stratum (regions with a single active constraint) of the atlas for both versions of EASAL for $2$ assembling HiaPP molecules and, separately, for $2$ assembling RiaPP molecules. For comparison purposes, in both cases, a reference Grid is generated, which is designed to cover the part of the configurational space of interest, i.e., the part observed in nature. \begin{figure} \centering \begin{subfigure}{.2\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/human0d.eps, width = \linewidth } \caption{HiaPP} \end{subfigure} \hskip0.01\linewidth \begin{subfigure}{.18\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/rat0d_rotated.eps, width = \linewidth } \caption{RiaPP} \end{subfigure} \caption{EASAL screenshots displaying the molecules } \label{fig:pdb} \end{figure} \subsection{Multigrid} Both versions of EASAL are designed to isolate and sample each active constraint region. In addition, EASAL-Jacobian samples each such region uniformly in Cartesian space. Yet, when we combine all such regions, those regions where more pairs of atoms are in their Lennard-Jones wells (regions with more active constraints) will have denser sampling; i.e., EASAL tends to oversample the lower-energy regions. This is a positive feature of EASAL that we preserve in EASAL-Jacobian. Since the 5D strata of the atlas generated by both versions of EASAL would sample a configuration that has $l$ active constraints $l$ times (once for each of the 5D active constraint regions in which the configuration lies), a meaningful comparison requires similarly replicating such configurations in the grid, which we call the \emph{multigrid}. \subsection{Grid Generation} \begin{itemize} \item The Grid is uniform along the Cartesian configuration space.
\item The bounds of the Cartesian configuration space for both Grid and EASAL are:\\ $X, Y :$ $-26$ to $26$ Angstroms\\ $Z :$ $-7$ to $7$ Angstroms\\ \item The angle parameters are described in the Euler angles representation (Cardan angle ZXZ).\\ $\phi, \psi : -\pi$ to $\pi$ \item The inter-principal-axis angle $\theta < 30.0$ degrees, where $\theta = \arccos(u \cdot v)$ and $u$ and $v$ are the principal axes of the two rigid bodies, i.e., eigenvectors of the inertia matrices. \item Additionally, there is the pairwise distance lower bound criterion:\\ For all atom pairs $i,j$ belonging to different rigid molecular components, $d_{ij} > 0.8 (r_i + r_j)$, where $d_{ij}$ is the distance between atoms $i$ and $j$, and $r_i$ and $r_j$ are their radii. \\ \item 147 Million grid configurations are generated in this manner. \item Over 93\% of them are discarded to ensure at least one pair with $d_{ij} < r_i + r_j + 0.9$, i.e., an active constraint, and to eliminate collisions. About $9.6$ Million grid configurations remain. \end{itemize} \subsection{Computational Time/Resources for EASAL} EASAL was executed on an Intel Core 2 Quad CPU Q9450 @ 2.66GHz x 4 with 3.9 GiB of memory.\\ EASAL-Jacobian took 2 days 9 hours 20 minutes (3440 minutes) for input HiaPP and 3 days 14 hours 44 minutes (5204 minutes) for input RiaPP.\\ EASAL took 5 hours 40 minutes (340 minutes) for input HiaPP and 6 hours 52 minutes (412 minutes) for input RiaPP.\\ \subsection{Epsilon Coverage} Ideally, we would expect each Grid point to be covered by at least one EASAL sample point that is situated in an $\epsilon$-cube centered around the Grid point, with a range of $2\epsilon$ in each of the 6 dimensions.
\begin{itemize} \item The value of $\epsilon$ is computed as follows: $\epsilon$ = (\# of Grid points / \# of EASAL points)$^{1/6} / 2$ \item We set $\epsilon$ to be $\lceil \epsilon \rceil$ since grid points are by definition a discrete number of steps from each other. \item In order to compute the coverage, we assign each EASAL sample to its closest Grid point. Call those Grid points \emph{EASAL-mapped} Grid points. We say that a Grid point $p$ is \emph{covered} if there is at least one EASAL-mapped Grid point within the $\epsilon$-cube centered around $p$. \item \underbar{$\epsilon$ for HiaPP:} The numbers of samples generated by Grid, EASAL and EASAL-Jacobian were 9,619,435/194,595/2,861,926 respectively. The corresponding $\epsilon$ for EASAL is $\lceil 49.4331^{1/6} / 2 \rceil = \lceil 0.957869 \rceil = 1$ and for EASAL-Jacobian is $\lceil 3.36118^{1/6} / 2 \rceil = \lceil 0.611954 \rceil = 1$. \item \underbar{$\epsilon$ for RiaPP:} The numbers of samples generated by Grid, EASAL and EASAL-Jacobian were 13,267,314/319,016/4,744,878 respectively. The corresponding $\epsilon$ for EASAL is $\lceil 41.5882^{1/6} / 2 \rceil = \lceil 0.930676 \rceil = 1$ and for EASAL-Jacobian is $\lceil 2.79613^{1/6} / 2 \rceil = \lceil 0.593467 \rceil = 1$. \end{itemize} \subsection{Coverage Results} The results show that \textbf{96.21\%} of Grid points are covered by EASAL-Jacobian for HiaPP and \textbf{96.14\%} of Grid points are covered for RiaPP. For basic EASAL, \textbf{85.03\%} of Grid points are covered for HiaPP and \textbf{85.46\%} for RiaPP. Hence EASAL-Jacobian is verified to have almost full coverage. \subsection{Density Distribution} Figure~\ref{fig:coverage_projection} shows the sampling distribution over the Cartesian $x,y$ space for Grid, MultiGrid, EASAL-Jacobian and EASAL. The reddish regions are considered to be the lower-energy regions. EASAL and EASAL-Jacobian are run for the majority of active constraint regions, i.e.,
they generated most of the 5D strata of the atlas. Hence a configuration with $l$ active constraints is sampled close to $l$ times. We would then expect the density distribution for EASAL and EASAL-Jacobian to lie in between Grid and MultiGrid. \begin{figure*} [h!tbp] \centering \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/human/grid/grid_coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{HiaPP: GRID} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/human/multigrid/multigrid_coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{HiaPP: MULTIGRID} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/human/jac/coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{HiaPP: EASAL-Jacobian sampling} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/human/basicEasal/coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{HiaPP: EASAL sampling} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/rat/grid/grid_coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{RiaPP: GRID} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/rat/multigrid/multigrid_coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{RiaPP: MULTIGRID} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/rat/jac/coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{RiaPP: EASAL-Jacobian sampling} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/experiments/rat/basicEasal/coverage_projections_on_XY_e2_eliminated_newscaled.eps, width=\linewidth} \caption{RiaPP: EASAL sampling} \end{subfigure} \caption{ Horizontal axis: Cartesian $x$ coordinate; vertical axis: Cartesian $y$ coordinate.\\ Color code: the ratio of ``the \# of points that lie in an $\epsilon$-cube centered around Grid point $(x,y)$'' over ``the total \# of points''. } \label{fig:coverage_projection} \end{figure*} \subsection{Recursive, Adaptive Cayley Sampling} \label{sec:whatdirection} We propose an iterative Jacobian computation method with adaptive step magnitude and direction, followed by a recursive Cayley trajectory determination method to deal with the issues discussed in the previous subsection. We will use $S, C, J$ to denote the $d\times d$ matrices of Cayley steps, Cartesian steps, and the Jacobian, respectively, as described above, where $d$ is the dimension of the active constraint region that is currently being sampled. Recall that the value of $d$ is at most 6 for the packing of $2$ rigid molecules. The first two subsections deal separately with the two issues mentioned above: ill-conditioning of the Jacobian and the Cayley sampling trajectory. \subsubsection{Ill-conditioning: Iterative Jacobian computation} The Jacobian matrix gives the best approximation if the Cayley steps used to create it are close to the output Cayley steps resulting from the Jacobian adjustment. In order to achieve the best approximation, we iterate on Cayley directions and magnitudes until convergence. See Algorithm~\ref{alg:computeCayleyDirectionIteratively}.
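The iteration of Algorithm~\ref{alg:computeCayleyDirectionIteratively} can be sketched numerically in Python. Here $f$ is a toy smooth Cayley-to-Cartesian map standing in for the realization of a configuration (our own stand-in, not the EASAL implementation), and column $i$ of the numerical Jacobian is the Cartesian change observed after walking the directional Cayley step $S(:,i)$:

```python
import numpy as np

def iterate_cayley_directions(f, p, C, tol=1e-9, max_iter=50):
    """Iterate S := S K with K = J^{-1} C until the Cayley transformer K
    is close to the identity; then stepping by a column of S produces a
    Cartesian step close to the corresponding column of C."""
    d = C.shape[0]
    S = C.copy()                       # seed: Cayley steps = Cartesian steps
    for _ in range(max_iter):
        # column i of J: Cartesian change after the directional step S[:, i]
        J = np.column_stack([f(p + S[:, i]) - f(p) for i in range(d)])
        K = np.linalg.solve(J, C)      # Cayley transformer: J K = C
        S = S @ K
        if np.linalg.norm(K - np.eye(d)) < tol:
            break
    return S

# Toy mildly nonlinear Cayley-to-Cartesian map:
f = lambda q: np.array([q[0] + 0.1 * q[1] ** 2, q[1] + 0.1 * q[0] * q[1]])
p = np.array([1.0, 1.0])
C = 0.1 * np.eye(2)                    # desired uniform Cartesian steps
S = iterate_cayley_directions(f, p, C)
```

After convergence, $f(p + S(:,i)) - f(p) \approx C(:,i)$ for each direction $i$, i.e., walking the adjusted Cayley steps yields (locally) uniform Cartesian steps.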
\\ \begin{algorithm}[h!tbp] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} {\bf computeCayleyDirectionIteratively}\\ \Input{$S, C$, config } \Output{$S$ \tcp*[f]{final Cayley steps} } \BlankLine $J$ := computeJacobian($S$, config) \\ $K := J_{inv}C$; \tcp*[f]{$K$ is the Cayley transformer (definition below)}\\ $S := SK$; \\ \eIf {$K$ is not close to the identity matrix} { return computeCayleyDirectionIteratively($S, C$, config); } {return $S$;} \caption{computeCayleyDirectionIteratively} \label{alg:computeCayleyDirectionIteratively} \end{algorithm} Note that when the numerical Jacobian is computed, the $i$th column of $J$ represents the Cartesian changes after walking one step on Cayley parameter $p_i$. The $i$th column of $J$ is divided by $\Delta p_i$, which is a scalar value. See Table~\ref{table:Jacobian}. However, now the $i$th column of $J$ represents the Cartesian changes after walking one directional Cayley step $\overrightarrow{s_i}$, which is the $i$th column of $S$. Hence $\Delta \overrightarrow{s_i}$ is a vector having components in all Cayley parameters. So we redefine the Jacobian matrix as: \begin{table}[h!tbp] \begin{center} \begin{tabular}{ l c c c c c r } \hline & $\overrightarrow{s_1}$ & $\overrightarrow{s_2}$ & $\overrightarrow{s_3}$ & $\overrightarrow{s_4}$ & $\overrightarrow{s_5}$ & $\overrightarrow{s_6}$ \\ \hline $x$ & $\Delta x_{s_1}$ & $\Delta x_{s_2}$ & $\Delta x_{s_3}$ & $\Delta x_{s_4}$ & $\Delta x_{s_5}$ & $\Delta x_{s_6}$ \\ $y$ & $\Delta y_{s_1}$ & $\Delta y_{s_2}$ & $\Delta y_{s_3}$ & $\Delta y_{s_4}$ & $\Delta y_{s_5}$ & $\Delta y_{s_6}$ \\ $z$ & $\Delta z_{s_1}$ & . & . & . & . & . \\ $\phi$ & $\Delta \phi_{s_1}$ & . & . & . & . & . \\ $\cos(\theta)$ & $\Delta \cos(\theta)_{s_1}$ & . & . & . & . & . \\ $\psi$ & $\Delta \psi_{s_1}$ & . & . & . & . & . \\ \hline \end{tabular} \end{center} \caption{Redefined Jacobian Matrix J} \label{table:newJacobian} \end{table} With the redefined Jacobian $J$, $J_{inv}C$ has a new interpretation.
\begin{definition} [The Cayley transformer matrix $K$] Let $K$ be the Cayley transformer matrix such that, when adjusted by the Jacobian, we obtain $C$; i.e., $JK = C$. \\ See Table~\ref{table:CayleyConverter}.\\ Each column of $K$ contains the coefficients of the current Cayley steps that lead to a new direction in Cayley space yielding orthogonal sampling in Cartesian space. See fig.~\ref{fig:CayleyTransformation}.\\ \begin{table}[h!tbp] \begin{center} \begin{tabular}{ l c c c c r } \hline $k_1\_s_1$ & $k_2\_s_1$ & . & . & . & . \\ \hline $k_1\_s_2$ & $k_2\_s_2$ & . & . & . & . \\ $k_1\_s_3$ & $k_2\_s_3$ & . & . & . & . \\ $k_1\_s_4$ & $k_2\_s_4$ & . & . & . & . \\ $k_1\_s_5$ & $k_2\_s_5$ & . & . & . & . \\ $k_1\_s_6$ & $k_2\_s_6$ & . & . & . & . \\ \hline \end{tabular} \end{center} \caption{Cayley Transformer $K$} \label{table:CayleyConverter} \end{table} \end{definition} In order to compute $K$, $J_{inv}$ needs to be computed, hence $J$ has to be a square matrix. At first glance, computing the inverse of the Jacobian matrix can be worrying since the Jacobian matrix is now a $6 \times d$ matrix. However, if Cayley space is $d$-dimensional ($d < 6$), then the Cartesian basis has only $d$ independent vectors. Hence, we can crop $6 - d$ rows of the Jacobian matrix to make it a $d\times d$ square matrix. The question is then how to best find those $6 - d$ dependent rows: among all ${6 \choose d}$ $d\times d$ submatrices of $J$, we pick the one that gives the best determinant. Figure~\ref{fig:CayleyTransformation} illustrates the transformation from the initial orthogonal Cayley basis to the new directed Cayley basis.
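The row-cropping step can be sketched as a brute-force search over all ${6 \choose d}$ row subsets, reading ``best determinant'' as the largest determinant in absolute value (a Python sketch with our own helper name, assuming NumPy):

```python
import numpy as np
from itertools import combinations

def best_square_submatrix(J):
    """Among all d x d row-submatrices of the n x d Jacobian J (n >= d),
    return the row indices and submatrix with the largest |determinant|,
    i.e. the best-conditioned square Jacobian to invert."""
    n, d = J.shape
    best_rows, best_det = None, -1.0
    for rows in combinations(range(n), d):
        det = abs(np.linalg.det(J[list(rows), :]))
        if det > best_det:
            best_rows, best_det = list(rows), det
    return best_rows, J[best_rows, :]
```

For the dimensions at hand ($n = 6$, $d \le 6$), the ${6 \choose d}$ enumeration is at most 20 determinants, so the brute-force search is negligible in cost.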
At each iteration, a new Cayley transformer matrix $K$ is computed.\\ \begin{figure}[h!tbp] \centering \begin{subfigure}[b]{.44\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/CayleyCoeff.eps, width=\linewidth} \caption{$2$D Cayley basis transformation} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/CayleyCoeff2.eps, width=\linewidth} \caption{$2$D Cayley basis transformation, second iteration} \end{subfigure} \caption{ The initial Cayley steps are orthogonal in the Cayley parameter space ($p_1$, $p_2$). $K$ is first applied to the initial Cayley steps (red lines) to achieve a new Cayley basis (blue lines); it is then applied to the current Cayley basis (blue lines) to achieve the next Cayley basis (green lines). } \label{fig:CayleyTransformation} \end{figure} The following method can be used to speed up convergence of the above method or for finer adjustments; its convergence, however, is not guaranteed. It works best for a small number of dimensions. \subsubsection{Ill-conditioning: Adaptive magnitude and direction} In order to correct the direction distortions, the idea is to precompute, for the $i$th direction Cartesian step, how much distortion is caused in the $j$th direction. We adjust the $j$th direction by using the $j$th Cayley step, which is dedicated to the $j$th Cartesian dimension, and subtract those distortion adjustments from the $i$th direction Cartesian step. See Algorithm~\ref{alg:adaptiveMagnitudeAndDirection}.
\\ \begin{algorithm*}[h!tbp] % \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} {\bf adaptiveMagnitudeAndDirection}\\ \Input{$S$, $C$, config, $i$, threshold \tcp*[f]{$i$:direction} } \Output{ out\_config } \BlankLine new\_config := adaptiveMagnitude($S(i), C(i,i)$, config, $i$, threshold); \\ \If{ new\_config failed due to inaccuracy of the Jacobian} { return failed} $\gamma$ := the distortions on all Cartesian directions between new\_config and the expected new\_config \\ $\gamma_c$ := $\gamma$ expressed in terms of Cartesian unit steps \\ \For{ each $j$th Cartesian direction } { set Cayley step $s_j$ to -$S(j)\gamma_c(j)$ to undo the deviation in the $j$th direction of Cartesian space. \\ temp\_config := adaptiveMagnitude($s_j$, $\gamma(j)$, new\_config, $j$, threshold);\\ \If{ able to fix the distortion } { update new\_config to temp\_config } } // final check whether distortion remains after the cumulative distortion fixes \\ compute the change in Cartesian direction $i$ between $config$ and new\_config \\ ratio := the change / expected Cartesian step $C(i,i)$ \\ \eIf{ ratio is within [1 $\pm$ threshold] } { return new\_config; } { return failed; } \caption{adaptiveMagnitudeAndDirection} \label{alg:adaptiveMagnitudeAndDirection} \end{algorithm*} Algorithm~\ref{alg:adaptiveMagnitude}, called above, uses an \textit{adaptive step size} to compensate for the inaccuracy of the Jacobian. It performs a binary search on the step size (a multiplier applied to the column) until it obtains the desired step size. The adaptive search stops when the stepping ratio is within [1 $\pm$ threshold].
\\ \begin{algorithm*}[h!tbp] \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} {\bf adaptiveMagnitude}\\ \Input{$s_i$, $c_i$, $config$, $i$, $threshold$, $min\_s_i$, $max\_s_i$ \tcp*[f] {$s_i$: $i$th Cayley step, $c_i$: $i$th Cartesian step, $i$:direction} } \Output{ out\_config } \BlankLine update $min\_s_i$ and $max\_s_i$ by comparing them with $s_i$\\ \If{ $min\_s_i$ $>$ $max\_s_i$ } { return failed;} compute new\_config by walking from $config$ by step := $s_i$ \\ \eIf{ new\_config is realizable } { compute the change in Cartesian dimension $i$ between $config$ and new\_config \\ ratio := the change / expected Cartesian step $c_i$ \\ \eIf { ratio is within [1 $\pm$ threshold] } { return new\_config; } { $s_i$ := $s_i$ / ratio;\\ return adaptiveMagnitude($s_i$, $c_i$, $config$, $i$, $threshold$, $min\_s_i$, $max\_s_i$); } } { $s_i$ := ($min\_s_i$ + $max\_s_i$)/2 ;\\ return adaptiveMagnitude($s_i$, $c_i$, $config$, $i$, $threshold$, $min\_s_i$, $max\_s_i$); } \caption{adaptiveMagnitude} \label{alg:adaptiveMagnitude} \end{algorithm*} As mentioned earlier, these patch-ups work well in practice for fine tuning or for a small number of dimensions. Convergence is not guaranteed in theory: in high dimensions, the adjustment of one dimension may increase the distortion in another. \\ Hence, the correctness of the input Cayley direction matrix $S$ is crucial, for which the Cayley trajectory becomes important, in order to achieve the best approximation of $S$ by the recursive Jacobian computation in Algorithm~\ref{alg:computeCayleyDirectionIteratively} above.\\ We discuss the issue of the Cayley trajectory next.
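The adaptive step-size idea of Algorithm~\ref{alg:adaptiveMagnitude} can be sketched as follows. This is an illustrative, iterative rendering of the recursive pseudocode; \texttt{walk} is a hypothetical stand-in for EASAL's realization routine that applies a Cayley step and reports the resulting Cartesian change (returning \texttt{None} when the stepped configuration is not realizable).

```python
# Toy sketch of adaptiveMagnitude: rescale a Cayley step s until the
# resulting Cartesian change matches the desired step c within a relative
# threshold, falling back to bisection of [lo, hi] when rescaling stalls.
# `walk` is a hypothetical stand-in for the realization routine.
def adaptive_magnitude(walk, s, c, threshold=0.05, lo=0.0, hi=1e6, depth=40):
    for _ in range(depth):
        if lo > hi:
            return None                  # search interval collapsed: failed
        change = walk(s)
        if change is None:               # step left the realizable region
            hi = min(hi, s)
            s = 0.5 * (lo + hi)
            continue
        ratio = change / c
        if ratio <= 0.0:                 # no progress in the desired direction
            lo = max(lo, s)              # (assumes a monotone walk; sketch only)
            s = 0.5 * (lo + hi)
            continue
        if abs(ratio - 1.0) <= threshold:
            return s                     # stepping ratio within [1 +/- threshold]
        if ratio > 1.0:                  # overshot: tighten the upper bound
            hi = min(hi, s)
        else:                            # undershot: tighten the lower bound
            lo = max(lo, s)
        s = s / ratio                    # rescale by the observed ratio
        if not (lo < s < hi):            # rescaling stalled: bisect instead
            s = 0.5 * (lo + hi)
    return None
```

For a quadratic toy map the routine converges in a handful of iterations; like the pseudocode, it is a practical patch-up with no theoretical convergence guarantee.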
\section{Introduction} \label{intro} Understanding and engineering a variety of supramolecular assembly, packing and docking processes, even for small assemblies, requires a comprehensive atlasing of the topological roadmap of the constant-potential-energy regions, as well as the ability to isolate and sample such regions and their boundaries even if they are narrow and geometrically complex. A recently reported geometric method called EASAL (efficient atlasing and sampling of assembly landscapes) \cite{Ozkan2014MainEasal} provides such comprehensive atlasing as well as customized and efficient sampling of its regions, crucially employing so-called Cayley or distance parameters. However, for developing hybrids that combine the complementary strengths of EASAL and prevailing methods that predict noncovalent binding affinities and kinetics, accurate computation of configurational entropy and other integrals is essential. This in turn requires a uniform distribution over Cartesian space or an appropriate Cartesian parametrization. Standard adjustments using the Jacobian of the Cartesian-to-Cayley map pose multiple challenges due to ill-conditioning. This paper analyzes these challenges and develops a modification of EASAL that combines the advantages of Cayley sampling with a uniform distribution in Cartesian space. \subsection{Recent Related Work} \label{related} A number of very recent results are directly related to or build upon the approach presented here. First, the basic EASAL approach was first described in \cite{Ozkan2011}. The approach is discussed in detail in \cite{Ozkan2014MainEasal}, which gives EASAL-based computations of entropy integrals for clusters of assembling spherical particles that both simplify and extend the methodology and computational results of \cite{Holmes-Cerfon2013}, which were reported after \cite{Ozkan2011} appeared.
A multi-perspective comparison of variants of EASAL, including the modification described here, with traditional Monte Carlo sampling of the assembly landscape of 2 transmembrane helices is given in \cite{Ozkan2014MC}, with a view towards leveraging complementary strengths for hybrid methods. An application of EASAL towards detecting assembly-crucial inter-atomic interactions for viral capsid self-assembly is given in \cite{Wu2012, Wu2014Virus} (applied to 3 viral systems: Minute virus of Mice (MVM), Adeno-associated virus (AAV), and Brome mosaic virus (BMV)). Finally, the architecture and functionalities of an open-source software implementation of the basic EASAL are described in \cite{Ozkan2014TOMS}. \section{Methodology} \label{sec:methodology} The first subsection gives background from \cite{Ozkan2011, Ozkan2014MainEasal} on the theoretical underpinnings of EASAL's key features - geometrization, stratification and convexification using Cayley parameters - culminating in the concept of an \emph{atlas} of an assembly configuration space. The second subsection analyzes the issues that arise with a preliminary, straightforward use of the Jacobian of the map from Cartesian to Cayley parameters. The third subsection presents a method of adaptive, optimized choice of step size and direction in Cayley sampling that compensates for an ill-conditioned Jacobian. \subsection{Background: Theory underlying EASAL} We begin with a description of the input to EASAL. An \emph{assembly system} consists of the following. \begin{itemize} \item A collection of \emph{rigid molecular components}, drawn from a small set of \emph{rigid component types} (often just a single type). Each type is specified as the set of positions of \emph{atom-centers}, in a local coordinate system. In many cases, an \emph{atom-center} could represent the average position of a \emph{collection of atoms in a residue}.
Note that an assembly \emph{configuration} is given by the positions and orientations of the entire set of $k$ rigid molecular components in an assembly system, relative to one fixed component. Since each rigid molecular component has 6 degrees of freedom, a configuration is a point in $6(k-1)$ dimensional Euclidean space. The maximum number of atom-centers in any rigid molecular component is denoted $n$. \item The potential energy is specified using \emph{Lennard-Jones} (which includes \emph{Hard-Sphere}) \emph{pairwise potential energy functions}. The pairwise Lennard-Jones term for a pair of atoms, $i$ and $j$, one from each component, is given as a function of the distance $d_{i,j}$ between $i$ and $j$; the function is typically discretized to take different constant values on 3 intervals of the distance value $d_{i,j}$: $(0,l_{i,j})$, $(l_{i,j}, u_{i,j})$, and $(u_{i,j}, \infty)$. Typically, $l_{i,j}$ is the so-called van der Waals or steric distance given by the ``forbidden'' regions around atoms $i$ and $j$, and $u_{i,j}$ is a distance beyond which the interaction between the two atoms is no longer relevant. Over these 3 intervals respectively, the Lennard-Jones potential assumes a very high value $h_{i,j}$, a small value $s_{i,j}$, and a medium value $m_{i,j}$. All of these \emph{bounds} for the intervals for $d_{i,j}$, as well as the values of the Lennard-Jones potential on these intervals, are \emph{specified constants} given as part of the input to the assembly model. These constants are specified for each pair of atoms $i$ and $j$, i.e., the subscripts are necessary. The middle interval is called the \emph{well}. In the special case of Hard Spheres, $l_{i,j} = u_{i,j}$. \item A non-pairwise component of the potential energy function in the form of \emph{global potential energy} terms that capture other factors including the implicit solvent (water or lipid bilayer membrane) effect \cite{Lazaridis_Karplus_1999, Lazaridis_2003, Im_Feig_Brooks_2003}.
These are specified as a function of the entire assembly configuration. \end{itemize} It is important to note that all the above potential energy terms are \emph{functions of the assembly configuration}. \emph{Note} that the input to the assembly usually specifies the configurations of interest, i.e., a region of the configuration space, often specified as a collection $C$ of $m$ atom pairs ``of interest,'' with the understanding that the only configurations of interest are those in which at least one of these $m$ pairs in $C$ occupies its corresponding Lennard-Jones well. Clearly $m\le n^2{k \choose 2}$. In addition, we assume the desired level of refinement of sampling is specified as a desired number of sample configurations $t$. \subsubsection{Geometrization} Observe that for the purposes of this paper stated in Section \ref{intro}, it is sufficient to view the assembly landscape as a union of constant potential energy regions. Thus an assembly system can alternatively be represented as a set of rigid molecular components drawn from a small set of types, together with \emph{assembly constraints}, in the form of distance intervals. These constraints define \emph{feasible} configurations (where the pairwise inter-atom distances are larger than $l_{i,j}$, and any relevant tether and implicit solvent constraints are satisfied). The set of feasible configurations is called the \emph{assembly configuration space}. The \emph{active constraints} of a configuration are those atom pairs in the configuration that lie in the Lennard-Jones well. An \emph{active constraint region} of the configuration space is a region consisting of all configurations where a specified (nonempty) set of constraints is active, i.e., the Lennard-Jones inter-atom distances between the corresponding atoms $i$ and $j$ lie in their wells, i.e., the intervals $(l_{i,j}, u_{i,j})$.
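The discretized pairwise Lennard-Jones term described above can be sketched directly. The default constants and the function name below are illustrative assumptions, not values from any particular force field.

```python
# Minimal sketch of the discretized pairwise Lennard-Jones term: constant
# values h, s, m on the intervals (0, l), (l, u) (the "well"), (u, inf).
# Defaults are illustrative only.
def discretized_lj(d, l, u, h=1e6, s=-1.0, m=0.0):
    """Piecewise-constant pairwise potential as a function of distance d."""
    if d <= 0:
        raise ValueError("distance must be positive")
    if d < l:
        return h      # steric clash: very high value
    if d < u:
        return s      # inside the Lennard-Jones well: the pair is active
    return m          # beyond the interaction range
```

In the Hard-Sphere special case one sets $l = u$, so the well interval is empty and only the clash and far-field values remain.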
\subsubsection{Stratification, active constraint graphs} \label{stratification} Consider an assembly configuration space $\cal{A}$ of $k$ rigid components, defined by a system $A$ of assembly constraints. The configuration space has dimension $6(k-1)$, the number of internal degrees of freedom of the configurations, since a rigid object in Euclidean 3-space has $6$ rotational and translational degrees of freedom. For $k = 2$, this dimension is at most $6$, and in the presence of two active constraints, it is at most $4$. A \emph{Thom-Whitney stratification} of the configuration space $\cal{A}$ (see \figref{fig:atlas}) is a partition of the space into regions grouped into strata $X_i$ of $\cal{A}$ that form a filtration $\emptyset\subset X_0\subset X_1 \subset \ldots \subset X_m=\cal{A}$, $m = 6(k-1)$. Each $X_i$ is a union of nonempty closed \emph{active constraint regions} $R_{Q}$, where a set $Q\subseteq A$ of $m-i$ pairwise constraints is \emph{active}, meaning each pair in $Q$ lies in its corresponding Lennard-Jones well, and the constraints are independent (i.e., no proper subset of these constraints generically implies any other constraint in the set). Each active constraint set $Q$ is itself part of at least one, and possibly many, hence $l$-indexed, nested chains of the form $\emptyset\subset Q^l_0\subset$ $Q^l_1$ $\subset\ldots\subset Q^l_{d-i}=Q$ $\subset\ldots\subset Q^l_m$. See Figures \ref{fig:contacts} and \ref{fig:prtree}(left).
These induce corresponding reverse nested chains of active constraint regions $R_{Q^l_j}$: $\emptyset\subset R_{Q^l_d}\subset R_{Q^l_{d-1}} \subset\ldots\subset R_{Q^l_{d-i}}=R_Q \subset \ldots\subset R_{Q^l_0}$. Note that here, for all $l,j$, $R_{Q^l_{d-j}} \subseteq X_{j}$ is closed and \emph{effectively} $j$ dimensional; by which we mean that if all the $d-j$ Lennard-Jones wells that define the active constraint set $Q^l_{d-j}$ narrowed to zero width (i.e., if they degenerated to Hard-Sphere potentials), then the active constraint region $R_{Q^l_{d-j}}$ would be $j$ dimensional. \begin{figure} \centering \begin{subfigure}[b]{0.40\linewidth} \epsfig{file=/cise/research/constraints/JACOBIAN/mysections/fig/extra/StratificationBlack.eps, width=\linewidth} \caption{stratification of assembly} \label{fig:atlas} \end{subfigure} % \begin{subfigure}[b]{0.55\linewidth} \epsfig{file=/cise/research/constraints/JACOBIAN/mysections/fig/extra/paramspaceandflipsBlack.eps, width=\linewidth} \caption{ {\it top}: Cayley points, {\it bottom}: Cartesian realizations} \label{fig:prtree} \end{subfigure} \caption{ (a) {\bf Stratification:} of assembly constraint system with parameters $n=$ 4 (red), 3 (yellow), 2 (green), 1 (white), 0 (purple). Strata of each dimension $j$ for the assembly constraint system visualized in the lower right inset are shown as nodes of one color and shape in a directed acyclic graph. Each node represents an active constraint region. Edges indicate containment in a parent region one dimension higher. (b) {\it top:} Realizable {\bf Cayley points} (distance values) corresponding to one node in (a). \emph{ Note a different use of color in the display of sample boxes in Cayley configuration space than in the stratification diagram.} One Cayley point in the green group is highlighted.
{\it bottom:} Three {\bf Cartesian realizations} of the highlighted Cayley point. Each edge on a realization represents an active constraint graph and its chosen parameters.} \end{figure} \begin{figure} \centering \includegraphics[width=.5\textwidth] {/cise/research/constraints/JACOBIAN/mysections/fig/extra/contacts_pink.eps} \caption{ {\bf Adding constraints, removing parameters until $j=0$}. {\it top}: Cartesian realizations with {\it non-white segments:} parameters and {\it white segments:} constraints, and {\it bottom}: active constraint graphs $G$ yielding configurations with ever fewer free parameters as constraints are added one by one. } \label{fig:contacts} \end{figure} We represent the active constraint system for a region by an \emph{active constraint graph} (sometimes called a \emph{contact graph}) whose vertices represent the participating atoms (at least $3$ in each rigid component) and whose edges represent the active constraints between them. Between a pair of rigid components, there is only a small number of possible active constraint graph isomorphism types, since there are at most $12$ contact vertices. For the case of $k=2$ these are listed in Figure \ref{acgtypes2}, and for higher $k$ a partial list appears in Figure \ref{fig:v6e12}. There could be regions of the stratification of dimension $j$ whose number of active constraints exceeds $6(k-1) -j$, i.e.\ the active constraint system is overconstrained, or whose active constraints are not all independent. Dependent constraints diminish the set of realizations. For entropy calculations, these regions should be tracked explicitly, but in the present paper, we do not consider these overconstrained regions in the stratification. Our regions are obtained by choosing any $6(k-1)-j$ independent active constraints.
\begin{figure} \epsfig{file=/cise/research/constraints/JACOBIAN/mysections/fig/extra/heping.eps, width=3in} \caption{All active constraint graphs} \label{acgtypes2} \end{figure} \begin{figure} \epsfig{file=/cise/research/constraints/JACOBIAN/mysections/fig/extra/v6e12.eps, width=3in} \caption{All non-isomorphic active constraint graphs with 6 vertices and 12 edges.} \label{fig:v6e12} \end{figure} \subsubsection{Convex representation of active constraint region and atlas} A new theory of Convex Cayley Configuration Spaces {\bf (CCCS)} recently developed by the author \cite{SiGa:2010} gives a clean characterization of active constraint graphs whose configuration spaces are convex when represented by a specific choice of so-called {\sl Cayley parameters}, i.e., distance parameters between pairs of atoms (vertices in the active constraint graph) that are inactive in the given active constraint region (non-edges in the active constraint graph). See Figure \ref{parameterchoice}. Such active constraint regions are said to be \emph{convexifiable}, and the corresponding Cayley parameters are said to be its \emph{convexifying} parameters. See Figures \ref{fig:pctree} and \ref{fig:flips}. In general, the active constraint region $R'_{G}$ for an active constraint graph $G$ can be entirely convexified after ignoring the remainder of the assembly constraint system, namely the atom markers not in $G$ and their constraints. See \figref{fig:chart}. The true active constraint region $R_{G}$ is a subset of $R'_{G}$; however, the cut-out regions are also defined by active constraints, hence they, too, can be convexified. See Figures \ref{fig:pctree} and \ref{fig:flips}.
\begin{figure} \centering \begin{subfigure}{.4\textwidth} \epsfig{file=/cise/research/constraints/JACOBIAN/mysections/fig/extra/pctreeSpacesBlack.eps, width=\linewidth} \caption{ Cayley charts of dimensions 1,2,3 attached to nodes.} \label{fig:pctree} \end{subfigure} \hskip0.01\linewidth % \begin{subfigure}{.4\textwidth} \epsfig{file=/cise/research/constraints/JACOBIAN/mysections/fig/extra/pctreeFlipsBlack.eps, width=\linewidth} \caption{Cartesian realizations of dimensions 1,2,3 attached to nodes.} \label{fig:flips} \end{subfigure} \caption{ {\bf Nested chains for one region of the atlas,} i.e.\ nodes and paths in the directed acyclic graph of the stratification containing a 2d constraint region. {\it center, green:} a $2$d active constraint region. {\it left, red and yellow:} 4d and 3d parent regions containing the 2d region. {\it right:} 1d and 0d child regions. The active constraint graph and chart are displayed next to each region. (a) The $2$-dimensional (exact, convex) chart in the center has a hole due to infeasible configurations; the hole is also defined by Cayley parameter ranges, hence convex. Also, due to the choice of different Cayley parameters, the same 2-dimensional region appears, without the hole, in the $3$-dimensional parent charts as orange boxes {\it top left}, pink boxes {\it middle left} and red-orange boxes {\it lower left}; green boxes {\it on right:} 1-dimensional subregions. (b) Three grey fans attach the Cartesian realizations to their nodes as separate sweeps for different chirality of a region (the blue molecular unit is fixed without loss of generality). } \end{figure} When a constraint (edge $e$) not in $G$ becomes active (at a configuration $c$ in $R'_{G}$), $G\cup \{e\}$ defines a child active constraint region $R_{G\cup e}$ containing $c$. This new region belongs to the stratum of the assembly configuration space that is of one lower dimension (\defref{stratification}) and defines within $R'_{G}$ a boundary of the smaller, true active constraint region $R_{G}$.
We can still choose the chart of $R'_{G}$ as a tight convex chart for $R_{G}$, but now the region $R_{G\cup e}$ has an exact or tight convex chart of its own. The configurations in the region $R_{G\cup e}$ have lower potential energy, since the configurations in that region lie in one more Lennard-Jones well. Hence they should be carefully sampled in free energy and entropy computations, although the region has one lower effective dimension (e.g., it represents a much narrower boundary channel). However, sampling in the larger parent chart of $R_{G}$ (of one higher effective dimension) often does not provide adequate coverage of the narrow boundary region $R_{G\cup e}$. For example, \figref{fig:reparametrization} shows that providing a separate chart for each active constraint region can reveal additional realizations at the same level of sampling. \begin{figure} \centering \begin{subfigure}{3.3in} \epsfig{file = /cise/research/constraints/JACOBIAN/mysections/fig/extra/pspaceLower.eps, width = \linewidth } \caption{} \label{fig:chart} \end{subfigure} \hskip0.01\linewidth \begin{subfigure}{1.4in} \epsfig{file = /cise/research/constraints/JACOBIAN/mysections/fig/extra/108529sweepboundaries.eps, width = \linewidth } \caption{} \end{subfigure} \begin{subfigure}{1.5in} \epsfig{file = /cise/research/constraints/JACOBIAN/mysections/fig/extra/108529sweepinterior.eps, width = \linewidth} \caption{} \end{subfigure} \begin{subfigure}{2.25in} \epsfig{file = /cise/research/constraints/JACOBIAN/mysections/fig/extra/reparametrization1.eps, width = \linewidth } \caption{} \label{fig:reparametrization} \end{subfigure} \caption{\scriptsize Top Left: atlas region showing interiors and boundaries sampled in its convexifying Cayley parameters; boundary/child regions sampled in their own Cayley parameters and mapped back to the parent region's Cayley parameters ({\sl note the increase in samples}).
Top Right: boundary/child regions sampled in their own Cayley parameters, shown as sweeps around the grey reference (toy) helix. Bottom Left: union of boundary regions sampled in the parent's Cayley parameters, shown as a sweep around the blue reference helix ({\sl notice that (b) is bigger}). Bottom Right: sweep of one of the boundary regions sampled in the parent's Cayley parameters is shown in red around the gray reference helix; the sampling {\sl misses the other colored configurations} in the same boundary region, obtained by sampling in its own Cayley parameters.} \label{parameterchoice} \end{figure} The {\em Atlas} of an assembly configuration space is a stratification of the configuration space into convexifiable regions. In \cite{Ozkan2011}, we have shown that {\sl molecular assembly configuration spaces with 2 rigid molecular components have an atlas.} The software EASAL (Efficient Atlasing and Search of Assembly Landscapes) efficiently finds the stratification, incorporates provably efficient algorithms to choose the Cayley parameters \cite{SiGa:2010} that convexify an active constraint region, efficiently computes bounds for the parametrized convex regions \cite{ugandhar}, and converts the parametrized configurations into standard Cartesian configurations \cite{eigr2004}. \subsection{Preliminary Method: Cayley Sampling for Cartesian Uniformity} We discuss a preliminary method that highlights the issues and challenges that need to be addressed. The Cayley points of the atlas need to be converted to Cartesian realizations as in Figure \ref{fig:flips}. An assembly configuration is a point in 6 dimensional Cartesian space representing the rotations and translations of one rigid molecular unit with respect to the fixed rigid molecular unit: ($x$, $y$, $z$, $\phi$, $\cos(\theta)$, $\psi$). For the active constraint graphs that occur in assembly \cite{Ozkan2011, Ozkan2014MainEasal}, the Cartesian or Euclidean realization can be found using a sequence of tetrahedra constructions.
\begin{observation} Every Cayley point in the exact convex chart $\Phi_H(G,F,d_F,d_H)$ has at least 1 and generically at most finitely many Cartesian realizations in the region $R_{G_F}$. See Figure \ref{fig:prtree}. \end{observation} Multiple Cartesian orientations correspond to the same Cayley configuration. Those orientations are called flips of the Cayley configuration. The methods discussed below execute Cartesian sampling on each flip separately. The flips can meet and bifurcate. For accurate configurational entropy computations, sampling in Cartesian space should maintain a measure of uniformity. The Cartesian sampling we aim for would be uniform on each flip. Ensuring uniformity when the flips are combined is beyond the scope of this paper. See Figure~\ref{fig:flips}.\\ \begin{figure}[h!tbp] \centering \begin{subfigure}[b]{0.45\linewidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/allFlipsJacobianSamplingonCayleySpace.eps, width=\linewidth} \caption{$2$-d Cayley Space} \end{subfigure} \begin{subfigure}[b]{0.45\linewidth} \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/allFlipsJacobianSamplingonCartesianSpace.eps, width=\linewidth} \caption{Cartesian x, y view} \end{subfigure} \caption{ EASAL screenshot: a) $2$-D Jacobian sampling projected on Cayley space. b) $2$-D Jacobian sampling projected on $2$ independent Cartesian dimensions. All flips are colored differently.
} \label{fig:flips} \end{figure} However, while ensuring uniform Cartesian sampling on each flip, we would like to retain the advantages of Cayley sampling, including convexification of the active constraint regions. To obtain a measure of uniform sampling in Cartesian space while Cayley sampling, we adjust the Cayley steps using the (inverse) Jacobian of the map from Cartesian to Cayley parameters. \begin{definition} [J] The numerical Jacobian matrix $J$ defines a linear map $F$: \textit{Cayley space} $\rightarrow$ \textit{Cartesian space}, which is the best linear approximation of the map near the configuration $p$. Each column of $J$ represents the Cartesian changes after walking one step around $p$ = $(p_1, p_2, p_3, p_4, p_5, p_6)$ in Cayley space, where $p_i$ is the $i$th Cayley parameter. See Table~\ref{table:Jacobian}. I.e.,
the first row of $J$ contains the changes along the Cartesian $x$ dimension for each Cayley step.\\ \begin{table}[!htbp] \large \begin{center} \begin{tabular}{ l c c c c c r } \hline & $p_1$ & $p_2$ & $p_3$ & $p_4$ & $p_5$ & $p_6$ \\ \hline $x$ & $\frac{\Delta x}{\Delta p_1}$ & $\frac{\Delta x}{\Delta p_2}$ & $\frac{\Delta x}{\Delta p_3}$ & $\frac{\Delta x}{\Delta p_4}$ & $\frac{\Delta x}{\Delta p_5}$ & $\frac{\Delta x}{\Delta p_6}$ \\ $y$ & $\frac{\Delta y}{\Delta p_1}$ & $\frac{\Delta y}{\Delta p_2}$ & $\frac{\Delta y}{\Delta p_3}$ & $\frac{\Delta y}{\Delta p_4}$ & $\frac{\Delta y}{\Delta p_5}$ & $\frac{\Delta y}{\Delta p_6}$ \\ $z$ & $\frac{\Delta z}{\Delta p_1}$ & . & . & . & . & . \\ $\phi$ & $\frac{\Delta \phi}{\Delta p_1}$ & . & . & . & . & . \\ $\cos(\theta)$ & $\frac{\Delta \cos(\theta)}{\Delta p_1}$& . & . & . & . & . \\ $\psi$ & $\frac{\Delta \psi}{\Delta p_1}$ & . & . & . & . & . \\ \hline \end{tabular} \end{center} \caption{Jacobian Matrix J} \label{table:Jacobian} \end{table} \end{definition} It is clear that the numerical Jacobian can be computed at each Cayley point, column-wise by finite differences. In other words, let $s_x$, $s_y$, $s_z$, $s_{\phi}$, $s_{\cos(\theta)}$, $s_{\psi}$ be the sizes of one step in each Cartesian dimension. Let $\Delta x$, $\Delta y$, $\Delta z$, $\Delta \phi$, $\Delta \cos(\theta)$, $\Delta \psi$ be the discretized Cartesian differences after one Cayley step. Then let $ k_1 = \Delta x/ s_x$, $ k_2 = \Delta y/ s_y$, $k_3 = \Delta z/ s_z$, $k_4 = \Delta \phi/ s_{\phi}$, $k_5 = \Delta \cos(\theta)/ s_{\cos(\theta)}$, $k_6 = \Delta \psi/ s_{\psi}$ be the coordinates of the Cartesian step. \\ As a criterion of uniformity, we could require the Euclidean 2-norm of the step, $\|(k_1, k_2, k_3, k_4, k_5, k_6)\|$, to be 1.\\ In order to achieve the above, we can try interpolation and binary search over the Cayley step size. This works reasonably well if the active constraint region being sampled is effectively 1-dimensional.
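The column-wise finite differencing just described can be sketched as follows. Here \texttt{F} stands for any map from Cayley parameters to Cartesian coordinates; the function name and interface are assumptions for illustration.

```python
# Sketch: column-wise finite-difference Jacobian of a map F from d Cayley
# parameters to Cartesian coordinates, laid out as in the Jacobian table
# (rows = Cartesian dimensions, columns = Cayley parameters).
def numerical_jacobian(F, p, steps):
    """J[i][j] ~ (F(p + steps[j]*e_j)[i] - F(p)[i]) / steps[j]."""
    base = F(p)
    cols = []
    for j, s in enumerate(steps):
        q = list(p)
        q[j] += s                      # walk one step along the j-th Cayley parameter
        moved = F(q)
        cols.append([(moved[i] - base[i]) / s for i in range(len(base))])
    # transpose so rows index Cartesian dimensions, columns index Cayley parameters
    return [[cols[j][i] for j in range(len(steps))] for i in range(len(base))]
```

For a linear map the forward differences are exact; for the nonlinear Cayley-to-Cartesian map they are only a local approximation, which is the root of the ill-conditioning issues discussed below.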
However, for higher dimensions, sampling is usually done one Cayley parameter at a time; although the Cartesian spacing may be maintained for samples along each Cayley line, the Cartesian trajectories corresponding to two Cayley lines may diverge. In other words, the sampling adjustment should not be restricted only to $d$ sampling directions, where $d$ is the effective dimension of the active constraint region being sampled. The entire volume of the $d$-dimensional neighborhood must be considered (see Figure~\ref{fig:uniformCartesian}), and Jacobian adjustments are required to address both the step-size and direction issues.\\ \begin{figure*} \centering \begin{subfigure}[b]{.43\textwidth} \includegraphics[width=\linewidth]{/blank1/aozkan/Jacobian_revtex/mysections/fig/rec_0_s} \caption{Uniform Cartesian sampling projected on Cayley Space} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \includegraphics[width=\linewidth]{/blank1/aozkan/Jacobian_revtex/mysections/fig/easalSpaceViewFlip2} \caption{Uniform Cayley sampling projected on Cayley Space} \end{subfigure} \begin{subfigure}[b]{.4\textwidth} \includegraphics[width=\linewidth]{/blank1/aozkan/Jacobian_revtex/mysections/fig/jacrec} \caption{Uniform Cartesian sampling projected on Cartesian Space} \end{subfigure} \begin{subfigure}[b]{.4\textwidth}
\includegraphics[width=\linewidth]{/blank1/aozkan/Jacobian_revtex/mysections/fig/cayleySamplingOnCartesian_Viewi} \caption{Uniform Cayley sampling projected on Cartesian Space} \end{subfigure} \caption{ EASAL screenshot: different sampling methods projected on both the $2$d Cayley space and Cartesian space. Notice the need to walk directionally (not just horizontally and vertically) in Cayley space in order to obtain uniform sampling in Cartesian space. } \label{fig:uniformCartesian} \end{figure*} \begin{definition} \textbf{The Orthogonal Cartesian Step Matrix $C$}\\ Let $C$ be the matrix where each column represents the expected Cartesian changes after one directional Cayley step. See Table~\ref{table:CartesianSteps}. We would like to walk orthogonally in Cartesian space.\\ \begin{table}[h!tbp] \begin{center} \begin{tabular}{ l c c c c r } \hline $s_x$ & 0 & 0 & 0 & 0 & 0 \\ \hline 0 & $s_y$ & 0 & 0 & 0 & 0 \\ 0 & 0 & $s_z$ & 0 & 0 & 0 \\ 0 & 0 & 0 & $s_{\phi}$ & 0 & 0 \\ 0 & 0 & 0 & 0 &$s_{\cos(\theta)}$& 0 \\ 0 & 0 & 0 & 0 & 0 & $s_{\psi}$ \\ \hline \end{tabular} \end{center} \caption{Cartesian Step Matrix $C$: diagonal matrix with Cartesian steps as diagonal entries.} \label{table:CartesianSteps} \end{table} \end{definition} \begin{definition} \textbf{The Cayley Step Matrix $S$ corresponding to $C$}\\ Let $S$ be the matrix of Cayley steps such that, when adjusted by the Jacobian, it results in $C$, i.e., $JS = C$; numerically, $S = J^{-1}C$. See Table~\ref{table:DirectionalCayleyStep}.\\ Each column of $S$ represents one directional Cayley step that is predicted to yield orthogonal stepping in Cartesian space. \begin{table}[h!tbp] \begin{center} \begin{tabular}{ l c c c c c r } \hline & $\overrightarrow{s_1}$ & $\overrightarrow{s_2}$ &$\overrightarrow{s_3}$&$\overrightarrow{s_4}$&$\overrightarrow{s_5}$&$\overrightarrow{s_6}$ \\ \hline $p_1$ & $s_{11}$ & $s_{21}$ & . & . & . & . \\ $p_2$ & $s_{12}$ & $s_{22}$ & . & . & . & . \\ $p_3$ & $s_{13}$ & $s_{23}$ & . & . & . & .
\\ $p_4$ & $s_{14}$ & $s_{24}$ & . & . & . & . \\ $p_5$ & $s_{15}$ & $s_{25}$ & . & . & . & . \\ $p_6$ & $s_{16}$ & $s_{26}$ & . & . & . & . \\ \hline \end{tabular} \end{center} \caption{Directional Cayley Steps} \label{table:DirectionalCayleyStep} \end{table} \end{definition} \subsubsection{Issues} \label{issues} \underbar{Ill-conditioned Jacobian:} \\ The Jacobian matrix is by definition a \textit{linear approximation} of the nonlinear map $F: \textit{Cayley space} \rightarrow \textit{Cartesian space}$. The Jacobian can be ill-conditioned and sensitive to small changes and numerical errors in its arguments.\\ \begin{figure}[h!tbp] \def.4\textwidth{.4\textwidth} \centering \epsfig{file = /blank1/aozkan/Jacobian_revtex/mysections/fig/corrupted.eps, width=.25\textwidth} \caption{ EASAL screenshot: Jacobian sampling projected on Cartesian Space fails to satisfy uniformity for some regions.} \label{fig:corrupted} \end{figure} \underbar{What Cayley trajectory to follow to ensure} \underbar{comprehensive coverage?} \\ In uniform Cayley sampling, the Cayley parameters are walked one by one (grid sampling on Cayley space). With the above Jacobian adjustments to Cayley step direction, such grid sampling is impossible. Hence it is important to have a systematic method to determine what path to follow, avoiding repetitions and ensuring coverage. For a single Cartesian dimension the corresponding Cayley direction is specified by the Jacobian adjustment in every step. As in the previous approach (without direction adjustment), it is not clear how to generalize this to higher dimensional regions. While uniform Cayley sampling comprehensively covers Cayley space and thereby also Cartesian space, this property is not generally preserved by the use of Jacobian adjustments to the stepping direction.
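The definitions above reduce directional stepping to a linear solve: given the numerically evaluated Jacobian $J$ and the diagonal Cartesian step matrix $C$, the directional Cayley steps are the columns of $S = J^{-1}C$. The sketch below illustrates this, with a random stand-in for the true Jacobian of the Cayley-to-Cartesian map and a conditioning guard motivated by the ill-conditioned-Jacobian issue discussed above; the threshold value is an illustrative choice, not a tuned parameter.

```python
import numpy as np

# Hypothetical 6x6 Jacobian of the map F: Cayley space -> Cartesian space,
# evaluated at the current sample point (a random stand-in here).
rng = np.random.default_rng(0)
J = rng.normal(size=(6, 6))

# Desired orthogonal Cartesian step matrix C: diagonal, one target step per
# Cartesian coordinate (x, y, z, phi, cos(theta), psi).
C = np.diag([0.1, 0.1, 0.1, 0.05, 0.05, 0.05])

# Guard against an ill-conditioned Jacobian: a large condition number means
# the computed Cayley steps are numerically unreliable.
if np.linalg.cond(J) < 1e8:
    # Directional Cayley step matrix S solving J S = C, i.e. S = J^{-1} C.
    S = np.linalg.solve(J, C)
else:
    # Fall back to a least-squares (pseudo-inverse) step near singularities.
    S = np.linalg.lstsq(J, C, rcond=None)[0]

# Each column of S is one directional Cayley step predicted to produce one
# orthogonal unit of progress along a single Cartesian coordinate.
assert np.allclose(J @ S, C)
```

Taking one step along each column of $S$ is what replaces the per-parameter grid walk of plain Cayley sampling.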
\subsection{Previous Work, Scope and Motivation} There has been a long and distinguished history of configurational entropy and free energy computation methods \cite{kaku, Andricioaei_Karplus_2001, Hnizdo_Darian_Fedorowicz_Demchuk_Li_Singh_2007, Hnizdo_Tan_Killian_Gilson_2008, Hensen_Lange_Grubmuller_2010, Killian_Yundenfreund_Kravitz_Gilson_2007, Head_Given_Gilson_1997, GregoryS201199, doi:10.1021/jp2068123}, many of which use as input the configuration trajectories of Molecular Dynamics or Monte Carlo sampling, which are known to be nonergodic, whereby locating and isolating narrow channels and their boundaries, i.e., regions of low effective dimension separated by high energy barriers, might take arbitrarily long, requiring several trajectories starting from different initial configurations. This also causes problems for many entropy computation methods that rely on principal component analyses of the covariance matrices from a trajectory of MC samples in internal coordinates, followed by quasiharmonic \cite{Andricioaei_Karplus_2001} or nonparametric (such as nearest-neighbor-based) \cite{Hnizdo_Darian_Fedorowicz_Demchuk_Li_Singh_2007} estimates. Since MC trajectories are not geometrically optimized, these methods are generally known to \emph{overestimate} the volumes of configuration space regions with high geometric or topological complexity, even when hybridized with higher order mutual information \cite{Hnizdo_Tan_Killian_Gilson_2008}, and nonlinear kernel methods, such as the Minimally Coupled Subspace approach of \cite{Hensen_Lange_Grubmuller_2010}. Most of the above methods do not explicitly restrict the number of atoms in each of the assembling rigid molecular components, and in fact they are used for assembly or folding.
For cluster assemblies from spheres (with $k\le 12$), there are a number of methods \cite{Holmes-Cerfon2013, Hagen1993, Doye1996, Meng2010, Gazzillo2006} to compute free energy and configurational entropy for subregions of the configuration space, and some of these subregions are the entire configuration spaces of small molecules such as cyclo-octane \cite{Martin2010, Jaillet2011, Porta2007}. These include robotics and computational geometry based methods such as \cite{GregoryS201199} ($n=3$). These methods are used to give bounds or to approximate configurational entropy without relying on Monte Carlo or Molecular Dynamics sampling. Note that there is an extensive literature purely on computing minimum potential energy configurations: these are not relevant to this paper; neither are simulation-based methods for free-energy computation of large assemblies starting from known free energy values and formation rates for assembly intermediates formed from a small number of subassemblies. Essentially, even for small assemblies, barring a few exceptions such as \cite{Holmes-Cerfon2013}, \cite{Porta2007}, \cite{Yao_Sun_Huang_Bowman_Singh_Lesnick_Guibas_Pande_Carlsson_2009}, and \cite{Gfeller_DeLachapelle_DeLos_Rios_Caldarelli_Rao_2007,Varadhan_Kim_Krishnan_Manocha_2006, Lai_Su_Chen_Wang_2009, 10.1371/journal.pcbi.1000415}, most prevailing methods do not extract a high-level, topological roadmap of the boundary relationships between the constant-potential-energy regions. Similarly, most prevailing methods of sampling and volume computation are not \emph{explicitly} tailored or specialized to leverage the relative geometric simplicity of constant-potential-energy regions of assembly configuration spaces.
Hence for small assemblies, the basic EASAL \cite{Ozkan2011, Ozkan2014MainEasal} addresses the demand for a method that satisfies two criteria: it should (i) generate a comprehensive roadmap of the assembly configuration space as a topological complex of constant-potential-energy regions, their neighborhood relationships and boundaries; and (ii) explicitly formalize and leverage the geometric simplicity of these regions (in the case of assembly relative to folding) to give an efficient and accurate computation of their volume by isolation of the region and its boundaries and customized sampling. In order to effectively combine the complementary advantages of EASAL with the abovementioned prevailing methods, the goal of this paper is to maintain these advantages of EASAL and Cayley sampling while ensuring certain minimum distance and coverage relationships between sampled points in Cartesian space.
\section{Introduction} \IEEEPARstart{T}{he} COmmon Muon and Proton Apparatus for Structure and Spectroscopy (COMPASS) at the CERN SPS \cite{COMPASS1} is a state-of-the-art two stage magnetic spectrometer \cite{COMPASS-NIM} with a flexible setup to allow for a rich variety of physics programs to be performed with secondary muon or hadron beams. Common to all measurements is the requirement for highest beam intensity and interaction rates, together with the need for high readout speed. Recently a proposal has been submitted \cite{COMPASS2} for studies of Generalized Parton Distributions (GPD), which combine both nucleon electromagnetic form factors and Parton Distribution Functions. Constraining quark GPDs experimentally by measuring exclusive Deeply Virtual Compton Scattering (DVCS) shows great promise for the disentanglement of the nucleon's spin budget. For the upcoming DVCS measurements the existing COMPASS spectrometer will be extended by a new 2.5\:m long liquid hydrogen target, which will be surrounded by a new recoil proton detector based on scintillating counters. The high luminosity of about $10^{32}\:\text{cm}^{-2}\text{s}^{-1}$ and the background induced by the wide beam halo will yield rates of the order of several MHz in the recoil detector counters. This imposes great demands on the digitization units and on a hardware trigger based on the recoiled particle. For this purpose we have developed within the GANDALF framework \cite{GANDALF-NIM} a modular high speed and high resolution transient recorder system featuring digital pulse processing in real-time. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop1} \caption{Picture of the GANDALF carrier board equipped with two ADC mezzanine cards. The center mezzanine card hosts an optical receiver for the COMPASS trigger and clock distribution system.} \label{fig_gandalf} \end{figure} \section{The GANDALF Framework} GANDALF (Fig.
\ref{fig_gandalf}) is a 6U-VME64x/VXS \cite{VITA} carrier board which can host two custom mezzanine cards. It has been designed to cope with a variety of readout tasks in high energy and nuclear physics experiments. The exchangeable mezzanine cards allow an employment of the system in very different applications such as analog-to-digital or time-to-digital conversions, coincidence matrix formation, fast pattern recognition or fast trigger generation. Currently two types of mezzanine cards are available: ADC cards and LVDS input cards. Another model with optical interfaces is foreseen to receive data from remote detector frontend modules. When GANDALF is used as a transient recorder, the carrier board is equipped with two ADC mezzanine cards. A schematic overview is provided in Fig. \ref{fig_gandalf_scem}. The heart of the board is a Xilinx VIRTEX5-SXT FPGA which is connected to each mezzanine card by several single ended and 120 differential signal interconnections. The data processing FPGA can perform complex calculations on data which have been acquired on the mezzanine cards to extract time and amplitude information of the sampled pulses. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop2} \caption{Block diagram of GANDALF as a transient recorder.} \label{fig_gandalf_scem} \end{figure} Fast and deep memory extensions of 144-Mbit QDRII+ and 4-Gbit DDR2 RAM are connected to a second Virtex5 FPGA. Both FPGAs are linked to each other by eight bidirectional high-speed Aurora lanes with a total bandwidth of 25\:Gbit/s per direction. Connected to the VXS backplane GANDALF has 16 high-speed lanes for data transfer to a central VXS module, where the lanes of up to 18 GANDALF modules merge. 
This connection can be used for continuous transmission of the amplitudes and the time stamps from sampled signals to the VXS trigger processor, which then forms an input to the experiment-wide first-level trigger based on the energy loss and the time-of-flight in the recoil detector. A dead-time free data output can either be realized by dedicated backplane link cards connected to each GANDALF P2-connector, i.e. following the 160 MByte/s SLink \cite{SLINK} or Ethernet protocol, or by the VME64x bus in block read mode \cite{lauser} or by USB2.0 from the front panel. Depending on requirements the data output may contain the results of the digital pulse processing only or also the full sample list. \section{Analog-to-Digital Converter} Two models of analog-to-digital converters (ADC) can be used with the GANDALF board, depending on the desired resolution. With the Texas Instruments models ADS5463 (12bit@500MS/s) and ADS5474 (14bit@400MS/s) we chose two of the fastest pipelined high resolution ADC chips that are currently available. Their low latency of only 3.5 clock cycles gives valuable time for the signal processing and the following trigger generation with its tight timing constraints defined by existent readout electronics. The DC-coupled analog input circuit (Fig. \ref{fig_analog}) uses the differential amplifier LMH6552 from National Semiconductor and has a bandwidth of 500\:MHz. It adapts the incoming single ended signal, e.g. from a photomultiplier tube (PMT), to the dynamic range of the ADC while the baseline of each channel can be adjusted individually by 16-bit digital-to-analog converters (DAC). Depending on the DAC settings, the input stage accepts unipolar or bipolar pulses. Two adjacent channels can be interleaved to achieve an effective sampling rate of 1\:GS/s (800\:MS/s with the ADS5474) at the cost of the number of channels per mezzanine card. 
In this time-interleaved mode the second ADC receives a sampling clock which is phase-shifted by 180 degrees and the input signal is passively split to both channels. Thus the signal is sampled alternately by two ADCs. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop3} \caption{Schematic of the DC-coupled analog input circuit. For each channel $\text{U}_\text{Offset}$ can be set by 16-bit DACs.} \label{fig_analog} \end{figure} On each ADC mezzanine card the high frequency sampling clock is generated by a digital clock synthesizer chip SI5326 from Silicon Labs, which comprises an integrated PLL consisting of an oscillator, a digital phase detector and a programmable loop filter. The experiment-wide 155.52-MHz clock, distributed by the COMPASS trigger and clock distribution system (TCS), is used as reference. Particular attention has been paid to the design of the clock filter networks and the board layout to reach a time interval error smaller than 730\:fs (Fig. \ref{fig_jitter}) \cite{schopf}, which is essential for high bandwidth sampling applications. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop4} \caption{Time interval error of the sampling clock. The measurement was performed with a Tektronix TDS6154C using the TDSJIT3 software.} \label{fig_jitter} \end{figure} We determined the signal-to-noise ratio (SNR) of the ADC system in a test setup consisting of a high precision function generator (Tektronix AFG3252) and a selection of narrow band pass filters. The filters were connected directly to the analog input of the GANDALF module to suppress the harmonics of the signal source. Sine waveforms of different frequencies were sampled and from the fast Fourier transform the SNR was calculated. The result of these measurements for the 12-bit version (ADS5463) is shown in Fig. \ref{fig_snr} as a function of the frequency of the input analog signal and is expressed in dB as well as ENOB (effective number of bits).
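The SNR-from-FFT procedure just described can be sketched as follows; the record length, the coherent test frequency, the ideal 12-bit quantizer standing in for real ADC data, and the simple leakage bookkeeping around the peak bin are all illustrative assumptions, not the actual lab analysis.

```python
import numpy as np

fs = 500e6            # ADS5463 sampling rate (500 MS/s)
n = 4096              # record length
f_in = fs * 403 / n   # coherent test tone: integer number of cycles per record

t = np.arange(n) / fs
# Ideal 12-bit quantization of a full-scale sine, standing in for real ADC data.
samples = np.round(2047 * np.sin(2 * np.pi * f_in * t)) / 2047

# Windowed periodogram; with a Hann window a coherent tone occupies the peak
# bin plus its immediate neighbours, so a small block around the peak is
# counted as signal and everything else (excluding DC) as noise + distortion.
spectrum = np.abs(np.fft.rfft(samples * np.hanning(n))) ** 2
peak = np.argmax(spectrum[1:]) + 1
signal_power = spectrum[peak - 2:peak + 3].sum()
noise_power = spectrum[1:].sum() - signal_power

snr_db = 10 * np.log10(signal_power / noise_power)
enob = (snr_db - 1.76) / 6.02   # standard SINAD-to-ENOB conversion
```

For an ideal 12-bit converter this yields an ENOB close to 12; real ADC data lands lower, as the measurements below show.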
We achieved an effective resolution above 10.1\:ENOB (ADS5463) and 10.6\:ENOB (ADS5474) respectively over an input frequency range up to 240\:MHz. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop5} \caption{Signal-to-noise ratio (full-scale) and effective resolution of the 12-bit GANDALF digitization unit using the ADS5463 ADC at 500\:MHz. Values from the ADS5463 datasheet are given for comparison for selected analog input frequencies.} \label{fig_snr} \end{figure} \section{Digital Pulse Processing} The sampled detector signals are processed by DSP algorithms inside the VIRTEX5-SXT FPGA utilizing the high compute power of its 640 DSP48E Slices. Quantities of interest like pulse arrival time, pulse height and integrated charge are extracted and can be used for real-time calculation of derived quantities such as time-of-flight and energy loss. \subsection{Pulse Time Determination} To extract the time information from the detector signals a digital constant fraction discrimination (dCFD) algorithm was chosen. Inside the FPGA the digitized samples are delayed, multiplied by a fraction factor and added to the original samples (Fig. \ref{fig_cfd}). The zero-crossing of the resulting curve is determined by linear interpolation and forms the time stamp of the pulse. Extended simulations \cite{bartkn} helped to determine optimal parameters for the dCFD algorithm also in the case of pile-up pulses. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop6} \caption{Illustration of the digital constant fraction (dCF) method. The original samples ($\times$) and the delayed and inverted samples ($+$) add up to the dCF function ($\bullet$).} \label{fig_cfd} \end{figure} \subsection{Performance Verification} First measurements of the timing resolution of the GANDALF digitizer were performed by using a Tektronix Arbitrary Function Generator (AFG3252) to simulate realistic detector pulses.
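A software sketch of the dCFD method described above: the fraction-scaled samples and the delayed, inverted samples are summed, and the zero-crossing of the result is refined by linear interpolation. The delay and fraction values, the Gaussian stand-in pulse, and the peak-search details are illustrative choices, not the actual firmware parameters.

```python
import numpy as np

def dcfd_time(samples, delay=8, fraction=0.3):
    """Digital constant fraction discrimination on a unipolar positive pulse.

    The dCF function is the fraction-scaled original summed with the delayed,
    inverted samples; its zero-crossing, refined by linear interpolation,
    gives the time stamp (up to a constant offset common to all pulses).
    """
    dcf = fraction * samples[delay:] - samples[:-delay]
    for i in range(int(np.argmax(dcf)), len(dcf) - 1):
        if dcf[i] >= 0 > dcf[i + 1]:
            # Linear interpolation between the two samples bracketing zero.
            return i + dcf[i] / (dcf[i] - dcf[i + 1])
    raise ValueError("no zero-crossing found")

# Smooth stand-in pulse sampled at 1 GS/s (the real PMT pulses are
# Moyal-shaped; any smooth unipolar pulse illustrates the idea).
t = np.arange(256)
pulse = np.exp(-0.5 * ((t - 100.0) / 10.0) ** 2)

# The dCFD time stamp is (nearly) independent of the pulse amplitude:
t_big, t_small = dcfd_time(4.0 * pulse), dcfd_time(0.05 * pulse)
```

Scaling the pulse amplitude leaves the zero-crossing unchanged, which is exactly the amplitude-walk immunity that motivates constant fraction timing over a simple threshold.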
The PMT signals are described by a Moyal distribution \[ f(t) = A \cdot \exp \left(-\frac{1}{2} \left(\frac{t-t_0}{k \cdot t_r} + \exp\left(-\frac{t-t_0}{k \cdot t_r}\right) -1\right)\right), \] with maximum amplitude $A$ at $t=t_0$. For $k=0.69$, $t_r$ is the 10\%-90\% rise time. Two copies of the signal with constant but arbitrary delay are sampled and the constant fraction timing is performed by the DSP FPGA. The differences between the resulting time stamps show a distribution, whose width is then used to calculate the timing resolution. The dCFD resolution depends on the pulse amplitude. Therefore the measurements were done with a signal amplitude variation over the dynamic range of the input stage. In this test setup pulses with amplitudes from -50\:mV to -4\:V were measured with 1\:GS/s (Fig. \ref{fig_tim1}). One can see that with the GANDALF transient recorder a timing resolution of better than 50\:ps can be reached for signal amplitudes as small as 4\% of the relative dynamic range. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop7} \caption{GANDALF timing resolution measured with pulses of 2\:ns risetime generated by an AFG3252 arbitrary function generator. The dynamic range in this case is 0\:V to -4\:V. The expected resolution of the dCFD algorithm is determined by simulation.} \label{fig_tim1} \end{figure} To confirm these results in a real environment a measurement setup with a Hamamatsu PMT (R1450) as a signal source was installed. In this configuration a PiLas EIG1000D laser pulser with PiL040 Optical Head (from Advanced Laser Diode Systems) is used as light source. The laser emits very short optical pulses with pulse widths below 45\:ps (FWHM). The pulser features an additional TTL trigger output, which is used as a time reference. The jitter between the trigger and the optical output is typically below 3\:ps. In Fig. \ref{fig_tim2} the timing resolution of the laser and PMT system measured by GANDALF in 1\:GS/s mode is shown.
The resolution of the system as a whole (green continuous line) is composed of the resolution of the GANDALF timing determination (blue dashed line) and the resolution of the laser and PMT system (red dash-dotted line). The latter is quantified in an independent measurement to be 39\:ps. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop8} \caption{Timing resolution of PMT signals generated from a Picosecond Injection Laser and measured with GANDALF.} \label{fig_tim2} \end{figure} The ability of the dCFD algorithm to separate pile-up pulses depends on the ratio of the pulse amplitudes. Fig. \ref{fig_doublepulses} shows the minimum delay $\Delta t$ between two pulses which is needed to separate them. The plot is obtained by simulation of two consecutive pulses with risetime $t_r=3\:ns$ for all combinations of amplitudes of the first and second pulse. The simulation result is verified by lab measurements using the AFG3252 to generate pile-up pulses with selected amplitude ratios. \begin{figure}[!t] \centering \includegraphics[width=3.47in]{schop9} \caption{Pulse separation ability of the dCFD algorithm depending on the amplitudes of the first (x-axis) and second (y-axis) pulse. The color code denotes the minimum delay between the pulses which is required for separation.} \label{fig_doublepulses} \end{figure} \section{Conclusion} A low cost VME64x system aimed at digitizing and processing detector signals has been designed and implemented to our full satisfaction. The design is modular, consisting of a carrier board on which two mezzanine boards with either analog or digital inputs can be plugged. The ADC mezzanine cards have been characterized and show excellent performance over a wide input frequency range. An optional high-speed serial VXS backplane offers inter-module communication for sophisticated trigger processing covering up to 288 detector channels. 
The GANDALF transient recorder has been installed at the COMPASS experiment during a two-week DVCS pilot run in September 2009. Extensive data have been recorded in order to verify the performance of the hardware and the signal processing algorithms. \section{Outlook} Recently an additional type of mezzanine card with 64 digital inputs has been designed, which accepts LVDS and LVPECL signals over a VHDCI connector. Using these digital mezzanine cards, a 64-channel mean-timer and subsequent trigger matrix was implemented \cite{bieling} in the GANDALF module and has been in action at COMPASS since April 2010. In a forthcoming paper we will describe the realization of GANDALF as a 128-channel time-to-digital converter module with 100\:ps digitization units, comparable to the F1-TDC chip \cite{fischer}. The TDC design is implemented inside the main FPGA which can host 128 channels of 500-MHz scalers at the same time. \section*{Acknowledgment} The authors gratefully acknowledge the discussions with their colleagues from the COMPASS collaboration and the support of their local workshops.
\subsection{From sequences of discrete symbols to real valued embedding time series} For tasks involving sequences of symbols belonging to large vocabularies ($10^4$ to $10^7$ in size), it has become standard practice to embed each item in the form of a learned vector of real values since the seminal work on word2vec~\cite{mikolov2013distributed,mikolov2013linguistic} and Glove embedding~\cite{pennington2014glove}. These methods map discrete symbols to real-valued vectors in $\mathbb{R}^p$. In practice, a few hundred embedding dimensions are sufficient to provide state-of-the-art predictive performance for tasks with vocabularies of several millions of symbols~\cite{mikolov2013distributed,covington2016deep,belletti2018factorized}. Close examination of the inter-item relationships inherited from these continuous representations~\cite{mikolov2013distributed,mikolov2013linguistic,maaten2008visualizing,xin2017folding} suggests that related items are indeed collocated in the embedding space. With these embeddings, we can map sequences of discrete symbols to sequences of real-valued vectors, and use methods developed for real-valued multi-variate time series for analysis. In particular, the well established theory of LRD~\cite{pipiras2017long} can be used to characterize the sequential dependence properties of sequences of learned item embeddings. While most existing work focuses on interpreting~\cite{mikolov2013distributed,mikolov2013linguistic}, assessing~\cite{xin2017folding}, and visualizing~\cite{maaten2008visualizing} inter-item relationships, to the best of our knowledge, we are the first to examine such relationships \emph{longitudinally along the time axis}. \subsection{Estimation methods for LRD} Although we have exposed the definition of the LRD coefficient $d$ in the mono-variate setting, we still need to extend the presentation to multi-variate time series, as item embeddings are real-valued vectors.
Here we again follow the presentation given in~\cite{pipiras2017long}. Consider a multi-variate second-order stationary time series $(X_t)$ with $X_t \in \mathbb{R}^p$. We denote $\gamma_X(h) = \mathrm{Cov}(X_t, X_{t+h})$ the matrix-valued auto-covariance function of $(X)$ which takes values in $\mathbb{R}^{p\times p}$ and $f_X$ the corresponding spectral density matrix which also takes values in $\mathbb{R}^{p\times p}$. By definition $\gamma_X$ and $f_X$ satisfy $ \forall j, k\in \{1, \dots, p\}, \; \forall h \in \mathbb{Z}, \;$ $$\gamma_{X, j, k}(h) = \int_{-\pi}^{\pi} f_{X, j, k}(\lambda) e^{i h \lambda} d \lambda. $$ \begin{definition}{\textbf{Long Range Dependent (LRD) multi-variate real-valued time series:}\label{def:multi_LRD}} The multi-variate real-valued time series $(X_t)_{t \in \mathbb{Z}}$ is LRD iff there exists a real vector $(d_i)_{i = 1 \ldots p} \in (0, \frac{1}{2})^p$ such that $$ \gamma_{X, j, k}(h) = L^{j, k}_{+\infty}(h)\;h^{d_j + d_k - 1} $$ or equivalently $$ f_{X, j, k}(\lambda) = L^{j, k}_{0+}(\lambda) \;\lambda^{-\left(d_j + d_k\right)} $$ where $L^{j, k}_{+\infty}$ are slowly varying functions at infinity and $L^{j, k}_{0+}$ are slowly varying functions close to $0+$. \end{definition} As a result, each element $j, k$ of the spectral density matrix of a multi-variate LRD time series can be written as $ f_{X, j, k} \sim g_{j, k} \lambda^{-(d_j + d_k)} $ for low frequencies $\lambda$. Similar to the mono-variate case, we can use a log-periodogram regression in low frequencies as a way to estimate $d$. \subsection{LRD and Mutual Information} Assuming $(X)$ is Gaussian, a relation between the rate of decay of the auto-correlation of $(X)$ and that of the Mutual Information $I\left(X_t, X_{t+h}\right)$ can be established~\cite{cover2012elements,mackay2003information}.
One can easily prove that the mutual information of two multi-variate Gaussian random variables $U, V$ is $$I(U; V) = \frac{1}{2}\log\left(\frac{\det(\sigma_U) \det(\sigma_V)}{\det(\sigma)}\right)$$ where $\sigma_{U}$ and $\sigma_{V}$ are the corresponding covariance matrices and $\sigma$ is the $2p\times 2p$ covariance matrix of the stacked vector $(U, V)$. Given that $(X)$ is second-order stationary and Gaussian, one can show that~\cite{guo2005additive,guo2005mutual} \begin{align*} I & \left(X_t, X_{t+h}\right) - \log \det \left( \gamma_X(0) \right)\\ & = - \frac{1}{2} \left( \log \det \left ( \gamma_X(0) \gamma_X(0) - \gamma_X(h) \gamma_X(h) \right) \right) \\ & \sim \sum_{i=1}^p L^i_{+\infty}(h) h^{2 (2 d_i -1)} \end{align*} in the simple case where $\gamma_X$ is diagonal. Here $(L^i_{+\infty})_{i=1 \dots p}$ are slowly varying functions at infinity. Therefore one can assume a characteristic power-law decay of the mutual information based on the values of $d$, which relates our spectral method for characterizing sequential memory to the mutual information based approach in~\cite{lin2016criticality}. In particular, in the mono-variate case, that is $p=1$, we have $$\log I\left(X_t, X_{t+h}\right) \propto 2(2d-1) \log h.$$ That is, the slope of decay of mutual information w.r.t. separation in the log-log space corresponds to the LRD coefficient $d$. Unfortunately in the multivariate case, the slope does not give access, even with the Gaussian diagonal assumption, to individual estimates of the components of $d$.
In order to scale the method to data sets comprising millions of sequences we run the procedure in a mini-batched manner, computing FFTs and OLSs in parallel, while the estimates for the coefficients $d$ update a global estimate with a chosen learning rate. More details on the implementation are given in the appendix. \begin{figure} \centering \includegraphics[width=\linewidth]{estimation_of_LRD_procedure.pdf} \caption{Schema of the log-periodogram estimation procedure employed in our study} \label{fig:schema_procedure} \end{figure} \begin{algorithm} \caption{Estimate LRD coefficients for a sequence of symbols} \label{alg:OLS_procedure} \begin{algorithmic} \REQUIRE{$L$ \COMMENT{padding length}, $p$ \COMMENT{embedding dimension}, $\mathbf{E}$ \COMMENT{symbol embeddings}} \ENSURE{$d \in \mathbb{R}^p$} \STATE{embeddingSequence $\gets$ lookupEmbedding($\mathbf{E}$, symbolSequence)} \STATE{paddedEmbeddingSequence $\gets$ pad(embeddingSequence, $L$, $\mathbf{0}$) \COMMENT{pad the beginning of sequence with zero valued vectors to obtain a sequence of length $L$}} \STATE{spectrum $\gets$ $\left| \text{RFFT}(\text{paddedEmbeddingSequence})[1:] \right|^2$ \COMMENT{remove the frequency $0$ term}} \FOR{$i \gets 0$ to $p-1$} \STATE{d[i] $\gets$ OLS($\log$(range(1, $L$ // 2 + 1)), $\log$(spectrum[:, i]))} \ENDFOR \STATE{return $d$} \end{algorithmic} \end{algorithm} \subsection{Observations of LRD on actual sequential data sets} We now apply the estimation of the memory coefficient vector $d$ to sequences of learned item embeddings in a language and a user-behavior dataset. It is worth pointing out that the log-periodogram estimate of LRD assumes that the time series are second-order stationary and the spectrum measurement using FFT uncovers only linear sequential dependency patterns.
Although neither assumption is guaranteed in any arbitrary time series, our method is sufficient to detect linear second order stationary LRD patterns if they exist, without guarantee that it will unravel any kind of non-linear or non-stationary LRD. Our empirical results show that such a linear LRD does exist in the sequences of word embeddings and item embeddings corresponding respectively to text documents and user/item interactions on YouTube. \begin{figure} \centering \includegraphics[width=0.49\linewidth]{glove_none_padding} \includegraphics[width=0.49\linewidth]{sh_glove_none_padding} \caption{ \footnotesize{ Spectral density estimate (in log space), the LRD coefficients and the p-value for the OLS regression on the Wikipedia dataset (left). Similar estimation (right) with sequences whose words have been randomly shuffled. } \label{fig:LRD_lm1b}} \end{figure} \subsubsection{LRD in word sequences}\label{sec:estimate_lm} We start by measuring the LRD coefficients $d$ on a subset of the Wikipedia dump consisting of concatenated Wikipedia articles (100 MB from Wikipedia) which keeps the sequential structure of the documents intact --- no processing is done besides removing punctuation, converting all letters to lowercase and removing other artifacts. We break the documents into long sequences of 2048 words. Each sequence is then transformed into a multi-variate real-valued time series by mapping each word of the sequence to a pre-trained 300-dimensional Glove embedding~\cite{pennington2014glove}, learned on the 2014 Wikipedia dump. Next, we compute the average of the squared magnitude $\widehat{f_X(\lambda)}$ of the Fourier Transform of the sequence of word embeddings. The slope of $\log\left(\widehat{f_X(\lambda)}\right)$ versus $\log(\lambda)$ on low frequencies is then estimated through OLS by minimizing the corresponding log-periodogram loss as shown in Equation~\ref{eq:log-periodogram}.
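A minimal NumPy sketch of this estimation pipeline (Algorithm~\ref{alg:OLS_procedure}): zero-pad at the beginning, take the per-dimension periodogram via the real FFT, drop the zero-frequency term, and regress log-power on log-frequency. The sign convention $d_i = -\text{slope}/2$ follows from $f_{X,i,i}(\lambda) \sim \lambda^{-2 d_i}$ in Definition~\ref{def:multi_LRD}; the small constant inside the logarithm is an illustrative guard against empty periodogram bins.

```python
import numpy as np

def lrd_coefficients(embedding_sequence, pad_length=2048):
    """Log-periodogram estimate of the LRD coefficient vector d.

    embedding_sequence: (T, p) array of embeddings in time order, T <= pad_length.
    """
    T, p = embedding_sequence.shape
    padded = np.zeros((pad_length, p))
    padded[-T:] = embedding_sequence                         # pad the beginning
    spectrum = np.abs(np.fft.rfft(padded, axis=0)[1:]) ** 2  # drop frequency 0
    # Frequencies enter only through their log, so the 2*pi/L normalization
    # constant shifts the intercept and leaves the OLS slope unchanged.
    log_freq = np.log(np.arange(1, spectrum.shape[0] + 1))
    d = np.empty(p)
    for i in range(p):
        slope = np.polyfit(log_freq, np.log(spectrum[:, i] + 1e-12), 1)[0]
        d[i] = -slope / 2   # f(lambda) ~ lambda^(-2 d) near zero
    return d
```

As a sanity check of the convention: white noise yields estimates near $0$, while a strongly persistent series such as a random walk yields large positive estimates, mirroring the shuffling experiment reported in this section.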
We tried different padding strategies for words whose embedding was unknown --- skipping, zero padding and mean learned embedding padding --- and did not find any substantial change in the resulting estimates of $d$. We present the spectral density estimate, the LRD coefficients and the p-value for the OLS regression in Figure~\ref{fig:LRD_lm1b} (left). As shown in the figure, the spectral density estimates --- each curve corresponding to FFT of one dimension of the word embedding --- decay linearly near $0$ in the log-log space, and the estimates for the coefficients $d$, with all the 300 dimensions shown in the x-axis, are significantly positive. These observations suggest that the time series under consideration is long range dependent according to the canonical statistical definition of LRD, which in turn indicates that the input sequence is LRD. A higher coefficient indicates that a larger amount of memory is present. To give confidence in our estimate, we also include the p-value of the OLS test for a slope of zero, that is $d = 0$. Extremely small p-values are returned, which again corroborates that the sequences are indeed LRD. Ideally, one would tailor the p-value tests to the particular setting of OLS in log-scale (which violates some assumptions on the distribution of errors), but this is outside the scope of the paper. Detailed theory on the log-periodogram estimator can be found in~\cite{robinson1995log}. We now proceed with a sanity check for the estimator of LRD through the log-periodogram method operating on vector-valued sequences of embedded symbols. On Figure~\ref{fig:LRD_lm1b} (right) we include the estimates on the same data-sets with the words randomly shuffled within sequences of $2048$ words. Random shuffling dissolves any dependence patterns existing in the original data-set and gives a white-noise-like statistical structure to the embedding sequences.
We can see that the spectral density no longer concentrates its mass around $0$ (low frequencies) and our log-periodogram estimator gives close to $0$ estimates for the LRD coefficient of the shuffled sequences with correspondingly high p-values. The side-by-side comparison showcases that our method is able to detect the presence of LRD (as in the original word sequences) vs not (as in the shuffled sequences). \subsubsection{LRD in user behavior sequences}\label{sec:estimate_seq_rec} Next we measure the LRD coefficients $d$ on user behavior data. The data set consists of user generated interaction sequences in a large-scale anonymized dataset from YouTube to which we have access through employment at YouTube working on improving YouTube for users. Each sequence records a series of timestamped item ids corresponding to a given user $u$ starting to access an item $v$ at time $t_i$: $\left\{\xi^u_{t_i} \right\}_{i = 1 \dots N_u}$ --- where $N_u$ is the number of interactions available for user $u$. We clip the sequences to at most $500$ observations per user. This production data-set comprises more than $200$ million training sequences, more than $1$ million test sequences and has an average sequence length of $200$. Different from the word sequences where each $X_t$ involves a single symbol, $\xi^u_{t_i}$ includes multiple sub-symbols, each corresponding to one different aspect of the interaction: the item watched (from a vocabulary of 2 million items), the creator/publisher of the watched item (from a vocabulary of 1 million), the page on which the item was displayed (order of tens) and the OS employed by the user (orders of hundreds). The discrete values of these four groups of symbols are embedded with $128$, $128$, $32$ and $32$ dimensional real-valued vectors respectively.
The embeddings are concatenated and trained as part of the sequential neural model aiming at predicting the next item the user will consume, which we detail in section~\ref{sec:experiment_recommendation}. With the learned embeddings, we follow the same procedure as described in the word sequence case to estimate the LRD coefficients. Figure~\ref{fig:spectrum_seq_rec} plots the spectral density of the embedding sequences. Again, the power law decay (left) and the linear decay in log-log space (right) near $0$ are the clear marks of an LRD pattern. Figure~\ref{fig:d_estimates_seq_rec} (left) shows the estimates for the four groups of coefficients $d$. We notice that the embedding representing creators/publishers yields larger estimated coefficients $d$ than the embedding representing individual items, indicating more LRD. Also, the software interface embedding carries the highest amount of LRD, which is expected as it is less likely to change within short sequences of interactions. The maximum OLS p-value for the linear slopes in log-log scale being zero is $1.32 \times 10^{-108}$. Another aspect in which these user-behavior sequences differ from the word sequences is the irregular spacing of events. In this first work we do not take this into account, so that the Fast Fourier Transform algorithm can be used readily when computing the spectral density of $(\widetilde{\xi}_t)$. The Fourier transform is however well defined for irregularly spaced time stamps~\cite{brillinger1981time,rahimi2008random,belletti2017random} and in future work we plan to apply dedicated estimators to improve our estimation in these settings. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{spectra.pdf} \caption{ \footnotesize{ Spectral density of time series of embedded symbols in the original (left) and log-log (right) space, where a linear decay is clear for several estimated spectra near $0$.
}} \label{fig:spectrum_seq_rec} \end{figure} \subsection{LM1B language modeling task} The Billion word data set~\cite{chelba2013one} is a standard benchmark for language modeling~\cite{mikolov2010recurrent,bai2018empirical,chen2018dynamical} aimed at predicting the next word in a text. We slightly modify the benchmark to create a long range prediction task involving longer sequences. Sequences of $128$ words are considered and the model's task is now to predict the last $4$ words. \begin{table*} \centering \begin{tabular}{cccr} \toprule \textbf{Model} & \textbf{Sub-sequence lengths} & \textbf{\# of hidden units} & \textbf{\# of add/mul.}\\ \midrule LM baseline & $128$ & $2048$ & 536870912\\ \midrule LM PowerLawEvoRNN & $64, 32, 16, 8, 4, 4$ & $64, 128, 256, 512, 1024, 2048$ & 24903680\\ \midrule LM ExpEvoRNN & $108, 4, 4, 4, 4, 4$ & $64, 128, 256, 512, 1024, 2048$ & 22790144\\ \midrule Seq. rec. baseline & $512$ & $256$ & $33554432$ \\ \midrule Seq. rec. PowerLawEvoRNN & $256, 128, 64, 32, 32$ & $32, 64, 128, 256, 256$ & $6029312$\\ \midrule Seq. rec. ExpEvoRNN & $384, 32, 32, 32, 32$ & $34, 69, 138, 276, 276$ & $6080928$\\ \midrule Seq. rec. ExtrExpEvoRNN & $480, 8, 8, 8, 8$ & $2, 8, 64, 256, 1024$ & $8948096$\\ \bottomrule \end{tabular} \caption{ \footnotesize{ RNN architectures employed in the sequential recommendation and language modeling tasks. Although they learn more parameters, EvoRNNs require much less compute time than the baselines and can therefore be served under lower latency constraints. The \# of add/multiply operations gives asymptotic complexity estimates to be compared in relative magnitude. }} \label{tab:num_units} \end{table*} The baseline model is an LSTM~\cite{hochreiter1997long} following the implementation of~\cite{jozefowicz2016exploring}. EvoRNNs follow the same setup except for the distribution of compute resources. We consider two variants of the EvoRNN architecture: a power law decay variant and an exponential decay variant.
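To make the redistribution of compute concrete, the following is a minimal NumPy sketch of an EvoRNN-style forward pass. It uses vanilla tanh cells with random weights purely for illustration; the actual models use trained LSTM/GRU cells, and the learned projection that maps the hidden state between cells of different sizes is stylized here as a random matrix.

```python
import numpy as np

def evo_rnn_forward(x, seg_lens, hidden_sizes, rng):
    """Run increasingly large vanilla tanh RNN cells over consecutive
    sub-sequences of x, projecting the hidden state between cells."""
    assert sum(seg_lens) == len(x)
    d_in = x.shape[1]
    h = np.zeros(hidden_sizes[0])
    t0 = 0
    for seg_len, d_h in zip(seg_lens, hidden_sizes):
        if h.shape[0] != d_h:  # projection between two sub-sequences
            proj = rng.standard_normal((d_h, h.shape[0])) / np.sqrt(h.shape[0])
            h = proj @ h
        w_hh = rng.standard_normal((d_h, d_h)) / np.sqrt(d_h)
        w_xh = rng.standard_normal((d_h, d_in)) / np.sqrt(d_in)
        for t in range(t0, t0 + seg_len):  # vanilla tanh RNN cell
            h = np.tanh(w_hh @ h + w_xh @ x[t])
        t0 += seg_len
    return h  # final state, fed to the output layer in the full model

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 16))
h = evo_rnn_forward(x, [64, 32, 16, 8, 4, 4],
                    [64, 128, 256, 512, 1024, 2048], rng)
```

Most time steps are handled by small, cheap cells; only the last few steps pay for the largest cell.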
The cell sizes and computational footprint of both variants are detailed in Table~\ref{tab:num_units}. As an example, the power law decay variant breaks the input sequence into six sub-sequences of lengths 64, 32, 16, 8, 4 and 4, processed by RNNs with 64, 128, 256, 512, 1024 and 2048 hidden units respectively, so that most compute is spent near the end of the sequence.~\footnote{ Note that as RNN cells with different numbers of hidden units are instantiated through the sequence, additional projection matrices, one between each pair of consecutive sub-sequences, are learned in EvoRNNs. We can further reduce the computational cost by using fast random projections as in~\cite{yang2015deep}.} The total numbers of add/multiply operations performed by the baseline model as well as by EvoRNNs with different schedules are shown in the last column. \begin{figure}[h!] \centering \includegraphics[width=0.9\linewidth]{lm1b_results.pdf} \caption{ \footnotesize{ Performance results on the language modeling task. EvoRNN architectures provide better performance than the baseline model and train considerably faster. The increase in training speed is expected as the EvoRNN architectures spend less compute-time on earlier inputs. }} \label{fig:lm1b_task} \end{figure} Figure~\ref{fig:lm1b_task} shows that on this language modeling task both variants of the cheaper EvoRNN architectures out-perform the baseline model with only a fraction of the compute resources, as indicated by the wall time (bottom two plots) in addition to the estimates in Table~\ref{tab:num_units}. This can be attributed to the implicit architectural prior of EvoRNN giving fewer degrees of freedom to parameters involved in processing inputs located further into the past, which inherently carry less signal for predicting the next word near the end of the sequence. Architectures having fewer hidden units for these inputs may be more robust to the lower signal-to-noise ratio present at the beginning of the sequence.
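The add/multiply counts in the last column of Table~\ref{tab:num_units} are consistent with the simple cost model $\sum_i \ell_i h_i^2$, where $\ell_i$ and $h_i$ are the length and the number of hidden units of the $i$-th sub-sequence. This sketch counts only the dominant recurrent matrix-vector product per step and neglects input and projection matrices:

```python
def add_mul_estimate(seg_lens, hidden_sizes):
    """Dominant-term cost: one h x h matrix-vector product per time step."""
    return sum(l * h * h for l, h in zip(seg_lens, hidden_sizes))

# Reproduces the last column of Table 1 (tab:num_units):
assert add_mul_estimate([128], [2048]) == 536870912                   # LM baseline
assert add_mul_estimate([64, 32, 16, 8, 4, 4],
                        [64, 128, 256, 512, 1024, 2048]) == 24903680  # LM PowerLaw
assert add_mul_estimate([108, 4, 4, 4, 4, 4],
                        [64, 128, 256, 512, 1024, 2048]) == 22790144  # LM Exp
assert add_mul_estimate([480, 8, 8, 8, 8],
                        [2, 8, 64, 256, 1024]) == 8948096             # ExtrExp
```

The quadratic dependence on $h$ explains why shrinking the cells applied to early inputs dominates the savings.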
\subsection{Sequential recommendation task}\label{sec:experiment_recommendation} Next we consider a sequential recommendation task, where the sequential recommender under consideration serves users accessing a browsing page on which impressions are displayed. It has access to historical interactions, \textit{i.e.,} watched items, from the same user identified by the same personal account. The sequential neural model nominating items for recommendation therefore maps the sequence of observations $\left\{\xi^u_{t_i} \right\}_{i = 1 \dots N_u}$ to a predicted item $\xi$. Neural recommender systems attempt to foresee the interests of users under extreme constraints of latency and scale. We define the task as predicting the next item the user will consume given a recorded history of items already consumed. Such a problem setting is common in collaborative filtering~\cite{sarwar2001item,linden2003amazon} recommendations. While the user history can span months, only watches from the last 7 days are used as labels in training and watches from the last 2 days are used for testing. The train/test split is $90/10\%$. The test set does not overlap with the train set and corresponds to the last temporal slice of the dataset. \begin{figure} \centering \includegraphics[trim=0.0cm 1cm 0.0cm 6cm, width=0.9\linewidth]{estimation_schema.pdf} \caption{ \footnotesize{ Architecture of the sequential recommender. Input items are embedded into dense real-valued embeddings as described in section~\ref{sec:estimate_seq_rec} and sent through an RNN to predict the next item to be consumed. The left part of the figure, shown in green blocks, depicts the estimation process detailed in section~\ref{sec:estimate_lm}. } } \label{fig:estimation_procedure} \end{figure} Figure~\ref{fig:estimation_procedure} shows the underlying architecture powering the recommender.
It relies on an RNN, a gated recurrent unit (GRU)~\cite{chung2014empirical}, to read through the sequence of past observations and predict the ID of the next item to be consumed by the user. The network parameters are trained using Adagrad to minimize a weighted cross-entropy loss. The embedding dimensions and RNN cells are summarized in the appendix (Table~\ref{tab:baseline_youtube_details}). The model is a standard GRU fed with embedded symbols (after concatenation), producing predictions on the items the user will click through a softmax layer trained by negative sampling. Table~\ref{tab:num_units} shows the number of hidden units used in our baseline and in the EvoRNNs for this task as well. Similarly, EvoRNNs only need a fraction of the add/multiplies used in the baseline RNN. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{seq_rec_results.pdf} \caption{ \footnotesize{ Performance results on the sequential recommendation task. At a lower computational budget, the EvoRNNs provide performance similar to the more expensive constant size baseline RNN. Although they use more memory because more parameters are learned in the present implementation, the PowerLaw and Exp EvoRNN networks need far fewer add/multiplies to compute a prediction as they spend less compute processing inputs located further in the past. The extremely myopic ExtrExp RNN however does not perform as well as the baseline, although it has many more free parameters and a higher computational budget than the other EvoRNNs. CstEvoRNN is an EvoRNN with as many independently learned cells as PowerLawEvoRNN but all as large as the baseline's cell. In spite of the increased number of free parameters, CstEvoRNN does not show much improvement. }} \label{fig:seq_rec_task} \end{figure} We report the Mean-average-precision-at-20 (MAP@20)~\cite{guillaumin2009tagprop} as the main performance metric.
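For completeness, here is a minimal sketch of MAP@$k$ under its common definition; the exact production variant may differ in details such as normalization conventions, which are assumptions here.

```python
def average_precision_at_k(ranked, relevant, k=20):
    """AP@k for one user: average of precision@i over the ranks i <= k
    at which a relevant item appears, normalized by min(|relevant|, k)."""
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k]):
        if item in relevant:
            hits += 1
            score += hits / (i + 1)  # precision at rank i + 1
    return score / min(len(relevant), k) if relevant else 0.0

def map_at_k(ranked_lists, relevant_sets, k=20):
    """Mean of AP@k over users."""
    aps = [average_precision_at_k(r, s, k)
           for r, s in zip(ranked_lists, relevant_sets)]
    return sum(aps) / len(aps)
```

In the next-item setting, where a single item is relevant per query, AP@$k$ reduces to the reciprocal rank of the consumed item when it appears in the top $k$, and $0$ otherwise.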
Figure~\ref{fig:seq_rec_task} shows the progress of the MAP@20 score for the baseline and different variants of the EvoRNNs, up to $1.2$ million steps of training updates. We can see that 1) variants of EvoRNNs reach performance comparable to the baseline model, which uses 5 times more compute than the EvoRNNs; 2) ExtrExpRNN, a myopic RNN designed to spend all of its compute near the very end of the sequences and little on inputs from the past, performs slightly worse, which confirms that the user behavior data is indeed LRD and that completely ignoring inputs from the past results in performance degradation. \subsection{Conclusion and discussion on experiments} Having demonstrated that the sequences of inputs we consider are LRD, we also showed that a parsimonious use of computational spending is sufficient to produce competitive, expressive models for language modeling and sequential recommendation tasks. The models we introduce, EvoRNNs, dedicate their serving-time computational budget in priority to recent inputs while also leveraging information located further into the past. While the new architectures train more parameters (learning multiple RNN cells) in their present implementation, they help process longer sequences of inputs under a given latency deadline, which is crucial to improving predictive accuracy on LRD sequential prediction tasks. In future work we aim to reduce the memory footprint by reusing parameters when processing earlier inputs in the sequence, as larger parameter matrices can be projected to fewer dimensions.
\section{Introduction} \input intro.tex \section{Related work} \input related.tex \section{Estimation of LRD with distributed word and item representations} \label{sec:estimate} \input estimate.tex \section{LRD and neural architecture design} \input method.tex \section{Experimental results with EvoRNN} \input experiment.tex \section{Conclusion} While the issue of LRD has been considered widely in neural sequential modeling, LRD has not been quantified methodically for the corresponding sequences of inputs. For tasks such as language understanding and sequential recommendations --- where neural sequential models are pervasive and provide state-of-the-art performance --- model-dependent, gradient-based considerations dominate when it comes to measuring LRD. In the present work, we employed a well-established LRD theory for real-valued time series on sequences of vector-valued item embeddings to estimate LRD in sequences of discrete symbols belonging to large vocabularies. The resulting estimates of LRD coefficients unraveled new exploratory insights into modeling sequences of words and user interactions. Considering the power law decay of relevance of past inputs led to the construction of new recurrent architectures: EvoRNNs. EvoRNNs showed performance at worst comparable to state-of-the-art baselines for language modeling and sequential recommendations while using only a fraction of the computational cost. \bibliographystyle{ACM-Reference-Format} \subsection{LRD in real-valued time series} We follow the presentation of LRD given in~\cite{pipiras2017long} and consider a second-order stationary real-valued time series $(X_t)_{t \in \mathbb{Z}}$, abbreviated to $(X)$. By assumption, the mean of the time series $\mu_X = E[X_t]$ and its auto-covariance function $\gamma_X(h) = Cov(X_t, X_{t + h})$ are well defined and do not change over time.
The spectral density $f_X$ of $(X)$ is also well defined and verifies $\int_{-\pi}^{\pi} f_X(\lambda) e^{i h \lambda} d\lambda = \gamma_X(h)$. Prior to delving into the topic of LRD, we recall the definition of slowly varying functions, \textit{e.g.}, the logarithm. \begin{definition}{\textbf{Slowly varying function:}} A function $L$ is slowly varying at infinity if it is positive on some interval $[c, \infty)$ where $c \geq 0$ and for any $a > 0$ $$ \lim_{u \rightarrow \infty} \frac{L(au)}{L(u)} = 1. $$ A function $L$ is slowly varying near $0$ if $u \mapsto L(\frac{1}{u})$ is slowly varying at infinity. \end{definition} Five different non-equivalent definitions of LRD are given in~\cite{pipiras2017long}. Here we only consider two equivalent definitions, given respectively in the time and frequency domains. \begin{definition}{\textbf{Long Range Dependent (LRD) mono-variate real-valued time series:}\label{def:mono-LRD}} The mono-variate real-valued time series $(X)$ is LRD if and only if there exists a real $d \in (0, \frac{1}{2})$, referred to as the \textbf{LRD coefficient of $(X)$}, such that $$ \gamma_X(h) = L_{\infty}(h) h^{2d - 1} \text{, or equivalently } f_X(\lambda) = L_{0^+}(\lambda) \lambda^{-2d}, $$ where $L_{\infty}$ and $L_{0^+}$ are slowly varying functions at infinity and near $0$ respectively. \end{definition} A higher value of $d$ indicates a slower decay of temporal dependence and therefore a higher amount of memory in the time series. Some readers may be acquainted with the Hurst index $H$, which measures the amount of LRD in a stochastic process through its self-similarity and scaling properties~\cite{pipiras2017long,mandelbrot1998fractals,sornette2006critical}. If $H \in (\frac{1}{2}, 1)$ then $d = H - \frac{1}{2}$ (this relation is for instance proven for fractional Brownian motion in~\cite{pipiras2017long}).
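As a standard worked example (not specific to the present data, see~\cite{pipiras2017long}), fractional Gaussian noise, the increment process of fractional Brownian motion with Hurst index $H$, satisfies Definition~\ref{def:mono-LRD} with $d = H - \frac{1}{2}$:

```latex
% Auto-covariance of fractional Gaussian noise with Hurst index H:
\gamma_X(h) = \frac{\sigma^2}{2} \left( |h+1|^{2H} - 2 |h|^{2H} + |h-1|^{2H} \right)
            \sim \sigma^2 H (2H - 1) \, h^{2H - 2}
            \quad \text{as } h \to \infty,
% so that for H in (1/2, 1) the tail exponent reads 2H - 2 = 2d - 1
% with d = H - 1/2, i.e., \gamma_X(h) = L_{\infty}(h) h^{2d - 1}
% with L_{\infty} asymptotically constant.
```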
A consequence of the time series $(X)$ being LRD is that the variance of $\frac{1}{N}\sum_{t=1}^N X_t$ decays much more slowly, at rate $L_{\infty}(N) N^{2d - 1}$ instead of $N^{-1}$ in the \emph{iid} case, where $L_{\infty}$ is another slowly varying function at infinity. LRD is indeed notorious for changing the convergence rates of $M$-estimators compared to the case of \emph{iid} observations~\cite{doukhan2002theory,samorodnitsky2007long,pipiras2017long,beran2017statistics}. As explained in~\cite{doukhan2002theory,samorodnitsky2007long,pipiras2017long,beran2017statistics}, there are multiple standard estimators for the LRD coefficient $d$, such as the rescaled range ($R/S$) estimator, wavelet-based estimators and variance-based estimators. Maximum Likelihood Estimators for generative linear LRD models such as FARIMA models are also available. One long-standing estimator for $d$ is the log-periodogram estimator~\cite{robinson1995log}, which focuses on the spectral density $ f_X(\lambda) = L_{0^+}(\lambda) \lambda^{-2d} $. Let $\widehat{f_X(\lambda)} \equiv |FFT_N[\lambda](X)|^2 = |\sum_{t=1}^N X_t e^{-it\lambda}|^2$ denote the empirical spectrum of $(X)$ --- assuming $N$ observations of the time series are available --- then one can measure $d$ through the estimation of the slope $b=-2d$ in the affine relationship \begin{equation}\label{eq:log-periodogram} \log \left( \widehat{f_X(\lambda)}\right) = a + b \log(\lambda) \end{equation} by ordinary least squares regression in the domain of low frequencies~\cite{robinson1995log}. Although Maximum Likelihood Estimation is now preferred for measuring LRD in time series~\cite{pipiras2017long}, we employ the log-periodogram estimator here to avoid assuming a particular generative model for the data. We therefore propose to methodically quantify LRD in sequences of symbols from large vocabularies/inventories through the spectral density of the corresponding sequences.
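A minimal NumPy sketch of the log-periodogram estimator of equation~\eqref{eq:log-periodogram}, together with its extension to sequences of embedded symbols, can be written as follows. The fraction of low frequencies retained and the random embedding table standing in for learned embeddings are illustrative assumptions.

```python
import numpy as np

def log_periodogram_d(x, frac=0.1):
    """Estimate the LRD coefficient d of a 1-D series: OLS fit of the
    log-periodogram against log-frequency over the lowest frequencies,
    with slope b = -2d."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2                 # periodogram
    freqs = 2.0 * np.pi * np.arange(len(spec)) / len(x)
    k = max(3, int(frac * len(spec)))                  # low-frequency band
    slope, _ = np.polyfit(np.log(freqs[1:k]), np.log(spec[1:k]), 1)
    return -slope / 2.0

def lrd_of_symbol_sequence(symbols, embedding, frac=0.1):
    """One LRD coefficient per embedding dimension for a sequence of
    integer symbol ids embedded as real-valued vectors."""
    series = embedding[np.asarray(symbols)]            # shape (T, dim)
    return np.array([log_periodogram_d(series[:, j], frac)
                     for j in range(series.shape[1])])

# Sanity check 1: a series synthesized with spectrum ~ lambda^(-2d)
rng = np.random.default_rng(0)
n, target_d = 1 << 14, 0.3
amps = np.fft.rfftfreq(n)[1:] ** (-target_d)
coeffs = np.concatenate(([0.0],
                         amps * np.exp(2j * np.pi * rng.random(n // 2))))
x = np.fft.irfft(coeffs, n)
d_hat = log_periodogram_d(x)                           # close to target_d

# Sanity check 2: long runs of symbols vs their random shuffle,
# mirroring the shuffling sanity check performed on the word sequences
vocab, dim = 1000, 16
embedding = rng.standard_normal((vocab, dim))
persistent = np.repeat(rng.integers(0, vocab, 84), 48)[:4000]
d_pers = lrd_of_symbol_sequence(persistent, embedding)
d_shuf = lrd_of_symbol_sequence(rng.permutation(persistent), embedding)
```

The per-dimension loop is what makes the scalar estimator applicable to vector-valued sequences of embedded symbols.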
\subsection{LRD in sequences of symbols} LRD often manifests itself in physical and societal phenomena through a slow decay of temporal dependence, which is usually observed in the form of a power-law decaying auto-covariance function~\cite{pipiras2017long}. Per Definition~\ref{def:mono-LRD}, this time-domain power-law decay at infinity is equivalent to a power-law divergence of the spectral density in the frequency domain near $0$. LRD estimation has become standard in the study of real-valued time series and has led to improvements in prediction and risk assessment thanks to models such as FARIMA~\cite{pipiras2017long,sornette2006critical}. In contrast, while it is widely assumed that LRD is a key feature of the input sequences that needs to be captured for better predictions in language modeling and sequential recommendations, the amount of LRD in these tasks remains to be estimated in a principled manner. A key difference between real-valued time series and language modeling or sequential recommendation tasks is that the latter generally involve sequences of discrete symbols or items from vocabularies of $10^5$ to $10^7$ distinct values. For small vocabularies of symbols, computing the decay of mutual information along the time axis helps quantify the amount of LRD, as in the case of sequences of characters~\cite{lin2016criticality}. Unfortunately, these techniques do not scale to large vocabularies due to sparse observations and the prohibitively large number of possible combinations. In the present paper we show how alternate representations of symbols can scale estimates of the LRD coefficient $d$ to sequences involving large vocabularies of symbols. \subsection{Gradient propagation and LRD in RNNs} Model-free estimators of LRD such as the log-periodogram estimator differ radically from the usual measures of LRD employed in sequential neural models.
A substantial body of work concerned with the application of RNNs to LRD sequences of inputs focuses on the propagation of gradients through time. From the seminal paper on the difficulty of training RNNs~\cite{pascanu2013difficulty} to recent developments~\cite{belletti2018factorized,miller2018recurrent}, exploding or vanishing gradients are considered the main obstacle to LRD modeling in RNNs. Various approaches have been proposed to address these issues. Modifications started with the introduction of gating as in LSTMs~\cite{hochreiter1997long} and GRUs~\cite{chung2014empirical}, and later included multi-scale temporal structure~\cite{chung2016hierarchical,chang2017dilated}, constraints on the spectrum of learned parameter matrices~\cite{arjovsky2016unitary,jing2016tunable,vorontsov2017orthogonality}, regularization~\cite{trinh2018learning,merity2017regularizing} and initialization schemes~\cite{chen2018dynamical} to improve the trainability of RNNs. It is worth mentioning here that RNNs are not the only neural models to provide good performance on long sequences of inputs. For instance, dilated convolutional architectures~\cite{van2016wavenet,yu2015multi} have been offered as an effective alternative. Attention is also readily able to capture LRD patterns as part of the Transformer~\cite{vaswani2017attention}, but unfortunately it is challenging to serve such a multi-layer attention network with the very low latency required by recommender systems. An alternate solution may be to use a single attention layer as part of a Mixture-of-Experts~\cite{tang2019towards}. The key novelty of our approach is to not measure LRD as the propagation of information through an RNN but rather to estimate LRD in the sequences of inputs themselves and to design model architectures that match the dependence patterns.
Therefore, although we mostly focus on architectural insights for RNNs, our approach is in no way limited to this class of models and could inform the design of other sequential models such as convolution- or attention-based~\cite{bahdanau2014neural} neural architectures.